
Bringing Clients Online: Software Update-Based Installation

When it comes to deploying the client to domain-joined devices, you've got a few choices: software updates, client push, or a custom script. Deploying via software updates is definitely my preference, as any machine joined to the domain will receive the SCCM client package from the WSUS server it's pointed to via GPO.

Assuming you already have a healthy SCCM environment with a Software Update Point role somewhere, it's a straightforward process. To use this client-deployment strategy properly, you'll also want to verify that you have extended your AD schema, published your site to your AD forest (from the SCCM console), and created at least one boundary and boundary group configured for site assignment. Otherwise, your clients will not find a source in AD for the package, nor a site code/MP to be assigned to during install.

First, publish the client package to WSUS through the console. It generally takes a few minutes before both version boxes populate.


Next, you’ll need to configure a GPO to point targeted machines to WSUS. Be sure to include the port designation in the path, else you’ll likely see errors in the Windows Update checking process once the client processes the new GPO. Ask me how I know.
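If you want to see what the GPO is actually doing on a client, it ultimately writes registry values like the following. This is a sketch: the server name is hypothetical, and it assumes the default WSUS port of 8530 (HTTPS defaults to 8531).

[code]Windows Registry Editor Version 5.00

; Hypothetical WSUS server; note the explicit port in the URL
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate]
"WUServer"="http://sccm01.contoso.local:8530"
"WUStatusServer"="http://sccm01.contoso.local:8530"

; Tells the Automatic Updates agent to use the intranet WSUS server above
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU]
"UseWUServer"=dword:00000001[/code]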

You can hop on a targeted system, run "gpupdate", and verify with "gpresult" that the policy applies. Opening Windows Update and clicking "Check for updates" should show that updates are "being managed by your administrator", and if all goes well, you should have one update available: the SCCM client package.
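From an elevated command prompt on the client, that verification can be sketched like so (this assumes your GPO name contains "WSUS"; the registry query shows the effective server regardless of the policy's name):

[code]gpupdate /force
gpresult /r | findstr /i "WSUS"
REM Or pull the effective WSUS server straight from the registry:
reg query "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" /v WUServer[/code]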


Once these steps are completed, you’ll have live and active clients to manage, and they’ll receive their Windows Update items through Software Center alongside your other SCCM deployments.



Some guidance on Software Update Points

I’ve heard some confusion, especially from people who are just starting to implement Configuration Manager in their environment, over the SUP role and how it looks in practice.

Obviously, you're under no obligation to use the WSUS integration or Software Updates functionality in SCCM; you can continue to use standalone WSUS. But from a user's perspective, I'd much rather find my Windows updates in the same place, deployed with the same constraints, as the other applications and packages being released for my machine.

When you add the WSUS role to your target server, complete only the initial configuration window where you're asked where to store the update content. Don't proceed with the next part, which would have you choose classifications and such; all of that is done within the console. The last time I did a rebuild, with v1607, I found that I had to perform a manual synchronization with Microsoft Update once after adding the role.

Once that's done, you can add the Software Update Point role to your site server in the console. In my last corporate environment, this process was repeated for the CAS and three primary sites. The primary sites were configured to synchronize with the CAS, so essentially the CAS communicated with Microsoft Update and notified downstream/child servers when it retrieved new items. The idea here is that your WSUS database is complete, and you then narrow down product selection and update classifications from the SCCM console; this is done while adding the Software Update Point role.

It’s a good idea to enable the WSUS cleanup tasks (I have had to deal with the results of not doing this), as well as enable alerts on synchronization failures so that you can be sure that the SUP is successful in what should be a mostly-automated process when you’re done, with the help of automatic deployment rules.
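As a sketch, the WSUS cleanup can also be scripted with the UpdateServices module (available on Server 2012 and later) and dropped into a scheduled task on the WSUS box:

[code language="powershell"]# Run the standard WSUS cleanup tasks against the local WSUS server
Get-WsusServer | Invoke-WsusServerCleanup -DeclineSupersededUpdates -DeclineExpiredUpdates `
    -CleanupObsoleteUpdates -CleanupUnneededContentFiles -CompressUpdates -CleanupObsoleteComputers[/code]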

You should get an understanding of the entire process, from the CAS's Microsoft Update sync down to the client experience, before you implement this in production. You'll want to lay out your intended sync schedules, software update groups, ADRs, target collections, and available/deadline preferences, and probably create staggered deployments so a "beta" group receives updates before they're dropped into full production.


Is your SCCM installation taking ages in a Hyper-V guest?

Rebuilding the SCCMChris lab as time permits, I ran into an issue during installation of tech preview v1703: the installer would hang during the database setup for many, many hours. It didn't seem to completely stall, but after a day, installation was still chugging along. Thankfully, there's a simple solution! Disable "Dynamic Memory" for the guest machine in Hyper-V Manager, uninstall the site to reverse your failed installation, then kick it off again.


“The SQL server’s Name in sys.servers does not match with the SQL server name specified during setup”

I think my non-DBA background got me on this one today. I renamed my primary box this morning after doing my SQL 2016 installation last night. I tidied up issues in the prerequisite check for the SCCM installation, kicked it off, and came back to this:

Within the ConfigMgrSetup log, I found:

[code]ERROR: SQL server’s Name ‘[WIN-1NOPPABSENJ]’ in sys.servers does not match with the SQL server name ‘[CM-PRIMARY]’ specified during setup. Please rename the SQL server name using sp_dropserver and sp_addserver and rerun setup.  $$<Configuration Manager Setup><04-06-2017 15:37:01.692+420><thread=1932 (0x78C)>[/code]

No doubt, this was due to my rename. It’s unfortunate that this isn’t checked a little sooner, as you’re left to do a site uninstall before you can rerun the installation properly. This is a great error, because it’s clear and even provides the solution.

Sort of. SP_DROPSERVER 'MININTWHATEVERTHENAMEWAS' worked just fine, but apparently SP_ADDSERVER is no longer supported in SQL 2016 (it was deprecated even earlier); you're instructed to use "linked servers" instead. In SSMS, I expanded "Server Objects", right-clicked "Linked Servers", and clicked "New Linked Server". I entered CM-PRIMARY as the server name and chose SQL Server as the server type… and was greeted with a message stating you can't create a local linked server. Switching back to a query window, I ran:
[code]SP_ADDSERVER 'CM-PRIMARY', 'local';[/code]
…and it executed without issue. I restarted the SQL service for good measure.

I confirmed the change worked by running the following, which returned my new system name, CM-PRIMARY:

[code]SELECT @@SERVERNAME[/code]

I was then able to uninstall the site server and rerun the install again, this time successfully.


Hardware Inventory Implosion After v1610 Upgrade!

Note: This post is adapted from my working notes, so I apologize for being a little all over the place. I didn’t find this issue described online, so I thought it was important to get something posted to hopefully save someone else the trouble.

Naturally, my first routine servicing upgrade caused an implosion of hardware inventory across the hierarchy. My first indication of an issue was the SMS_MP_CONTROL_MANAGER being in warning status in console for all MPs. Logs full of this:

I confirmed that virtually all clients had last submitted hardware inventory the night of the v1610 upgrade. My clients are set to inventory nightly, so something had to give.

I went to a client and initiated a full hardware inventory in Client Center, then confirmed that InventoryAgent.log indicated the Hinv was collected successfully and submitted to the MP.

So clients were submitting inventory to the MP, but it wasn't being processed properly. Let's look at a management point.

Checking out (installpath)\SMS_CCM\Logs\MP_Hinv.log, it’s loaded up with these:

OK… so there's the date error. This has some discussion around the internet (thanks, Google), but I don't see anyone saying it's forcing their hardware inventory to cease…

The "cache is still obsolete" message is probably related to our issue. Unlike a lot of error messages, I can't find anything specific about it online.

It says it's creating a retry file for this Hinv submission. Let's see how bad the retry files are. Looking at (installdirectory)\inboxes\auth\\retry\

Not good: 5,200+ files. I quickly checked my other MPs and found the same.

Going back to the original error about reloading the hardware inventory class mapping table: our hardware inventory is extended to include LocalGroupMembers (Sherry K.), and I've additionally enabled the default BitLocker class. My impression is that the clients are submitting these extra classes, but the site servers aren't expecting them now.

There's an easy way to test this: let's take the error at face value and trigger a Hinv policy change. I hopped into the default client settings policy, disabled the LocalGroupMembers class, waited a few minutes, and then re-enabled it.

Giving my nearest primary a break here, I moved all of the RetryHinv files from \inboxes\auth\\retry to a temp folder called "old".

New diagnostic clue: after this change, the "obsolete cache" errors stopped appearing in the MP logs. Additionally, no more retry files were being generated. I took 8 RetryHinv files and pasted them back into the retry directory. After about 10 minutes, all of them disappeared. Dataldr.log showed this:

I checked Process and they were gone; they'd been dealt with. Fantastic. 5,000 to go. I cut 500 of these back into the retry directory. I suspected a number of them would be rejected for being too out of date, which was confirmed by some of them being moved to the delta mismatch directory.

Look at that. I verified that these ran through the Process folder OK, then checked the BADMIFS directories to make sure I didn't have 500 rejected MIFs. Only a few were marked as delta mismatch. I'm guessing that's not too bad, considering these machines had been submitting and hung up completely since the 27th. I moved the remaining retry files back into the retry directory…

Caught in the act: the 4,77x RetryHinv files were disappearing in front of me. It looks like they're converted from XML to .MIF and then placed back in the directory. The directory ballooned up and the logs went nuts.

They processed in a couple of batches: "Finished processing 2048 MIFs SMS_INVENTORY_DATA_LOADER"

There are about 1,000 "deltamismatch" files in BADMIF. This is almost certainly from systems that submitted multiple delta reports caught in the retry queue over the past week. Not surprising.

I checked all other inboxes to verify I don’t have backlogs anywhere else.

In summary, the "obsolete cache" error generated a retry for every client hardware inventory submission, and there was a long loop because every inbound Hinv generated a retry and every retry failed (and generated a replacement retry). This explains behavior I saw earlier: all of the retry files continuously had their "date modified" updated to within a few minutes of each other (and no more than about 15 minutes from the current time). In short, the dataldr inbox was stuck in an endless loop trying to process Hinv submissions.

The issue was obviously caused during the upgrade. 50% of my clients are offline, so fixing the clients can't have been what got the server processing the retries (and inbound new submissions) without error. No, updating the client policy must have replaced a configuration file or setting somewhere that corrected the issue.

I can’t be more specific than that at this point, but I’ve got a grasp on the situation, it appears.

As expected, the date error is not a showstopper, just a warning. It also appears to be a common thing with a simple fix, so I can revisit it later. See description here:

36 hours later, almost all of my Active clients have Hardware Inventory Scan dates listed after the upgrade date.

Text copies of relevant messages for Google’s use:

MP needs to reload the hardware inventory class mapping table when processing Hardware Inventory.

Hinv: MP reloaded and the cache is still obsolete.



Windows 10 Enterprise Deployment Tips and Scripts

It's about time: finally ready to roll Windows 10 in a production environment! For me, this process had a simple workflow (but a lot of effort at each step). I'm not going into great detail on the entire process here, but I figured I'd share my project task list as well as the scripts I used to de-bloat the Windows 10 v1607 image!

My end goal is to deploy Windows 10 via task sequence with as little bloat as possible and non-enterprise apps removed, and to do it as securely (GPT, UEFI, Secure Boot) and efficiently as possible. My benchmark from Windows 7 deployment is 26 minutes from PXE to the Windows logon screen.

  1. Upgrade SCCM to v1610 for added BIOS to UEFI conversion task sequence steps
  2. Create Windows 10 driver packages for models I intend to support.
  3. Build and capture an unmodified reference image WIM in Hyper-V.
  4. Implement USMT to allow for profile migration from Windows 7 to Windows 10. Switching from MBR to GPT = disk wipe.
  5. Create task sequences for Windows 10 deployment (in-place as well as bare metal)
  6. Create scripts to include in task sequences to strip the untouched reference WIM of what I consider non-essential applications and features.
  7. Switch from DHCP options to IP Helpers for WDS / PXE
  8. Implement Network Unlock for Bitlocker and enable Bitlocker during task sequences for all systems.
  9. Create Group Policy set for Windows 10, use the DoD STIG and Microsoft Security Baseline.

I’m still in the process of hammering out the final GPO, but the image is clean and I’m deploying successfully to a pilot group already. I’ve been forced to push Network Unlock to a later time due to logistics, but I do enable Bitlocker on laptops during OSD. USMT is on my radar for the coming weeks, but for now I have a functional TS and the Help Desk is happy.


Here’s a screenshot of my task sequence. I captured it on the Partition Disk step as I stumbled a bit getting the partitioning done correctly, so hopefully it’s helpful to someone else.

It was important to me to only modify the reference image in task sequences to allow for complete customization and more importantly, transparency. I need others to be able to understand each step I’m taking, why, and how I’m doing it.

It’s a lot easier to modify task sequence steps than a reference image, and I believe this is the best way to go about it. I have other SCCM users who will administer task sequences and being able to pick and choose what you do or don’t change is crucial to my environment.

So, here are the scripts I am using in my Customizing the applied OS step. I found most of this through many hours of research and experimentation.

  • Apply Customized Start Menu This step imports a start menu XML template that I captured from a Windows 10 machine. The end result is a start menu that is free of ads and remains completely customizable by the end user. There are three pinned items: Internet Explorer, Software Center, and File Explorer. It was important that I do this in the task sequence, as all other methods I read about would result in some amount of “lockdown” on the start menu. I want my users to all start with the same template, but be able to customize as they see fit. You can do this via GPO (with the caveat I just described). Following the one-line import is my Start.xml file.

    [code language="powershell"]Import-StartLayout -LayoutPath Start.xml -MountPath $env:SystemDrive\[/code]

    [code language="xml"]<LayoutModificationTemplate Version="1" xmlns="http://schemas.microsoft.com/Start/2014/LayoutModification">
      <LayoutOptions StartTileGroupCellWidth="6" />
      <DefaultLayoutOverride>
        <StartLayoutCollection>
          <defaultlayout:StartLayout GroupCellWidth="6" xmlns:defaultlayout="http://schemas.microsoft.com/Start/2014/FullDefaultLayout">
            <start:Group Name="Default Applications" xmlns:start="http://schemas.microsoft.com/Start/2014/StartLayout">
              <start:DesktopApplicationTile Size="2x2" Column="0" Row="0" DesktopApplicationLinkPath="%ALLUSERSPROFILE%\Microsoft\Windows\Start Menu\Programs\Internet Explorer.lnk" />
              <start:DesktopApplicationTile Size="2x2" Column="2" Row="0" DesktopApplicationLinkPath="%ALLUSERSPROFILE%\Microsoft\Windows\Start Menu\Programs\Software Center.lnk" />
              <start:DesktopApplicationTile Size="2x2" Column="4" Row="0" DesktopApplicationLinkPath="%APPDATA%\Microsoft\Windows\Start Menu\Programs\System Tools\File Explorer.lnk" />
            </start:Group>
          </defaultlayout:StartLayout>
        </StartLayoutCollection>
      </DefaultLayoutOverride>
    </LayoutModificationTemplate>[/code]

  • Customized Start Menu, Part 2: Shortcuts The other consideration here is that the tiles do not like to work with their default start menu entries: without shortcuts in the root Programs folder, the tiles will not appear! I create these in PowerShell.

    [code language="powershell"]#Create an all-users Internet Explorer shortcut in the root of the Start menu (Programs directory).
    #Without this shortcut the pin will not display on the customized start menu (laid down during the task sequence).
    $WshShell = New-Object -ComObject WScript.Shell
    $Shortcut = $WshShell.CreateShortcut("C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Internet Explorer.lnk")
    $Shortcut.TargetPath = "C:\Program Files (x86)\Internet Explorer\iexplore.exe"
    $Shortcut.Save()

    #Create a Software Center shortcut in the root of the Start menu (Programs directory).
    #Same reason as above. We have to specify the icon to display, else it will be blank,
    #as iexplore points to an exe and Software Center does not (directly).
    $WshShell = New-Object -ComObject WScript.Shell
    $Shortcut = $WshShell.CreateShortcut("C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Software Center.lnk")
    $Shortcut.TargetPath = "softwarecenter:"
    $Shortcut.IconLocation = "%SystemRoot%\CCM\scclient.exe,0"
    $Shortcut.Save()[/code]

  • Disable Edge Browser Looking to the future, I opted not to enable this step at the sites I’m an administrator of, but I left it in place for use at other locations utilizing my task sequences. This is a really simple “run command line” step.
  • Disable Cloud Content / Consumer Experience / OneDriveSetup / Contact Support Cloud Content and Consumer Experience are settings you can publish through GPO, but at least in testing, I found myself getting logged in before they were in place, so I decided to place them during OSD just to be safe. This prevents "suggested apps" and such from showing up on the start menu; without the Consumer Experience setting, I believe you'll get things like Candy Crush. Contact Support obviously has no place in an enterprise deployment. The Fix First Logon step references an article that explains what it's for: I ran into an error and had two choices to fix it, insert a quick command-line correction or insert another reboot during OSD. Obviously, I went with the former.
  • Remove provisioned applications This was a huge one for me. Most of the apps included in Windows 10 are completely unnecessary in my environment, and with my ultimate goal in mind, I had to find a way to prevent them from appearing in my users' otherwise uncluttered start menus. I found this script online (Google for it if you'd like) in many variations, but ultimately chose one that removes everything I don't specifically mark to keep. I modified the script to log each entry it uninstalls to C:\Windows\Logs\Software, which happens to be the default log directory for the PowerShell Application Deployment Toolkit, which I use heavily on all systems. Also important to me: I am not entirely confident that when I deploy a servicing update to my clients these applications won't be re-provisioned. I'll find out in testing, but this script can be re-run at any time to re-deprovision the apps if necessary. That's huge for me. I recommend starting with a machine you've laid your reference image onto and pulling a list of the currently provisioned apps with "Get-AppxProvisionedPackage -Online"; note what you *do* want in your production build before proceeding.

    [code language="powershell"]
    # Get a list of all apps
    $AppArrayList = Get-AppxPackage -PackageTypeFilter Bundle | Select-Object -Property Name, PackageFullName | Sort-Object -Property Name

    # Start a log file for apps removed successfully from the OS.
    $Location = "C:\Windows\Logs\Software"
    If ((Test-Path $Location) -eq $False) {
        New-Item -Path $Location -ItemType Directory
        Get-Date | Out-File -Append "$Location\OSDRemovedApps.txt"
    }

    # Loop through the list of apps
    foreach ($App in $AppArrayList) {
        # Exclude essential Windows apps
        if ($App.Name -in "Microsoft.WindowsCalculator","Microsoft.WindowsStore","Microsoft.Appconnector","Microsoft.WindowsSoundRecorder","Microsoft.WindowsAlarms","Microsoft.MicrosoftStickyNotes") {
            Write-Output -InputObject "Skipping essential Windows app: $($App.Name)"
        }
        # Remove AppxPackage and AppxProvisioningPackage
        else {
            # Gather package names
            $AppPackageFullName = Get-AppxPackage -Name $App.Name | Select-Object -ExpandProperty PackageFullName
            $AppProvisioningPackageName = Get-AppxProvisionedPackage -Online | Where-Object { $_.DisplayName -like $App.Name } | Select-Object -ExpandProperty PackageName
            # Attempt to remove AppxPackage
            try {
                Write-Output -InputObject "Removing AppxPackage: $AppPackageFullName"
                # Write the name of the removed app to the logfile
                $AppProvisioningPackageName | Out-File -Append "$Location\OSDRemovedApps.txt"
                Remove-AppxPackage -Package $AppPackageFullName -ErrorAction Stop
            }
            catch [System.Exception] {
                Write-Warning -Message $_.Exception.Message
            }
            # Attempt to remove AppxProvisioningPackage
            try {
                Write-Output -InputObject "Removing AppxProvisioningPackage: $AppProvisioningPackageName"
                Remove-AppxProvisionedPackage -PackageName $AppProvisioningPackageName -Online -ErrorAction Stop
            }
            catch [System.Exception] {
                Write-Warning -Message $_.Exception.Message
            }
        }
    }
    [/code]
So, there it is. The customization of the reference image took me a significant amount of time to nail down to my own specifications, but I think my environment will benefit from the time invested for a long time. I hope that these centralized scripts are useful to someone in their quest to deploy Windows 10 Enterprise in a way that minimizes confusion to end users, reduces Help Desk inquiries, and ensures that client systems are as secure, efficient, and uniform as possible.


PS App Deployment Toolkit: User Logged On / Off Deployment Types

I started using PSADT a year or two ago for my commonly updated applications. Flash, Java, Reader, etc.

One of the first issues I encountered was having a single deployment type. Per PSADT documentation, your deployment type should be deployed with “Allow users to view and interact with the program installation” ticked. Unfortunately, if you set “Logon Requirement” to “Whether or not a user is logged on”, this field greys out, unticked.

So, with this box unticked, PSADT proceeds in noninteractive mode, instantly closing Internet Explorer and whatever other apps I had specified. This didn't make me (or anyone else) happy.

My workaround is quite simple. I have two identical deployment types with different User Experiences. Additionally, I have created a Global Condition to determine whether the workstation is currently in use or not (whether locally or via RDP). This Global Condition is set as a requirement on each Deployment Type.

You can create the Global Condition under Software Library -> Global Conditions. I named mine “Workstation in Use”. The discovery script is incredibly simple:

[code language=”powershell”][bool](query user)[/code]

On your “User Logged On” deployment type, configure as such:
User Experience Tab
Installation Behavior: Install For System
Logon Requirement: Only when a user is logged on
Installation Program visibility: Normal
Tick the Allow users to view and interact box.
Requirements Tab
Add -> Custom -> Condition -> Workstation In Use -> Value -> Equals -> True

On your “User Logged Off” deployment type, configure as such:
User Experience Tab
Installation Behavior: Install For System
Logon Requirement: Only when no user is logged on
Installation Program visibility: Normal
The “Allow Users to View and Interact” will be greyed out automatically.
Requirements Tab
Add -> Custom -> Condition -> Workstation In Use -> Value -> Equals -> False

This setup will allow you to give your users the PSADT experience, but also leverage PSADT (in noninteractive mode) to perform installations while no users are logged into the system(s).





Scripting bulk client actions.

I had about 11 applications rolling out this weekend. Tonight, I saw about 200 systems hung up in "Content Downloaded" status, well past the deadline date/time but not yet enforced. I couldn't find a common denominator; if I connected to any of them in Client Center and ran the Application Deployment Evaluation Cycle, they installed immediately. My maintenance window was closing, so I needed to focus on resolution rather than going CSI on the issue.

Current Branch allows you to right-click a collection and notify clients to evaluate application policies, but there is not yet the same functionality in the Monitoring tab for a group of systems reporting the same status on a deployment.

Double-clicking the "Content Downloaded" header gives you a copy-pasteable list of clients. I stripped out the client names, saved them to a text file, and mass-triggered the Application Deployment Evaluation Cycle with the following script:

[code language="powershell"]
$clients = Get-Content C:\users\chris\desktop\clients.txt
ForEach ($client in $clients) {
    Invoke-WMIMethod -ComputerName $client -Namespace root\ccm -Class SMS_CLIENT -Name TriggerSchedule -ArgumentList "{00000000-0000-0000-0000-000000000121}"
}
[/code]
I suppose you could take the same list and create a collection, then trigger the client notification from the console. I use this simple script frequently to quickly run things on clients: things like cycling the ccmexec service (or changing the cache value and cycling ccmexec), where SCCM itself would not be practical.
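As a sketch, the same idea can be wrapped in a small function with a lookup of a few common trigger GUIDs. The function name is my own, and while the schedule IDs below are the widely documented ones, verify them against your own environment before relying on them:

[code language="powershell"]
# Common client action schedule IDs (trigger GUIDs); verify in your environment
$Triggers = @{
    'HardwareInventory'      = '{00000000-0000-0000-0000-000000000001}'
    'SoftwareInventory'      = '{00000000-0000-0000-0000-000000000002}'
    'MachinePolicyRetrieval' = '{00000000-0000-0000-0000-000000000021}'
    'SoftwareUpdatesScan'    = '{00000000-0000-0000-0000-000000000113}'
    'AppDeploymentEval'      = '{00000000-0000-0000-0000-000000000121}'
}

function Invoke-ClientAction {
    param([string[]]$ComputerName, [string]$Action)
    ForEach ($client in $ComputerName) {
        Invoke-WMIMethod -ComputerName $client -Namespace root\ccm -Class SMS_CLIENT -Name TriggerSchedule -ArgumentList $Triggers[$Action]
    }
}

# Example: Invoke-ClientAction -ComputerName (Get-Content clients.txt) -Action 'AppDeploymentEval'
[/code]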

SystemCenterDudes has an extensive list of triggers you can plug into this script HERE. You could also use this with the CCMCache script I posted HERE earlier this month.


Deploying ccmcache location and size changes, part two.

If you didn’t read part one, you can find it at this link.

My original issue was with systems during migration defaulting back to incorrect ccmcache location and size values. Rather than continuing to deploy to specific systems to resolve, I went ahead and created a configuration item to ensure all client systems are set to intended values.

If you're only looking to change the ccmcache size, there is now a setting for it in client settings policies. Unfortunately, that doesn't allow changing the location. I haven't vetted this in a lab, but I wouldn't be surprised if making the change in client settings also locks you out of modifying the values client-side. That won't work for me: I have a small number of applications that break the 5120MB barrier. Not enough that I want to increase all systems' cache sizes, but enough that I don't want to revert to manual installations.

Using a compliance item allows me to make changes in the future as needed during installations, but also ensures the value is changed back to standard afterward.

First, we create a Configuration Item with both a discovery script and a remediation script.


When you click Next, you'll be prompted for a list of applicable operating systems. I selected all, as in my environment I'd like all my ConfigMgr clients using the same values and thus enforced in the same manner. In an environment where separate values are needed, you'd likely create multiple configuration items to include in multiple configuration baselines.


For discovery, we use the following PowerShell to query the SoftMgmtAgent WMI namespace for the CacheConfig class and simply return the cache size and location to the console.

[code language="powershell"]$Cache = Get-WmiObject -Namespace 'ROOT\CCM\SoftMgmtAgent' -Class CacheConfig
Write-Host $Cache.Size $Cache.Location[/code]


The value returned, in my environment (and by default), should be: “5120 C:\Windows\ccmcache”. We know that if a client system returns anything else, it does not conform to our desired values for ccmcache and we need to run another script to fix that. In the rule creation below, I’m setting the expected returned value from the discovery script and enabling remediation where necessary.


Our remediation script is pretty simple, too. We know the client’s settings aren’t 5120MB and C:\Windows\ccmcache, so I correct that with the following:

[code language="powershell"]$Cache = Get-WmiObject -Namespace 'ROOT\CCM\SoftMgmtAgent' -Class CacheConfig
$Cache.Location = 'C:\Windows\ccmcache'
$Cache.Size = '5120'
$Cache.Put()
Restart-Service -Name CcmExec[/code]


The CcmExec service restart is necessary to apply the new values; I was not able to find a documented alternative. So systems that run the remediation script will have their CcmExec service restarted, which has implications: Software Center instances will close automatically, and there may be policy evaluations and subsequent balloon notifications when the service initializes.

I opted to only remediate during maintenance windows for my Configuration Baselines, so this is less of a concern but still something to be aware of.

Once you've created your CI, add it to a pilot configuration baseline deployed to a small batch of test systems; I generally use a few different Win7 and Win10 systems, and usually even an XP system. In this case, I left a few at default and changed the ccmcache size and/or location on the bulk of them. Their values were all uniform after the next maintenance window.




Deploying ccmcache location and size changes

I was working on some deployments today and discovered a large chunk of systems that have had ccmcache location set to c:\ccm\cache and size set to 250MB since I migrated to a Current Branch hierarchy.

I did not want to deploy something to all systems in the target collection, because the ccmexec service would have to cycle and I'm not sure what would happen if an install were in progress at the time. My other option would have been to create a collection containing only the machines failing with the same "not enough temporary space is reserved" message and deploy a fix app/package to it.

In the end, I had the list of clients with the "not enough space reserved" error and just ran the following from my own system:

[code language="powershell"]Invoke-Command -ComputerName PCNAMEHERE,ANOTHERPC,ANDANOTHER,YETANOTHER -ScriptBlock {
    $Cache = Get-WmiObject -Namespace 'ROOT\CCM\SoftMgmtAgent' -Class CacheConfig
    $Cache.Location = 'C:\Windows\ccmcache'
    $Cache.Size = '5120'
    $Cache.Put()
    Restart-Service -Name CcmExec
}[/code]

In the future, I’ll probably look at spending some time creating a configuration item with this script utilized for remediation.