I’ve noticed some confusion, especially from people who are just starting to implement Configuration Manager in their environment, about the Software Update Point (SUP) role and how it works in practice.
Obviously, you’re under no obligation to use the WSUS integration or Software Updates functionality in SCCM. You can continue to use standalone WSUS, but speaking as a user, I’d much rather find my Windows updates in the same place, deployed with the same constraints, as the other applications and packages being released for my machine.
When you add the WSUS role to your target server, you’ll want to complete only the initial configuration window where you’re asked where to store the update content. Don’t proceed with the post-installation wizard, which has you choose classifications and products; all of that is done from the ConfigMgr console instead. The last time I did a rebuild, on v1607, I found that I had to perform one manual synchronization with Microsoft Update after adding the role.
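If you’d rather script the WSUS installation than click through Server Manager, something along these lines is roughly equivalent (D:\WSUS is just an example content path; adjust it for your environment):

[code language="powershell"]
# Install the WSUS role (uses the default WID database), then run the post-install
# task to point WSUS at its content directory -- D:\WSUS is only an example path.
Install-WindowsFeature -Name UpdateServices -IncludeManagementTools

& "$env:ProgramFiles\Update Services\Tools\wsusutil.exe" postinstall CONTENT_DIR=D:\WSUS
[/code]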
Once that’s done, you can add the Software Update Point role to your site server in the console. In my last corporate environment, this process was repeated for the CAS and three primary sites. The primary sites were configured to synchronize with the CAS, so essentially the CAS communicated with Microsoft Update and notified its downstream/child servers when it retrieved new items. The idea is that your WSUS database is complete, and you then narrow down the product selection and update classifications from the SCCM console; this is done while adding the Software Update Point role.
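If you prefer PowerShell, the SUP role can also be added with the ConfigMgr cmdlets; a minimal sketch, where the server name, site code, and WSUS port below are placeholder values:

[code language="powershell"]
# Run from a console host with the ConfigurationManager module loaded (CM PowerShell drive).
# Server name, site code, and port are placeholders -- substitute your own.
Add-CMSoftwareUpdatePoint -SiteSystemServerName 'SUP01.contoso.com' -SiteCode 'PR1' -WsusIisPort 8530
[/code]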
It’s a good idea to enable the WSUS cleanup tasks (I have had to deal with the results of not doing this), and to enable alerts on synchronization failures so you can be sure the SUP is succeeding. With the help of automatic deployment rules, this should be a mostly automated process once you’re done.
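If you ever inherit a WSUS instance that has never been cleaned up, you can also run the cleanup by hand with the UpdateServices module that ships with the WSUS console; a quick sketch:

[code language="powershell"]
# Run the standard WSUS cleanup options against the local WSUS server
Get-WsusServer | Invoke-WsusServerCleanup -DeclineSupersededUpdates `
                                          -DeclineExpiredUpdates `
                                          -CleanupObsoleteUpdates `
                                          -CleanupUnneededContentFiles `
                                          -CompressUpdates
[/code]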
You should get an understanding of the entire process, from the CAS’s Microsoft Update sync down to the client experience, before you implement this in production. You’ll want to lay out your intended sync schedules, Software Update Groups, ADRs, target collections, and available/deadline preferences, and probably create staggered deployments so a “beta” group receives updates before they’re dropped into full production.
If you didn’t read part one, you can find it at this link.
My original issue was that systems were defaulting back to incorrect ccmcache location and size values during migration. Rather than continuing to deploy fixes to specific systems, I created a configuration item to ensure all client systems are set to the intended values.
If you’re only looking to change the ccmcache size, there is now a setting for this in Client Settings policies. Unfortunately, that doesn’t allow you to change the location. I haven’t vetted this in a lab, but I wouldn’t be surprised if making the change in client settings also locks you out of modifying the values client-side. That won’t work for me: I have a small number of applications that break the 5120 MB barrier; not enough that I want to increase every system’s cache size, but enough that I don’t want to revert to manual installations.
Using a compliance item allows me to make changes as needed during future installations, while also ensuring the values are changed back to the standard afterward.
First, we create a Configuration Item with both a discovery script and a remediation script.
When you click Next, you’ll be prompted for a list of applicable operating systems. I selected all of them, since in my environment I want every ConfigMgr client using the same values and enforced in the same manner. In an environment where separate values are needed, you’d likely create multiple configuration items to include in multiple configuration baselines.
For discovery of the values we’re looking for, we use the following bit of PowerShell to query the root\ccm\SoftMgmtAgent WMI namespace for the CacheConfig class values; we simply return the cache size and location to the console.
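A minimal version of that discovery looks something like this:

[code language="powershell"]
# Discovery: return the current client cache size (MB) and location as one string
$cache = Get-WmiObject -Namespace 'root\ccm\SoftMgmtAgent' -Class CacheConfig
Write-Output "$($cache.Size) $($cache.Location)"
[/code]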
The value returned, in my environment (and by default), should be: “5120 C:\Windows\ccmcache”. We know that if a client system returns anything else, it does not conform to our desired values for ccmcache and we need to run another script to fix that. In the rule creation below, I’m setting the expected returned value from the discovery script and enabling remediation where necessary.
Our remediation script is pretty simple, too. We know the client’s settings aren’t 5120MB and C:\Windows\ccmcache, so I correct that with the following:
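A sketch of that remediation, assuming the same 5120 MB / C:\Windows\ccmcache standard:

[code language="powershell"]
# Remediation: force the client cache back to 5120 MB at C:\Windows\ccmcache
$cache = Get-WmiObject -Namespace 'root\ccm\SoftMgmtAgent' -Class CacheConfig
$cache.Size     = 5120
$cache.Location = 'C:\Windows\ccmcache'
$cache.Put() | Out-Null

# The new values only take effect once the SMS Agent Host service restarts
Restart-Service -Name CcmExec
[/code]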
The CcmExec service restart is necessary to apply the new values. I was not able to find a documented alternative, so systems that run the remediation script will have their CcmExec service restarted. The implications: open Software Center instances will close automatically, and you may see policy evaluations and subsequent balloon notifications when the service starts back up.
I opted to only remediate during maintenance windows for my Configuration Baselines, so this is less of a concern but still something to be aware of.
Once you’ve created your CI, add it to a pilot Configuration Baseline deployed to a small batch of test systems. I generally use a few different Win7 and Win10 machines, and usually even an XP system. In this case, I left a few at the defaults and changed the ccmcache size and/or location on the bulk of them. Their values were all made uniform during the next maintenance window.
Hi all! I wrote this some time ago, back before my environment rebuild and content migration. As a result, some of it is no longer necessary (for instance, Report Builder worked out of the box on Server ’12 R2 for me), but I thought it was worth sharing with you guys.
I was asked to create a report showing which accounts and groups were inside the Administrators group on all of our client systems. I found a post by Sherry Kissinger, originally from the SCCM ’07 days I believe, which led me to the post below, updated with logging for SCCM 2012. Shout out to Sherry for this; please reference her post for the MOF used to extend hardware inventory and for further information: https://mnscug.org/blogs/sherry-kissinger/244-all-members-of-all-local-groups-configmgr-2012
This was written for a CAS running Configuration Manager 2012 SP1 on a Windows Server 2008 R2 host.
Preparing the console host for Report Builder 3.0
Please note that I only had to do this when running SQL ’08 on Server ’08 R2; everything worked fine with Server ’12 R2 and SQL ’16 out of the box. I’m leaving the instructions here for reference. The first thing we want to do is get SQL Server 2008 R2 Report Builder 3.0 working. By default, you’re going to get a message saying whatever MP you’re connected to is missing the click-to-run application. You will indeed want to make this change on your MP, as we’ll use it to build our report later, but contrary to the message, you also need to make it on any other client running the console that you intend to edit reports from in the future.
Under “ReportBuilderMapping”, change the last two (of four) lines from 2_0_0_0 to 3_0_0_0 and save.
Launch the SCCM console and go to Monitoring -> Reporting -> Reports, then verify that clicking “Create Report” successfully launches Report Builder 3.0
Creating the configuration item
We need to create a custom WMI namespace to hold the information we’re looking to obtain. We do this by creating a configuration item.
This script was written by Sherry Kissinger and can be found via the link to her blog I mentioned earlier.
I’m not currently enforcing any compliance rules with this, since I’m looking to monitor rather than remediate. This should work as long as WMI isn’t broken; if it is, you’ll have bigger issues with the client than compliance settings.
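For orientation only, here’s a rough sketch of the general approach a script like this takes: define a custom WMI class, then write one instance per member of the local Administrators group. The namespace, class name, and property names below are assumptions for illustration; use the script from Sherry’s post for the real thing, since it handles all local groups and resolves account types properly.

[code language="powershell"]
# Illustrative sketch only -- not Sherry Kissinger's script. Namespace, class, and
# property names are assumed; the linked post is the authoritative version.
$namespace = 'root\cimv2'
$className = 'cm_LocalGroupMembers'

# Create the custom class if it doesn't already exist
if (-not (Get-WmiObject -Namespace $namespace -List -Class $className -ErrorAction SilentlyContinue)) {
    $class = New-Object System.Management.ManagementClass($namespace, [string]::Empty, $null)
    $class['__CLASS'] = $className
    $class.Qualifiers.Add('Static', $true)
    foreach ($prop in 'Name','Account','Domain','Category','Type') {
        $class.Properties.Add($prop, [System.Management.CimType]::String, $false)
    }
    $class.Properties['Name'].Qualifiers.Add('Key', $true)
    $class.Properties['Account'].Qualifiers.Add('Key', $true)
    [void]$class.Put()
}

# Clear old instances, then write one instance per member of the local Administrators group
Get-WmiObject -Namespace $namespace -Class $className -ErrorAction SilentlyContinue | Remove-WmiObject
$group = [ADSI]'WinNT://./Administrators,group'
foreach ($member in @($group.Invoke('Members'))) {
    # ADsPath looks like WinNT://DOMAIN/Account (or WinNT://COMPUTER/Account for local accounts)
    $adsPath = $member.GetType().InvokeMember('ADsPath', 'GetProperty', $null, $member, $null)
    $parts   = ($adsPath -replace '^WinNT://', '') -split '/'
    $domain  = if ($parts.Count -gt 1) { $parts[-2] } else { $env:COMPUTERNAME }
    Set-WmiInstance -Namespace $namespace -Class $className -Arguments @{
        Name     = 'Administrators'
        Account  = $parts[-1]
        Domain   = $domain
        Category = 'Unknown'   # the real script resolves this (UserAccount, Group, SystemAccount...)
        Type     = 'Unknown'   # ...and this (domain vs. local)
    } | Out-Null
}
[/code]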
Creating the configuration baseline
Local Group Membership is the only configuration item/baseline in use in my environment right now, but a baseline can contain all of your configuration items in the future. I created one for top-level (company-wide) items and one for each site that has its own SCCM admin, since I use RBAC and generally let the site admins handle their own daily operations. This lets me scope the baseline so it stays out of the hands of less-experienced admins. Note that a configuration baseline can include other configuration baselines, so you can create additional baselines for settings/configuration items that only apply to subsets of your All ConfigMgr Clients collection.
Add your configuration item and any other desired items (software updates, other configuration baselines, config items)
Deploy the newly created configuration baseline to your desired collection. I originally set our schedule to a simple one, running every day. I found there was a dire need for this data to be current when hardware inventory ran (we schedule it once daily, but the time can vary with randomization and laptops), so I have since changed it to every 6 hours. It’s my understanding that when the baseline is deployed, clients run it immediately, and then your schedule takes effect.
Configuring the Hardware Inventory Classes
We now need to populate the new namespace with data. This data is gathered by the configuration item and submitted to the management point during Hardware Inventory. Please reference Sherry’s post for the MOF; you’ll use it to add the cm_LocalGroupMembers class. Afterward, you just need to make sure the class is enabled in your client settings policy.
We’re collecting the data now; we just need to create the report to view it. You can also create a subscription to this report. Note: I did this on the MP directly (or wherever your SQL is hosted). If you create the report from a client, my understanding is that you’ll have to import the SQL Server certificate and so on, so it’s easier to just create it on the MP.
The report is created and opens in Report Builder 3.0
Right-click Datasets and choose Add Dataset. On the Query tab, click “Use a dataset embedded in my report.”
Select the AutoGen datasource
Enter the following as a text query
[code language="sql"]
select sys1.netbios_name0
     , lgm.name0     as [Name of the local Group]
     , lgm.account0  as [Account Contained within the Group]
     , lgm.category0 as [Account Type]
     , lgm.domain0   as [Domain for Account]
     , lgm.type0     as [Type of Account]
from v_gs_localgroupmembers0 lgm
join v_r_system_valid sys1 on sys1.resourceid = lgm.resourceid
where lgm.name0 = 'Administrators'
order by sys1.netbios_name0, lgm.name0, lgm.account0
[/code]
Click “Refresh Fields”. You’ll be prompted for credentials; I used my reporting services account.
The “Fields” tab will populate with the localgroupmembers0 fields. OK out of this dialog.
Let’s set up the report to display the data now.
Click “Table or Matrix”, then Dataset1, then Next
Drag all available fields to Values
Click through the remaining pages (Next, Next, Finish) and save your newly created report.
I created this report on my CAS to report across my 3 primaries.
You may have to play with the report to get the display to your liking, but the above steps gave me the following results from the console.
That’s it. I recently built a new hierarchy from the ground up and performed a content migration, which meant extending hardware inventory again and recreating all my custom reports. Still working with ConfigMgr CB v1607, Server ’12 R2, and SQL 2016.