Installing the aftermarket Rostra cruise control kit in a Ford Transit van

Difficulty: 2/5, Cost: $300

Cruise control is one of those things that we take for granted, so much so that I didn't even consider the possibility that it wouldn't be included in my base model Transit van. No auto headlights? Annoying but tolerable. No cruise control? How is this even a thing? Luckily, a company named Rostra makes an aftermarket kit that turns adding cruise control into a minor project.

Parts and Tools Required:

Rostra 250-9636 Cruise Control Kit ($299.95)
Soldering Iron, Heat Shrink, Lighter, Wire Stripper, T25 Torx bit, Flathead Screwdriver, Zip Ties, 3/8″ Drill Bit, Drill

Installation

You'll need to remove the lower dash panel, which is only secured with clips — you'll literally grab it and pull outward. The other trim piece that you'll want to remove is the lower ignition column cover. This one was trickier: there are two T25 screws to remove, and then I wedged a screwdriver between the upper and lower covers and pried them apart.

Completely disregard the installation instructions included with this kit. They make it seem more complicated than it is, even telling you to cut two wires on the OBDII harness, which is no longer necessary!

I recommend wrapping the harnesses in electrical tape. There are two pass-through type connectors to deal with. You’ll want to pop the OBDII connector from its place (push tabs), then connect to the Rostra harness, then put the Rostra OBDII connector in the original’s place.

Carefully route the accelerator pedal connector to the pedal, keeping in mind the movement of the steering column. I kept my harness tight to the plastic trim, using zip ties where appropriate. Disconnect the accelerator pedal connector, connect to the harness, then connect the Rostra connector in its place, much like you did with the OBDII connector.

Now, the only hard part: the red power wire needs to tap into the brown/yellow ignition wire. Per Rostra, this must be a solder joint; any other connection will void the warranty. Soldering is very easy to learn, affordable to get into, and a worthwhile skill to have! That said, there is no reason a tap connector would not work here; it's just a matter of longevity, and solder joints cannot be beat.

I stripped back some of the black wrap and cut the brown/yellow wire in half, then stripped both ends (plus the red Rostra wire), and soldered the joint together, then used heat shrink over it. Be very cautious routing this red wire (with 1 amp fuse) down the steering column. It’s a thin gauge wire, and there are moving parts and snap-together pieces to snag it. I did my best to route the fuse holder near the fuse panel for future access.

The next step is to drill a 3/8″ hole through the lower steering column cover. You’ll run the un-pinned harness from the cruise control handle through this hole, then push the connectors into the included Molex connectors. This is the one part that I referred to the installation instructions for (getting the correct pinout).

I simply tucked the controller into the dash, as it seemed secure enough. Another option would be to adhere it to the removable panel, but I felt confident enough with the placement that I chose. The kit includes zip ties, which I used on the harness where appropriate to avoid moving parts and potential snags.

Evaluation

I bought the Transit I selected due to its steeply discounted price, and while the factory cruise control option is affordable, it was a feature that was not included in my base model. The Rostra kit adds functionality that I quickly found myself wishing for on my drive home. It's not incredibly intelligent: it will not downshift to maintain speed down a steep grade, for instance, but it will hold your speed on flat and moderate uphill grades. My only complaint is that at certain speeds, it seems overzealous about downshifting. This is the only aftermarket kit that I am aware of, and I'm more than satisfied at the $300 price point. My official recommendation would be to select a van with factory cruise control, and if that's not an option, this is a viable alternative.

Head Unit Replacement in a 2008 BMW X3 3.0SI

When I'm hunting for vehicles, one of my main evaluations is: what will I change, and how difficult will it be to do? Modern vehicles with integrated systems, diagnostic tools, and "infotainment" systems spell larger expenses and more complexity for even once-simple jobs like replacing head units.

2008 was the last year that BMW offered the X3 without the iDrive system, which made replacing the factory stereo incredibly straightforward. I left the factory amp in place, just as I did with the head unit in my 1998 328i and my 2006 Z4 Roadster.

From my reading, if you have the factory unit with navigation, you’re going to have a much tougher time going aftermarket. Tough enough that “no nav” was one of my criteria when I was hunting for the right X3 to purchase.

The head unit installation is pretty standard. The only further consideration required in the X3 was retention of the steering wheel controls, which I accomplished by using the ASWC-1 module I've used in the past. It's cheap, reliable, and simple to set up.

parts list

Single DIN Head Unit, I chose the Pioneer MVH-S501BS.
Scosche BW2337B Dash Kit
Scosche VW03B Wiring Harness
AXXESS ASWC-1 Steering Wheel Control Module
Metra 40-EU10 European Antenna Adapter

removing the old unit

The HVAC vents can be removed with a trim removal tool, a credit card, or in my case: bare hands. Pull outward; there are no physical fasteners. There is a climate control cable attached to this piece; no need to detach it, just flip it up and rest it on the dash. The factory head unit is secured by two screws that will be accessible once the vents are removed.

Physically, all you'll have to do is attach the side "wings" to the head unit you're installing; it will then secure with a screw on either side. The fit and finish of this dash kit is on par with Metra kits I've used in the past. While not a perfect match, the color and finish are pretty close to the Schwarz interior. It's a much better fit than the equivalent Z4 option from Metra, which is the worst fit I've ever seen.

wiring: much easier than it looks

The wiring was essentially color-to-color between this Pioneer head unit's harness and the Scosche wiring harness. Regardless, compare the wire colors between your unit's harness and the VW03B colors to be sure. The gray, white, purple, and green sets are your speakers. Yellow is "constant 12v", which is used for memory and such, while red is your switched power; red is what powers your head unit when the key is in the on/run positions.

Note: there is a Metra harness for the X3, the 70-9003, and I advise against it. It has a separate power wire for some reason (not pinned out), and it also lacks the CANBUS wires that you'll need to retain steering wheel controls. The Scosche adapter has all pins populated and works well. You won't use all of the pins; I simply popped the extra leads out of the VW03B adapter to reduce the wire spaghetti.

Other than the color-for-color connections, you'll want to solder in additional wires from the ASWC-1. The harness for this unit is intimidating, but we are actually only using four wires from it: black, red, pink, and the black 3.5mm cable. You can unpin or simply tape up the remaining wires.

The black wire is soldered or tapped in with the stereo ground, the red wire is soldered or tapped in with the red stereo wire, and the pink wire is soldered or tapped into the brown wire corresponding with pin #9 on the vehicle. This was labeled “mute” or similar.

The programming process for the ASWC-1 is pretty simple; here is an illustration of pin #9 as well as the complete programming steps from Metra: https://metradealer.com/files/aswc/ASWC-1_INST_102.pdf

in summary

I was able to fish the microphone wire through the leather trim above the adjustable steering wheel, and then clip the microphone to the instrument cluster bezel. This made for good voice fidelity in calls, as well as looking relatively polished. The factory sound system, as I’ve discovered in the past, sounds much better with the new head unit. The total cost for this project was around $125, including the head unit, which I highly recommend (if you don’t need a physical CD slot) and have installed now in three of my vehicles.

Creating a headless DNS-based adblocker with PiHole on a Raspberry Pi Zero W

Required Hardware

  • Raspberry Pi Zero W ($10 – https://www.adafruit.com/product/3400)
  • Power adapter (5 volt, 2.5 amp) – I bought mine on eBay for $3.82
  • Optional: Pi Zero W Case ($6 – https://www.adafruit.com/product/3446)
  • MicroSD Card – I'm using a cheap class 10 8GB card, which should be plenty if this Pi will only be used for PiHole.

Required Software

  • Win32DiskImager – Use this to write the OS image to the MicroSD card
  • PuTTY – Use this to connect to the Pi via SSH
  • Notepad++ – Use this to create the configuration files with the correct (Unix) line endings

Preparing the MicroSD

  1. Download the OS image, Raspbian Stretch Lite, from here:
    https://www.raspberrypi.org/downloads/raspbian/
  2. Extract the .IMG file from the ZIP to a convenient location.
  3. Prepare your SD card for writing. I like to clean the disk before proceeding to ensure old partitions and such are wiped out (see the sketch just after this list).
  4. Write the .IMG file to disk with Win32DiskImager or similar. Say “Yes” to warning prompt.
  5. When this finishes, you’ll probably get at least one error about unreadable file system. Don’t worry about it. Do not eject the MicroSD card yet. You should have a new drive listed as “Boot” now.
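For the disk cleaning in step 3, any tool works; here's a minimal PowerShell sketch, assuming the card enumerates as disk 1 (run Get-Disk first and double-check the number, since this is destructive):

# Identify the MicroSD card's disk number
Get-Disk
# Wipe all partitions and data from it (replace 1 with your card's number)
Clear-Disk -Number 1 -RemoveData -Confirm:$false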

Enabling SSH and Provisioning Wireless Connectivity

One of the things I love about the Pi is how easy it is to turn it into a headless system. To do this, we need to specify our wireless network credentials ahead of time, as well as enable SSH.

SSH is disabled by default in Raspbian Stretch Lite. Enabling it is as simple as creating an empty file named "ssh" (no extension) in the root of the boot partition.
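If you'd rather do this from PowerShell (assuming the boot partition mounted as drive E:), that's just:

New-Item E:\ssh -ItemType File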

Similarly, to provision network connection, we’ll create a file in the same partition named “wpa_supplicant.conf” containing the following:

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="Network Name"
    psk="Password"
    scan_ssid=1
}

You'll need to set EOL Conversion to Unix in Notepad++ so the file uses Unix (LF) line endings; with Windows-style (CRLF) line breaks, the file won't be parsed correctly.


Verify that both wpa_supplicant.conf and the ssh file are present on your boot partition, then eject the MicroSD card and install it in the Pi Zero W.

Moving to the Pi…

Power up the Pi Zero W, making sure to connect the MicroUSB cable to the port labeled "PWR IN" rather than "USB" – you should see a small green LED flickering.

Our Pi Zero W is now booting, enabling SSH access, and connecting to the wireless network we specified in the file above. We need to know the IP address that DHCP assigns to it. There are a few ways to do this, but it’s simplest to just login to your router configuration (assuming your DHCP is hosted there) and look for a device named Raspberry Pi.

I log in to my Ubiquiti EdgeRouter X and go to the DHCP area. Essentially, you're looking for your DHCP client list; this varies by router manufacturer. I used to have an Apple AirPort Extreme, and it didn't even allow you to view your own DHCP client list (laughable). In that case, you could use a tool like Nmap to scan your network and identify your Pi.
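Another quick option from a Windows PC on the same network: once the Pi has been pinged or scanned (so it's in your ARP cache), search the cache for the Raspberry Pi Foundation's MAC prefix, b8:27:eb:

arp -a | findstr /i "b8-27-eb"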

I went ahead and reserved the .82 IP for the PiHole, and you should do something similar. Best practice would be to move the Pi outside of your DHCP scope entirely and configure a static IP address on the client side.

Now, armed with our Pi's IP address, we'll open PuTTY and connect to it via SSH.

You may get a security warning here, click Yes to proceed. If you get a “login as:” screen, you’re golden.

The default login credentials are: user – pi / password – raspberry (both lowercase).

You can (and should) change the "pi" user's password by running "passwd".

Run "sudo raspi-config", choose Update, and let the utility update itself.

Let’s also get the latest updates for Raspbian by running:

sudo apt-get update
sudo apt-get dist-upgrade

Now, finally, we can install PiHole by running:

curl -sSL https://install.pi-hole.net | bash

After some time, you'll be greeted with the installer's configuration prompts.

Follow the prompts. I use OpenDNS for upstream provider. IPv4 is default and most likely what you’ll want to operate on. Since I reserved 172.26.16.82 in DHCP, I will tell it to keep its current address and configure itself with that as a static address.

Leave the rest of the values at their defaults (logging, web interface).

At the end of the install, make sure to note the default web interface password shown on the final screen.

You can close PuTTY and your SSH session now. Hop over to your browser and visit your PiHole's web interface. Mine is http://172.26.16.82/admin/

If you forgot to save the default password, you can change it by opening an SSH session and running “pihole -a -p”.

There are a lot of areas to explore in the web interface, but at this point — you have a functional PiHole DNS ad-blocker with a basic list of 125,000 or so blacklisted domains.

You can put this into production now by configuring your DHCP server to hand out the PiHole's address to clients for DNS resolution.

This will differ from router to router; here's how I do it on Ubiquiti hardware. Notice that I am using the PiHole as the first DNS preference, with OpenDNS's IP second for some redundancy should my spiffy new $10 network appliance fail.

On a client PC, you will likely have to wait for the lease to expire or DHCP to notify the clients of the DNS configuration change. I haven’t tested how long this takes. You can trigger a refresh with “ipconfig /renew” on a client PC, then “ipconfig /all” should show your PiHole’s IP in the first DNS entry afterward.
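To spot-check a client without waiting, you can query the PiHole directly from PowerShell. A blocked ad domain (doubleclick.net should be on the default list) will come back pointing at the PiHole itself rather than a real address (substitute your PiHole's IP for mine):

Resolve-DnsName doubleclick.net -Server 172.26.16.82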

That’s it. All of your clients will begin sending DNS requests through the PiHole now and the PiHole will actively block requests to known blacklisted addresses. There are various sites online to get additional lists to add to your PiHole, but the basic list does a decent enough job to get you started.

Bringing Clients Online: Software Update-Based Installation

When it comes to deploying the client to domain-joined devices, you’ve got a few choices. Software Updates, Client Push, or custom script. Deploying via software updates is definitely my preference, as any machine joined to the domain will get the SCCM client package pushed via the WSUS server the client device is pointed to via GPO.

Assuming we already have a healthy SCCM environment with a Software Update Point role somewhere, it's a straightforward process. For proper usage of this client-deployment strategy, you'll also want to verify that you have extended your AD schema, published your site to your AD forest (from the SCCM console), and created at least one boundary and a boundary group configured to be used for site assignment. Otherwise, your clients will not find a source in AD for the package, nor a site code/MP to be assigned to during install.

You need to publish the client package to WSUS through the console. It generally will take a few minutes before both version boxes populate.


Next, you'll need to configure a GPO to point targeted machines to WSUS. Be sure to include the port designation in the path, or you'll likely see errors in the Windows Update checking process once the client processes the new GPO. Ask me how I know.

You can hop on a targeted system, run "gpupdate", and verify that this policy applies with gpresult. Opening Windows Update and clicking "Check For Updates" should show that updates are "being managed by your administrator", and if all goes well, you should have one update available: the SCCM client package.
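If you'd rather verify from PowerShell, the GPO lands in the WindowsUpdate policy registry key, and the WUServer value should show your WSUS path, port included. A quick check on the target system:

[code language="powershell"]Get-ItemProperty "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" | Select-Object WUServer, WUStatusServer[/code]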


Once these steps are completed, you’ll have live and active clients to manage, and they’ll receive their Windows Update items through Software Center alongside your other SCCM deployments.


Some guidance on Software Update Points

I’ve heard some confusion, especially from people who are just starting to implement Configuration Manager in their environment, over the SUP role and how it looks in practice.

Obviously, you're under no obligation to use the WSUS integration or Software Updates functionality in SCCM. You can continue to use your standalone WSUS, but from a user's perspective, I'd much rather find my Windows Updates in the same place, deployed with the same constraints, as the other applications and packages being released for my machine.

When you add the WSUS role to your target server, you'll want to complete only the initial configuration window, where you're asked where to store the updates content. Don't proceed with the next part, which has you choose classifications and such; all of that is to be done within the console. The last time I did a rebuild, with v1607, I found that I had to perform a manual synchronization with Microsoft Update once after adding the role.
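If you prefer to script the role install, here's a minimal sketch; the content directory path is just an example, and the wsusutil postinstall step answers the same content-location question as the initial configuration window:

[code language="powershell"]# Add the WSUS role (WID-backed by default) with management tools
Install-WindowsFeature UpdateServices -IncludeManagementTools
# Point WSUS at its content directory (adjust the path for your server)
& "C:\Program Files\Update Services\Tools\wsusutil.exe" postinstall CONTENT_DIR=D:\WSUS[/code]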

Once that's done, you can add the Software Update Point role to your site server in the console. In my last corporate environment, this process was repeated for the CAS and three primary sites. The primary sites were configured to synchronize with the CAS, so essentially, the CAS communicated with Microsoft Update and notified downstream/child servers when it retrieved new items. The idea here is that your WSUS database is complete, and then you can narrow down product selection and update classification from the SCCM console. This is done during the addition of the Software Update Point role.

It’s a good idea to enable the WSUS cleanup tasks (I have had to deal with the results of not doing this), as well as enable alerts on synchronization failures so that you can be sure that the SUP is successful in what should be a mostly-automated process when you’re done, with the help of automatic deployment rules.

You should get an understanding of the entire process, from the CAS's Microsoft Update sync down to the client experience, before you implement this in production. You'll want to lay out your intended sync schedules, Software Update Groups, ADRs, target collections, and available/deadline preferences, and probably create staggered deployments to allow a "beta" group to receive updates prior to full production.

Is your SCCM installation taking ages in a Hyper-V guest?

Rebuilding the SCCMChris lab as time permits, I ran into an issue during installation of tech preview v1703 — the installer would hang during the database setup for many, many hours. It didn’t seem to completely stall, but after a day, installation was still chugging along. Thankfully, there’s a simple solution! For your guest machine, disable “Dynamic Memory” in Hyper-V manager, uninstall the site to reverse your failed installation, then kick it off again.

“The SQL server’s Name in sys.servers does not match with the SQL server name specified during setup”

I think my non-DBA background got me on this one today. I renamed my Primary box this morning after doing my SQL 2016 installation last night. I tidied up the issues in the pre-req check for the SCCM installation, kicked it off, and came back to a failed setup.

Within the ConfigMgrSetup log, I found:

[code]ERROR: SQL server's Name '[WIN-1NOPPABSENJ]' in sys.servers does not match with the SQL server name '[CM-PRIMARY]' specified during setup. Please rename the SQL server name using sp_dropserver and sp_addserver and rerun setup.  $$<Configuration Manager Setup><04-06-2017 15:37:01.692+420><thread=1932 (0x78C)>[/code]

No doubt, this was due to my rename. It’s unfortunate that this isn’t checked a little sooner, as you’re left to do a site uninstall before you can rerun the installation properly. This is a great error, because it’s clear and even provides the solution.

Sort of. SP_DROPSERVER 'MININTWHATEVERTHENAMEWAS' worked just fine, but apparently SP_ADDSERVER is no longer supported in SQL 2016 (maybe even earlier?); you're instructed to use "linked servers" instead. In SSMS, I expanded "Server Objects", right-clicked "Linked Servers", and clicked "New Linked Server". I entered CM-PRIMARY as the server name and chose SQL Server as the server type… and was greeted with a message stating that you can't create a local linked server. Switching back, I ran:
[code]"SP_ADDSERVER "CM-PRIMARY", local;[/code]
…and it executed without issue. I restarted the SQL service for good measure.

I confirmed the change worked by running the following, which returned my new system name, CM-PRIMARY:

[code]SELECT @@SERVERNAME[/code]

I was then able to uninstall the site server and rerun the install again, this time successfully.

Implementing Microsoft’s Local Administrator Password Solution

Many environments I've worked in fall into the same habit: they set the same local administrator password on all client systems across the domain and rarely, if ever, reset it. When you consider the number of ex-employees that know that password, the fact that essentially all non-server systems share it, and the potential for Pass-The-Hash attacks, you quickly see why Microsoft created the Local Administrator Password Solution. It's really easy to implement; easy enough that the documentation alone will probably get you there. Regardless, here's my guide for implementation. As usual, your mileage may vary.

On your system, you'll need to install the LAPS package with the management tools component to get the appropriate PowerShell cmdlets and GPO template.
Download LAPS here: https://www.microsoft.com/en-us/download/details.aspx?id=46899

Choose to install management tools (and GPO extension if you intend to apply LAPS to the system you’re working from)

We need to accomplish five things to successfully deploy LAPS. Adjust paths as necessary; mine are used as examples. I would suggest going through all of the motions with a test OU and a couple of test systems before deploying to a broad range of systems.

1. Extend the AD schema. This is a forest level change and cannot be reversed.

[code language="powershell"]Import-Module AdmPwd.PS
Update-AdmPwdADSchema[/code]

2. Allow computers in target OU(s) to update their password fields.

[code language="powershell"]Set-AdmPwdComputerSelfPermission -OrgUnit "OU=Computers,DC=sccmchris,DC=com"[/code]

3. Allow specific users to retrieve the content of password fields for computers in target OU(s). Here, let’s assume we have a group called “Desktop Support Staff” and we’d like members of that group to be able to retrieve local admin passwords for any system within the Computers OU.

[code language="powershell"]Set-AdmPwdReadPasswordPermission -OrgUnit "OU=Computers,DC=sccmchris,DC=com" -AllowedPrincipals "Desktop Support Staff"[/code]

4. Configure the GPO and link it to the appropriate OU. Below is my configuration. Note: until you enable the setting "Enable local admin password management", nothing will be changed with regard to the local admin password, regardless of extension install or GPO application. If you leave "Name of admin account to manage" not configured, it will manage the default Administrator account. This is nice because you can roll out the client MSI in advance of actually enabling LAPS.

5. Deploy the LAPS CSE (client side extension) on target systems. This is the same MSI that you used to install the management tools; if you run it with the silent switch, it will install only the GPO extension for the client (no management tools). This makes it incredibly easy to deploy in SCCM, or you can even script it on non-SCCM clients.
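For reference, a minimal silent install sketch (assuming the x64 MSI filename from the download page):

[code language="powershell"]# Silent install lays down only the client side extension
Start-Process msiexec.exe -ArgumentList '/i LAPS.x64.msi /quiet' -Wait[/code]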

Management will begin only once a client has the GPO extension installed, the GPO applied, and the "Enable local admin password management" setting enabled.

That’s it. You’ve deployed LAPS. Of course, you’ll want to do some auditing to ensure systems are successfully submitting their passwords. Options for reading back local passwords: 

1. The MSI’s management tools component includes a LAPS UI for retrieving local admin passwords and forcing resets.

2. I use a LAPS Password plugin for SCCM. Find it here: https://gallery.technet.microsoft.com/LAPS-Extension-for-SCCM-e8bd35b1

3. PowerShell option:

[code language="powershell"]Get-AdmPwdPassword -ComputerName W10L1234[/code]

4. You can retrieve the passwords for *all* computers in an OU (assuming you were granted Read). This is especially useful for your initial test deployment and verifying passwords are being submitted (accurately):

[code language="powershell"]Get-ADComputer -Filter * -SearchBase "OU=Computers,DC=sccmchris,DC=com" | Get-AdmPwdPassword -ComputerName {$_.Name}[/code]

I had very few issues with this deployment. If I could give you one piece of advice, it'd be to use option #4 to generate a list of systems that have submitted a password and compare it to a list of computers that have supposedly installed the client side extension; troubleshoot the delta. The few systems I had trouble with were generally experiencing group policy application issues. I had two systems (out of 1,000) that required manually reinstalling the CSE.
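A minimal sketch of that audit: pull every computer in the OU and keep only the ones with no password submitted (adjust the OU to match yours):

[code language="powershell"]Get-ADComputer -Filter * -SearchBase "OU=Computers,DC=sccmchris,DC=com" |
    Get-AdmPwdPassword -ComputerName {$_.Name} |
    Where-Object { -not $_.Password } |
    Select-Object -ExpandProperty ComputerName[/code]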

Hardware Inventory Implosion After v1610 Upgrade!

Note: This post is adapted from my working notes, so I apologize for being a little all over the place. I didn’t find this issue described online, so I thought it was important to get something posted to hopefully save someone else the trouble.

Naturally, my first routine servicing upgrade caused an implosion of hardware inventory across the hierarchy. My first indication of an issue was the SMS_MP_CONTROL_MANAGER being in warning status in console for all MPs. Logs full of this:
MP needs to reload the hardware inventory class mapping table when processing Hardware Inventory.

I confirmed that virtually all clients had last submitted hardware inventory the night of the v1610 upgrade. My clients are set to inventory nightly, so clearly something broke.

I went to a client and initiated a full hardware inventory in Client Center, and confirmed that InventoryAgent.log showed the Hinv being collected and submitted to the MP successfully.

So clients are submitting inventory to the MP, but it's not being processed properly. Let's look at a Management Point.

Checking out (installpath)\SMS_CCM\Logs\MP_Hinv.log, it’s loaded up with these:
Hinv: MP reloaded and the cache is still obsolete.

OK… there's also a date error mixed in. That one has some discussion around the internet (thanks, Google), but I don't see anyone saying it forces their hardware inventory to cease…

The "cache is still obsolete" error is probably related to our issue. Unlike a lot of error messages, I can't find anything specific about this one online.

It says it is making a retry file for this Hinv submission. Let's see how bad the retry files are… looking at (installdirectory)\inboxes\auth\dataldr.box\retry\

Not good. 5200+ files. I quickly check my other MPs and find the same.

Going back to the original error about reloading the hardware inventory class mapping table. Our Hardware Inventory is extended to include LocalGroupMembers (Sherry K.) and additionally I’ve enabled the default Bitlocker class. My impression here is that the clients are submitting these extra classes, but the site servers aren’t expecting them now.

There's an easy way to test this: let's take the error at face value and trigger a Hinv policy change. I hopped into the default Client Settings policy, disabled the LocalGroupMembers class, waited a few minutes, and then re-enabled it.

Giving my nearest Primary a break here, I moved all of the RetryHinv files from \inboxes\auth\dataldr.box\retry to a temp folder called "old".
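If you're in the same boat, the move itself is trivial; a sketch, assuming a default install path (yours will differ):

[code language="powershell"]$retry = "C:\Program Files\Microsoft Configuration Manager\inboxes\auth\dataldr.box\retry"
# How bad is the backlog?
(Get-ChildItem $retry).Count
# Park the files in a holding folder so they can be fed back in small batches
New-Item C:\Temp\RetryHinv-old -ItemType Directory -Force
Move-Item "$retry\*" C:\Temp\RetryHinv-old\[/code]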

New diagnostic clue: after this change, the "obsolete cache" errors stop appearing in the MP logs, and no more retry files are being generated. I take 8 RetryHinv files and paste them back into the retry directory. After about 10 minutes, all of them disappear, and Dataldr.log shows them being processed.
I check the Process folder and they're gone; they've been dealt with. Fantastic. 5,000 to go. I cut 500 of these back into the retry directory. I suspect a number of these will be rejected because they are now too out of date, and this is confirmed by some of them being moved to the delta mismatch directory.
Look at that. I verified that these ran through the Process folder OK. I checked the BADMIF directories to make sure I didn't have 500 rejected MIFs; only a few were marked as delta mismatch. I'm guessing that's not too bad, considering these machines had been submitting and hung up completely since the 27th. I move the remaining retry files back into the retry directory…

Caught in the act: the 4,77x RetryHinv files are disappearing in front of me. They appear to be converted to .MIF files and placed back in the dataldr.box directory. This directory ballooned up and the logs are going nuts.

They processed in a couple of batches: "Finished processing 2048 MIFs SMS_INVENTORY_DATA_LOADER".

There are about 1,000 "deltamismatch" files in BADMIF. These are almost certainly from systems that submitted multiple delta reports while caught in the retry queue over the past week. Not surprising.

I checked all other inboxes to verify I don’t have backlogs anywhere else.

In summary, the "obsolete cache" error looks to have generated a retry for every client hardware inventory submission. The loop persisted because every inbound Hinv generated a retry, and every retry failed (and generated a replacement retry). This explains behavior I saw earlier: all of the retry files were continuously having their "date modified" updated to within a few minutes of each other (and no more than about 15 minutes from the current time). So, in short, the dataldr inbox was stuck in an endless loop trying to process Hinv submissions.

The issue was obviously caused during the upgrade. 50% of my clients are offline, so there's no way that fixing the clients is what got the server processing the retries (and inbound new submissions) without error. No, updating the client policy must have replaced a configuration file or setting somewhere that corrected the issue.

I can’t be more specific than that at this point, but I’ve got a grasp on the situation, it appears.

As expected, the date error is not a showstopper, just a warning. It also appears to be a common thing. I can visit it at a later time since it appears to have a simple fix. See description here: https://technet.microsoft.com/en-us/library/dn581927.aspx

36 hours later, almost all of my Active clients have Hardware Inventory Scan dates listed after the upgrade date.

Text copies of relevant messages for Google’s use:

MP needs to reload the hardware inventory class mapping table when processing Hardware Inventory.

Hinv: MP reloaded and the cache is still obsolete.

 

Windows 10 Enterprise Deployment Tips and Scripts

It's about time: I'm finally ready to roll Windows 10 in a production environment! For me, the process had a simple workflow (but a lot of effort for each step). I'm not going into great detail on the entire process here, but I figured I'd share my project task list as well as the scripts I used in de-bloating the Windows 10 v1607 image!

My end-goal is to deploy Windows 10 via task sequence with as little bloat as possible, non-enterprise apps removed, and do it as securely (GPT, UEFI, Secure Boot) and efficiently as possible. My benchmark for Windows 7 deployment is 26 minutes from PXE to Windows logon screen.

  1. Upgrade SCCM to v1610 for added BIOS to UEFI conversion task sequence steps
  2. Create Windows 10 driver packages for models I intend to support.
  3. Build and capture an unmodified reference image WIM in Hyper-V.
  4. Implement USMT to allow for profile migration from Windows 7 to Windows 10. Switching from MBR to GPT = disk wipe.
  5. Create task sequences for Windows 10 deployment (in-place as well as bare metal)
  6. Create scripts to include in task sequences to strip the untouched reference WIM of what I consider non-essential applications and features.
  7. Switch from DHCP options to IP Helpers for WDS / PXE
  8. Implement Network Unlock for Bitlocker and enable Bitlocker during task sequences for all systems.
  9. Create Group Policy set for Windows 10, use the DoD STIG and Microsoft Security Baseline.

I’m still in the process of hammering out the final GPO, but the image is clean and I’m deploying successfully to a pilot group already. I’ve been forced to push Network Unlock to a later time due to logistics, but I do enable Bitlocker on laptops during OSD. USMT is on my radar for the coming weeks, but for now I have a functional TS and the Help Desk is happy.


Here’s a screenshot of my task sequence. I captured it on the Partition Disk step as I stumbled a bit getting the partitioning done correctly, so hopefully it’s helpful to someone else.

It was important to me to only modify the reference image in task sequences to allow for complete customization and more importantly, transparency. I need others to be able to understand each step I’m taking, why, and how I’m doing it.

It’s a lot easier to modify task sequence steps than a reference image, and I believe this is the best way to go about it. I have other SCCM users who will administer task sequences and being able to pick and choose what you do or don’t change is crucial to my environment.

So, here are the scripts I am using in my Customizing the applied OS step. I found most of this through many hours of research and experimentation.

  • Apply Customized Start Menu – This step imports a start menu XML template that I captured from a Windows 10 machine. The end result is a start menu that is free of ads and remains completely customizable by the end user. There are three pinned items: Internet Explorer, Software Center, and File Explorer. It was important that I do this in the task sequence, as all other methods I read about would result in some amount of "lockdown" on the start menu. I want my users to all start with the same template, but be able to customize as they see fit. You can do this via GPO (with the caveat I just described). Following the one-line import is my Start.xml file.

    [code language="powershell"]Import-StartLayout -LayoutPath Start.xml -MountPath $env:SystemDrive\[/code]

    [code language="xml"]<LayoutModificationTemplate Version="1" xmlns="http://schemas.microsoft.com/Start/2014/LayoutModification">
    <LayoutOptions StartTileGroupCellWidth="6" />
    <DefaultLayoutOverride>
    <StartLayoutCollection>
    <defaultlayout:StartLayout GroupCellWidth="6" xmlns:defaultlayout="http://schemas.microsoft.com/Start/2014/FullDefaultLayout">
    <start:Group Name="Default Applications" xmlns:start="http://schemas.microsoft.com/Start/2014/StartLayout">
    <start:DesktopApplicationTile Size="2x2" Column="0" Row="0" DesktopApplicationLinkPath="%ALLUSERSPROFILE%\Microsoft\Windows\Start Menu\Programs\Internet Explorer.lnk" />
    <start:DesktopApplicationTile Size="2x2" Column="2" Row="0" DesktopApplicationLinkPath="%ALLUSERSPROFILE%\Microsoft\Windows\Start Menu\Programs\Software Center.lnk" />
    <start:DesktopApplicationTile Size="2x2" Column="4" Row="0" DesktopApplicationLinkPath="%APPDATA%\Microsoft\Windows\Start Menu\Programs\System Tools\File Explorer.lnk" />
    </start:Group>
    </defaultlayout:StartLayout>
    </StartLayoutCollection>
    </DefaultLayoutOverride>
    </LayoutModificationTemplate>[/code]

  • Customized Start Menu, Part 2: Shortcuts – The other consideration here is that the tiles will not work from their default start menu entries. Without creating the shortcuts in the root Programs folder, the tiles will not appear! I create these in PowerShell.

    [code language="powershell"]#Create an all users Iexplore shortcut in root of Start menu (programs directory).
    #Without this shortcut the pin will not display on the customized start menu (laid down during task sequence)
    $WshShell = New-Object -ComObject WScript.Shell
    $Shortcut = $WshShell.CreateShortcut("C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Internet Explorer.lnk")
    $Shortcut.TargetPath = "C:\Program Files (x86)\Internet Explorer\iexplore.exe"
    $Shortcut.Save()
    #Create a Software Center shortcut in root of start menu (programs directory).
    #Same reason as above. We have to specify the icon to display, else it will be blank
    #as iexplore points to an exe and software center does not (directly)
    $WshShell = New-Object -ComObject WScript.Shell
    $Shortcut = $WshShell.CreateShortcut("C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Software Center.lnk")
    $Shortcut.TargetPath = "softwarecenter:"
    $Shortcut.IconLocation = "%SystemRoot%\CCM\scclient.exe,0"
    $Shortcut.Save()[/code]

  • Disable Edge Browser – Looking to the future, I opted not to enable this step at the sites I administer, but I left it in place for use at other locations utilizing my task sequences. This is a really simple "run command line" step.
  • Disable Cloud Content / Consumer Experience / OneDriveSetup / Contact Support – Cloud Content and Consumer Experience are settings you can publish through GPO, but at least in testing, I found myself getting logged in before they were in place. I decided to go ahead and set them during OSD, just to be safe. This will prevent "suggested apps" and such from showing up on the start menu; Consumer Experience is similar, and I believe you'll get things like "Candy Crush" without that setting. Contact Support obviously has no place in an enterprise deployment. The Fix First Logon bit references an article that explains what it's for; I ran into an error and had two choices to fix it: insert a quick command line correction or insert another reboot during OSD. Obviously, I went with the former.
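    The Cloud Content / Consumer Experience piece boils down to two policy registry values (the same ones the GPO settings control); here's a sketch of that part, not necessarily verbatim what my step runs:

    [code language="powershell"]# Same values the CloudContent GPO settings would set
    New-Item "HKLM:\SOFTWARE\Policies\Microsoft\Windows\CloudContent" -Force | Out-Null
    Set-ItemProperty "HKLM:\SOFTWARE\Policies\Microsoft\Windows\CloudContent" -Name DisableWindowsConsumerFeatures -Value 1 -Type DWord
    Set-ItemProperty "HKLM:\SOFTWARE\Policies\Microsoft\Windows\CloudContent" -Name DisableSoftLanding -Value 1 -Type DWord[/code]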
  • Remove provisioned applications – This was a huge one for me. Most of the apps included in Windows 10 are completely unnecessary in my environment. With my ultimate goal in mind, I had to find a way to prevent these from appearing in my users' otherwise uncluttered start menus. I found this script online (Google for it if you'd like) in many variations, but ultimately I chose one that removes everything that I don't specify wanting to keep. I modified the script to output a log of each entry it uninstalls to C:\Windows\Logs\Software, which just happens to be the default log directory for the PowerShell Application Deployment Toolkit, which I use heavily on all systems. Also important to me: I am not entirely confident that these applications won't be re-provisioned when I deploy a servicing update to my clients. I'll find out in testing, but this script can be re-run at any time to re-deprovision the apps if necessary. That's huge for me. I would recommend you start with a machine you've laid your reference image onto and pull a list of the currently provisioned apps with "Get-AppxProvisionedPackage -Online". Note what you *do* want in your production build before proceeding.

    [code language="powershell"]
    # Get a list of all apps
    $AppArrayList = Get-AppxPackage -PackageTypeFilter Bundle | Select-Object -Property Name, PackageFullName | Sort-Object -Property Name

    # Start a log file for apps removed successfully from the OS
    $Location = "C:\Windows\Logs\Software"
    If ((Test-Path $Location) -eq $False) {
        New-Item -Path $Location -ItemType Directory
    }
    Get-Date | Out-File -Append "$Location\OSDRemovedApps.txt"

    # Loop through the list of apps
    foreach ($App in $AppArrayList) {
        # Exclude essential Windows apps
        if ($App.Name -in "Microsoft.WindowsCalculator","Microsoft.WindowsStore","Microsoft.Appconnector","Microsoft.WindowsSoundRecorder","Microsoft.WindowsAlarms","Microsoft.MicrosoftStickyNotes") {
            Write-Output -InputObject "Skipping essential Windows app: $($App.Name)"
        }
        # Remove AppxPackage and AppxProvisioningPackage
        else {
            # Gather package names
            $AppPackageFullName = Get-AppxPackage -Name $App.Name | Select-Object -ExpandProperty PackageFullName
            $AppProvisioningPackageName = Get-AppxProvisionedPackage -Online | Where-Object { $_.DisplayName -like $App.Name } | Select-Object -ExpandProperty PackageName
            # Attempt to remove AppxPackage
            try {
                Write-Output -InputObject "Removing AppxPackage: $AppPackageFullName"
                # Write the name of the removed app to the logfile
                $AppProvisioningPackageName | Out-File -Append "$Location\OSDRemovedApps.txt"
                Remove-AppxPackage -Package $AppPackageFullName -ErrorAction Stop
            }
            catch [System.Exception] {
                Write-Warning -Message $_.Exception.Message
            }
            # Attempt to remove AppxProvisioningPackage
            try {
                Write-Output -InputObject "Removing AppxProvisioningPackage: $AppProvisioningPackageName"
                Remove-AppxProvisionedPackage -PackageName $AppProvisioningPackageName -Online -ErrorAction Stop
            }
            catch [System.Exception] {
                Write-Warning -Message $_.Exception.Message
            }
        }
    }[/code]

So, there it is. The customization of the reference image took a significant amount of time to nail down to my own specifications, but I think my environment will benefit from that investment for a long time. I hope these centralized scripts are useful to someone in their quest to deploy Windows 10 Enterprise in a way that minimizes confusion for end users, reduces Help Desk inquiries, and keeps client systems secure, efficient, and uniform.