Control 4 Upgrades - Oct 7, 2021
Control 4

We've been running Control 4 at our house since it was built, going on 10 years now.

Last year our Doorstation failed. Although the chime still sounded you no longer got sound and video from it. It's possible I screwed it up when I was power washing the stucco above it one day - perhaps water got inside the unit. In any case Control 4 no longer sold that original model so we'd have to upgrade to their newer Doorstation 2 model. To be honest I preferred the look of the new one, but was concerned as the physical size was different. Thankfully they also sell a conversion bracket/kit so that it'll fit in the opening of the older one.

One variation has a keypad, typically used to enter a passcode to unlock the door, but we didn't need that, so we ordered the version with just a blank inset instead.

We also had some other C4 related things to be done so we scheduled a technician to come and do all of it at the same time.

Our Apple TV was one of the older (pre-HD) versions and getting long in the tooth, so we decided to upgrade it. However, there were concerns that if we went to the latest 4k version there'd be issues, as it uses the newer HDMI 2.1 standard, while our C4 matrix switch, due to its age, only supports the older 1.x standard. As a result I ended up buying the Apple TV HD. It would support newer OS releases for longer, offered a faster processor and more storage, and, as it uses HDMI 1.4, it would still be compatible with our C4 system.

The latest Control 4 OS is version 3.x, but we're still on the older 2.x version. Unfortunately we can't upgrade as most of our wall mount touch screens are the older version and aren't compatible. We could upgrade them, but they're hellishly expensive. At some point my beloved Panny plasma will die and we'll be forced to go to a 4k TV, which will usher in a slew of required upgrades. At that point we'll bite the bullet and upgrade everything - matrix switch, Blu-ray player, Apple TV, touch screens, and the C4 OS.

So we couldn't upgrade to the latest, but we could at least upgrade to the last supported 2.x version - which in our case turned out to be 2.9.1.

Control 4 Doorstation | Web Interface | Programming Motion Detection | Camera Display

On the day of our appointment the technician showed up and set about replacing and programming our new Apple TV and upgrading the C4 OS. All that went well and was straightforward. The problems started when he replaced our doorstation with the new model. It briefly worked in that when someone pushed the doorbell the camera feed would automatically pop up on the touch screens, but shortly afterwards it stopped working and despite spending several hours he couldn't get it working again.

So we had to wait a few weeks for them to come back to look at it again. Several more hours of effort ensued including them putting in a call directly to Control 4 support. Finally they got it working. Turns out the issue was that soon after being powered on, the unit tried to auto download and install the latest firmware. But for some reason it kept failing and this forced the unit into an endless loop and resulted in unstable operation. After manually upgrading the firmware everything started working properly.

Afterwards I logged into the web interface and went about doing some simple programming. Currently it's set to send me an email with screenshots when it detects motion on the walkway leading to our house, and also to send an image when someone presses the doorbell. This is an ability we didn't have with the original Doorstation. The camera image is also much improved. The only negative is the new model looks so space age it's not obvious to some people what to press for the doorbell. Thankfully there's a plastic slot you can remove and put a label in - in our case we printed one that says 'Doorbell --->' which points to the adjacent button.

As of now everything is working fine and we're happy with the upgrades.

Home automation is terrific when it works, but a nightmare when something breaks or you're forced to do an upgrade which typically results in a cascade effect of required upgrades.

Chipping My Car - Sep 15, 2021

So I'm not exactly a gearhead by any stretch of the imagination.

But recently I decided I wanted to 'chip' my car - my beloved Pontiac G8. Chipping your car is when you buy a 3rd party accessory that modifies the factory default settings. Usually this is done to turn off various features or to enhance engine performance.

In my case I simply wanted to disable GM's Active Fuel Management (AFM) technology. As my engine is a V8 with 8 cylinders, AFM will monitor your driving and turn off half the cylinders when they're not needed. Typically this happens when cruising or when taking your foot off the gas. The intent was to improve fuel economy, but everything I've seen online in various forums indicates the savings are marginal at best. For me, however, I found that when AFM transitions from active back to inactive (turning all cylinders back on) there's a slight but annoying pause before it kicks back to full power. As my car's warranty has long since expired, I figured there was no harm in doing some mucking about.

From what I found online there seem to be two types of devices to chip your car. The first, which is about half the cost, is a dongle that simply plugs into your car's OBD-II port - a diagnostic port all cars have had for years. The other kind will reprogram your car's computer. As manufacturers routinely warn people that doing this will void your warranty, these devices allow you to save your existing configuration so that when you need to take your car back to the dealer for maintenance you can restore it and they'll be none the wiser.

Pontiac G8 GT | AFM Disabler | OBD-II Port (Diagnostics) | Active Fuel Management Disabled!

In my case, as I simply wanted to turn off AFM and not muck with any other settings, I opted for the cheaper dongle option. After searching the forums I came across a product that had good reviews, whipped out the credit card, and ordered online (there's a spot on the website where you punch in your car's make/model and it will verify whether it's compatible).

When it finally arrived I opened it up and was somewhat dismayed that, other than a company catalog and a sticker, there were no instructions. Thankfully they're not needed. You simply plug the dongle into the port. When you start the engine a blue LED will light up and blink and then turn off, and that's it. To verify it was working I went into the car's Engineering Mode, which displays various diagnostic information - including the state of the AFM system. At first I didn't think it was working, as I took 'All Active' to mean that AFM was active, and I figured that sitting idling in the driveway it wouldn't be using all cylinders. After doing some more research on the forums I found out that's not the case: 'All Active' means all cylinders are active.

So with the dongle unplugged I enabled Engineering Mode and drove around the neighborhood. I was able to see it change from 'All Active' to 'Deactivating' to 'Half Active' and back again as AFM kicked in and out depending on what I was doing. I then put the dongle in, and did the same test. This time however it constantly displayed 'All Active'. In addition, I could tell it was working based on how the car was responding to my driving.

So overall I'm extremely happy with my purchase. While I can't quite compete with the guy across the street and his McLaren Spider at least my car is now running at its full V8 potential at all times.

iMac Resuscitation - May 24, 2021

So my beloved iMac that I use in my office recently died a slow agonizing death.

It started when I sat down one morning to see an error message that it had recovered from a failure waking up from sleep mode. From there onwards it was a slow downward spiral. The OS just completely ground to a halt - it would take 15 minutes to even switch between windows. In trying to figure out what the problem was I tried various things: reset the System Management Controller (SMC), ran a disk check in Disk Utility, ran the hardware diagnostics on booting, ran Activity Monitor to rule out malware chewing up CPU/memory, and so on.

Granted my system is considered 'vintage' in that it's a late-2013 model, but up until that crash it ran perfectly fine. I also did not want to simply replace it with a new iMac, as the one I had was absolutely maxed out when released - a special build-to-order-only configuration. Its equivalent new replacement would have cost me over $7000. So I was really left with only one option - replace the Fusion drive that came with it with an SSD and upgrade the OS at the same time from High Sierra to Catalina.

Fortunately I knew that I could get a replacement drive and kit from OWC.

After about a week it showed up and I got to work. On their website they have an invaluable video on the procedure to do the swap. Besides the drive, the kit also came with the equipment needed to disassemble your Mac, including a pair of suction cups. Up until this point I had only ever upgraded the older plastic white-cased iMacs - a rather involved affair, though getting to the internals is a fairly easy process. On these newer aluminum Macs the display is held in place with adhesive, and much like replacing a car windshield, you need the suction cups to remove it. First, however, you need to run a special tool just behind the display to cut the adhesive holding it in place.

Once that is done you gently pry off the display and remove the two cables connecting it to the motherboard. This was tiresome, as one of the cables has to be gently pried off, which is awkward while also holding up the screen. Eventually I got them both off and removed the display. Then it was a matter of loosening two screws on the right speaker to move it out of the way in order to access the hard drive and remove it. You then remove the bracket and the side screws of the hard drive, put them on the replacement bracket which also mounts the SSD, affix the temperature sensor to the drive, put the entire bracket and drive back in the Mac, and put the speaker back in place.

From there you need to remove all the old adhesive tape and then replace it with the new adhesive tape that came with the kit. An included diagram shows you which strip goes where as they are also numbered. Then you put the bottom of the display in place, making sure it's flush to the chassis, and again awkwardly hold the screen while you reattach the two cables and then lower it in place and press all around the edges to seal it. Last step is to use the included microfiber cloth to remove any fingerprints and suction cup marks.

I then plugged everything back in and hit the power button and held my breath.

Upgrade Kit (minus suction cups) | Ready to remove display | Display removed | Old 3TB Fusion drive
New 4TB SSD drive | New drive installed | New adhesive tape affixed | Powered on and booting up
Installing OS X Catalina | Selecting the SSD as destination | OS X Catalina installed! | Transferring data from backup

Thankfully I got the old familiar boot chime and booted off the USB key which held an installable copy of OS X Catalina. I went into Disk Utility and formatted the new SSD. However, in a bit of a head-scratcher moment it gave me a bunch of formatting options - as the newer versions of OS X now come with APFS (Apple's new file system) I had planned on selecting it. But the drop-down also gave you the options of APFS - Encrypted, APFS - Case Sensitive, and APFS - Encrypted and Case Sensitive. So I did some quick Googling and the consensus was to skip the case-sensitive options, and as I didn't care about encrypting the drive I just picked plain 'APFS'. But then you also had to choose the partition scheme, with options including Apple Partition Map, GUID Partition Map, and Master Boot Record. I knew MBR was an ancient DOS-era option so wasn't going to pick it, but wasn't sure which of the other two to pick. The answer, after some more Googling, is to pick GUID Partition Map - Apple Partition Map is for ancient (pre-Intel) Macs.
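For reference, the same formatting choice can be made from Terminal with diskutil. This is a sketch only - the disk identifier disk2 and the volume name here are assumptions; check the output of diskutil list on your own machine first:

```shell
# List all disks to find the new SSD's identifier (assumed to be disk2 here)
diskutil list

# Erase the whole disk as plain APFS (not case-sensitive, not encrypted)
# with a GUID partition map ("GPT")
diskutil eraseDisk APFS "Macintosh HD" GPT disk2
```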

After the drive was formatted, I rebooted back into the OS X Catalina installer off the USB key and selected the SSD drive to install the OS onto. Once the installation was done I was able to create an account and login. But most importantly, everything was snappy and responsive again! The final step was to use the Migration Assistant to copy all the data back off of my external backup drive which held all my Time Machine backups. I want to emphasize how important using Time Machine and an external drive is. Without it, I would have lost everything.

Ultimately I still don't know what the culprit was, but the combination of a new drive and OS did the trick. As a bonus, the SSD drive is even faster and even larger than before. Plus with Catalina installed I should be able to get a few more years of use out of my pride and joy before Apple stops supporting the OS.

Edge Subscription - Apr 29, 2021

When we migrated from Exchange Server 2013 to 2016, as part of the redesign we implemented an Edge Transport server. It sits in the DMZ (Perimeter) Network and acts as a relay to receive and send emails from the mailbox servers sitting in the Production network. This is typically done to add an extra layer of security as you then aren't opening up ports on your firewall directly into your corporate back-end network.

Recently, as part of the contingency steps to protect against the serious Exchange security flaws that were discovered this past March, we upgraded all our Exchange servers to Cumulative Update 20 (CU20) - which contained the prerequisite security fixes. Afterwards I noticed in the Exchange Admin Center (EAC), when looking at the server versions, that all the Exchange servers showed the correct version (15.1 Build 2242.4) except for the Edge Transport server, which still showed the original version it was set up with.

In addition, when you double-clicked on the Edge server to see its properties in the console, it threw up the error below.

Exchange Error
Edge Transport Server Error

So off to Google I went trying to see what was going on. Eventually I came across a useful article that pointed out that after any upgrade to the Edge server you need to resubscribe it. As it sits in the DMZ this is the only way you're able to update its info in the console on your internal servers. This is a procedure that involves running the following command on the Edge server:

New-EdgeSubscription -FileName "c:\<filename>.xml"

Then copying that xml file to one of the mailbox servers (I'd suggest always using the same server for this) and running this command to resubscribe:

New-EdgeSubscription -FileData ([byte[]]$(Get-Content -Path "c:\<filename>.xml" -Encoding Byte -ReadCount 0)) -Site "<AD Site>"

Where obviously filename is the filename you want to use and AD Site is the site the servers are in. Once that is done you can run this command on the mailbox server to test that synchronization is still working properly:

Test-EdgeSynchronization
It should give a SyncStatus of Normal. All that said, I was reluctant to just go ahead and do it, as when we did the initial subscription we noticed the license for the Edge server reverted to Trial. After a bunch of messing about we eventually got the proper license to stick. So I arranged for Microsoft to be available in case that happened again. Thankfully it worked fine this time. I suspect the issue originally was that we forgot to apply the license before doing the subscription - so the takeaway is to make sure you apply the license to the Edge server first.

Afterwards I tested the synchronization which was good, did a refresh in the EAC, and this time it properly updated the version to match all the other servers.

Microsoft Oddities - Nov 3, 2020

A couple of Microsoft bugs have reared their ugly heads lately.

In the first instance, we recently updated Office 365 to a newer build, and the config file was created using an updated version of the Office customization tool. It tested fine using the traditional install methods (Start, Run, <path>\setup etc.). But we noticed when deploying on newly imaged systems using Microsoft Deployment Toolkit (MDT) that it would fail with the generic error below.

With a bunch of other projects going on I didn't have the time to troubleshoot properly so I put in a support call with Microsoft. That was a bit of a gong show. I've had this support issue with MDT in the past - when creating the support request you are supposed to pick the product affected. But there's no option for MDT. The closest product to it is Systems Center Configuration Manager (SCCM) which is what I chose. It took forever for someone at MS to contact me - I'm guessing the ongoing pandemic is disrupting operations in India - and when I finally was able to talk to someone they of course informed me I had the wrong product group. So I had to wait even longer for the right person to contact me. In total, it took a couple weeks to get to the right person.

Office 365 Error
Office 365 Install Error

By that time I was pretty frustrated so I decided to try and figure it out on my own. I added additional logging to the installation process for O365 and, looking at the logs, was able to find the problem. For whatever reason - and again, only when deploying via MDT - it was adding extra '\'s to the source path. When you run the customization tool you specify the source location for the install files; this is then stored in the xml file that gets created. I verified the xml file was correct. But when you look at the install log it shows that it's trying to install from:


So while Microsoft took that info and investigated, I did a bunch of searching of my own and came across an article with a workaround. Basically you strip the source path out of the xml file and use a .cmd wrapper to call setup in conjunction with the %~dp0 variable, which expands to the path the script is running from. I made those changes and O365 now installed fine. This is also a better way of doing it if you have remote sites, as it ensures that O365 is installed locally and not from across the WAN. Microsoft knows what the issue is; hopefully they fix it in a future update.
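A minimal sketch of such a wrapper, assuming the Office Deployment Tool's setup.exe and a config file (here called install.xml, my name for it) sit in the same folder as the script:

```bat
@echo off
REM %~dp0 expands to the drive and folder this script is running from,
REM so setup always pulls the install source from its own location -
REM the local copy when deploying via MDT, or a site's local share otherwise.
"%~dp0setup.exe" /configure "%~dp0install.xml"
```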

In the other bug instance, one of my coworkers wanted to increase the default DNS TTL for servers from 20 minutes to 1 day. To do so he created a static entry in DNS for each server and unchecked the 'Register this connection's addresses in DNS' setting in the TCP/IP properties for the network adapter. With that setting left checked, the next time the server rebooted, the DNS service restarted, or (I believe) after 24 hrs, it would set the TTL back to the 20-minute default. Everything seemed ok at first, until the servers did their monthly reboot to apply MS updates. Then we started getting calls from users that they couldn't access certain servers.

Turns out there's a bug in Windows 2008R2/2012/2012 R2 that when that setting is unchecked it will remove the static entry from DNS. Sure enough, the affected servers were all running that version of OS. There's an article that mentions the bug and that it was supposedly fixed, but that article is several years old. So either the bug was reintroduced, or the fix never worked properly.

So after more research and more testing we found a group policy setting that allows you to set the default TTL while leaving the TCP/IP registration setting enabled. That way we got our longer TTL and had the safety net of leaving the default TCP/IP setting in place - ensuring we wouldn't be bitten by this bug again. The group policy setting is:

Computer Configuration\Policies\Administrative Templates\Network\DNS Client\TTL set in the A and PTR records
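Under the hood this policy is backed by a DNS client registry value. A sketch of the equivalent manual change - the value name RegistrationTtl is my understanding of this policy's backing value, so verify it before relying on it:

```bat
:: RegistrationTtl holds the registration TTL in seconds (86400 = 1 day);
:: the default of 1200 seconds matches the 20-minute behavior described above.
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\DNSClient" /v RegistrationTtl /t REG_DWORD /d 86400 /f
```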

Two bugs discovered in the span of a couple weeks.

Mystery Log File - Sep 1, 2020

One of our older VMs running Windows 2008 - yes, sadly, some are still running on such older operating systems - recently ran out of space on its C drive.

After remoting in and poking around I found that there was a VMWare log file - vminst.log - that was taking up a shockingly large amount of disk space. After some research I found that this is the log file written to when installing/upgrading VMWare Tools. Ok, so not critical - I should be able to just delete it, right?

Nope. Whenever I'd try it wouldn't let me, complaining that the file was still in use.

So I did some more digging and came across some articles on VMWare's support site which offered several suggestions on how to get rid of this file. Unfortunately none of the 'official' suggestions worked. I tried rebooting, tried turning off the VMWare Tools service, etc. But I still couldn't delete it. Finally I came across a thread on Reddit which offered the solution.

On that thread they said the issue was a hung install of VMWare Tools. But the weird thing was, when I went and looked at the VM in vSphere it didn't show the Tools being mounted at all. So I'm not sure what happened. Obviously at some point in the past an upgrade attempt was made, didn't work, and got left in a hung state.

Locked vminst.log file | VMToolsUpgrader process running | VMWare Tools setup processes | vminst.log now able to be deleted

According to the thread the fix was to look for several upgrade processes and kill them - namely the VMWareToolsUpgrader process and a couple VMWare Tools setup processes (both 32-bit and 64-bit). Once I killed them I then had to stop the VMWare Tools service. It was only after doing all those steps that I was finally able to delete the log file and free up the disk space.
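For anyone hitting the same thing, the sequence boiled down to something like this from an elevated command prompt. The process and service names are as they appeared on my system, and the log file path is an assumption - use the path reported by your own 'file in use' error:

```bat
:: Kill the hung upgrader plus the 32-bit and 64-bit setup processes
taskkill /F /IM VMwareToolsUpgrader.exe
taskkill /F /IM setup.exe
taskkill /F /IM setup64.exe

:: Stop the VMware Tools service; the log file is then no longer locked
net stop VMTools
del C:\Windows\Temp\vminst.log
```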

I did note that like with this VM, the original poster also encountered this on a VM running Windows 2008. So perhaps these older versions are more susceptible to having a hung install. In any case, the C drive is now no longer out of space.

Wireless Router Upgrade - May 7, 2020

A long time ago, Apple's slogan was 'It Just Works'.

And for a long time the slogan was appropriate, as using a Mac was just an all-around less frustrating experience than using an equivalent Windows-based PC. Fast forward several years, however, and that slogan began to feel like something of a cruel joke, as that ease of use and ability to simplify the user experience was lost as they started to lose the battle for market share.

Yet occasionally, you're still able to have a shockingly good experience with Apple products.

Such was the case recently when I upgraded my 5th-generation Apple wireless router to their 6th-generation model. At the time I was happy enough with the one I had: it was simple to operate, performance was good, and I liked the AirPort Utility interface. Sadly, Apple announced one day that they were getting out of the wireless router business, so I decided to snap up what would be the last version they sold before they became unavailable or were sold at large markups.

After a few weeks it showed up and then sat in my closet for I don't even know, several years. Until now.

With life under quarantine I decided there was no time like the present to finally get around to upgrading.

That said, I wasn't looking forward to the process. Over the years I had built up a long list of custom firewall rules and settings and a long list of approved MAC addresses for my wireless network. I just assumed it would be a pain to transfer all those settings over and so I went about creating screenshots of each and every dialog and wrote down all the MAC addresses etc.

Router Migration
Migrating Router Settings

So I was pleasantly surprised when I read online that it was actually a pretty simple process. That process is this:

  • Power on the Airport Extreme

  • Using an iPad, iPhone or Mac, go into your wireless settings. You should get a popup recognizing the new router, giving you the option to replace your old router or extend your existing network

  • Select Replace

  • When it's done, power off the old device and connect your Ethernet cables to the new device

And well, that's basically it. I will say I ran into an issue while trying to do it from my iPad - halfway through the transfer process it locked up. But I then tried it from my iMac and this time had no problems and it completed quickly. After the transfer was done it recognized there was an available firmware update, so I applied that - and then I was done. Everything was up and running on the new device. I went into AirPort Utility and verified that everything had been migrated over properly.

So what's better about the 6th gen? Mainly, it supports the newer 802.11ac standard and offers roughly double the throughput. About the only downside is that it's physically much larger - same footprint, but quite a bit taller. Thankfully there are a number of 3rd parties selling mounting brackets, one of which I bought and used to mount it on my basement wall by the equipment rack.

In summary, yes, sometimes Apple still does 'just work'.

Phantom Domain Controller - Mar 30, 2020

A couple weeks ago I installed a new Windows 2016 based Domain Controller (DC) in our environment and demoted the old 2012 R2 based DC. Everything appeared to work fine, all the dialog screens indicated things were successful, going into Active Directory Users & Computers showed the old DC no longer in the Domain Controllers OU etc.

However about a week after that we had a user complaining they couldn't run the login script and when they checked their environment variables %LOGONSERVER% was showing the old DC trying to log them in.

Obviously something wasn't right.

Off to Google I went and looked at numerous articles. At least we weren't the only ones who'd experienced this issue. But all the suggested places to look - ADSIEDIT, DNS, etc. - didn't show the culprit. I did notice that in Active Directory Sites and Services the old system was showing under the local site folder, but unlike the other DC entries it was missing the NTDS subfolder. In any event I deleted it, but after waiting a while for replication to occur the user was still having the problem.

I also ran Repadmin and checked the replication topology and confirmed the old system wasn't showing.

Eventually I came across a post that in the troubleshooting steps said to open a Command Prompt and run:

nslookup <domain>

So I did that, and sure enough the IP address of the old DC was listed among the other DCs. Now I at least had something to go on. After switching gears and searching for related threads, I finally found the answer in an article which suggested looking in DNS - specifically at the (same as parent folder) entry.

Leftover DNS Entry
Leftover DNS Entry

“In DNS console, go to the forward lookup zone for your domain. What you want to delete is the A record for "same as parent folder" that points to the IP of the machine that used to be a DC”.

Once again, there was the IP address of the demoted domain controller. I deleted it out of DNS, went back to the Command Prompt, and re-ran the nslookup command - and this time the entry was gone! I called the user to try again and they were now able to log on, with the newly set-up DC doing the authentication.

From now on I'll be checking this when doing any future demotions.

Busted SCCM Update - Mar 4, 2020

We're currently running Build 1906 of SCCM, having recently upgraded to it.

So far the new version has been solid and working well.

However, today I was in the SCCM console and I looked in the Updates section and saw that there was a hotfix for it available for download. Going through the release notes showed nothing out of the ordinary, and just a handful of fixes so it should have been a quick and easy install.

In the past, anytime I've gone through this process there were never any issues. This time however was different as no matter how many times I’d click on the download button to get the hotfix, it’d pop up an error dialog. I did some Googling and apparently this is a ‘bug’ in version 1906.

SCCM Dialog
Not Downloading Update

Someone else had to put in a support call to Microsoft to get the solution. Which is this:

In the Updates and Servicing section, right click on the columns and select ‘Package Guid’. Resize the columns so you can see the associated GUID with the 1906 hotfix package. It should be: 8004652A-E13F-4456-B915-FFCB17363190. Open up SQL Management studio, click on the SCCM site database, and Execute the following query:

SQL Query
SQL Query To Add Package

Then restart the SMS_EXEC service. Wait approximately 15 minutes, and do a Refresh in the SCCM console for Updates and Servicing and you should see the State change for the hotfix from ‘Available to download’ to ‘Ready to install’. Now you’re good to go. You can also verify the download status by opening the dmpdownloader.log:

SCCM Log File Trace

Happy updating!

Portrait Mode - Nov 16, 2019

Recently I broke out my Linux Mint PC and upgraded it to the latest version - 19.2.

Normally I just muck around with my Linux box for a few days and then set it aside. But now my intention is to one day use it as a validation node for the Ethereum network - assuming there'll be a Linux client, and assuming they ever make the switch from Proof of Work (PoW) to Proof of Stake (PoS). As such, I figured I'd end up using it on a daily basis and should get more familiar with Mint. So I set it up in what will be its permanent place and hooked up the mouse, keyboard, and monitor. My monitor is an ancient HP model that you can rotate so that it operates in portrait mode. I've never operated it that way before, but now I'm more concerned about conserving desk space, so it'd be ideal if I could get it to work in that mode.

I remember way back in the day when Radius came out with their Pivot monitor for the Macintosh, and being blown away that the monitor could auto-sense its orientation and change the resolution appropriately. Fast forward to today, and unfortunately the default video driver I'm using in Mint doesn't do the same.

Off to Google I went and after some searching came across the 'xrandr' command you can run in a terminal shell and manually rotate the display. First you need to run the query option to get the internal name of your monitor connection - in my case DVI-1. Then it's simply a matter of telling it to rotate:
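The two steps look roughly like this (DVI-1 was the connection name on my system - substitute whatever the query reports; use 'right' instead of 'left' if your panel pivots the other way):

```shell
# Show connected outputs and their names (mine reported DVI-1)
xrandr --query

# Rotate that output 90 degrees into portrait orientation
xrandr --output DVI-1 --rotate left
```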

Rotate Command
Rotating The Desktop

Ok great, I can now manually rotate the display, but that would quickly get old having to do it every time I logged on. Fortunately, you can go to the list of Startup Applications and create a new entry with the rotate command:

Startup Applications
Startup Applications

After doing that I rebooted the system and once it came back up I logged in...and the graphic shell Cinnamon promptly crashed. After some head scratching I decided to add a 5s delay for the command and after rebooting and logging in there's no more crashing and the display automatically rotates to Portrait mode!

The only downside I've encountered with this approach is the command doesn't run until after you log on, meaning the login screen is still the wrong orientation. I imagine there's a way around that as well, but for now I'm happy with the result.

OpenManage Issues - Aug 31, 2019

With the introduction of some new servers I've started installing them using a newer version of Dell's OpenManage Server Administrator (OMSA).

Specifically, I've been installing version 9.3, the latest offering of the product. It acts as a standalone application to monitor the hardware health of the server, but can also be set up to relay information back to Dell's OpenManage Essentials software, which you can use as an enterprise monitoring tool - it allows you to keep an eye on the health of all your servers.

One of the nice features is you can regularly download an updated database of all the latest drivers and firmware versions and you can then check and run reports to see which systems are out of date. You can either go and download the updates individually and apply them directly on the server, or the program itself will connect to the server and install them for you.

However, I noticed that the newer systems weren't showing up in the inventory of systems needing updates. When I dug further I found they were all contained in a collection of systems that hadn't been inventoried yet - which was odd, as the software is set to routinely go out and inventory the various servers it has discovered and been set to manage. Looking at the logs I found this error:

Inventory collector is not installed on this server. In Band updates cannot be applied.

Weird, as I've never had to install a separate piece of software before in order for inventory to work properly...

OpenManage Command
OpenManage Inventory Collection

Off to Google I went and eventually came across this thread.

For some reason Dell, in its infinite wisdom, decided to turn off inventory collection by default in newer versions of Server Administrator. To get it working you need to break out the command prompt, navigate to the directory OMSA is installed in, and enter the command above. After doing that, OpenManage Essentials was able to properly inventory these newer servers and present a list of which drivers and firmware versions needed updating.

Note, the article mentions it being turned off by default on OMSA 9.1, but I never encountered this being an issue until we upgraded to 9.3. Your mileage may vary.

Happy inventorying!

Exchange 2016 Setup Failure - Jul 16, 2019

Went to install our first Exchange 2016 server and, as always seems to happen, ran into problems right away.

Our current environment is Exchange 2013, having previously upgraded from 2010. Everything I read in the migration guides indicated the upgrade was a fairly straightforward process - outside of all the roles now being consolidated on one server, there didn't seem to be a lot of major changes between 2013 and 2016. So I got the install media, got my install key, fired up setup, and waited in keen anticipation that we'd soon be running the new version. However, during the phase where it updates the Active Directory schema it crashed and burned.

As usual the error message was somewhat cryptic.

Thankfully punching it into Google returned an actually useful article for a change that explained what the issue was. Although the article is written for 2013, in our case it worked for 2016 as well.

Exchange Setup Error
Exchange Setup Error

I was somewhat sceptical, but I went ahead and fired up ADSIEDIT, went to the suggested object, and added the suggested value. Upon re-running Exchange Setup it carried on past the point it previously failed at, and after a few minutes the install was completed. The article mentions the issue is due to the Public Folder tree object having been manually removed previously. Which makes sense, as I recall having to do just that when we uninstalled the last Exchange 2010 server. No matter what I tried I could not finish the removal, as it wouldn't let me delete the Public Folder database on that server. Eventually I had no choice but to use ADSIEDIT to rip it out.

So now we have the first server installed and can continue on with our migration project.

iMac Upgrade - Jun 28, 2019

In the past I've detailed the upgrades I've made to my beloved iMac, circa 2006. It was the last Mac with a matte screen, had a fun plastic enclosure, ran the ever reliable Core 2 Duo chipset, and even came with a little remote that magnetically attached to the side, which you could use to play music, watch videos, etc.

To get the most out of it I had previously upgraded the memory to the maximum of 4GB (only 3GB recognized, sadly) and swapped out the hard drive for an SSD. Finally, I upgraded the Operating System to OS X Lion, which was the last supported OS for that model. I also had the very rare 7600GT video card with maximum RAM. So as far as I knew, it was as upgraded as it could get.

Then one day I was randomly searching the Internet and stumbled across a post saying that, although officially unsupported, it was possible to install OS X Mountain Lion (ML), the successor to Lion, on it. What?!?!?

So I did some more research, came across a mega thread on Macrumors and it seemed like this was actually legitimate.

Then I did some more digging and found out the general consensus was that ML was worth it. It was essentially a release that focused on improving things rather than churning out new features. It fixed most of the major issues with the previous release and gave an all around performance boost.

Ok, so I was convinced, now where to start?

First off, the downside to going through articles from over a decade ago is that in many cases the links are broken, and the information is out of date or just plain wrong. As such, it took me a while to find the correct info and the correct software to accomplish this. Here then, are the steps I took:

  • Don't use MacPostFactor. Despite being a newer release, it does not work. Instead I downloaded what I believe is the last release of the software that came before it, called MLPostFactor. It can be downloaded here (go to bottom of page).

  • You need the retail version of Mountain Lion. I got an installer off someone from eBay but it didn't work. The installer program should be roughly 4.47GB in size. Unfortunately you can't download it any more from the Apple App Store. In the end I discovered I could still order it from Apple by going here. After ordering, Apple sent me an email within a couple of days with the necessary codes and download links.

  • On your hard drive create two new partitions (shrink existing if you need to) 20GB each in size. Label one 'Install', the other 'ML'. Make sure you pick 'Mac OS Extended (Journaled)' as the format.

  • Drag the 'Install OS X Mountain Lion' icon into your Applications folder

  • Run MLPostFactor. Pick the Install partition you created as the destination volume. Pick 'OS X 10.8.4' as the version - yes, I know you have OS X 10.8.5, don't worry about that for now. Click 'Install MLPostFactor'.

  • After installation is finished, reboot your iMac while holding down the Option key. This will bring up a boot menu. Select the Install partition - note for me it was renamed 'EFI Boot'. Now install Mountain Lion on the ML partition you created.

  • After rebooting you'll get the Apple logo but it will be crossed out. Don't despair like I did. Reboot again, holding down the option key, pick Install (EFI Boot). On the top menu, go Utilities, and MLPostFactor. Do the same thing as before, but now pick the ML partition instead - it will now patch the ML install so your system will boot. Reboot, hold down the option key, pick the ML partition and you should now be in Mountain Lion!

Ok, but why does it still say 10.8.4 when you go into About This Mac? To fix that you need to edit a file. Open your hard drive and go into System, Library, Core Services. Create a backup copy of SystemVersion.plist and then edit it. Replace the contents with this:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>ProductCopyright</key>
    <string>1983-2013 Apple Inc.</string>
    <key>ProductName</key>
    <string>Mac OS X</string>
    <key>ProductUserVisibleVersion</key>
    <string>10.8.5</string>
    <key>ProductVersion</key>
    <string>10.8.5</string>
</dict>
</plist>

Now when you go into About This Mac it will show as 10.8.5. Besides properly reflecting the correct version, you are now able to run Software Update and get all the proper updates and patches for that version. One warning though: while I was able to install most of the available patches, do not install 'Security Update 2015-006', as it will break things and you won't be able to boot again.

OS X Mountain Lion
Unsupported iMac Running Mountain Lion

After the patches were applied I updated iTunes to the latest supported version, then went to the App Store and re-downloaded software I had previously installed (e.g. Numbers) - it now offered the upgraded versions that supported the newer OS. The last thing to do was to pick a browser.

With Lion I was getting warnings more and more about my browser being outdated. I had long since stopped using Safari and was instead using the Extended Support Release (ESR) of Firefox, which was the last supported version for Lion. I had read it was possible, with some tweaking, to run newer versions of Firefox on ML, but eventually I came across a bunch of posts recommending Waterfox, a spin-off of Firefox. It uses the engine from before the major Quantum upgrade, but it still comes with regular security patches - for all intents and purposes it's a modern supported browser running on your ancient iMac.

So far the only issues I've come across are that in System Report, Bluetooth returns an error when you click on it - although my Bluetooth mouse works fine - and it looks like Apple detects unsupported configs when you run Messages and won't let you log in. Which is disappointing, as I was looking forward to using it. But overall I'm extremely pleased with this upgrade - it definitely feels much snappier, and I'm happy that I'm now on a modern browser. It's amazing to think I can still happily use this machine, now well over a decade old!

New Website! - Apr 22, 2019


As you can see the new website is up and running. It's not so much new as new and improved I guess. As mentioned previously, originally I had wanted the refresh ready in time for the 15th anniversary of this site. I had started working on a new design and was a few weeks into it when the hard drive crashed and I lost everything (yes I know, backups). So that was somewhat disheartening and then life got in the way combined with general laziness and it didn't happen.

When I finally got around to giving it a second try I quickly felt overwhelmed. It had been so long since I had really even considered what website authoring platforms were out there that I felt somewhat like a Luddite. This site was obviously long in the tooth, having been authored with FrontPage 2003. WordPress is the current darling of bloggers everywhere, but I don't like WordPress sites. I don't care how many different themes and templates they offer, to me they all look the same. I had worked hard on my site and wanted to retain its unique look. I also enjoy knowing the nitty gritty details of how everything works - for me, just dragging and dropping pictures and typing some content wouldn't be fulfilling. So in the end I went with Microsoft Expression Web, which while still dated, is about a decade newer than what I was using. It also has the benefit of being free.

But even with a new platform I wasn't sure how to tackle my biggest issue of how to make the site mobile friendly. In the end I found a local web design company and paid them to do the heavy lifting. I was confident they'd be able to supply the necessary coding that I would be able to integrate without having to do a complete rewrite. Turns out I was right.

So now things look good on any platform - computer, tablet, or phone. If you're using a phone it defaults to a new mobile menu, and if you're on a computer and resize the window small enough it will auto-switch as well. In addition to the mobile focus I also updated to version 2 of Slimbox, which is the code used when looking at sets of pictures. I also put in a search button which uses Google search. Currently it's ad-supported, but if I wanted to I could pay them a yearly fee to strip that away. The only downside is that most results will be from prior to the update and so will look out of place, but over time, as more content is published post-upgrade, it will all look consistent.

I struggled with how far back to go with the new format - redoing the entire site was not going to happen due to the amount of effort involved. In the end I decided just to go back to last year. I might eventually go back five years, but we'll see.

I also put in a quote generator at the bottom. Instead of just randomly showing a different quote each time a page is visited, it will only show a new quote once per day (per browser). This was a blatant rip-off of the one they have on Slashdot, which I've always gotten a kick out of.

Finally, I have obtained a 3rd party certificate and plan to make this site secure at some point in the near future. Personally I think the whole insistence on encrypted sites is a money-making scam by the search engine companies. Unless you're doing banking or inputting other personal information into entry fields, websites do NOT need to be encrypted. But the average user at home just sees the warning at the top of their browser and thinks something's wrong. So at some point I'll give in and submit to the inevitable.

So there you have it - it's a new era for the site!

Broken WDS - Apr 9, 2019

Got a phone call recently from our Help Desk asking if imaging was down. Our imaging setup consists of Microsoft Deployment Toolkit (MDT) tied into Windows Deployment Services (WDS). Once images are captured and stored on our deployment servers, technicians PXE boot the client system, which brings up the Litetouch menu. They pick a few selections, hit Next, and off it goes installing the image.

However this time it wasn't working at one of our locations. It would start downloading the Litetouch boot image...hang for a bit during that process...and then puke up the error below.

So I tried a bunch of things to resolve it. Had my counterpart try various different models to rule out a driver issue. Had them try different network ports to rule out a bad cable, port, switch, etc. Restarted the WDS service on the deployment server, and when that didn't work, did the old standby trick of rebooting the entire system. Nothing worked.

I did a bunch of Googling but wasn't getting anywhere.

Litetouch Error
Imaging Failure

Finally I stumbled across this post and a light bulb went off - as the only recent changes done were applying the latest bunch of Windows Updates.

So as per the article I went into WDS, clicked on the TFTP tab, unchecked the 'Enable Variable Windows Extension' option, and rebooted the server. Sure enough, that fixed the problem. About the only obvious negative from doing this is that the Litetouch boot image loads a little slower now. As the March updates broke things, I'm curious whether the just-released April updates have patched the patch.
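For anyone who'd rather script it, Microsoft's published workaround for this issue toggles the same setting from an elevated command prompt - I only used the GUI myself, so treat this as an untested note:

```
wdsutil /Set-TransportServer /EnableTftpVariableWindowExtension:No
```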

Slow clap for Microsoft quality control!

Black Screen of Death - Mar 29, 2019

A colleague asked me why whenever he connected to any of our servers using Remote Desktop Connection (RDP) it would sit at a black screen for minutes before eventually continuing on with the login process.

I had noticed this phenomenon as well but hadn't yet gotten around to investigating it. It did seem to be happening more and more often, and when you connect to servers multiple times a day the wasted time adds up.

There didn't seem to be any pattern - it would happen on some servers but not all. It occurred on servers running 2012 R2 as well as older ones running 2008 R2, and on both physical and virtual systems. So off to Google I went (what did we do before the Internet?) and tried to find a solution. Turns out we weren't alone in encountering this annoying issue. It has even been coined 'The Black Screen of Death', a humorous riff on Windows' infamous Blue Screen of Death.

The recommended solution, shown below, was to go into RDP properties on your client and turn off 'Persistent bitmap caching'.

Remote Desktop Options
Remote Desktop Options

Sure enough, that seems to have done the trick. We can now reliably connect using RDP and are no longer left staring at a black screen. Doing some more digging, it appears to be an issue that occurs when your client has a different resolution than the target system you're trying to connect to. Some other suggestions involve simply running Task Manager, which seems to get things rolling, or restarting the RDP service and trying again. But as mentioned, simply turning off bitmap caching works for us.
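If you use saved .rdp files, the same option can also be set there - this line (from the standard .rdp file format) is the equivalent of unchecking the box in the GUI:

```
bitmapcachepersistenable:i:0
```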

Why Won't You Install? - Feb 26, 2019

Recently I pulled a server out of service which had been functioning as a VMWare ESXi host. The model is a Dell PowerEdge R720, and the plan is to repurpose it to become a new Exchange 2016 server.

The first step was to upgrade the firmware on the NICs and on the iDrac, and install the latest Bios - which was done without any issue. The next step was the install of Windows 2016. That is where all the fun began.

To date, any OS reinstall I had done was on older server models using the Dell deployment DVD. You'd boot off the DVD and, when it told you to do so, swap in the Microsoft OS disc, and off it'd go and install everything. With 12th generation and newer servers, however, I knew you were supposed to install via the Lifecycle Management interface. So I popped in there, picked Deploy OS, clicked on the edition drop down...and 2016 wasn't an option. Did some digging online, and apparently 12th gen Dell servers don't support installing 2016 with that interface. Ok, a bit of a pain, but I figured I'd simply install 2012 R2 instead and then upgrade to 2016 from there. So again, back into the controller interface, picked 2012 R2, had my OS disc in the DVD drive...but the option was greyed out. What was going on?

Did some more digging online and found that you can apparently only install Windows through that interface if it's the retail version, not a volume license disc, which is what we use. Some grumbling then ensued, and back to the web to do some more searching. At that point I turned up several posts from people saying just to directly boot from the OS media and install it that way. Ok, well, I happened to already have Win 2016 on a USB key that I'd previously used to test 2016 and knew was good. So I rebooted, picked the UEFI boot menu...and it didn't recognize the USB key. Did some more searching and found out that the file system it was formatted with - FAT32 - only supports files up to a maximum of 4GB in size. Unfortunately, the install.wim file was larger than that. For some reason, if I booted into Legacy mode (non-UEFI) it would see the drive and I could install 2016, but then my system partition would be formatted with the older MBR format and not the newer GPT format.

At this point I was really starting to get annoyed. I came across some posts which used 3rd party tools as a solution, and others which mentioned booting with Windows 10's media and using the MBR2GPT command to convert the partition - but first you had to do some resizing and expanding of the supporting partitions. Eventually I came across a post which, for me, was the simplest and easiest solution.

The first step was to use the DISM command to split the install.wim file into two smaller files - the /FileSize:4000 argument caps each chunk at 4000MB, safely under FAT32's 4GB per-file limit:

dism /Split-Image /ImageFile:sources/install.wim /SWMFile:sources/install.swm /FileSize:4000

Then I deleted the old install.wim file, copied all the 2016 files off the USB stick into a temp directory and reformated the USB key using the DISKPART command:

list disk

select disk 3

clean

convert gpt

create partition primary

format fs=fat32 quick


Obviously, 'disk 3' was the USB key in my case - check the output of 'list disk' carefully before selecting, as cleaning the wrong disk will wipe it. Then I copied everything back to the key, rebooted, double checked I was still booting into UEFI, and now it saw the USB key as a bootable option. I picked it and was able to proceed with installing Windows 2016. Much rejoicing then ensued.

Can't Resize Datastore - Jan 13, 2019

Our version of vCenter is currently 6.0.

In the past I've occasionally had to increase the size of our Datastores. I'd increase the size on the SAN, then simply go to the configuration tab on one of the hosts, select the properties for the Datastore, and click the Increase button. It would see the additional space available and expand into it; then, after a rescan, the larger size would be recognized.

But for some reason, after having upgraded vCenter to 6.0 and going through the exact same procedure I had done in the past, it wouldn't recognize the additional space. When you went to increase it there was no storage listed in the Extent Device dialog. Just to be sure I went online and looked up the procedure from VMWare's documentation and confirmed I was doing everything correctly - it just would not show the storage.

So what was going on? As usual, I did a bunch of Googling and came across a post on Reddit from someone complaining of the same issue. Somewhere in the thread someone mentioned using the fat client to connect directly to one of the hosts.

They also referenced a VMWare support article.

VMWare Resize
Missing Device

Note that the article says the inability to expand Datastores is a safety feature to prevent possible corruption. Ok great, but I still needed to expand the space, and the article doesn't mention how to go about doing that in light of these new safety filters. So as always, when in doubt, contact VMWare support before attempting this! In my case I went ahead and connected the vSphere client directly to one of the hosts, went into the Datastore properties, and now when I hit the Increase button it saw the added capacity. After expanding it I went back into vCenter, did a rescan on each of the hosts in the cluster, and they all showed the larger size.

I'm guessing that in the future the 'safe' way is to shut down all the VMs first and then attempt to expand the space. But considering we've never had to do that previously, this new 'feature' is somewhat of an annoyance.