Microsoft Oddities - Nov 3, 2020
Microsoft

A couple of Microsoft bugs have reared their ugly heads lately.

In the first instance, we recently updated Office 365 to a newer build, and the config file was created using an updated version of the Office customization tool. It tested fine using the traditional install methods (Start, Run, <path>\setup etc.). But we noticed when deploying on newly imaged systems using Microsoft Deployment Toolkit (MDT) that it would fail with the generic error below.

With a bunch of other projects going on I didn't have the time to troubleshoot properly so I put in a support call with Microsoft. That was a bit of a gong show. I've had this support issue with MDT in the past - when creating the support request you're supposed to pick the product affected, but there's no option for MDT. The closest product to it is System Center Configuration Manager (SCCM), which is what I chose. It took forever for someone at MS to contact me - I'm guessing the ongoing pandemic is disrupting operations in India - and when I finally was able to talk to someone they of course informed me I had the wrong product group. So I had to wait even longer for the right person to contact me. In total, it took a couple of weeks to get to the right person.

Office 365 Error
Office 365 Install Error

By that time I was pretty frustrated so I decided to try and figure it out on my own. I added additional logging to the installation process for O365 and, looking at the logs, was able to find the problem. For whatever reason, and again only when deploying via MDT, it was adding extra '\'s to the source path. When you run the customization tool you specify the source location for the install files, which is then stored in the xml file that gets created. I verified the xml file was correct. But the install log shows that it's trying to install from:

\\\\<server>\\folder\\subfolder

So while Microsoft took that info and investigated, after doing a bunch of searching I came across an article with a workaround. Basically, you strip the source path out of the xml file and use a .cmd wrapper to call setup in conjunction with the %~dp0 variable, which expands to the path the script is running from. I made those changes and O365 now installed fine. This is also a better way of doing it if you have remote sites, as it ensures that O365 is installed locally and not from across the WAN. Microsoft knows what the issue is; hopefully they fix it in a future update.
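
For reference, the wrapper itself is only a few lines. A rough sketch (the file names here are placeholders - adjust to match your deployment share):

```bat
@echo off
REM Hypothetical example - assumes setup.exe and the customization xml
REM sit in the same folder as this .cmd file on the deployment share.
REM %~dp0 expands to the drive and path this script is running from,
REM so no source path needs to be hard-coded in the xml.
"%~dp0setup.exe" /configure "%~dp0install.xml"
```

With the source path stripped from the xml, setup pulls the install files from whatever folder the wrapper was launched from.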

In the other bug instance, one of my coworkers wanted to increase the default TTL for servers from 20 minutes to 1 day. To do so he created a static entry in DNS for the server and unchecked the 'Register this connection's addresses in DNS' setting in the TCP/IP properties for the network adapter. With that setting checked, the next time the server rebooted, the DNS service restarted, or (I believe) after 24hrs, it would set the TTL back to the 20 minute default. Everything seemed ok at first, until the servers did their monthly reboot to apply MS updates. Then we started getting calls from users that they couldn't access certain servers.

Turns out there's a bug in Windows Server 2008 R2/2012/2012 R2 where, when that setting is unchecked, it will remove the static entry from DNS. Sure enough, the affected servers were all running those versions of the OS. There's an article that mentions the bug and says it was supposedly fixed, but that article is several years old. So either the bug was reintroduced, or the fix never worked properly.

So after more research and more testing we found a group policy setting that lets you set the default TTL while leaving the TCP/IP setting enabled. That way we got our longer TTL and had the safety net of leaving the default TCP/IP setting enabled - ensuring we wouldn't be bitten by this bug again. The group policy setting is:

Computer Configuration\Policies\Administrative Templates\Network\DNS Client\TTL set in the A and PTR records
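
For what it's worth, that policy appears to write a registry value on the client, with the value in seconds - which lines up with the 20 minute (1200 second) default. Verify this on your own systems before relying on it:

```
HKLM\SOFTWARE\Policies\Microsoft\Windows NT\DNSClient
    RegistrationTtl (REG_DWORD) = 86400    (1 day, in seconds)
```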

Two bugs discovered in the span of a couple weeks.

Mystery Log File - Sep 1, 2020
VMWare

One of our older VMs running Windows 2008 - yes, sadly, some are still running on such older operating systems - recently ran out of space on its C drive.

After remoting in and poking around I found a VMWare log file - vminst.log - that was taking up a shockingly large amount of disk space. After some research I found that this is the log file that gets written to when installing/upgrading VMWare Tools. Ok, so not critical; I should be able to just delete it, right?

Nope. Whenever I'd try it wouldn't let me, complaining that the file was still in use.

So I did some more digging and came across some articles on VMWare's support site which offered several suggestions on how to get rid of this file. Unfortunately none of the 'official' suggestions worked. I tried rebooting, tried turning off the VMWare Tools service etc. But I still couldn't delete it. Finally I came across a thread on Reddit which offered the solution.

That thread said the issue was a hung install of VMWare Tools. The weird thing was that when I went and looked at the VM in vSphere it didn't show the Tools being mounted at all. So I'm not sure what happened. Obviously at some point in the past an upgrade attempt was made, didn't work, and got left in a hung state.

Locked vminst.log file
VMToolsUpgrader process running
VMWare Tools setup processes
vminst.log now able to be deleted

According to the thread the fix was to look for several upgrade processes and kill them - namely the VMWareToolsUpgrader process and a couple VMWare Tools setup processes (both 32-bit and 64-bit). Once I killed them I then had to stop the VMWare Tools service. It was only after doing all those steps that I was finally able to delete the log file and free up the disk space.
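
From an elevated Command Prompt, the cleanup boiled down to something like the following. The process names are from the Reddit thread and my own notes, and the log file location is the typical one - confirm both in Task Manager and Explorer before killing anything:

```bat
REM Kill the hung upgrader and any leftover Tools setup processes
taskkill /F /IM VMwareToolsUpgrader.exe
taskkill /F /IM setup.exe
taskkill /F /IM setup64.exe

REM Stop the VMware Tools service, then the log can finally be deleted
net stop VMTools
del "C:\Windows\Temp\vminst.log"
```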

I did note that, like with this VM, the original poster also encountered this on a VM running Windows 2008. So perhaps these older versions are more susceptible to a hung install. In any case, the C drive is no longer out of space.

Wireless Router Upgrade - May 7, 2020
Apple

A long time ago, Apple's slogan was 'It Just Works'.

And for a long time the slogan was appropriate, as using a Mac was just an all-around less frustrating experience than using an equivalent Windows-based PC. Fast forward several years, however, and that slogan began to seem something of a cruel joke, as that ease of use and knack for simplifying the user experience faded while they lost the battle for market share.

Yet occasionally, you're still able to have a shockingly good experience with Apple products.

Such was the case recently when I upgraded my 5th generation Apple wireless router to the 6th generation model. At the time I was happy enough with the one I had. It was simple to operate, performance was good, and I liked the AirPort Utility interface. Sadly, Apple announced one day that they were getting out of the wireless router business, so I decided to snap up what would be the last version they were selling before it became unavailable or was sold at large markups.

After a few weeks it showed up and then sat in my closet for I don't even know, several years. Until now.

With life under quarantine I decided there was no time like the present to finally get around to upgrading.

That said, I wasn't looking forward to the process. Over the years I had built up a long list of custom firewall rules and settings, plus a long list of approved MAC addresses for my wireless network. I assumed it would be a pain to transfer all those settings over, so I went about creating screenshots of each and every dialog and wrote down all the MAC addresses etc.

Router Migration
Migrating Router Settings

So I was pleasantly surprised when I read online that it was actually a pretty simple process. That process is this:

  • Power on the Airport Extreme

  • Using an iPad, iPhone or Mac, go into your wireless settings. You should get a popup recognizing the new router and giving you the option to replace your old router or extend your existing network

  • Select Replace

  • When it's done, power off the old device and connect your Ethernet cables to the new device

And well, that's basically it. I will say I ran into an issue while trying to do it from my iPad - halfway through the transfer process it locked up. But I then tried it from my iMac and this time had no problems and it completed quickly. After the transfer was done it recognized there was a firmware update available, so I applied that - and then I was done. Everything was up and running on the new device. I went into AirPort Utility and verified that everything had been migrated over properly.

So what's better about the 6th gen? Mainly it supports the newer 802.11ac standard and offers roughly double the throughput. About the only downside is that it's physically much larger - same footprint, but quite a bit taller. Thankfully there are a number of 3rd parties selling mounting brackets, one of which I bought and used to mount it on my basement wall by the equipment rack.

In summary, yes, sometimes Apple still does 'just work'.

Phantom Domain Controller - Mar 30, 2020
Microsoft

A couple weeks ago I installed a new Windows 2016 based Domain Controller (DC) in our environment and demoted the old 2012 R2 based DC. Everything appeared to work fine: all the dialog screens indicated success, and Active Directory Users & Computers showed the old DC no longer in the Domain Controllers OU etc.

However, about a week after that a user complained they couldn't run the login script, and when they checked their environment variables, %LOGONSERVER% showed the old DC trying to log them in.

Obviously something wasn't right.

Off to Google I went and looked at numerous articles. At least we weren't the only ones who'd experienced this issue. But all the suggested places to look - ADSIEDIT, DNS etc. - didn't show the culprit. I did notice that in Active Directory Sites and Services the old system was showing under the local site folder but, unlike the other DC entries, it was missing the NTDS subfolder. In any event I deleted it, but after waiting awhile for replication to occur the user was still having the problem.

I also ran Repadmin and checked the replication topology and confirmed the old system wasn't showing.

Eventually I came across a post that in the troubleshooting steps said to open a Command Prompt and run:

nslookup <domain>

So I did that, and sure enough the IP address of the old DC was listed amongst the other DCs. Now I at least had something to go on. After switching gears and searching for related threads I finally found the answer in an article which suggested looking in DNS, specifically at the (same as parent folder) entry.

Leftover DNS Entry
Leftover DNS Entry

“In DNS console, go to the forward lookup zone for your domain. What you want to delete is the A record for "same as parent folder" that points to the IP of the machine that used to be a DC”.

Once again, there was the IP address of the demoted domain controller. I deleted it out of DNS, went back to the Command Prompt, and re-ran the nslookup command - this time the entry was gone! Called the user to try again and they were now able to logon, with the newly set up DC doing the authentication.
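
As an aside, besides the plain lookup of the domain name, the SRV records that clients actually use to locate a DC can be checked the same way - if the demoted machine still shows up in either list, there's stale DNS to clean out:

```
nslookup <domain>
nslookup -type=SRV _ldap._tcp.dc._msdcs.<domain>
```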

From now on I'll be checking this when doing any future demotions.

Busted SCCM Update - Mar 4, 2020
Microsoft

We're currently running Build 1906 of SCCM, having recently upgraded to it.

So far the new version has been solid and working well.

However, today I was in the SCCM console and saw in the Updates section that a hotfix was available for download. The release notes showed nothing out of the ordinary - just a handful of fixes - so it should have been a quick and easy install.

In the past, anytime I've gone through this process there were never any issues. This time, however, was different: no matter how many times I'd click the download button to get the hotfix, it'd pop up an error dialog. I did some Googling and apparently this is a 'bug' in version 1906.

SCCM Dialog
Not Downloading Update

Someone else had to put in a support call to Microsoft to get the solution. Which is this:

In the Updates and Servicing section, right click on the columns and select 'Package Guid'. Resize the columns so you can see the GUID associated with the 1906 hotfix package. It should be: 8004652A-E13F-4456-B915-FFCB17363190. Open up SQL Management Studio, click on the SCCM site database, and execute the following query:

SQL Query
SQL Query To Add Package

Then restart the SMS_EXEC service. Wait approximately 15 minutes, then do a Refresh in the SCCM console for Updates and Servicing and you should see the State for the hotfix change from 'Available to download' to 'Ready to install'. Now you're good to go. You can also verify the download status by opening dmpdownloader.log:

SCCM Log
SCCM Log File Trace

Happy updating!

Portrait Mode - Nov 16, 2019
Linux

Recently I broke out my Linux Mint PC and upgraded it to the latest version - 19.2.

Normally I just muck around with my Linux box for a few days and then set it aside. But now my intention is to one day use it as a validation node for the Ethereum network - assuming there'll be a Linux client and assuming they ever make the switch from Proof of Work (POW) to Proof of Stake (POS). As such, I figured I'd end up using it on a daily basis and should get more familiar with Mint. So I set it up in what will be its permanent place and hooked up the mouse, keyboard, and monitor. My monitor is an ancient HP model that you can rotate so that it operates in Portrait mode. I've never operated it that way before, but now I'm more concerned about conserving desk space, so it'd be ideal if I could get it to work in that mode.

I remember way back in the day when Radius came out with their Pivot monitor for the Macintosh and being blown away that the monitor could auto-sense its orientation and change the resolution appropriately. Fast forward to today, and unfortunately the default video driver I'm using in Mint doesn't do the same.

Off to Google I went and after some searching came across the 'xrandr' command, which you can run in a terminal to manually rotate the display. First you need to run the query option to get the internal name of your monitor connection - in my case DVI-1. Then it's simply a matter of telling it to rotate:

Rotate Command
Rotating The Desktop

Ok great, I can now manually rotate the display, but that would quickly get old having to do it every time I logged on. Fortunately, you can go to the list of Startup Applications and create a new entry with the rotate command:

Startup Applications
Startup Applications

After doing that I rebooted the system, and once it came back up I logged in...and the graphical shell Cinnamon promptly crashed. After some head scratching I decided to add a 5s delay to the command, and after rebooting and logging in there's no more crashing - the display automatically rotates to Portrait mode!
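
For anyone wanting to replicate this, the commands were along these lines - my connector was DVI-1, so run the query first to find yours, and use 'right' instead of 'left' if your monitor pivots the other way:

```
# List outputs and find the connected one (DVI-1 in my case)
xrandr --query

# Rotate the desktop into Portrait mode
xrandr --output DVI-1 --rotate left

# Startup Applications entry - the 5s sleep keeps Cinnamon from crashing
sh -c "sleep 5; xrandr --output DVI-1 --rotate left"
```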

The only downside I've encountered with this approach is the command doesn't run until after you log on, meaning the login screen is still the wrong orientation. I imagine there's a way around that as well, but for now I'm happy with the result.

OpenManage Issues - Aug 31, 2019
Dell

With the introduction of some new servers I've started installing them using a newer version of Dell's OpenManage Server Administrator (OMSA).

Specifically, I've been installing version 9.3, the latest offering of the product. It acts as a standalone application to monitor the hardware health of a server, but can also be set up to relay information back to Dell's OpenManage Essentials software, which you can use as an enterprise monitoring tool - it allows you to keep an eye on the health of all your servers.

One of the nice features is that you can regularly download an updated database of all the latest drivers and firmware versions, and then check and run reports to see which systems are out of date. You can either download the updates individually and apply them directly on the server, or the program itself will connect to the server and install them for you.

However, I noticed that the newer systems weren't showing up in the inventory of systems needing updates. When I dug further I found they were all contained in a collection of systems that hadn't been inventoried yet. Which was odd, as the software is set to routinely go out and inventory the various servers it has discovered and been set to manage. Looking at the logs I found this error:

Inventory collector is not installed on this server. In Band updates cannot be applied.

Weird, as I've never had to install a separate piece of software before in order for inventory to work properly...

OpenManage Command
OpenManage Inventory Collection

Off to Google I went and eventually came across this thread.

For some reason Dell, in its infinite wisdom, decided to turn off inventory collection by default in newer versions of Server Administrator. To get it working you need to break out the command prompt, navigate to the directory OMSA is installed in, and enter the command above. After doing that, OpenManage Essentials was able to properly inventory these newer servers and present a list of which drivers and firmware versions needed updating.

Note, the article mentions it being turned off by default in OMSA 9.1, but I never encountered this issue until we upgraded to 9.3. Your mileage may vary.

Happy inventorying!

Exchange 2016 Setup Failure - Jul 16, 2019
Microsoft

Went to install our first Exchange 2016 server and, as always seems to happen, ran into problems right away.

Our current environment is Exchange 2013, having previously upgraded from 2010. Everything I read in the migration guides indicated the upgrade was a fairly straightforward process - outside of all the roles now being consolidated on one server, there didn't seem to be a lot of major changes between 2013 and 2016. So I got the install media, got my install key, fired up setup, and watched in keen anticipation that we'd soon be running the new version. However, during the phase where it updates the Active Directory schema, it crashed and burned.

As usual the error message was somewhat cryptic.

Thankfully, punching it into Google returned an actually useful article for a change, which explained what the issue was. Although the article is written for 2013, in our case it worked for 2016 as well.

Exchange Setup Error
Exchange Setup Error

I was somewhat sceptical, but I went ahead and fired up ADSIEDIT, went to the suggested object, and added the suggested value. Upon re-running Exchange Setup it carried on past the point it previously failed at, and after a few minutes the install was completed. The article mentions the issue is due to the Public Folder tree object having been manually removed previously. Which makes sense, as I recall having to do just that when we uninstalled the last Exchange 2010 server. No matter what I tried I could not finish the removal, as it wouldn't let me delete the Public Folder database on that server. Eventually I had no choice but to use ADSIEDIT to rip it out.

So now we have the first server installed and can continue on with our migration project.

iMac Upgrade - Jun 28, 2019
Apple

In the past I've detailed the upgrades I've made to my beloved iMac, circa 2006. It was the last Mac with a matte screen, had a fun plastic enclosure, ran the ever reliable Core 2 Duo chipset, and even came with a little remote that magnetically attached to the side, which you could use to play music, watch videos etc.

To get the most out of it I had previously upgraded the memory to the maximum of 4GB (only 3GB recognized, sadly) and swapped out the hard drive for an SSD. Finally, I upgraded the operating system to OS X Lion, which was the last supported OS for that model. I also had the very rare 7600GT video card with max RAM. So as far as I knew, it was as upgraded as you could get.

Then one day I was randomly searching the Internet and stumbled across a post saying that, although officially unsupported, it was possible to install OS X Mountain Lion (ML), the successor to Lion, on it. What?!?!?

So I did some more research, came across a mega thread on Macrumors and it seemed like this was actually legitimate.

Then I did some more digging and found out the general consensus was that ML was worth it. It was essentially a release that focused on improving things rather than churning out new features. It fixed most of the major issues with the previous release and gave an all around performance boost.

Ok, so I was convinced, now where to start?

First off, the downside to going through articles from over a decade ago is that in many cases the links are broken, and the information is out of date or just plain wrong. As such, it took me a while to find the correct info and the correct software to accomplish this. Here then, are the steps I took:

  • Don't use MacPostFactor. Despite being a newer release, it does not work. Instead I downloaded what I believe is the last release of the software that came before it, called MLPostFactor. It can be downloaded here (go to bottom of page).

  • You need the retail version of Mountain Lion. I got an installer from someone on eBay but it didn't work. The installer program should be roughly 4.47GB in size. Unfortunately you can't download it any more from the Apple app store. In the end I discovered I could still order it from Apple by going here. After ordering, Apple sent me an email in a couple days with the necessary codes and download links.

  • On your hard drive create two new partitions (shrink existing if you need to) 20GB each in size. Label one 'Install', the other 'ML'. Make sure you pick 'Mac OS Extended (Journaled)' as the format.

  • Drag the 'Install OS X Mountain Lion' icon into your Applications folder

  • Run MLPostFactor. Pick the Install partition you created as the destination volume. Pick 'OS X 10.8.4' as the version - yes, I know you have OS X 10.8.5, don't worry about that for now. Click 'Install MLPostFactor'.

  • After installation is finished, reboot your iMac while holding down the Option key. This will bring up a boot menu. Select the Install partition - note for me it was renamed 'EFI Boot'. Now install Mountain Lion on the ML partition you created.

  • After rebooting you'll get the Apple logo but it will be crossed out. Don't despair like I did. Reboot again, holding down the option key, pick Install (EFI Boot). On the top menu, go Utilities, and MLPostFactor. Do the same thing as before, but now pick the ML partition instead - it will now patch the ML install so your system will boot. Reboot, hold down the option key, pick the ML partition and you should now be in Mountain Lion!

Ok, but why does it still say 10.8.4 when you go into About This Mac? To fix that you need to edit a file. Open your hard drive and go into System, Library, CoreServices. Create a backup copy of SystemVersion.plist and then edit it. Replace the contents with this:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>ProductBuildVersion</key>
<string>12F45</string>
<key>ProductCopyright</key>
<string>1983-2013 Apple Inc.</string>
<key>ProductName</key>
<string>Mac OS X</string>
<key>ProductUserVisibleVersion</key>
<string>10.8.5</string>
<key>ProductVersion</key>
<string>10.8.5</string>
</dict>
</plist>

Now when you go into About This Mac it will show as being 10.8.5. Besides properly reflecting the correct version, you are now able to run Software Update and get all the proper updates and patches for that version. That said, while I was able to install most of the available patches, do not install the 'Security Update 2015-006' as it will break things and you won't be able to boot again.

OS X Mountain Lion
Unsupported iMac Running Mountain Lion

After the patches were applied I updated iTunes to the latest supported version - 12.4.3.1 - and went to the App Store to download software I had previously installed (ie. Numbers), which now offered upgraded versions supporting the newer OS. So the last thing to do was pick a browser.

With Lion I was getting warnings more and more about my browser being outdated. I had long since stopped using Safari and was instead using the Extended Support Release (ESR) of Firefox, which was the last supported version for Lion. I had read it was possible, with some tweaking, to run newer versions of Firefox on ML, but eventually I came across a bunch of posts recommending Waterfox, a spin-off of Firefox. It uses the engine from before the major Quantum upgrade, but it still gets regular security patches - for all intents and purposes it's a modern supported browser running on your ancient iMac.

So far the only issues I've come across are that in System Report, Bluetooth returns an error when you click on it - although my Bluetooth mouse works fine - and it looks like Apple detects unsupported configs when you run Messages and won't let you log in. Which is disappointing, as I was looking forward to using it. But overall I'm extremely pleased with this upgrade - it definitely feels much snappier, and I'm happy that I'm now on a modern browser. It's amazing to think I can still happily use this machine, which is now almost 15 years old!

New Website! - Apr 22, 2019
Website

Finally!

As you can see, the new website is up and running. It's not so much new as new and improved, I guess. As mentioned previously, I had originally wanted the refresh ready in time for the 15th anniversary of this site. I had started working on a new design and was a few weeks into it when the hard drive crashed and I lost everything (yes, I know, backups). That was somewhat disheartening, and then life got in the way, combined with general laziness, and it didn't happen.

When I finally got around to giving it a second try I quickly felt overwhelmed. It had been so long since I'd really even considered what website authoring platforms were out there that I felt somewhat like a Luddite. This site was obviously long in the tooth, having been authored with Frontpage 2003. Wordpress is the current darling of bloggers everywhere, but I don't like Wordpress sites. I don't care how many different themes and templates they offer; to me they all look the same. I had worked hard on my site and wanted to retain its unique look. I also enjoy knowing the nitty gritty details of how everything works - for me, just dragging and dropping pictures and typing some content wouldn't be fulfilling. So in the end I went with Microsoft Expression Web which, while still dated, is about a decade newer than what I was using. It also has the benefit of being free.

But even with a new platform I wasn't sure how to tackle my biggest issue of how to make the site mobile friendly. In the end I found a local web design company and paid them to do the heavy lifting. I was confident they'd be able to supply the necessary coding that I would be able to integrate without having to do a complete rewrite. Turns out I was right.

So now things look good on any platform - computer, tablet, or phone. If you're using a phone it defaults to a new mobile menu, and if you're using a computer and resize the window small enough it will auto-switch as well. In addition to the mobile focus I also updated to version 2 of Slimbox, which is the code used when looking at sets of pictures. I also put in a search button which uses Google search. Currently it's ad supported, but if I wanted to I could pay them a yearly fee and have that stripped away. The only downside is that most results will show pages from prior to the update and so will look out of place, but over time, as more content is published post-upgrade, it will all look consistent.

I struggled with how far back to go with the new format - redoing the entire site was not going to happen due to the amount of effort involved. In the end I decided just to go back to last year. I might eventually go back five years, but we'll see.

I also put in a quote generator at the bottom. Instead of just randomly showing different quotes each time a page is visited it will only show a new quote once per day (per browser). This was a blatant rip off of the one they have on Slashdot which I've always gotten a kick out of.

Finally, I have obtained a 3rd party certificate and plan to make this site secure at some point in the near future. Personally I think the whole insistence on encrypted sites is a money-making scam by the search engine companies. Unless you're doing banking or inputting other personal information into entry fields, websites do NOT need to be encrypted. But the average user at home just sees the warning at the top of their browser and thinks something's wrong. So at some point I'll give in and submit to the inevitable.

So there you have it, it's a new era for Jamesplee.com!

Broken WDS - Apr 9, 2019
Microsoft

Got a phone call recently from our Help Desk asking if imaging was down. Our imaging consists of Microsoft Deployment Toolkit (MDT) tied into Windows Deployment Services (WDS). Once images are captured and stored on our deployment servers, technicians will PXE boot the client system, which brings up the Litetouch menu. They pick a few selections, hit Next, and off it goes installing the image.

However, this time it wasn't working at one of our locations. It would start downloading the Litetouch boot image...hang for a bit...and then puke up the error below.

So I tried a bunch of things to resolve it. Had my counterpart try various different models to rule out a driver issue. Had them try different network ports to rule out a bad cable, port, switch etc. Restarted the WDS service on the deployment server, and when that didn't work, did the old standby trick of rebooting the entire system. Nothing worked.

I did a bunch of Googling but wasn't getting anywhere.

Litetouch Error
Imaging Failure

Finally I stumbled across this post and a light bulb went off - the only recent changes were applying the latest batch of Windows Updates.

So as per the article I went into WDS, clicked on the TFTP tab, unchecked the 'Enable Variable Window Extension' option, and rebooted the server. Sure enough that fixed the problem. About the only obvious negative from doing this is that the Litetouch boot image loads a little slower now. As the March updates broke things, I'm curious whether the just-released April updates have patched the patch.
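
If you'd rather do it from the command line, I believe the same option can be toggled with wdsutil - double check against the WDS documentation for your server version:

```
wdsutil /Set-TransportServer /EnableTftpVariableWindowExtension:No
```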

Slow clap for Microsoft quality control!

Black Screen of Death - Mar 29, 2019
Microsoft

A colleague asked me why whenever he connected to any of our servers using Remote Desktop Connection (RDP) it would sit at a black screen for minutes before eventually continuing on with the login process.

I had noticed this phenomenon as well but hadn't yet gotten around to investigating it. It did seem to be happening more and more often, and when you connect to servers multiple times a day the time wasted does add up.

There didn't seem to be any pattern - it would do it on some servers, but not all. It would do it on servers running 2012 R2 as well as older ones running 2008 R2, and on both physical and virtual systems. So off to Google I went (what did we do before the Internet) and tried to find a solution. Turns out we weren't alone in encountering this annoying issue. It has even been coined 'The Black Screen of Death', a humorous riff on Windows' infamous Blue Screen of Death.

The recommended solution, shown below, was to go into RDP properties on your client and turn off 'Persistent bitmap caching'.

Remote Desktop Options
Remote Desktop Options

Sure enough, that seems to have done the trick. We can now reliably connect using RDP and are no longer left staring at a black screen. Some more digging suggests the issue crops up when your client has a different resolution than the target system you're connecting to. Other suggestions include simply running Task Manager, which seems to get things rolling, or restarting the RDP service and trying again. But as mentioned, simply turning off persistent bitmap caching works for us.
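If you keep saved .rdp connection files, the same checkbox can be set there so it sticks for every connection launched from the file. A minimal fragment - the server name is a placeholder, and the rest of your .rdp file stays as-is; bitmapcachepersistenable is the setting behind the 'Persistent bitmap caching' checkbox, with 0 meaning unchecked:

```text
full address:s:server01.example.local
bitmapcachepersistenable:i:0
```

Setting it once in a default.rdp saves re-unchecking the box on every client.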

Why Won't You Install? - Feb 26, 2019
Dell

Recently I pulled a server out of service which had been functioning as a VMWare ESXi host. The model was a Dell PowerEdge R720, and the plan was to repurpose it to become a new Exchange 2016 server.

The first step was to upgrade the firmware on the NICs and the iDRAC and install the latest BIOS - which was done without any issue. The next step was the install of Windows 2016. That is where all the fun began.

To date, any OS install I had done was on older server models using the Dell deployment DVD. You'd boot off the DVD and, when told to do so, swap in the Microsoft OS disc, and off it would go and install everything. With 12th-generation and newer servers, however, I knew you were supposed to install via the Lifecycle Controller interface. So I popped in there, picked Deploy OS, clicked on the edition drop-down...and 2016 wasn't an option. Did some digging online, and apparently 12th-gen Dell servers don't support installing 2016 through that interface. Ok, a bit of a pain, but I figured I'd simply install 2012 R2 instead and then upgrade to 2016 from there. So again, back into the controller interface, picked 2012 R2, had my OS disc in the DVD drive...but the option was greyed out. What was going on?

Did some more digging online and found that you can apparently only install Windows through that interface with retail media, not the volume license discs we use. Some grumbling ensued, and back to the web for more searching. At that point I turned up several posts from people saying to just boot directly from the OS media and install it that way. Ok - I happened to already have Win 2016 on a USB key that I'd previously used to test 2016 and knew was good. So I rebooted, picked the UEFI boot menu...and it didn't recognize the USB key. More searching revealed that the file system the key was formatted with - FAT32 - has a maximum file size of 4GB, and unfortunately the install.wim file was larger than that. If I booted into Legacy (non-UEFI) mode it would see the drive and I could install 2016, but then my system disk would be partitioned with the older MBR scheme instead of the newer GPT scheme.

At this point I was really starting to get annoyed. I came across some posts which used 3rd-party tools as a solution, and others which suggested booting from Windows 10 media and using the MBR2GPT command to convert the partition - but only after first resizing and expanding the supporting partitions. Eventually I came across a post which, for me, was the simplest and easiest solution.

First step was to use the DISM command to split the install.wim file into two smaller files:

dism /Split-Image /ImageFile:sources\install.wim /SWMFile:sources\install.swm /FileSize:4000

Then I deleted the old install.wim file, copied all the 2016 files off the USB stick into a temp directory, and reformatted the USB key using the DISKPART command:

list disk
select disk 3
clean
convert gpt
create partition primary
format fs=fat32 quick
assign

Obviously 'disk 3' here is the USB key. Then I copied everything back to the key, rebooted, double-checked I was still booting into UEFI, and now it saw the USB key as a bootable option. I picked it and was able to proceed with installing Windows 2016. Much rejoicing then ensued.
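If you ever need to prep more than one key, the DISKPART steps above can be saved to a plain text file and run unattended with 'diskpart /s'. A sketch - the file name is just an example, and it assumes the key again shows up as disk 3, so always confirm with 'list disk' first, since 'clean' wipes whichever disk is selected:

```text
rem usbprep.txt - run with: diskpart /s usbprep.txt
rem WARNING: double-check the disk number before running - clean is destructive
select disk 3
clean
convert gpt
create partition primary
format fs=fat32 quick
assign
```

The rem lines are DISKPART comments, so the script can document itself.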

Can't Resize Datastore - Jan 13, 2019
VMWare

Our version of vCenter is currently 6.0.

In the past I've occasionally had to increase the size of our Datastores. I would increase the size on the SAN, then simply go to the configuration tab on one of the hosts, select the properties for the Datastore, and click on the Increase button; it would see the additional space available and expand into it, and after a rescan the larger size would be recognized.

But for some reason, after upgrading vCenter to 6.0 and going through the exact same procedure I had used in the past, it wouldn't recognize the additional space. When I went to increase it there was no storage listed in the Extent Device dialog. Just to be sure, I went online and checked the procedure in VMWare's documentation and confirmed I was doing everything correctly - it just would not show the storage.

So what was going on? As usual I did a bunch of Googling and came across a post on Reddit from someone complaining of the same issue. Somewhere in the thread someone suggested using the fat client and connecting directly to one of the hosts.

They also referenced a VMWare support article.

VMWare Resize
Missing Device

Note that the article says the inability to expand Datastores is a safety feature to prevent possible corruption. Ok, great - but I still needed to expand the space, and the article doesn't mention how you do that with these new safety filters in place. So as always, when in doubt, contact VMWare support before attempting this! In my case I went ahead and connected the vSphere client directly to one of the hosts, went into the Datastore properties, and now when I hit the Increase button it saw the added capacity. After expanding it I went back into vCenter, did a rescan on each of the hosts in the cluster, and they all showed the larger size.

I'm guessing that in the future the 'safe' way is to shut down all the VMs first and then expand the space. But considering we've never had to do that before, this new 'feature' is somewhat of an annoyance.