Should I be worried about Code 39 for these drivers on a virtual server?

drivers  virtualization  vmware-vsphere  windows-sbs-2008

We currently run an SBS 2008 DC/Exchange/AD box on hardware that is getting dated. Before it falls all the way over we are trying to get it virtualized, as we do have a shiny new (and underutilized) VMware host. I am doing network-isolated test runs at the moment to figure out how long it will take, which bugbears to expect, and so on.
In case it matters, the process we are following is –

  1. Full backup via Acronis B&R 11.5
  2. Run up the VM hardware (we use VMware vSphere 5 Essentials)
  3. Connect the VM's CD drives to the Acronis bootable media image and the VMware Tools Windows image
  4. Run Acronis Universal Restore, sourcing drivers from the VMware Tools disc.

At the very end of the restore we got two errors concerning missing drivers with generic, long PCI device names – I am almost sure that these are the same two devices I am having problems with, but am unsure how to check this. (We tried re-searching the VMware Tools disc for these but failed, so I crossed my fingers and clicked 'Ignore'.)
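To line the restore-time errors up with the Device Manager entries, something like this rough sketch (my addition – it assumes Python is available on the restored guest; the same WMI query can also be typed straight into a command prompt) lists every device flagged with an error along with its device ID:

    # Rough sketch (assumption: Python on the restored guest): list every
    # device that Plug and Play has flagged with an error, together with its
    # device ID, so the generic PCI names from the restore log can be matched
    # against the Device Manager entries.
    import subprocess

    cmd = ('wmic path Win32_PnPEntity where "ConfigManagerErrorCode<>0" '
           'get Name,DeviceID,ConfigManagerErrorCode /format:list')
    output = subprocess.check_output(cmd, shell=True)
    print(output.decode(errors="replace"))

Anything in that output with a PCI\VEN_... style device ID should correspond to one of the two errors from the restore.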

After it boots up I install VMware Tools (complete installation) and restart. The whole thing seems to run fine in the very isolated environment I have it in; however, Device Manager shows issues (that is, yellow !'s) with two devices –

Virtual Machine Bus
Virtualization Infrastructure Driver

Trying to update these drivers (either automatically or by searching the VMware Tools disc) reports that they are apparently already the latest. The device status (for both) is:

Windows cannot load the device driver for this hardware. The driver
may be corrupted or missing (Code 39).
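Worth noting in passing: one of the usual suspects for a Code 39 is a stale UpperFilters/LowerFilters entry on the device's setup class key. A read-only sketch (again assuming Python on the guest) that just dumps any such values for inspection:

    # Read-only sketch (my addition): dump any UpperFilters / LowerFilters
    # values under the device setup classes, since a stale filter driver
    # entry is a common cause of Code 39.
    import winreg

    CLASS_ROOT = r"SYSTEM\CurrentControlSet\Control\Class"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, CLASS_ROOT) as root:
        index = 0
        while True:
            try:
                guid = winreg.EnumKey(root, index)
            except OSError:              # no more class subkeys
                break
            index += 1
            try:
                with winreg.OpenKey(root, guid) as class_key:
                    for name in ("UpperFilters", "LowerFilters"):
                        try:
                            data, _ = winreg.QueryValueEx(class_key, name)
                            print(guid, name, data)
                        except OSError:  # value not set for this class
                            pass
            except OSError:              # class key not readable
                pass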

My searches have revealed nothing VMware-related; however, I did see a few articles (like this one and this one) relating to Hyper-V, with different status codes but the same devices. Nevertheless, I've tried the troubleshooting steps from those (enabling HAL detection under msconfig -> Boot tab -> Advanced Options; disabling and re-enabling the devices) to no avail.
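The disable/re-enable step can also be scripted with devcon.exe (part of the Windows Driver Kit, not installed by default); a rough sketch, with the hardware ID left as a placeholder since I haven't confirmed exactly what these devices report:

    # Rough sketch using devcon.exe (from the Windows Driver Kit, not present
    # by default). HARDWARE_ID is a placeholder - substitute the real ID
    # reported by the wmic query further up before running this.
    import subprocess

    HARDWARE_ID = r"PLACEHOLDER\HARDWARE_ID"   # hypothetical value

    for verb in ("status", "disable", "enable"):
        subprocess.call(["devcon.exe", verb, HARDWARE_ID])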

There is nothing in the event logs that seems related; in fact, I can't see that anything is affected at all – but I'd rather not find out otherwise after hours on a Friday night at the end of the migration, when all I want to be thinking about is whether to watch GoT or Legend of the Seeker when I get home. Has anyone encountered this before, or does anyone have the experience to say what the cause and likely outcomes might be?

Best Answer

OK, so the answer here is - Don't do what I did.

Those two devices are Hyper-V related - they exist on a Hyper-V host so it can communicate with the VMs running on it.

My misunderstanding was that I assumed they had appeared when I virtualized the server, and were therefore simply part of being a VM. When I read online that they were for host <-> VM communications, I assumed they made up the VM side of that link.

What clued me in to the real issue was noticing that the physical server also had these devices (although with working drivers). That brought me to the realization that the Hyper-V role was installed there (completely unnecessarily) and that I could resolve the issue simply by removing that role.
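For anyone wanting to check for the same situation from the command line, a sketch along these lines should do it – it assumes ServerManagerCmd.exe (present on Server 2008 / SBS 2008) and that the role ID is "Hyper-V", so verify against the -query output before removing anything:

    # Sketch only - assumes ServerManagerCmd.exe and the "Hyper-V" role ID;
    # check the -query output yourself before removing anything.
    import subprocess

    result = subprocess.run(["ServerManagerCmd.exe", "-query"],
                            capture_output=True, text=True)
    installed = [line for line in result.stdout.splitlines()
                 if line.strip().startswith("[X]") and "Hyper-V" in line]

    if installed:
        print("Hyper-V role appears to be installed:", *installed)
        # subprocess.run(["ServerManagerCmd.exe", "-remove", "Hyper-V"])
    else:
        print("Hyper-V role not found")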

In hindsight, it makes a lot of sense - running one virtualization platform inside another seems like a recipe for disaster.

For the record - removing the role from the already-virtualized machine seemed to destroy it - after a reboot it went to 100% CPU and hung on "Installing Updates - Stage 3 of 3, 76%". A safe boot also failed to get all the way back in. In this case it was a test VM anyway, so no great loss, but the moral of the story is, to paraphrase a well-known quote - don't hypervise a hypervisor. Remove any virtualization software BEFORE a P2V.
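And for anyone wanting to sanity-check a source machine before the next P2V, a rough sketch along these lines (the keyword list is just a guess at the obvious candidates) will flag any services that look hypervisor-ish:

    # Pre-P2V sanity check sketch (my addition): scan the source machine's
    # services for anything that looks like a hypervisor before imaging it.
    # The keyword list is only a guess at the obvious candidates.
    import subprocess

    KEYWORDS = ("hyper-v", "vmware", "virtualbox", "virtual machine")

    services = subprocess.check_output(
        "wmic service get Name,DisplayName /format:list", shell=True
    ).decode(errors="replace")

    hits = [line for line in services.splitlines()
            if any(k in line.lower() for k in KEYWORDS)]
    print("\n".join(hits) if hits else "No obvious virtualization services found")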