Windows desktop virtualization instead of replacing workstations

hyper-v, vdi, virtualization, windows

I'm head of the IT department at the small business I work for; however, I am primarily a software architect, and all of my system administration experience and knowledge is ancillary to software development. At some point this year or next we will be looking at upgrading our workstation environment to a uniform Windows 7 / Office 2010 environment, as opposed to the hodgepodge of various OEM-licensed editions of software currently installed on each machine.

It occurred to me that it is probably possible to forgo upgrading each workstation and instead turn each one into a dumb terminal that accesses a virtualization server, with the entire virtual workstation hosted on the server.

Now, I know basically anything is possible, but is this a feasible solution for a small business (25-50 workstations)? Assuming it is feasible, what rough guidelines exist for calculating the server resources required?

How exactly do these solutions handle a user accessing their VM? Does the user log on normally to their physical workstation and then use Remote Desktop to reach their VM, or is a dedicated piece of client software usually used to negotiate the connection?

What types of software are available for administering and monitoring these VMs, and can this functionality be achieved out of the box with Windows Server 2008? I'm mostly interested in these questions as they relate to Server 2008 with Hyper-V, but feel free to offer insight on VMware's product lineup, especially if there are compelling reasons to choose it over Hyper-V in a Microsoft shop.

Edit: Just to add some more information: the implementation goal would be to upgrade our platform from a Win2k3 / XP environment to a full Windows 2008 / Win7 platform without having to perform the associated work on each of our differently configured workstations.

Also, could anyone offer realistic guidelines for how much hardware is needed to support 25-50 workstations virtually? The majority of the workstations do nothing except Office, Outlook, and web browsing. The only high-demand machines are the development workstations, which would keep everything local.

Best Answer

This type of solution exists on a continuum.

On one end of the spectrum you have client computers running a "thick" operating system (like Windows or a desktop Linux distribution) and connecting via client software to hosted applications (via RemoteApp shortcuts and the Remote Desktop Protocol (RDP), or via the Citrix ICA protocol).

In the middle of the spectrum you have clients connecting via these same protocols to full-blown desktop sessions (rather than a single application), but using a shared operating system installation. This is typically the world of Windows "Terminal Services".

On the far end of the spectrum you have what's typically known as a Virtual Desktop Infrastructure (VDI) where client devices are very stripped down and only host client software to connect to a hosted operating system instance.
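To make the client-side difference concrete, here's a small sketch (Python, purely illustrative; the host name and published-application alias are made-up assumptions) that writes the two kinds of .rdp connection files a client might use: one for a full hosted desktop session and one for a single published RemoteApp:

    # Illustrative only: writes two minimal .rdp files that mstsc.exe could open.
    # The host name and RemoteApp alias are placeholder assumptions.

    # Full desktop session against a Terminal Services / Remote Desktop host.
    full_desktop = "\n".join([
        "full address:s:ts01.example.local",    # assumed session host
        "screen mode id:i:2",                   # 2 = full screen
        "prompt for credentials:i:1",
    ])

    # Single published application (RemoteApp) instead of a whole desktop.
    remote_app = "\n".join([
        "full address:s:ts01.example.local",
        "remoteapplicationmode:i:1",            # connect to one app, not a desktop
        "remoteapplicationprogram:s:||winword", # assumed published alias
        "prompt for credentials:i:1",
    ])

    with open("full-desktop.rdp", "w") as f:
        f.write(full_desktop + "\n")
    with open("word-remoteapp.rdp", "w") as f:
        f.write(remote_app + "\n")

Either file opens in the same Remote Desktop client; the only difference the user sees is a whole hosted desktop versus a single application window.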

All of these situations are physically feasible, but you'd do yourself a favor by investigating the licensing costs before you go down the road of spec'ing servers, etc.

In the Microsoft world, the licensing costs include either Terminal Services Client Access Licenses or Windows Virtual Enterprise Centralized Desktop (VECD) operating system licenses for each device or user accessing the solution. Depending on where on the spectrum you fall, licensing for your desktop application software may also differ from what you currently use, which may necessitate additional license purchases.

It's likely that you're going to find that the acquisition costs of a VDI infrastructure are similar to, if not higher than, those of the traditional "thick client" route. Physically and practically, using thin-client devices sounds like a "win", but software licensing expense has traditionally more than made up for any hardware cost savings, which leaves only "soft cost" management and TCO savings as justification.
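As a purely back-of-the-envelope illustration of that trade-off, here's a sketch; every figure in it is a placeholder assumption you'd replace with real quotes, not an actual price:

    # Rough acquisition-cost comparison over one hardware lifecycle.
    # ALL figures are placeholder assumptions -- substitute real quotes.
    seats = 40                 # somewhere in the 25-50 workstation range
    lifecycle_years = 4        # assumed device refresh cycle

    # Traditional thick-client route (assumed per-seat figure).
    desktop_pc = 600           # workstation with an OEM Windows license bundled
    thick_total = seats * desktop_pc

    # VDI route (assumed per-seat figures).
    thin_client = 300              # thin-client device
    vdi_license_per_year = 100     # VECD-style per-device subscription (assumed)
    server_share = 150             # each seat's share of host servers and storage
    vdi_total = seats * (thin_client
                         + vdi_license_per_year * lifecycle_years
                         + server_share)

    print(f"Thick clients: ~${thick_total:,}")
    print(f"VDI:           ~${vdi_total:,}")

With those made-up numbers, the recurring desktop virtualization license alone erases the thin-client hardware savings over the lifecycle, which is exactly the pattern described above; plug in your own quotes before drawing any conclusions.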

Edit:

Ryan Bolger hit it right on the head with his answer (and I +1'd him) with respect to "soft cost" savings, which you're right to identify as the place to save money.

Learning how to centrally deploy software, manage user environments, and generally maintain the hell out of your network using Group Policy will build your personal knowledge of the "innards" and operation of a Windows network and will have far fewer "moving parts" than a VDI infrastructure. Even if you had a VDI infrastructure, frankly, I think you'd still be able to leverage immense benefits from Group Policy-fu.

VDI and remote application delivery are a great solution for very task-specific applications, or for delivering applications over slow or unreliable network connections (think "shared Microsoft Access database over a T1-based WAN"). I don't think that desktop virtualization, at least in its current incarnation as an excessive-licensing-fee minefield, is "the answer".

I'll even go out on a limb and say that, with proper "care and feeding", maintaining very large fleets of client computers running Windows isn't really all that hard, using the built-in tools in Windows Server, WSUS, good scripting knowledge, and an understanding of how Windows itself and your application software work. Automating your client computer build, removing users' Administrator rights, and getting a handle on your OS and application update deployment infrastructure will take you leaps and bounds ahead.
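As one small example of the kind of scripting that pays off here (a sketch only, and Python is just a stand-in for whatever scripting language you prefer), a few lines against the standard Windows Update policy registry keys can confirm that a client actually picked up the WSUS settings your Group Policy pushes out:

    # Sketch: check on a Windows client whether WSUS Group Policy settings applied.
    # Reads the standard Windows Update policy keys under HKLM\Software\Policies.
    import winreg

    WU_KEY = r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate"

    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, WU_KEY) as key:
            server, _ = winreg.QueryValueEx(key, "WUServer")
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, WU_KEY + r"\AU") as key:
            use_wsus, _ = winreg.QueryValueEx(key, "UseWUServer")
        if use_wsus == 1:
            print(f"Client is pointed at WSUS server: {server}")
        else:
            print("WSUS policy present but not enabled (UseWUServer != 1).")
    except FileNotFoundError:
        print("No WSUS policy found; the GPO may not have applied yet.")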
