To create a test environment I’m going to need a variety of different boxes: network infrastructure stuff like Active Directory, DNS and DHCP, plus web servers, database servers and desktop clients. I’m also going to need flexibility – I might need to test that a proposed upgrade to SQL Server doesn’t break anything, for example, so I need to be able to try that out while still being able to get back to a safe state if it causes problems. If something urgent comes up I can suspend some servers, build another VM and swap back later.
Hardware virtualisation is the thing for this: I can build a box, stop it and do something else, change it and roll it back to a previous snapshot. We use VMware on our production kit, but as I already have the licensing for Microsoft’s stuff, no real expectation of being given any help with VMware, and a vested interest in using Microsoft products, I decided to try Hyper-V instead.
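Once it’s up and running, that build-snapshot-roll-back loop can even be scripted. Here’s a minimal sketch, assuming Python with Tim Golden’s wmi package on the host, talking to the root\virtualization WMI namespace that 2008 R2’s Hyper-V exposes – the guest name “SQL-TEST” is made up, and in practice the Snapshot button in Hyper-V Manager does the same job:

```python
import wmi

# Hyper-V's v1 WMI provider on Server 2008 R2 lives in root\virtualization.
conn = wmi.WMI(namespace=r"root\virtualization")

# "SQL-TEST" is a hypothetical guest name.
vm = conn.Msvm_ComputerSystem(ElementName="SQL-TEST")[0]
svc = conn.Msvm_VirtualSystemManagementService()[0]

# Ask the management service to snapshot the guest; the call is
# asynchronous and hands back a job reference plus a return code.
result = svc.CreateVirtualSystemSnapshot(SourceSystem=vm.path_())
print(result)
```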
Hyper-V comes as a role within Windows Server 2008 R2. It can also be installed ‘stand-alone’ – essentially a stripped-down Windows with just the bits needed to run Hyper-V and no user interface. That probably makes sense if you’re building a datacentre with many physical servers to manage and don’t want to waste performance carrying around stuff you don’t need, but without a UI I wouldn’t be able to use any of the virtual guests without plugging in another machine and relying on the network to get anything done. It just seems simpler and safer to go with the full Windows Server for now.
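For what it’s worth, the role itself can be enabled from the command line as well as through Server Manager. A rough sketch (wrapping the ServerManager PowerShell module that ships with 2008 R2 in Python, since that’s what I’m using for these snippets):

```python
import subprocess

# Enable the Hyper-V role using the ServerManager PowerShell module
# included with Server 2008 R2 (run from an elevated prompt;
# a reboot is needed before the hypervisor actually loads).
subprocess.run([
    "powershell", "-Command",
    "Import-Module ServerManager; Add-WindowsFeature Hyper-V",
], check=True)
```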
If you want to run multiple things at once you’ll need the resources to do it. Virtualisation doesn’t magically create additional CPU or memory – it just lets you share what you have across multiple virtual guests.
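A back-of-the-envelope sum makes the point – the guests’ combined memory allocations have to fit inside physical RAM, with something left over for the host. The guest list below is illustrative, not my final build-out:

```python
# Rough capacity plan: guest RAM allocations have to fit in physical RAM,
# with something left over for the host OS itself.
# Sizes in GB; this guest list is illustrative.
host_ram = 32
guests = {
    "DC / DNS / DHCP": 2,
    "SQL Server": 8,
    "Web server": 4,
    "Desktop client": 2,
}
allocated = sum(guests.values())
print(f"{allocated} GB allocated, {host_ram - allocated} GB left for the host")
```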
Our production kit is already groaning under its load and attempts to get hold of existing old kit went nowhere. I don’t have enough experience to put a server specification together from scratch, but we have an existing procurement arrangement with Dell (and a nice educational discount), so the simplest way forward was to go for the biggest box I could fit under my desk. I also ran the 30-day trial of PassMark PerformanceTest on a bunch of existing kit to get some idea of what to expect (results further down).
I went for the Dell Precision T7600. For processing power it has an Intel Xeon E5-2687W with 8 cores running at 3.10 GHz (which shows up as 16 CPUs in Windows Task Manager thanks to Hyper-Threading). This was simply the highest specification in terms of speed and cache on offer, but the system has capacity to add a second processor later if I run short. Likewise, I went for 32GB of 1600MHz DDR3 memory. I’d rather have fewer things running fast than lots of things running slowly, and again there’s plenty of headroom for the future if needed.
For working storage I went for 4 x 10k RPM 900GB SAS drives in a RAID 10 configuration with a “proper” hardware RAID controller (PERC H710P), giving ~1.5TB of fast storage that should survive a hard-drive failure – although I’ve no experience of actually trying this (I should pull a drive out at some point so I know what to expect). While solid-state (SSD) is faster, the drives are far too small. 1.5TB is also a bit small, but it was the most I could get without dropping to slower drives. In my experience it’s drive access that ends up dragging a system down, so I’d rather stay as fast as possible here. I can always plug in an external drive (via USB or something) if I want to archive some VMs to free up space on the working storage.
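The usable-space arithmetic for RAID 10 is straightforward – mirroring halves the raw capacity, and Windows reporting in binary units shaves the number down further:

```python
# RAID 10: drives are paired into mirrors and the mirrors are striped,
# so usable space is half the raw total.
drives, size_gb = 4, 900                  # vendor GB = 10**9 bytes
usable_gb = drives * size_gb / 2          # 1800 GB of the 3600 GB raw
usable_tib = usable_gb * 10**9 / 2**40    # what Windows will report
print(f"{usable_gb:.0f} GB usable, ~{usable_tib:.2f} TiB in binary units")
```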
I went for a mid-range graphics card, the 2 GB AMD FirePro V7900. I don’t expect to be doing any gaming on this kit, but it has 4 DisplayPort (DP) outputs for up to 4 monitors. I have 2 x 24 inch widescreen monitors for now but, again, it’s nice to know I can add more.
A minor problem here is that the box only came with one DVI-to-DP adapter and the monitors didn’t come with DP leads, so I had to get a second adapter to plug the second monitor in.
Here are a couple of graphs showing the performance of the system compared to some of the others we already have around the place.
I don’t know how accurate these tests are, but it’s a start. Click to enlarge the images. Most of the comparison machines were “dirty” in that they had existing stuff installed and running, whereas the new Precision (shown as This Computer, in green) just had the Windows 7 build that Dell put on. The memory, CPU and graphics scores are good – on par with a couple of recent Dell machines that were bought for multimedia content creation (video editing, etc., a fairly demanding task). The real shock was the drive performance, which is way better. I’ve no idea whether the test is just too easy, so the results are over-reported, or whether this is a true reflection of the difference the better drives and RAID actually make. I guess I’ll find out when I start asking it to handle some real load.
Installation of Windows Server 2008 R2 Datacenter Edition (key available via MSDN) was the usual smooth experience from Microsoft. The only tricky bit was that it doesn’t have drivers for the RAID controller, so they needed to be supplied. I did this via a USB stick, but it took a bit of head scratching to find the driver, as the only thing listed on the support page for this machine on Dell’s website was the RAID monitoring and control utility, not the driver itself. Then I remembered the DVD that came in the box – sure enough, the drivers were on there.
The graphics card drivers also needed to be installed, which I did once Windows was up and running. The driver itself is fine, but the control panel component that gives access to all the card’s various settings crashes every time the system restarts. It was an optional component anyway, so it uninstalled cleanly, leaving just the driver in place, which is all I really need for now.
Hyper-V has its own Virtual Machine Connection thingy for driving the GUIs of its guests, but it doesn’t appear to offer multi-monitor support. So it’s fine for the initial setup of guest VMs and for accessing infrastructure VMs (domain controllers, SQL Server, etc.), but a bit limited for writing code and running tests.
Remote Desktop (RD) permits multiple monitors, but it depends on the version of Remote Desktop in the guest. The one in Windows Server 2008 R2 (my domain controller, DNS and DHCP box) allows both monitors at full screen and resolution. The one in Windows Server 2003 does not – I can only have one monitor, although still at full resolution. Still, this is OK: I only use it to support an old system, and new work is done in more recent guest VMs.
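For the guests that do support it, multi-monitor is just a switch on the RD client (version 7.0 or later). A trivial sketch, where “dev-vm” stands in for whichever guest I’m connecting to:

```python
import subprocess

# Span the Remote Desktop session across all local monitors.
# /multimon needs RDP client 7.0 or later; "dev-vm" is a placeholder.
subprocess.run(["mstsc", "/v:dev-vm", "/multimon"])
```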
As an aside, this old Windows Server 2003 box was originally running in VirtualBox, then VMware Workstation 8, then its disk image was converted and migrated to Hyper-V, where it has been running flawlessly ever since.
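For anyone wanting to do something similar, VirtualBox’s command-line tool can clone a disk image out to Microsoft’s VHD format, which Hyper-V attaches directly. A sketch with made-up file names – not necessarily the exact route I took at the time:

```python
import subprocess

# Clone a VirtualBox (or VMware) disk image out to Microsoft's VHD
# format so Hyper-V can attach it. File names are illustrative.
subprocess.run([
    "VBoxManage", "clonehd", "win2003.vmdk", "win2003.vhd",
    "--format", "VHD",
], check=True)
```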
RemoteFX lets guest VMs use the physical graphics card, but it needs a SLAT-capable processor (which the Xeon provides – SLAT is a CPU feature, not a graphics one), only works via Remote Desktop, and only with Windows 7 Ultimate and better as the guest. I can’t see that I’ll be doing much gaming in a guest VM, but I thought I’d try it out. It also supports the pretty Aero stuff in Windows 7 and presumably will be needed for all the additional eye candy coming in Windows 8.
Getting this installed was simple enough: it adds another hardware option when configuring your guest VMs, so you can give them the RemoteFX 3D video adapter. On the downside, it requires full-blown Remote Desktop licensing, not just the two-connection stuff you get for free. MSDN again covers this (I used a key for 20 devices), but you need to install the Remote Desktop Session Host role service and Remote Desktop Licensing. See the screenshot (click to enlarge) for the server role services I installed.
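The same role services can be added from the command line instead of clicking through Server Manager. A sketch – the feature names here are my best recollection rather than gospel:

```python
import subprocess

# Add the RD Session Host and RD Licensing role services via the
# ServerManager module. Feature names assumed:
# RDS-RD-Server (Session Host) and RDS-Licensing (Licensing).
subprocess.run([
    "powershell", "-Command",
    "Import-Module ServerManager; "
    "Add-WindowsFeature RDS-RD-Server, RDS-Licensing",
], check=True)
```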
You also need to configure the RD Session Host to talk to the RD Licensing server even though it’s the same box. You do this via Server Manager. However, adding the RD License key isn’t done here – you need Administrative Tools > Remote Desktop Services > Remote Desktop Licensing Manager for that. Not a big deal, but a minor UX niggle.
A colleague has created a virtual LAN and given me an address range to use and a gateway (with built-in firewall) with internet access.
So, Hyper-V is installed and my first stand-alone VM is operational. Now I need to build myself some virtual servers.