Thursday, August 23, 2012

Computer Names

My test environment is running on a Hyper-V virtual server system. I’ve created an Active Directory Domain Services (AD DS) server running as a guest OS to provide a domain to my other virtual machines.

I do not want my Hyper-V system to join this test domain. It is supposed to host the test environment, not take part in it. So I have left it in the default workgroup.

This leaves me with a minor annoyance: I need a local user account and password when administering that server. Perhaps it would be better to have the AD domain hosted on separate tin, but it’s not a big enough problem to warrant finding some at this point.

I also have a couple of guest VMs that sit outside the test domain. One of these is a Windows 8 preview box, for example. There are likely to be a handful of oddball VMs like this that will need to stand alone.

Connecting to them via Remote Desktop is then a pain because they won’t respond to their machine names, so I have to hunt around to find their IPv4 addresses and connect with those.

It turned out that this was because the Computer Browser service (which maintains the list of machine names on the network) is disabled by default in Windows Server 2008. I enabled and started it on the Hyper-V server. I had to wait a few minutes, but after a while I started to see machine names appearing in the results of “net view” (run from the command prompt).

So now I can remote onto the client VMs without having to find their IP addresses.

Wednesday, August 22, 2012

Hardware for a test environment

To create a test environment I’m going to need a variety of different boxes: network infrastructure stuff like Active Directory, DNS, DHCP, web servers, database servers, desktop clients. I’m also going to need flexibility – I might need to test that a proposed upgrade to SQL Server doesn’t break anything, for example, so I need to be able to try that out while still being able to get back to a safe state in case it causes problems. If something urgent comes up I can suspend some servers, build another VM and then swap back later.

Hardware virtualisation is the thing for this. I can build a box, stop it and do something else, change it and restore it to a previous snapshot. We use VMWare on our production kit, but as I already have the licensing for Microsoft’s stuff, and no real expectation of being given any help with VMWare, and a vested interest in using Microsoft products, I decided to try Hyper-V instead.

Hyper-V comes as a role within Windows Server 2008 R2. It can also be installed ‘stand-alone’, which is essentially a stripped-down version of Windows with just the bits needed to run Hyper-V, but no user interface, for example. That probably makes sense if you are planning to build your own datacentre with many physical servers to manage and don’t want to waste any performance carrying around a bunch of stuff you don’t need. But without a UI I wouldn’t be able to use any of the virtual guests unless I plugged in another machine and relied on the network to get anything done. It just seems simpler and safer to go with the full Windows Server for now.

If you want to run multiple things at once you’ll need the resources to do it. Virtualisation doesn’t magically create additional CPU or memory – it just lets you share what you have across multiple virtual guests.

Our production kit is already groaning under its load and attempts to acquire access to old existing kit didn’t go anywhere. I don’t have enough experience to put a server specification together from scratch. We have an existing procurement arrangement with Dell (and a nice educational discount) so the simplest way forward was to go for the biggest box I could fit under my desk. I also ran the 30 day trial of PassMark Performance Test on a bunch of existing kit to get some idea of what to expect (results further down).

I went for the Dell Precision T7600. For processing power we have an Intel Xeon E5-2687W, which has 8 cores running at 3.10 GHz (and shows as 16 CPUs in Windows Task Manager). This was simply the highest specification in terms of speed and cache they had on offer, but the system has capacity to add another one later if I run short. Likewise, I went for 32GB of 1600MHz DDR3 memory. I’d rather have fewer things running fast than lots of things going slowly. Again there’s plenty of headroom here in future if needed.

For working storage I went for 4 x 10k RPM 900GB SAS drives in a RAID 10 configuration, with a “proper” hardware RAID controller (PERC H710P), to give ~1.5TB of fast storage. This should be able to survive a hard-drive failure, although I’ve no experience of actually trying this (I should pull a drive out and test it so I know what to expect). While solid-state (SSD) storage is faster, the capacities available were far too small. 1.5TB is also a bit small, but was the most I could get without dropping to slower drives. In my experience it’s drive access that ends up dragging a system down, so I’d rather stay as fast as possible here. I can always plug in an external drive (via USB or something) if I want to archive some VMs to free up space on the working storage.
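As a sanity check on that capacity figure, RAID 10 stripes across mirrored pairs, so half the raw capacity goes to mirroring. A small sketch of the arithmetic (the function name is mine, just for illustration):

```python
def raid10_usable_gb(drives: int, drive_gb: int) -> int:
    """Nominal usable capacity of a RAID 10 array.

    RAID 10 stripes data across mirrored pairs, so half the raw
    capacity is spent on redundancy. Real formatted capacity will
    come out a little lower once the controller and filesystem
    take their cut.
    """
    if drives % 2 != 0 or drives < 4:
        raise ValueError("RAID 10 needs an even number of drives (minimum 4)")
    return (drives // 2) * drive_gb

# 4 x 900GB SAS drives, as in the T7600 build above
print(raid10_usable_gb(4, 900))  # nominal 1800 GB before formatting
```

The nominal 1800GB shrinks towards the ~1.5TB Windows actually reports once you account for binary vs decimal gigabytes and formatting overhead.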

I went for a mid-range graphics card, the 2 GB AMD FirePro V7900. I don’t expect to be doing any gaming on this kit, but it has four DisplayPort (DP) outputs for up to four monitors. I have 2 x 24 inch widescreen monitors for now, but again, it’s nice to know I can add more.

A minor problem here is that the box only came with one DVI-to-DP adapter, so I had to get another to plug in the second monitor, as the monitors didn’t come with DP leads.

Here are a couple of graphs showing the performance of the system compared to some of the others we already have around the place.

[Images: benchmark - summary | benchmark - disk]

I don’t know how accurate these tests are, but it’s a start. Click to enlarge the images. Most of the comparison machines were “dirty” in that they had existing stuff installed and running whereas the new Precision (shown as This Computer, in green) just had the Windows 7 build that Dell put on. The memory, CPU and graphics are good and are on par with a couple of recent Dell machines that were bought for multimedia content creation (video editing, etc, a fairly demanding task). The real shock was the drive performance, which is way better. I’ve no idea if the test is just too easy and so the results are over-reported or whether this is a true reflection of the difference the better drives and RAID actually make. I guess I’ll find out when I start asking it to handle some real load.

Installation of Windows Server 2008 R2 Datacentre Edition (key available via MSDN) was the usual, smooth experience from Microsoft. The only tricky bit was that it doesn’t have drivers for the RAID controller, so they needed to be supplied. I did this via a USB stick, but it took a bit of head scratching to find the driver, as the only thing listed on the support page for this machine at Dell’s website was the RAID monitoring and control software utility, not the driver itself. Then I remembered the DVD that came in the box – sure enough, the drivers were on there.

The graphics card drivers also needed to be installed, which I did once Windows was up and running. The driver itself is fine, but the control panel component that gives access to all the card’s various settings doesn’t work and crashes every time the system restarts, so I’ve uninstalled it for now. It was an optional component anyway, so it uninstalled cleanly, leaving the drivers on their own – which is all I really need for now.

Hyper-V has its own Virtual Machine Connection tool for driving the GUIs of its guests, but it doesn’t appear to offer multi-monitor support. So this is fine for initial setup of guest VMs and accessing infrastructure VMs (domain controllers, SQL Server, etc), but a bit limited for writing code and running tests.

Remote Desktop (RD) permits multiple monitors, but it depends on the version of Remote Desktop the guest OS supports. Windows Server 2008 R2 (my domain controller, DNS, DHCP box) allows both monitors at full screen and resolution. Windows Server 2003 does not – I can only have one monitor, although still at full resolution. Still, this is OK. I only use it to support an old system. New work is done in more recent guest VMs.

As an aside, this old Windows 2003 Server was originally running in VirtualBox, then VMWare Workstation 8, then its disk image was converted and migrated to Hyper-V, where it has been running flawlessly ever since.

RemoteFX lets guest VMs make use of the host’s graphics card, but it needs a SLAT-capable CPU (SLAT is a processor feature, which the Xeon has) and it only works via Remote Desktop, and then only in Windows 7 Ultimate and better. I can’t see that I’ll be doing much gaming in a guest VM, but I thought I’d try it out. It also supports the pretty Aero stuff in Windows 7 and presumably will be needed for all the additional eye candy coming in Windows 8.

[Screenshot: Hyper-V RemoteFX server roles]

Getting this installed was simple enough. It adds another hardware option when configuring your guest VMs so you can add the RemoteFX graphics adapter to them. On the downside it requires full-blown remote desktop licensing, not just the included two-connection stuff you get for free. MSDN again covers this (I used a key for 20 devices) but you need to install the Remote Desktop Session Host role and Remote Desktop Licensing. See the screenshot (click to enlarge) to see the server role services I installed.

You also need to configure the RD Session Host to talk to the RD Licensing server even though it’s the same box. You do this via Server Manager. However, adding the RD License key isn’t done here – you need Administrative Tools > Remote Desktop Services > Remote Desktop Licensing Manager for that. Not a big deal, but a minor UX niggle.

A colleague has created a virtual LAN and given me an address range to use and a gateway (with built-in firewall) with internet access.

So, Hyper-V is installed and my first, stand-alone, VM is operational. Now I need to build myself some virtual servers.

Wednesday, August 8, 2012

IE Keyboard Shortcuts

A quick cheat-sheet of the shortcuts for things I use a lot in IE9:

Ctrl+T / Ctrl+N Open a new tab / window
Ctrl+W / Alt+F4 Close current tab / window
Alt+Home Go to your homepage
Alt+D Edit current web address (URL)
Ctrl+E Web search
Ctrl+F Find on current page
Ctrl+= / Ctrl+- / Ctrl+0 Zoom in / out / 100%

Someday I might write something that watches what you do and suggests the appropriate shortcut to you.
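The core of that idea is just a lookup from an observed action to the shortcut that would have done the same thing. A toy sketch (the action names here are made up for illustration):

```python
# Map observed UI actions to the IE shortcut that would have done
# the same thing. Action names are invented for this sketch; a real
# tool would hook into actual UI events.
SHORTCUTS = {
    "open_new_tab": "Ctrl+T",
    "close_tab": "Ctrl+W",
    "go_home": "Alt+Home",
    "edit_address": "Alt+D",
    "find_on_page": "Ctrl+F",
}

def suggest(action: str) -> str:
    """Return a shortcut tip for an observed action, if we know one."""
    shortcut = SHORTCUTS.get(action)
    if shortcut is None:
        return f"No shortcut known for '{action}'"
    return f"Tip: next time try {shortcut} to {action.replace('_', ' ')}"

print(suggest("find_on_page"))  # Tip: next time try Ctrl+F to find on page
```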

Tuesday, July 24, 2012

SharePoint / SQL Gotcha: Massive Log Files

My backup scripts on my test SharePoint 2010 box have been running but not producing backups for the last few days. I tried a manual backup via Central Admin and found that the backup estimate was more than 10GB, which is odd for a test server.

A bit of hunting with SequoiaView showed that SharePoint_Config.ldf was more than 10GB (the database data file itself was only 100MB).

It seems that the database was set to the Full Recovery model, which I’ve been bitten by before. In this mode SQL will not throw away the transaction log after a database backup; you have to back up the transaction log as well. If you do a lot of updates to the same data you can end up with huge log files (compared to the actual data file).

Thinking about why you might want this makes my head spin a bit. Presumably if you have the previous backup plus the log file you can recover anyway – you just need to get the log off the server in case it goes bang. I guess if you have all the transactions you don’t even need the data file backup – you could rebuild to any point from the accumulated transaction logs. I’m sure there are lots of useful things you can do in this mode, but I don’t know of any so I don’t “need” it at the moment.
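The bookkeeping difference between the two models can be sketched with a toy model (this is an illustration of the behaviour described above, not real SQL Server internals):

```python
class ToyDatabase:
    """Toy model of recovery-model bookkeeping: in Full recovery,
    only a *log* backup reclaims log space; a *database* backup
    does not. In Simple recovery a checkpoint truncates the log.
    Not real SQL Server internals - just the accounting."""

    def __init__(self, recovery_model: str):
        self.recovery_model = recovery_model  # "FULL" or "SIMPLE"
        self.log_records = 0

    def update(self, rows: int):
        self.log_records += rows  # every change is logged first

    def checkpoint(self):
        # Simple recovery truncates the log once changes are in the data file.
        if self.recovery_model == "SIMPLE":
            self.log_records = 0

    def backup_database(self):
        # A database backup checkpoints, but does NOT truncate a Full log.
        self.checkpoint()

    def backup_log(self):
        # The only thing that reclaims log space in Full recovery.
        self.log_records = 0

full, simple = ToyDatabase("FULL"), ToyDatabase("SIMPLE")
for night in range(7):  # a week of updates plus nightly database backups
    for db in (full, simple):
        db.update(rows=1000)
        db.backup_database()

print(full.log_records, simple.log_records)  # 7000 0
```

Which is exactly the symptom on the SharePoint box: nightly backups ran, the data file stayed small, and the Full-recovery log just kept accumulating.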

In SQL Server 2008 R2’s Management Studio you can view the properties of a database and see how large the files are. You can also use Tasks > Shrink > Database and Tasks > Shrink > Files to see how much space is available in a file (SQL keeps some free space and can auto-grow when it gets low, so you can strike a balance between “wasted” free space and disk fragmentation).

The Shrink dialog told me that the log had 0% free space available. I tried a transaction log backup (Tasks > Backup > Backup Type > Transaction Log) but there wasn’t enough free disk space for it (it failed with error 112 – Windows’ “there is not enough space on the disk” – but couldn’t give me a descriptive message). Oh well, I don’t need it as it’s a test box and I can stand to lose some data if I need to do a restore. My very simple PowerShell scripts get SharePoint to do a full weekly backup and daily differential backups, so I should be good if something goes wrong – I think I’m only exposed for up to a day’s loss (assuming I notice if the backup itself fails – there’s no error reporting in the script at the moment).

I’m mainly just playing with the SharePoint installation – I know there are third-party tools that will automate this for me and I know that Data Protection Manager (or whatever it’s called now) can do automated SQL Backups, for example. I’m just interested in what you can easily do out-of-the-box.

Anyway, I set the SharePoint_Config database to the Simple Recovery model, and then using Tasks > Shrink > Files, I could see that now 99% of the log file was available. Shrinking it recovered that space, freeing up 10GB.

While I was there I set the model database to Simple Recovery too, so any future databases created on the box will “inherit” that. I also set all the databases that were Full recovery to Simple. It turns out there was a 50-50 split. I’m not sure why some were already in Simple mode. Possibly SharePoint sets some non-critical things to Simple by default and leaves the rest to “inherit” from the model database?

So a sort-of lesson learned is to watch out for Full Recovery model databases in future. If you need this, you also need to figure out what to do with all those transaction logs.

In future SQL Server and SharePoint installations I will need to watch out for default options that I might have chosen badly. It seems to me that new databases should use Simple Recovery by default – Full Recovery is more an advanced option. Maybe this is down to the edition of SQL Server that was installed?

Tuesday, July 17, 2012

Exporting from Active Directory

I recently needed to get a dump of users from Active Directory to analyse the department field values because we changed some recently and now some users have lost access to the call logging system we use (Vivantio).

This was surprisingly fiddly to achieve with the tools available to me, so I thought I’d write it up for future reference.

You can get at the users via the built-in Active Directory Users & Computers tool (dsa.msc). This will already be on Windows servers running Active Directory (domain controllers for example), but can also be installed onto a workstation via Remote Server Admin Tools (RSAT).

Using Active Directory Users & Computers, select a container in the AD tree. If your users are in multiple containers (multiple organisational units (OUs) for example) then you’ll need to repeat this for each as I didn’t find a way to list several at once.

To get the department field showing (there are lots of others available too, but not everything) use View > Add/Remove Columns.

To remove groups and disabled users (for example) use View > Filter Options. The tool will help you to build simple queries, but to screen out the disabled users we need to use an LDAP query, so choose Create Custom Filter > Customized > Advanced and enter the following:

(&(objectCategory=User)(!userAccountControl:1.2.840.113556.1.4.803:=2))
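The :1.2.840.113556.1.4.803: part is LDAP’s bitwise-AND matching rule: userAccountControl is a bit field, and bit 0x2 (ACCOUNTDISABLE) marks a disabled account, so the ! clause keeps only accounts where that bit is clear. The same test in Python, for illustration:

```python
# userAccountControl flag values (from the AD documentation)
ACCOUNTDISABLE = 0x2    # bit set => the account is disabled
NORMAL_ACCOUNT = 0x200  # typical flag for an ordinary user account

def is_enabled_user(user_account_control: int) -> bool:
    """Mirror of the LDAP filter clause: keep accounts where the
    ACCOUNTDISABLE bit is NOT set (the ! around the bitwise rule)."""
    return (user_account_control & ACCOUNTDISABLE) == 0

print(is_enabled_user(NORMAL_ACCOUNT))                   # True
print(is_enabled_user(NORMAL_ACCOUNT | ACCOUNTDISABLE))  # False
```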

You can then export the users to file by right-clicking on the selected container and choosing Export List. I changed the file format to Unicode Text (Tab Delimited). I tried Comma Delimited first, but some of our departments contain commas which messed up the import into Excel.

I opened the file with Excel (via File > Open) which walked me through the import process.

I inserted a Pivot Table and dragged Department into the Row Labels box and Name into the Values box (this was automatically converted to “Count of Name” by Excel). Excel lists all the departments and the count of users in that department. Double-clicking a department name will ask what to display next (or just drag another row name into the Row Labels box). I chose Display Name, so expanding a department lists the names of the people in it.

From this it was easy to spot users that had incorrect or unusual department values.
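For the record, the same count-per-department step can be done in a few lines of Python on the tab-delimited export, in case Excel isn’t to hand. A sketch with made-up sample data standing in for the real export (note the tab delimiter, for the same comma-in-department reason as above):

```python
import csv
import io
from collections import Counter

# Tiny stand-in for the Export List output; names and departments
# are invented for illustration. A real run would open the exported
# file instead of this StringIO.
export = io.StringIO(
    "Name\tDepartment\n"
    "Alice\tFinance, Payroll\n"
    "Bob\tIT Services\n"
    "Carol\tIT Services\n"
)

reader = csv.DictReader(export, delimiter="\t")
per_department = Counter(row["Department"] for row in reader)

# Equivalent of the pivot table's "Count of Name" per department
for department, count in per_department.most_common():
    print(f"{department}: {count}")
# IT Services: 2
# Finance, Payroll: 1
```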

Thursday, July 12, 2012

MSDN & Campus Agreement

TL;DR – you can get the top-end MSDN license, which permits use of pretty much any Microsoft product for software development, for ~£350 per year under the Campus Agreement.

I’ve been in my current role for about a year. An on-going issue is that we don’t have much of a test environment, which means every change is in some sense a “live” change and therefore a risk. Even well understood changes carry some risk, but software development (and perhaps learning in general) is a destructive thing – I need to try things to see what happens and some of what happens might be “breakages”.

The way to handle this is to have another environment that isn’t live and which is OK to smash up and rebuild as required. This seems to be generally understood and agreed, but no such environment has been forthcoming.

So, plan B then: build my own. My boss has given me the go ahead to get a good workstation (more on that later). Actually, I’ve already got a good laptop, but Visual Studio 2010 (with Resharper) just grinds along slowly and VMWare Workstation 7 isn’t exactly fast on it either. The CPU & memory seem ok, but the laptop has picked up anti-virus, SCCM and Zenworks clients and I suspect these all drain precious resources.

On the hardware I will need software. I have previously had an MSDN subscription and am generally aware that this is the right thing to have for software development on the Microsoft platform. It comes in a variety of flavours, but the top-end one (Visual Studio Ultimate 2010 with MSDN) retails for ~£7500 for the first year and £2500 a year thereafter. For this you get access to all the development tools and all the server stuff so you can write and test pretty much anything you want. But that’s still a fair size chunk of money.

I had previously learned that we get a good discount on Microsoft licenses via our Campus Agreement, so I sent an email off to our supplier – Pugh Computers.

I was told that it would cost a bit less than £350. That’s quite a saving.

That’s per developer, per year. Because the Campus Agreement is a yearly subscription we don’t get the perpetual usage rights that the retail MSDN gives you, but that’s not really much of a loss unless you want your development environment to go stale.
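The arithmetic behind that, using the rough figures quoted above (retail ~£7500 for year one, ~£2500 per renewal, versus ~£350 per year under the Campus Agreement):

```python
def retail_cost(years: int) -> int:
    """Retail MSDN: ~7500 GBP in year one, ~2500 GBP per renewal year.
    Figures are the rough ones quoted in the post."""
    return 7500 + 2500 * (years - 1) if years > 0 else 0

def campus_cost(years: int) -> int:
    """Campus Agreement subscription: ~350 GBP per developer, per year."""
    return 350 * years

for years in (1, 3, 5):
    print(f"{years} yr: retail ~\u00a3{retail_cost(years)}, "
          f"campus ~\u00a3{campus_cost(years)}")
```

Even over five years the subscription route is an order of magnitude cheaper, which more than offsets losing the perpetual usage rights.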

This was duly ordered and the subscription delivered via our Campus Agreement logon. It was then assigned to me by email and I now have access to the MSDN downloads & key generation pages.

UPDATE 1: When we assigned the MSDN subscription from the Campus Agreement website we ticked an optional box marked “media” or something. Today, a small pouch was delivered containing DVDs with all the latest Microsoft products, including Windows 2008 R2 SP1, Windows 7 Ultimate, Office 2010, Visual Studio 2010 and SharePoint 2010. Sweet – saves me downloading them and burning them to disc myself.

Thursday, July 5, 2012

Home Routers

I’ve had home wifi routers from both Linksys (later bought out by Cisco) and Netgear and been underwhelmed by both. Poor wifi range, dropped connections, etc.

Recently Cisco attempted an evil change of terms & conditions on people who already own their routers, so they might be a firm to avoid if at all possible.

I’ve not owned a D-Link router. They may be OK I suppose.

However, Jeff Atwood recommends using open firmware, such as DD-WRT, OpenWRT & Tomato, because it’s better quality and can be fixed & upgraded. The off-the-shelf firmware can, in theory, be fixed and upgraded too, but why would Cisco, etc, spend time making the thing you already bought better when you could be out buying a more up-to-date version instead?

I’m especially interested in QoS, so I can prioritise Xbox gaming traffic, for example. I’d also like to explore the option of a private VPN. Some decent traffic logging would also be useful.

So, I guess I’d better see if my current routers will work with one of these.