My backup scripts on my test SharePoint 2010 box have been running but not producing backups for the last few days. I tried a manual backup via Central Admin and found that the backup estimate was more than 10GB, which is odd for a test server.
A bit of hunting with Sequoia View showed that SharePoint_Config.ldf was more than 10GB (the database data file itself was only 100MB).
It seems that the database was set to the Full Recovery model, which I’ve been bitten by before. In this mode SQL Server doesn’t truncate the transaction log after a full database backup; you have to back up the transaction log separately, and it’s that log backup that frees the space for reuse. If you do a lot of updates to the same data you can end up with huge log files (compared to the actual data file).
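If you want to see which recovery model each database is using without clicking through every one in Management Studio, a quick query like this does the job (just a check I run by hand, not part of my backup scripts):

    SELECT name, recovery_model_desc
    FROM sys.databases
    ORDER BY name;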
Thinking about why you might want this makes my head spin a bit. The idea, as far as I can tell, is point-in-time recovery: starting from the last full backup and replaying the chain of log backups, you can restore the database to any moment you like – you just need to get the logs off the server in case it goes bang. You still need a full backup as the starting point, though; the logs only let you roll forward from there. I’m sure there are lots of useful things you can do in this mode, but I don’t know of any so I don’t “need” it at the moment.
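For what it’s worth, a point-in-time restore looks roughly like this (the file names and the timestamp are made up for illustration):

    -- restore the last full backup, leaving the database ready to accept log restores
    RESTORE DATABASE [SharePoint_Config]
        FROM DISK = N'D:\Backups\SharePoint_Config_full.bak'
        WITH NORECOVERY;

    -- replay the log up to the chosen moment, then bring the database online
    RESTORE LOG [SharePoint_Config]
        FROM DISK = N'D:\Backups\SharePoint_Config_log.trn'
        WITH STOPAT = '2011-05-10 09:30:00', RECOVERY;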
In SQL Server 2008 R2’s Management Studio you can view the properties of a database and see how large the files are. You can also use Tasks > Shrink > Database and Tasks > Shrink > Files to see how much free space there is in each file (SQL keeps some free space and can auto-grow a file when it runs low, so you can strike a balance between “wasted” free space and disk fragmentation).
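The same numbers are available from T-SQL if you’d rather not click around (the /128 is because file sizes are reported in 8KB pages):

    USE [SharePoint_Config];

    -- size and used space of each file in the current database, in MB
    SELECT name,
           size / 128                            AS size_mb,
           FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb
    FROM sys.database_files;

    -- log size and percentage used for every database on the instance
    DBCC SQLPERF(LOGSPACE);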
The Shrink dialog told me that the log had 0% free space available. I tried a transaction log backup (Tasks > Back Up > Backup type > Transaction Log) but there wasn’t enough free disk space for it – it failed with operating system error 112 (the Windows “not enough space on the disk” error) rather than anything more descriptive. Oh well, I don’t need it as it’s a test box and I can stand to lose some data if I need to do a restore. My very simple PowerShell scripts get SharePoint to do a full weekly backup and daily differential backups, so I should be good if something goes wrong – I think I’m only exposed to up to a day’s loss (assuming I notice if the backup itself fails – there’s no error reporting in the script at the moment).
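For the record, the GUI backup is equivalent to something like this (the path is made up); in the Full model it’s this log backup that marks the log space as reusable:

    -- back up (and thereby truncate) the transaction log;
    -- needs enough free disk for the backup file, which is what bit me here
    BACKUP LOG [SharePoint_Config]
        TO DISK = N'D:\Backups\SharePoint_Config_log.trn';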
I’m mainly just playing with the SharePoint installation – I know there are third-party tools that will automate this for me, and I know that Data Protection Manager (or whatever it’s called now) can do automated SQL backups, for example. I’m just interested in what you can easily do out of the box.
Anyway, I set the SharePoint_Config database to the Simple Recovery model, and then using Tasks > Shrink > Files I could see that 99% of the log file was now free space. Shrinking the file recovered that space, freeing up 10GB.
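The T-SQL equivalent is roughly this (the logical log file name is a guess – check sys.database_files for the real one, and the 100 is just a target size in MB):

    ALTER DATABASE [SharePoint_Config] SET RECOVERY SIMPLE;

    USE [SharePoint_Config];
    -- shrink the log file back down to roughly 100MB
    DBCC SHRINKFILE (N'SharePoint_Config_log', 100);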
While I was there I set the model database to Simple Recovery too, so any future databases created on the box will “inherit” that. I also set all the databases that were in Full Recovery to Simple – it turned out to be roughly a 50-50 split. I’m not sure why some were already in Simple mode. Possibly SharePoint sets some non-critical databases to Simple by default and leaves the rest to “inherit” from the model database?
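Rather than changing each one by hand, you can get SQL to write the statements for you and run the ones you’re happy with (this assumes, like me, that you don’t actually need Full Recovery on any of them):

    ALTER DATABASE [model] SET RECOVERY SIMPLE;

    -- generate an ALTER statement for every remaining Full Recovery database, to review before running
    SELECT 'ALTER DATABASE [' + name + '] SET RECOVERY SIMPLE;'
    FROM sys.databases
    WHERE recovery_model_desc = 'FULL'
      AND name NOT IN ('master', 'msdb', 'tempdb');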
So a sort-of lesson learned is to watch out for Full Recovery model databases in future. If you genuinely need Full Recovery, you also need to schedule regular transaction log backups (and somewhere to put them), otherwise the log just keeps growing until the disk fills up.
In future SQL Server and SharePoint installations I will need to watch out for default options that I might have chosen badly. It seems to me that new databases should use Simple Recovery by default – Full Recovery feels more like an advanced option. Maybe this is down to the edition of SQL Server that was installed – or, since new databases inherit their settings from model, perhaps it really comes down to how the installer left model configured?