Archive | February, 2012

List of SQL Server instances and versions on Cluster Nodes

With a large Windows cluster hosting many SQL Server instances, it can get tricky to quickly check which instances are on which nodes, and what patch level they’re running at. It’s also a good idea to check what revision level the instances think they’re on after a patching session.

In the grand tradition of lazy (read: efficient) DBAs, I don’t want to be manually checking each node and instance for this information. So I use the following PowerShell script:

Import-Module FailoverClusters

$Cluster = $args[0]
$clustNodes = Get-ClusterNode -Cluster $Cluster

foreach ($node in $clustNodes) {
    Write-Output "$node`n------------------"
    # Open the remote registry on each node and enumerate the installed instances
    $reg = [Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey('LocalMachine', $node.Name)
    $regKey = $reg.OpenSubKey("SOFTWARE\Microsoft\Microsoft SQL Server\Instance Names\SQL")
    foreach ($val in ($regKey.GetValueNames() | Sort-Object)) {
        # Each value maps the instance name to its instance ID, which keys the
        # registry tree holding the current version
        $instanceId = $regKey.GetValue($val)
        $regKey2 = $reg.OpenSubKey("SOFTWARE\Microsoft\Microsoft SQL Server\$instanceId\MSSQLServer\CurrentVersion")
        Write-Output "$val `t`t $($regKey2.GetValue('CurrentVersion'))"
    }
    Write-Output "`n`n"
}

This can then be run as:

.\clust-version.ps1 my-clust

The returned results are in plain text, and need pairing up by hand. But unless you have a *lot* of instances, that’s the easy bit. (If you’d rather work with objects, see the variant after the sample output below.)

sql-01
------------------
INSTANCE1		10.50.1600.1
INSTANCE2		10.0.2531.0
INSTANCE3		10.0.2531.0
INSTANCE4		10.50.1600.1

sql-02
------------------
INSTANCE1		10.50.1600.1
INSTANCE2		10.0.2531.0
INSTANCE3		10.0.2531.0
INSTANCE4		10.50.1600.1

sql-03
------------------
INSTANCE1		10.50.1600.1
INSTANCE2		10.0.2531.0
INSTANCE3		10.0.2531.0
INSTANCE4		10.50.1600.1
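
And if you’d rather have objects than text to pair up, a variant along these lines (an untested sketch, reading the same registry keys) emits one record per node/instance pair, ready for Sort-Object, Where-Object or Export-Csv:

Import-Module FailoverClusters

$Cluster = $args[0]

foreach ($node in (Get-ClusterNode -Cluster $Cluster)) {
    $reg = [Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey('LocalMachine', $node.Name)
    $regKey = $reg.OpenSubKey("SOFTWARE\Microsoft\Microsoft SQL Server\Instance Names\SQL")
    foreach ($val in ($regKey.GetValueNames() | Sort-Object)) {
        # Follow the instance name -> instance ID mapping to the version key
        $verKey = $reg.OpenSubKey("SOFTWARE\Microsoft\Microsoft SQL Server\$($regKey.GetValue($val))\MSSQLServer\CurrentVersion")
        # One object per node/instance pair, rather than raw text
        New-Object PSObject -Property @{
            Node     = $node.Name
            Instance = $val
            Version  = $verKey.GetValue('CurrentVersion')
        }
    }
}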

CRM Dynamics 2011 on an Amazon EC2 instance

It’s always handy to have a well-sized Dynamics CRM install to hand that’s quick to reset, and cheap to take with you for demos. By far the cheapest and easiest way I’ve found of doing this is using Amazon’s EC2 system. I only have to pay for it when I use it, which at time of writing is about …


A transaction is a transaction is a transaction

Or at least as far as SQL Server’s transaction logs are concerned.

Had to fettle a SQL Server database that was, as is unfortunately common, running out of space on a regular basis. And, as is depressingly common, it was running out of space because of an ever-growing transaction log. The workaround had been to take an ad-hoc transaction log backup and then shrink the file, then repeat the exercise the next time it fell in a heap.
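
For the record, the workaround amounted to something like this (a sketch, assuming the SQL Server cmdlets are available; the server, database, and file names here are invented):

# The anti-pattern: take an ad-hoc log backup, then shrink the log file.
# This buys space back until the log fills again; it fixes nothing.
Invoke-Sqlcmd -ServerInstance 'MYSERVER\INSTANCE1' -Query @"
BACKUP LOG MyBusyDB TO DISK = N'E:\Backups\MyBusyDB_adhoc.trn';
DBCC SHRINKFILE (MyBusyDB_log, 1024);
"@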

But what about transaction log backups, I hear you cry? Yep, they were being done. But only every 6 hours. Yes, just 4 transaction log backups a day for a database that was generating enough transaction log to fill a decent-sized filesystem in less than 6 hours.

So I volunteered the following non-controversial solution:

  • Take more frequent transaction log backups (every ~10 minutes in this case) – a sketch of one follows the list
  • Size the transaction log to match that backup frequency, and leave it at a fixed size (~1GB in this case)
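
For the curious, here’s a minimal sketch of one of those frequent log backups, using SMO from PowerShell (the instance, database, and share names are invented; it assumes the SMO assemblies are installed). Schedule it every 10 minutes via SQL Agent or Task Scheduler:

# Load the SMO assemblies (SmoExtended holds the Backup class)
[void][System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.Smo')
[void][System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SmoExtended')

$server = New-Object Microsoft.SqlServer.Management.Smo.Server 'MYSERVER\INSTANCE1'
$backup = New-Object Microsoft.SqlServer.Management.Smo.Backup
$backup.Action   = [Microsoft.SqlServer.Management.Smo.BackupActionType]::Log
$backup.Database = 'MyBusyDB'
# Timestamped file on the (invented) secondary backup share
$stamp = Get-Date -Format 'yyyyMMdd_HHmmss'
$backup.Devices.AddDevice("\\backupshare\logs\MyBusyDB_$stamp.trn", 'File')
$backup.SqlBackup($server)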

That proposal cued the following discourse:

“But we can’t do that because we’ll fill up our transaction log backup area”

“Do you do that at the moment?”

“No, but if we back up 15GB every 20 minutes we will”

“But you won’t, as there won’t be 15GB of transactions in 20 minutes”

“But our transaction log backups are 15GB each”

Penny drops on my side, and leads to the main point of this post.

If your system is performing x transactions per day, then that’s the number of transactions you do during the day. Exactly that number, no matter what you do. And if you want to be able to restore to a point in time, then that’s the number of transactions you need to back up every day.

Sounds sensible, doesn’t it? But how many transaction log problems are caused because the owner doesn’t follow that through logically?

Let’s say you’re doing 30,000,000 transactions a day, and each one of those writes a certain amount of data to the transaction log, because that’s what SQL Server needs to record to make the transaction durable. Let’s say that amount is 8KB, so your database is writing out ~230GB of transaction log per day.

So, assuming that those transactions come in nice and evenly throughout the day (oh, if only they would), then depending on how often you back up and truncate the transaction logs, you’ll need the following space to hold the transaction log:

  • once per day – 230GB
  • twice per day – 115GB
  • every six hours – 57GB
  • every 4 hours – 39GB
  • every 2 hours – 19GB
  • every hour – 10GB
  • every 30 minutes – 5GB
  • every 15 minutes – 2.5GB
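
Those numbers fall straight out of dividing the daily log volume by the number of backups per day; a quick sanity check (the list above rounds to friendlier figures):

# Space needed per backup interval = daily log volume / backups per day
$dailyLogGB = 230
foreach ($minutes in 1440, 720, 360, 240, 120, 60, 30, 15) {
    $perBackup = [math]::Round($dailyLogGB / (1440 / $minutes), 1)
    Write-Output ("Every {0,4} minutes : ~{1}GB" -f $minutes, $perBackup)
}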

OK, you get the idea. But the main thing to remember is that you’re still backing up exactly the same volume of transaction log data, just in smaller chunks, more frequently. So you need less space on your fast DB disks, and if you’re moving your transaction log backups to a safe secondary location, you’ve less critical business data sat waiting to be lost when the server dies.

If you’re generating 230GB of transaction logs a day, have 45GB of space for them, and expect to back up and truncate the logs only 4 times a day, you’re going to end up with a mighty big mess on your hands the 5 times a day the database falls over. But you could fix it at a stroke by backing them up 8 times a day.

And remember, if you think you’re seeing too much transaction log being generated, then you’ll need to investigate why it’s being generated in the first place.
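
A starting point for that investigation (again assuming the SQL Server cmdlets are available; the instance name is invented) is simply to watch how full each log is and how fast it grows:

# Log size and percentage used for every database on the instance
Invoke-Sqlcmd -ServerInstance 'MYSERVER\INSTANCE1' -Query 'DBCC SQLPERF(LOGSPACE);'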
