Leaf Node blog

SQL Server training opportunities, January and February 2014

Now that we’re all recovering from Christmas and the New Year is just around the corner, what better time to start thinking about some SQL Server training and networking? 2014 is starting off with some free training opportunities easily accessible from Nottingham and the East Midlands:

  • 10th January 2014 – RedGate SQL in the City – Cambridge
    • RedGate are holding another of their SQL in the City events, this time at their Cambridge HQ. Some great speakers, and they’re even offering post talk beer and soft drinks!
  • 23rd January 2014 – SQL Server Midlands User Group
    • No speakers announced yet, but Alex always picks two great ones. The venue is easily accessible from New Street station, so it’s simple to get there and back on the train, and the start and end times make it doable on a cheap day return ticket from Nottingham.
  • 19th February 2014 – SQL Server Leicester User Group
    • Back at the Leicester YMCA for another evening of SQL Server speakers and great samosas. The speaker announced so far is Craig Ottley-Thistlethwaite, talking on integrating SQL Server Data Tools into a Continuous Integration environment. Another speaker will be announced soon.


Also very much worth putting in your calendar for further into the year is SQL Saturday Exeter on 22nd March. Details of the exact lineup haven’t been released yet, but just looking at the submitted sessions shows that there’s going to be some cracking free training on offer! And the paid full day sessions on the Friday look exceedingly good value for money, so why not make a weekend of it?

We are also looking at offering some of our training courses as open public sessions during 2014. More details will follow.

10 days of Free SQL Server Training – SQL Relay 2013R2

For anyone looking for some free training from some of the best SQL Server trainers in the world, this November you have a real treat coming up. SQL Relay is back in a 2013R2 version. For anyone who’s not been to a SQL Relay event before, each one is a full day of SQL Server training sessions covering a variety of topics, including DBA work, BI, performance tuning and career development. And all of this is free, and includes lunch.

This edition is covering the country with 10 dates:

 

Location Date
Reading Monday 11th Nov 2013
Southampton Tuesday 12th Nov 2013
Cardiff Wednesday 13th Nov 2013
Birmingham Thursday 14th Nov 2013
Hemel Hempstead Friday 15th Nov 2013
Newcastle Monday 25th Nov 2013
Manchester Tuesday 26th Nov 2013
Norwich Wednesday 27th Nov 2013
Bristol Thursday 28th Nov 2013
London Friday 29th Nov 2013

For any SQL Server users based in Nottingham or the East Midlands, the events in Birmingham on the 14th November, Manchester on the 26th November or Norwich on the 27th November are very easily reached via cheap day tickets from Nottingham on the train.

We’re very proud to announce that our Stuart will be speaking at the Birmingham and Norwich events, talking about using PowerShell to automate your SQL Server backups and, more importantly, to automate restoring those backups so you can sleep easy knowing everything is in order.

Disaster Recovery Basics: RPO – Recovery Point Objective

When writing a Business Continuity plan, one of the first metrics you need to nail down is your Recovery Point Objective, or RPO. This seemingly simple metric can have a major influence on how you need to design your application infrastructure.

For disaster recovery, your RPO is the point before the disaster that you want to be able to recover to. Another way of looking at it is as the maximum amount of data you are willing to lose in a disaster.

So an RPO of 15 minutes would mean that in the event of a major incident, you are willing to accept losing up to 15 minutes’ worth of data. Depending on when the disaster occurs, you may recover to within 4 minutes or lose the full 15 minutes.

Surely every company wants a recovery point objective of 0, don’t they? Well yes, that would be the golden RPO. But achieving that is a huge task that will have massive implications for the architecture and infrastructure behind the application. And an RPO that fits one system may not fit another if they vary massively in workload or size.

Taking our example 15 minute RPO above for a SQL Server based application, this could be achieved with the following (there’s a minimal example after the list):

  • Take Transaction Log backups every 15 minutes
    • We can now recover to within 15 minutes.
  • Backups done to a remote server
    • We ensure that we have the backups if we lose the entire SQL Server box
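To make that concrete, here’s a minimal sketch of the kind of log backup you’d schedule every 15 minutes. The database name MyApp and the UNC path are hypothetical, it assumes the FULL recovery model, and in practice you’d wrap it in a SQL Agent job or maintenance plan:

-- Hypothetical database name and backup share; assumes FULL recovery model
DECLARE @file nvarchar(260) =
      N'\\backupserver\sqlbackups\MyApp\MyApp_'
    + CONVERT(nvarchar(8), GETDATE(), 112) + N'_'
    + REPLACE(CONVERT(nvarchar(8), GETDATE(), 108), N':', N'') + N'.trn';

-- Each run captures everything since the previous log backup,
-- so the most you can lose is one interval's worth of work
BACKUP LOG MyApp TO DISK = @file WITH CHECKSUM, INIT;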

 

So far so simple. Now, what if the business wants an RPO of zero (i.e. no loss of any committed data)? Is that doable? Yes, but it’s going to cost you:

  • 2 SQL Server instances running in Synchronous Mirroring mode (High-Safety) (see the sketch after this list)
    • So that’s a doubling of physical hardware costs, and a near doubling of licensing costs as well.
    • Plus your system is now more complicated to administer (compared to a single box).
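For reference, the core of a high-safety mirroring setup is only a handful of statements once the mirror database has been restored WITH NORECOVERY. This is just a hedged sketch with made-up server names and ports, not a full walkthrough; the endpoints need appropriate permissions, and you’d normally add a witness if you want automatic failover:

-- On each server: create a mirroring endpoint (name and port are examples only)
CREATE ENDPOINT Mirroring
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (ROLE = PARTNER);
GO

-- On the mirror server (database restored WITH NORECOVERY first)
ALTER DATABASE MyApp SET PARTNER = 'TCP://principal.example.local:5022';

-- On the principal server
ALTER DATABASE MyApp SET PARTNER = 'TCP://mirror.example.local:5022';

-- High-safety (synchronous) mode, which is what gets you towards an RPO of zero
ALTER DATABASE MyApp SET PARTNER SAFETY FULL;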

And that’s assuming the disk systems at either end, and the network connection between your servers, can keep up with your transaction throughput. If they can’t, then you will be looking at beefing them up as well, with all the associated costs that go along with that.

You can now see that the RPO you set can have a huge impact on the costs and complexity of designing a new solution. And that forcing an unrealistic recovery point objective onto an existing solution can lead to a possibly painful restructuring and redesign. This is yet another reason why a Disaster Recovery plan should be written during the design phase of a new project rather than as an afterthought.

Database slow to open after Instance restart or failover due to too many VLFs in transaction log

An occasional client hosts a pretty busy 1/2 TB Dynamics CRM database on a SQL Server cluster. The database has been in place for a while, and everyone’s happy with the performance, so on the surface everything seems nice and happy. And, by the way, they don’t have a full-time DBA.

Due to a hardware failure they had an instance failover on the cluster. All the databases in the instance popped back up nice and quickly, but company_mscrm was stuck IN RECOVERY. As this is the core database for Dynamics CRM, their system was down. Thinking it was a size problem, they decided to leave it a bit longer. After 10 minutes they decided it was ‘stuck’ and restarted the instance. Same problem.

Now people were getting worried, so they started a restore onto a secondary server, and called us.

On connecting to their ailing server a quick sp_who2 showed a number of processes executing command DB STARTUP, so something was still operating on the database.

Using the scripts provided from Microsoft SQL Server Customer Support – http://blogs.msdn.com/b/psssql/archive/2010/12/29/tracking-database-recovery-progress-using-information-from-dmv.aspx – I started collecting execution stats so we could see what the processes were doing (or not doing as the case might be). Btw, I tend to adjust the capture interval to 5 seconds to start with, so I can get a feel without swamping myself with data.
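As an aside, if you just want a quick one-off look before (or instead of) setting up the full capture, the same counters are visible directly in sys.dm_exec_requests, which is what those scripts sample on an interval. A minimal sketch:

-- One-off snapshot of the recovery threads (the capture scripts log this on a timer)
-- percent_complete may or may not be populated depending on the recovery phase
SELECT session_id, command, wait_type, total_elapsed_time,
       reads, writes, cpu_time, percent_complete
FROM sys.dm_exec_requests
WHERE command = N'DB STARTUP';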

From the tbl_recovery_tracking table:

SELECT total_elapsed_time, wait_type, reads, writes, cpu_time
FROM [dbo].[tbl_recovery_tracking]
WHERE session_id = 23

 

total_elapsed_time wait_type reads writes cpu_time
762441 IO_COMPLETION 109966 0 2418
767445 IO_COMPLETION 110811 0 2418
772448 IO_COMPLETION 111726 0 2449
777451 IO_COMPLETION 112753 0 2480
782453 IO_COMPLETION 113625 0 2480
787457 IO_COMPLETION 114565 0 2496
792460 IO_COMPLETION 115527 0 2527
797462 IO_COMPLETION 116303 0 2574
802465 IO_COMPLETION 117106 0 2589
807469 IO_COMPLETION 117880 0 2589
812471 IO_COMPLETION 118499 0 2589

So, much as you’d expect, we see lots and lots of reads. Then all of a sudden the reads stop and we start seeing a bigger increase in cpu_time:

total_elapsed_time wait_type reads writes cpu_time
2103341 IO_COMPLETION 317957 0 7129
2108344 IO_COMPLETION 318662 0 7144
2113346 IO_COMPLETION 319523 0 7207
2118350 NULL 320016 0 9672
2123353 NULL 320016 0 14523
2128355 NULL 320016 0 19390
2133359 NULL 320016 0 24398

So, nearly there you might think? Not a bit of it. This carried on for a long time; in fact, twice as long as it had taken to complete the reads:

total_elapsed_time wait_type reads writes cpu_time
6787241 NULL 320016 0 4456636
6792244 NULL 320016 0 4461316
6797247 NULL 320016 0 4465856
6802251 NULL 320016 0 4470505
6807262 NULL 320016 0 4475341
6812265 NULL 320016 0 4480177
6817267 NULL 320016 0 4484950

So, bearing in mind that total_elapsed_time is measured in milliseconds, the database recovery consisted of 35 minutes of reading data and then 78 minutes of processing. Which is a lot of time, when the recovery stats in the error log are this small:

LogDate ProcessInfo Text
18/04/2013 08:50 spid23s 16 transactions rolled forward in database ‘company_mscrm’ (8). This is an informational message only. No user action is required.
18/04/2013 08:50 spid13s 0 transactions rolled back in database ‘company_mscrm’ (8). This is an informational message only. No user action is required.
18/04/2013 08:50 spid13s Recovery is writing a checkpoint in database ‘company_mscrm’ (8). This is an informational message only. No user action is required.
18/04/2013 08:50 spid13s Recovery completed for database company_mscrm (database ID 8) in 2 second(s) (analysis 720 ms, redo 698 ms, undo 196 ms.) This is an informational message only. No user action is required.
18/04/2013 08:50 spid13s Recovery is complete. This is an informational message only. No user action is required.

And the restore on the secondary box? Hadn’t gotten anywhere.

This all pointed to a ridiculous number of VLFs in their transaction log. Lo and behold, a quick DBCC LOGINFO in the offending database revealed a shockingly high count of ~28,000 VLFs in a 70GB transaction log file. That’s a new record for me.

So to fix this we agreed that they’d shrink the log file and resize it to a more realistic fixed size in one go, lay off shrinking it in future, and set autogrowth to a sensible value for the few occasions it’s needed, by running:

USE company_mscrm
go
BACKUP LOG company_mscrm to DISK=N'\\backup\share\company_mscrm\backup.trn'
DBCC SHRINKFILE ('company_mscrm_log',0,TRUNCATEONLY)
DBCC SHRINKFILE ('company_mscrm_log',0)
ALTER DATABASE company_mscrm MODIFY FILE (NAME='company_mscrm_log',SIZE=70GB, FILEGROWTH=2GB)

First we back up the transaction log to ‘flush’ everything out. Then we shrink the log file as much as we can. Then we resize it to 70GB, which was the size we determined would cope with pretty much all of their usage. And then we set a sensible autogrowth size for the few occasions when the transaction log file isn’t big enough.

A simple solution to what was quite a major problem. The lesson to take away is that just because your production database appears to be running perfectly there can still be issues under the surface just waiting to catch you out. There are a number of tools and scripts available that allow you to monitor your log files and their VLF counts.
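If you want a quick number rather than eyeballing the raw DBCC LOGINFO output, something like this sketch works; just note the extra RecoveryUnitId column only exists from SQL Server 2012 onwards, so trim the temp table for older versions:

-- Count the VLFs in the current database (column layout here is 2012+)
CREATE TABLE #loginfo
(
    RecoveryUnitId int,     -- remove this column on 2008 R2 and earlier
    FileId         int,
    FileSize       bigint,
    StartOffset    bigint,
    FSeqNo         bigint,
    Status         int,
    Parity         int,
    CreateLSN      numeric(38)
);

INSERT INTO #loginfo EXEC ('DBCC LOGINFO');

SELECT COUNT(*) AS vlf_count FROM #loginfo;

DROP TABLE #loginfo;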

If you need any help setting them up, or require advice on sizing your transaction logs, then Leaf Node are happy to help, so please get in touch.

Hunting for select * with Extended Events

After a question from Alex Whittles (t|w) during my Extended Events session at the Leicester SQL Server User Group, about monitoring for rogue SELECT * FROM queries sneaking into production, I decided to have a look.

The trick comes in using the special predicate comparators provided in Extended Events. I’ve not really come across much need for these before, as I can normally get by with the usual =, <, > and so on. Like everything else in Extended Events, they can be found in the sys.dm_xe_objects management view:

select xp.name, xo.name, xo.description, xo.type_name, xo.type_size from
sys.dm_xe_objects xo
inner join sys.dm_xe_packages xp on xo.package_guid=xp.guid
where object_type like 'pred_compare'

which gives these (abbreviated) results:

name name description type_name type_size
package0 greater_than_max_uint64 Greater than the maximum 64-bit unsigned int value previously seen uint64 8
package0 less_than_min_uint64 Less than the minimum 64-bit unsigned int value previously seen uint64 8
package0 equal_int64 Equality operator between two 64-bit signed int values int64 8
package0 not_equal_int64 Inequality operator between two 64-bit signed int values int64 8
package0 less_than_min_float64 Less than the minimum 64-bit double value previously seen float64 8
package0 equal_i_ansi_string Equality operator between two ANSI string values ansi_string 0
package0 not_equal_i_ansi_string Inequality operator between two ANSI string values ansi_string 0
package0 less_than_i_ansi_string Less than operator between two ANSI string values ansi_string 0
package0 less_than_equal_i_ansi_string Less than or Equal operator between two ANSI string values ansi_string 0
package0 greater_than_i_ansi_string Greater than operator between two ANSI string values ansi_string 0
package0 greater_than_equal_i_ansi_string Greater than or Equal operator between two ANSI string values ansi_string 0
package0 equal_i_unicode_string Equality operator between two UNICODE string values unicode_string 0
package0 not_equal_i_unicode_string Inequality operator between two UNICODE string values unicode_string 0
sqlserver like_i_sql_unicode_string LIKE operator between two SQL UNICODE string values unicode_string 0
sqlserver like_i_sql_ansi_string LIKE operator between two SQL ANSI string values ansi_string 0

The interesting ones for the purpose of Alex’s question are the last two, like_i_sql_unicode_string and like_i_sql_ansi_string. So from that it’s obvious that the Extended Events engine wants us to use a filter appropriate to the data type. A quick look into an Extended Events result set for sql_statement_completed gives this fragment for statement:

<data name="statement">
    <type name="unicode_string" package="package0" />
        <value>select * from msdb.dbo.backupmediafamily
        </value>
</data>

So we need the Unicode version, which means we’ll try:

/*
Note, there are no predicates to restrict which sessions, dbs or planets this is looking at. So be careful where you run it as you could get more data than you expected
*/

create event session SelectStarHunt on server
add event sqlserver.sql_statement_completed
(where (sqlserver.like_i_sql_unicode_string(statement,N'select *%')))
add target package0.ring_buffer;

alter event session SelectStarHunt on server state=start;
go

select * from sys.all_columns where 1=2
select object_id, name from sys.all_columns where 1=2

select cast(target_Data as XML) from sys.dm_xe_session_targets xt
inner join sys.dm_xe_sessions xs on xt.event_session_address=xs.address
where xs.name='SelectStarHunt'

declare @target_Data xml
select @target_data=cast(target_Data as XML) from sys.dm_xe_session_targets xt
inner join sys.dm_xe_sessions xs on xt.event_session_address=xs.address
where xs.name='SelectStarHunt'

select c.value('(data[@name="statement"]/value)[1]','varchar(max)') as result
from @target_Data.nodes('/RingBufferTarget/event') as T(c)

which gives us the single result:

result
select * from sys.all_columns where 1=2

So, yes, we can track every SELECT * FROM that appears.

If you were going to try this in anger, I’d recommend starting with a simple Counter target first to quickly see if you have a large or small number of potential captures, and then work on a filter set from there.
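As a rough, hedged sketch of what that might look like (the session name is made up, and note that on SQL Server 2008/2008 R2 the target is called synchronous_event_counter rather than event_counter):

-- Same predicate, but only counting hits rather than capturing statements
create event session SelectStarCount on server
add event sqlserver.sql_statement_completed
(where (sqlserver.like_i_sql_unicode_string(statement,N'select *%')))
add target package0.event_counter;
go

alter event session SelectStarCount on server state=start;
go

-- Have a look at the counter's target data to see how often it's firing
select cast(target_data as XML) from sys.dm_xe_session_targets xt
inner join sys.dm_xe_sessions xs on xt.event_session_address=xs.address
where xs.name='SelectStarCount';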

SQL Server Extended Events presentation from Leicester SSUG 20-02-2013

Here are the slides and demo scripts from my presentation at the Leicester SSUG meeting on 20/02/2013.

Slides on slideshare:

Powerpoint slides: Extended Events Presentation 20/02/2013

Demo Scripts: Extended Events Demo Scripts

If you’ve any questions, please get in touch.

What impact do SQL Server data and log file sizes have on backup size?

Many SQL Server DBAs seem to be very keen to shrink their database files at every opportunity. There seems to be a worry that any spare space inside a data file or transaction log is a problem. Is it?

One of the common misconceptions is that a large data file translates to a large backup file, so let’s try to put that myth to bed. We’ll run through a couple of scenarios and compare the backup sizes of differently sized databases.

These examples were all done on SQL Server 2012 Developer Edition, but will apply to any other version as well.

Also, for all these examples we have no transactions occurring during the backup. If the data in the data files changes during the backup, then SQL Server will include the relevant portions of the transaction log in the backup so that you get a consistent backup.

First we’ll create 2 databases of different sizes, and populate them with the same data:

use master;
go

CREATE DATABASE dbsmall
ON 
( NAME = small_dat,
    FILENAME = 'C:\DATA\smalldat.mdf',
    SIZE = 10,
    MAXSIZE = 50,
    FILEGROWTH = 5 )
LOG ON
( NAME = small_log,
    FILENAME = 'C:\DATA\smalllog.ldf',
    SIZE = 5MB,
    MAXSIZE = 25MB,
    FILEGROWTH = 5MB ) ;
GO


CREATE DATABASE dbbig
ON 
( NAME = big_dat,
    FILENAME = 'C:\DATA\bigdat.mdf',
    SIZE = 100,
    MAXSIZE = 150,
    FILEGROWTH = 5 )
LOG ON
( NAME = big_log,
    FILENAME = 'C:\DATA\biglog.ldf',
    SIZE = 5MB,
    MAXSIZE = 25MB,
    FILEGROWTH = 5MB ) ;
GO

use dbsmall;
go
create table tbl_junk
(junk varchar(max)
);
go
insert into tbl_junk (junk) select REPLICATE('Junk',1000);
go 50

use dbbig;
go
create table tbl_junk
(junk varchar(max)
);
go
insert into tbl_junk (junk) select REPLICATE('Junk',1000);
go 50

So dbsmall has a primary data file of 10MB and dbbig has a primary data file of 100MB, both contain a single table of 200KB.

So now let’s back them up to see if there’s any difference. And just to make sure we’ll try it with compression as well:

use master;
go

backup database dbsmall to disk=N'c:\backup\dbsmall_normal.bak';
go
backup database dbbig to disk=N'c:\backup\dbbig_normal.bak';
go
backup database dbsmall to disk=N'c:\backup\dbsmall_compressed.bak' with compression;
go
backup database dbbig to disk=N'c:\backup\dbbig_compressed.bak' with compression;
go

exec xp_cmdshell 'dir c:\backup\';

And the results from the xp_cmdshell are:

01/16/2013  04:49 PM           371,200 dbbig_compressed.bak
01/16/2013  04:49 PM         3,232,256 dbbig_normal.bak
01/16/2013  04:49 PM           371,200 dbsmall_compressed.bak
01/16/2013  04:49 PM         3,232,256 dbsmall_normal.bak
               4 File(s)      7,206,912 bytes
               2 Dir(s)  16,019,734,528 bytes free

So that shows that all the backups are the same size, which is to be expected: it’s well documented that SQL Server only backs up the pages in use within the data files, and as both databases contain exactly the same amount of data, the backups are the same size.
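If you want to see that split for yourself, comparing allocated space with pages actually in use is easy enough; a small sketch (size and SpaceUsed are both reported in 8KB pages, hence the divide by 128):

use dbbig;
go
-- Allocated file size vs pages actually in use, in MB
select name,
       size / 128                            as allocated_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128 as used_mb
from sys.database_files;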

So that’s the data files checked. But what about transaction logs, I hear you ask? Well, let’s have a look. In these examples we’re only looking at full database backups; transaction log backups are a different thing and I’ll be looking at those in another post.

So let’s increase the size of the transaction log for dbbig and see what that does for the backup size:

alter database dbbig modify file (Name=big_log, size=200MB);
go

backup database dbbig to disk=N'c:\backup\dbbig_log_normal.bak';
go

backup database dbbig to disk=N'c:\backup\dbbig_log_compressed.bak' with compression;
go

exec xp_cmdshell 'dir c:\backup';

And this time xp_cmdshell tells us:

01/16/2013  05:10 PM           375,296 dbbig_log_compressed.bak
01/16/2013  05:10 PM         3,240,448 dbbig_log_normal.bak
01/16/2013  04:49 PM           371,200 dbbig_compressed.bak
01/16/2013  04:49 PM         3,232,256 dbbig_normal.bak

Hang on, that’s 8KB more. Are we missing something here? No, not really. The 8KB is an artefact of the ALTER DATABASE statement; we get it whatever size we increase the transaction log to:

drop database dbbig;
go

restore database dbbig from disk=N'c:\backup\dbbig_normal.bak' with recovery;
go

alter database dbbig modify file (Name=big_log, size=100MB);
go

backup database dbbig to disk=N'c:\backup\dbbig_1log_normal.bak';
go

backup database dbbig to disk=N'c:\backup\dbbig_1log_compressed.bak' with compression;
go

drop database dbbig;
go

restore database dbbig from disk=N'c:\backup\dbbig_normal.bak' with recovery;
go

alter database dbbig modify file (Name=big_log, size=500MB);
go

backup database dbbig to disk=N'c:\backup\dbbig_5log_normal.bak';
go

backup database dbbig to disk=N'c:\backup\dbbig_5log_compressed.bak' with compression;
go

drop database dbbig;
go

restore database dbbig from disk=N'c:\backup\dbbig_normal.bak' with recovery;
go

alter database dbbig modify file (Name=big_log, size=1000MB);
go

backup database dbbig to disk=N'c:\backup\dbbig_10log_normal.bak';
go

backup database dbbig to disk=N'c:\backup\dbbig_10log_compressed.bak' with compression;
go

exec xp_cmdshell 'dir c:\backup';
go

Which gives:

01/16/2013  05:30 PM         3,240,448 dbbig_10log_normal.bak
01/16/2013  05:29 PM         3,240,448 dbbig_1log_normal.bak
01/16/2013  05:29 PM         3,240,448 dbbig_5log_normal.bak

So we take that small 8KB hit for any size of transaction log. Now, the eagle-eyed will have spotted that those are empty transaction logs, so what happens when we have a full transaction log?

We’ll reset dbbig, increase the transaction log and then run something to fill it up. But then we’ll empty the data file back to where it was:

use master;
go
drop database dbbig;
go

restore database dbbig from disk=N'c:\backup\dbbig_normal.bak' with recovery;
go

alter database dbbig modify file ( name = N'big_log', maxsize = 150MB , FILEGROWTH = 25MB );
go

create table dbbig.dbo.tbl_junk2
(junk varchar(max)
);
go

insert into dbbig.dbo.tbl_junk2 (junk) select REPLICATE('Junk',200) from sys.all_columns;
go 4
drop table dbbig.dbo.tbl_junk2;
go

backup database dbbig to disk=N'c:\backup\dbbig_full_log.bak';
go

backup database dbbig to disk=N'c:\backup\dbbig_compressed_full_log.bak' with compression;
go

exec xp_cmdshell 'dir c:\backup';

Which gives us:

01/16/2013  06:14 PM           375,808 dbbig_compressed_full_log.bak
01/16/2013  06:14 PM        29,585,920 dbbig_full_log.bak

Hang on a sec, that’s a much larger backup there. What’s going on?

Well, this is an example of the ghost cleanup process in action. The short version is that SQL Server doesn’t instantly deallocate the pages in a data file when you drop/truncate a table. It just marks the pages as deletable, and the ghost cleanup process actually clears them the next time it runs. (Lots of good detail from Paul Randal (who wrote the code) in this post: Inside the Storage Engine: Ghost cleanup in depth).

So if we rerun the above SQL with this appended:

waitfor delay '00:02:00';
go

backup database dbbig to disk=N'c:\backup\dbbig_full_logaq.bak';
go

exec xp_cmdshell 'dir c:\backup';

Then we get:

01/16/2013  06:14 PM           375,808 dbbig_compressed_full_log.bak
01/16/2013  06:14 PM        29,585,920 dbbig_full_log.bak
01/16/2013  06:16 PM         3,240,448 dbbig_full_logaq.bak

Which is the same size as we’ve been seeing in all the other examples. BTW, I give a very generous 2 minutes for ghost cleanup to kick in just to make sure; it’s usually much quicker than that, though the actual time will depend on how busy your system is.
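If you’d rather not guess at a delay, one option is to simply watch the used page count fall back as the cleanup work completes, re-running something like this every few seconds (a sketch, using the same pages-to-MB conversion as earlier):

use dbbig;
go
-- Re-run this until used_mb drops back to its pre-insert level
select name,
       FILEPROPERTY(name, 'SpaceUsed') / 128 as used_mb
from sys.database_files
where type_desc = 'ROWS';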

More free SQL Server training in 2013

As well as SQLBits coming to Nottingham in May, there’s plenty of other chances for some free SQL Server training this year:

I’m presenting on SQL Server Extended Events at the Leicester SQL Server User group on February 20th. The event is free, and it’s only £11.60 for an Anytime return from Nottingham. And Gavin lays on some very nice samosas for the half time snack. There may even be a swift half in the bar before getting the train back up to Nottingham.

Microsoft are lining up Extended Events as being a major part of a DBA’s performance and problem tracking toolkit, so if you’re not already using them this could be a good chance to see what they can offer you.

And then coming up in March is the UK’s 2nd SQL Saturday event. This time it’s down in Exeter: SQL Saturday #194 Exeter. The Friday is a pre-con day with a number of great trainers running day-long sessions, which unfortunately work scheduling means I can’t make. But Saturday is a free community day. The final list of speakers hasn’t been announced yet, but the list of submitted sessions is already looking very good. While it’s a fair trek from Nottingham to Exeter, it looks like it’ll be well worth it for anyone who wants some quality SQL Server training.

PowerShell to drop out create table scripts from a text file

So you’ve been handed a list of tables that need to be migrated from Dev to Staging ASAP. PowerShell to the rescue for a nice quick solution.

First up I’m assuming you’ve got a nice list of table names like:

vcstr
numbers

Stored in a file somewhere (assuming c:\tables.txt for this example). Now for the PowerShell script:

Import-Module "SQLPS" -DisableNameChecking

# Connect to the instance and database passed in as arguments
$sqlsrv = New-Object Microsoft.SqlServer.Management.Smo.Server($args[0])
$db = $sqlsrv.Databases.Item($args[1])

# Set up the scripter with the options we want in the generated scripts
$scrptr = New-Object Microsoft.SqlServer.Management.Smo.Scripter($sqlsrv)
$scrptr.Options.IncludeIfNotExists = $TRUE
$scrptr.Options.WithDependencies = $TRUE
$scrptr.Options.Indexes = $TRUE

# One table name per line in the input file
$tbllist = Get-Content c:\tables.txt

foreach ($tbl in $tbllist)
{
    $tbobj = New-Object Microsoft.SqlServer.Management.Smo.Table($db, $tbl)

    # Script out the table (and, thanks to the options above, its indexes)
    # and append it to the output file
    $sc = $scrptr.Script($tbobj.Urn)
    foreach ($st in $sc)
    {
        $st >> C:\scripts\create_tables.sql
    }
}

This is designed to be saved as tables.ps1 and called as follows:

 c:\scripts\tables.ps1 "spiggy\sql2012" "resourcedb"

And this will churn you out a nice set of create table statements, with the indexes as well:

SET ANSI_NULLS ON
SET QUOTED_IDENTIFIER ON
IF NOT EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[vcstr]') AND type in (N'U'))
BEGIN
CREATE TABLE [dbo].[vcstr](
	[int_id] [int] NULL,
	[int_1] [int] NULL,
	[int_2] [int] NULL,
	[txt] [varchar](250) COLLATE Latin1_General_CI_AS NULL
) ON [PRIMARY]
END
SET ANSI_PADDING ON

IF NOT EXISTS (SELECT * FROM sys.indexes WHERE object_id = OBJECT_ID(N'[dbo].[vcstr]') AND name = N'ix_vcstr')
CREATE CLUSTERED INDEX [ix_vcstr] ON [dbo].[vcstr]
(
	[int_1] ASC,
	[int_2] ASC,
	[txt] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
SET ANSI_NULLS ON
SET QUOTED_IDENTIFIER ON
IF NOT EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[numbers]') AND type in (N'U'))
BEGIN
CREATE TABLE [dbo].[numbers](
	[number] [int] NULL
) ON [PRIMARY]
END

The list of available $scrptr.Options is rather huge (see microsoft.sqlserver.management.smo.scriptingoptions), but it should offer enough scope to cover everything you might want to include in your scripts.

SQLBits coming to Nottingham

Excellent news for any SQL Server DBAs or Developers in Nottingham, SQLBits is coming to town in 2013. The East Midlands Conference Centre has been picked to host the event which is running 2nd-4th May 2013.

If you’ve been, you’ll know what to expect. If you haven’t, then you’ve a whole lot to look forward to. The usual format is whole-day deep dive sessions on the Thursday, then 1-hour conference sessions on the Friday and the Saturday. SQLBits really go out of their way to attract the cream of current SQL Server experts to provide these sessions, so it’s a great way to pick up some excellent knowledge.

Prices are to be announced shortly. But remember, the Saturday is community day, which means it’s FREE. Yes, that’s right FREE SQL Server training from some of the best in the world is coming to Nottingham, so make sure to keep an eye on the site for when booking opens.

If anyone’s coming in from out of town then feel free to ask about local accommodation, restaurants, pubs, running or cycle routes. I’m sure we can find something for everyone.

Hopefully see lots of you there.