Tag Archives | sql server

Database slow to open after Instance restart or failover due to too many VLFs in transaction log.

A client we work with occasionally hosts a pretty busy half-terabyte Dynamics CRM database on a SQL Server cluster. The database has been in place for a while, everyone's happy with the performance, and on the surface everything seems fine. They don't, by the way, have a full-time DBA.

Due to a hardware failure they had an instance failover on the cluster. All the databases in the instance popped back up nice and quickly, but company_mscrm was stuck IN RECOVERY. As this is the core database for Dynamics CRM, their system was down. Thinking it was just a matter of size, they decided to leave it a bit longer. After 10 minutes they decided it was 'stuck' and restarted the instance. Same problem.

Now people were getting worried, so they started a restore onto a secondary server, and called us.

On connecting to their ailing server, a quick sp_who2 showed a number of processes executing the command DB STARTUP, so something was still operating on the database.

Using the scripts provided by Microsoft SQL Server Customer Support – http://blogs.msdn.com/b/psssql/archive/2010/12/29/tracking-database-recovery-progress-using-information-from-dmv.aspx – I started collecting execution stats so we could see what the processes were doing (or not doing, as the case might be). By the way, I tend to adjust the capture interval to 5 seconds to start with, so I can get a feel for what's happening without swamping myself with data.
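As a rough sketch of the idea (the linked scripts do this properly, with their own capture tables and summary views), the capture loop amounts to polling sys.dm_exec_requests for sessions running crash recovery. The table name and column list here are illustrative, not the exact ones from the linked scripts:

```sql
-- Sketch only: snapshot the stats of any session running DB STARTUP
-- into a capture table, once per interval.
WHILE 1 = 1
BEGIN
    INSERT INTO dbo.tbl_recovery_tracking   -- hypothetical capture table
        (capture_time, session_id, total_elapsed_time, wait_type, reads, writes, cpu_time)
    SELECT SYSDATETIME(), session_id, total_elapsed_time, wait_type, reads, writes, cpu_time
    FROM sys.dm_exec_requests
    WHERE command = 'DB STARTUP';

    WAITFOR DELAY '00:00:05';   -- the 5 second capture interval mentioned above
END;
```

Querying the capture table afterwards then gives you the per-interval progress shown below.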

From the tbl_recovery_tracking table:

SELECT total_elapsed_time, wait_type, reads, writes, cpu_time
FROM dbo.tbl_recovery_tracking
WHERE session_id = 23

total_elapsed_time wait_type reads writes cpu_time
762441 IO_COMPLETION 109966 0 2418
767445 IO_COMPLETION 110811 0 2418
772448 IO_COMPLETION 111726 0 2449
777451 IO_COMPLETION 112753 0 2480
782453 IO_COMPLETION 113625 0 2480
787457 IO_COMPLETION 114565 0 2496
792460 IO_COMPLETION 115527 0 2527
797462 IO_COMPLETION 116303 0 2574
802465 IO_COMPLETION 117106 0 2589
807469 IO_COMPLETION 117880 0 2589
812471 IO_COMPLETION 118499 0 2589

So, much as you'd expect, we see lots and lots of reads. Then all of a sudden the reads stop and we start seeing a bigger increase in cpu_time:

total_elapsed_time wait_type reads writes cpu_time
2103341 IO_COMPLETION 317957 0 7129
2108344 IO_COMPLETION 318662 0 7144
2113346 IO_COMPLETION 319523 0 7207
2118350 NULL 320016 0 9672
2123353 NULL 320016 0 14523
2128355 NULL 320016 0 19390
2133359 NULL 320016 0 24398

So nearly there you might think. Not a bit of it. This carried on for a long time. In fact twice as long as it had taken to complete the reads:

total_elapsed_time wait_type reads writes cpu_time
6787241 NULL 320016 0 4456636
6792244 NULL 320016 0 4461316
6797247 NULL 320016 0 4465856
6802251 NULL 320016 0 4470505
6807262 NULL 320016 0 4475341
6812265 NULL 320016 0 4480177
6817267 NULL 320016 0 4484950

So, bearing in mind that total_elapsed_time is measured in milliseconds, the database recovery consisted of 35 minutes of reading data and then 78 minutes of processing. Which is a lot of time when the recovery stats in the error log are this small:

LogDate ProcessInfo Text
18/04/2013 08:50 spid23s 16 transactions rolled forward in database ‘company_mscrm’ (8). This is an informational message only. No user action is required.
18/04/2013 08:50 spid13s 0 transactions rolled back in database ‘company_mscrm’ (8). This is an informational message only. No user action is required.
18/04/2013 08:50 spid13s Recovery is writing a checkpoint in database ‘company_mscrm’ (8). This is an informational message only. No user action is required.
18/04/2013 08:50 spid13s Recovery completed for database company_mscrm (database ID 8) in 2 second(s) (analysis 720 ms, redo 698 ms, undo 196 ms.) This is an informational message only. No user action is required.
18/04/2013 08:50 spid13s Recovery is complete. This is an informational message only. No user action is required.

And the restore on the secondary box? It hadn't got anywhere.

This all pointed to a ridiculous number of VLFs in their transaction log. Lo and behold, a quick DBCC LOGINFO in the offending database revealed a shockingly high count of ~28,000 VLFs in a 70GB transaction log file. That's a new record for me.
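If you want to check your own count, DBCC LOGINFO returns one row per VLF, so counting the rows gives you the figure. A minimal sketch (the column list below matches SQL Server 2012; earlier versions don't have the RecoveryUnitId column):

```sql
-- DBCC LOGINFO returns one row per VLF, so COUNT(*) is the VLF count
DECLARE @loginfo TABLE (
    RecoveryUnitId int, FileId int, FileSize bigint, StartOffset bigint,
    FSeqNo bigint, [Status] int, Parity tinyint, CreateLSN numeric(38,0));

INSERT INTO @loginfo EXEC ('DBCC LOGINFO');

SELECT COUNT(*) AS vlf_count FROM @loginfo;
```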

To fix this, we agreed that they'd shrink the log file and recreate it at a more realistic size, lay off shrinking it in future, and set autogrowth to a more sensible value for when it was needed, by running:

USE company_mscrm;
GO
-- Back up the log to empty out as much of it as possible
BACKUP LOG company_mscrm TO DISK = N'\\backup\share\company_mscrm\backup.trn';
GO
-- Shrink the log file right down
DBCC SHRINKFILE ('company_mscrm_log', 0, TRUNCATEONLY);
DBCC SHRINKFILE ('company_mscrm_log', 0);
GO
-- Recreate it at a sensible size with a sensible autogrowth increment
ALTER DATABASE company_mscrm MODIFY FILE (NAME = 'company_mscrm_log', SIZE = 70GB, FILEGROWTH = 2GB);
GO

First we back up the transaction log to 'flush' everything out. Then we shrink the log file as much as we can, and resize it to 70GB, which was the size we determined would cope with pretty much all of their usage. Finally we set a sensible autogrowth size for the few occasions when the transaction log file isn't big enough.

A simple solution to what was quite a major problem. The lesson to take away is that just because your production database appears to be running perfectly there can still be issues under the surface just waiting to catch you out. There are a number of tools and scripts available that allow you to monitor your log files and their VLF counts.
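One common approach is to sweep every database on the instance and flag any with a high VLF count. A sketch, using the undocumented but widely used sp_MSforeachdb procedure (the column list matches SQL Server 2012; drop RecoveryUnitId on older versions):

```sql
-- Sketch: collect VLF counts for every database on the instance
CREATE TABLE #loginfo (
    RecoveryUnitId int, FileId int, FileSize bigint, StartOffset bigint,
    FSeqNo bigint, [Status] int, Parity tinyint, CreateLSN numeric(38,0));
CREATE TABLE #vlfcounts (dbname sysname, vlf_count int);

EXEC sp_MSforeachdb N'
    USE [?];
    TRUNCATE TABLE #loginfo;
    INSERT INTO #loginfo EXEC (''DBCC LOGINFO'');
    INSERT INTO #vlfcounts SELECT DB_NAME(), COUNT(*) FROM #loginfo;';

-- Anything running into the thousands is worth a closer look
SELECT * FROM #vlfcounts ORDER BY vlf_count DESC;

DROP TABLE #loginfo;
DROP TABLE #vlfcounts;
```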

If you need any help setting them up, or require advice on sizing your transaction logs, then Leaf Node are happy to help, so please get in touch.

Stuart Moore

Nottingham based IT professional with over 15 years in the industry. Specialising in SQL Server, Infrastructure design, Disaster Recovery and Service Continuity management. Now happily probing the options the cloud provides.

When not in front of a computer, is most likely to be found on the saddle of his bike.

Hunting for select * with Extended Events

After a question from Alex Whittles (t|w) during my Extended Events session at the Leicester SQL Server User Group


SQL Server Extended Events presentation from Leicester SSUG 20-02-2013

Here’s the slides and demo scripts from my presentation at the Leicester SSUG meeting on 20/02/2013

Slides on slideshare:

Powerpoint slides: Extended Events Presentation 20/02/2013

Demo Scripts: Extended Events Demo Scripts

If you’ve any questions, please get in touch.


What impact do SQL Server data and log file sizes have on backup size?

Many SQL Server DBAs seem to be very keen to shrink their database files at every opportunity. There seems to be a worry that any spare space inside a data file or transaction log is a problem. Is it?

One of the common misconceptions is that a large data file translates to a large backup file, so now to try and lay that myth to rest. We'll run through a couple of scenarios and compare the backup sizes of different sized databases.

These examples were all done on SQL Server 2012 Developer Edition, but will apply to any other version as well.

Also, for all these examples we have no transactions occurring during the backup. If the data in the data files changes during the backup, then SQL Server will include the relevant portions of the transaction log in the backup so that you have a consistent backup.

First we’ll create 2 databases of different sizes, and populate them with the same data:

use master;
go

CREATE DATABASE dbsmall
ON 
( NAME = small_dat,
    FILENAME = 'C:\DATA\smalldat.mdf',
    SIZE = 10,
    MAXSIZE = 50,
    FILEGROWTH = 5 )
LOG ON
( NAME = small_log,
    FILENAME = 'C:\DATA\smalllog.ldf',
    SIZE = 5MB,
    MAXSIZE = 25MB,
    FILEGROWTH = 5MB ) ;
GO


CREATE DATABASE dbbig
ON 
( NAME = big_dat,
    FILENAME = 'C:\DATA\bigdat.mdf',
    SIZE = 100,
    MAXSIZE = 150,
    FILEGROWTH = 5 )
LOG ON
( NAME = big_log,
    FILENAME = 'C:\DATA\biglog.ldf',
    SIZE = 5MB,
    MAXSIZE = 25MB,
    FILEGROWTH = 5MB ) ;
GO

use dbsmall;
go
create table tbl_junk
(junk varchar(max)
);
go
insert into tbl_junk (junk) select REPLICATE('Junk',1000);
go 50

use dbbig;
go
create table tbl_junk
(junk varchar(max)
);
go
insert into tbl_junk (junk) select REPLICATE('Junk',1000);
go 50

So dbsmall has a primary data file of 10MB and dbbig has a primary data file of 100MB, both contain a single table of 200KB.

So now let’s back them up to see if there’s any difference. And just to make sure we’ll try it with compression as well:

use master;
go

backup database dbsmall to disk=N'c:\backup\dbsmall_normal.bak';
go
backup database dbbig to disk=N'c:\backup\dbbig_normal.bak';
go
backup database dbsmall to disk=N'c:\backup\dbsmall_compressed.bak' with compression;
go
backup database dbbig to disk=N'c:\backup\dbbig_compressed.bak' with compression;
go

exec xp_cmdshell 'dir c:\backup\';

And the results from the xp_cmdshell are:

01/16/2013  04:49 PM           371,200 dbbig_compressed.bak
01/16/2013  04:49 PM         3,232,256 dbbig_normal.bak
01/16/2013  04:49 PM           371,200 dbsmall_compressed.bak
01/16/2013  04:49 PM         3,232,256 dbsmall_normal.bak
               4 File(s)      7,206,912 bytes
               2 Dir(s)  16,019,734,528 bytes free

So that shows that all the backups are the same size. Which is to be expected, as it’s well documented that SQL Server only backs up the pages in use within the data files, and as both databases contain exactly the same amount of data the backups will be the same size.
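You can see the distinction for yourself by comparing the allocated size of the data files with the pages actually in use, which (plus a little header overhead) is roughly what an uncompressed full backup contains. A sketch:

```sql
-- Compare allocated data file size with pages actually in use.
-- Both size and FILEPROPERTY(..., 'SpaceUsed') are reported in 8KB pages.
USE dbbig;
GO
SELECT name,
       size * 8 / 1024.0                            AS allocated_mb,
       FILEPROPERTY(name, 'SpaceUsed') * 8 / 1024.0 AS used_mb
FROM sys.database_files
WHERE type_desc = 'ROWS';
```

Run the same query against dbsmall and used_mb comes out the same even though allocated_mb differs by a factor of ten.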

So that's the data files checked. But what about transaction logs, I hear you ask? Well, let's have a look. In these examples we're only looking at full database backups; transaction log backups are a different thing and I'll be looking at those in another post.

So let’s increase the size of the transaction log for dbbig and see what that does for the backup size:

alter database dbbig modify file (Name=big_log, size=200MB);
go

backup database dbbig to disk=N'c:\backup\dbbig_log_normal.bak';
go

backup database dbbig to disk=N'c:\backup\dbbig_log_compressed.bak' with compression;
go

exec xp_cmdshell 'dir c:\backup';

And this time xp_cmdshell tells us:

01/16/2013  05:10 PM           375,296 dbbig_log_compressed.bak
01/16/2013  05:10 PM         3,240,448 dbbig_log_normal.bak
01/16/2013  04:49 PM           371,200 dbbig_compressed.bak
01/16/2013  04:49 PM         3,232,256 dbbig_normal.bak

Hang on, that's 8KB more. Are we missing something here? No, not really. The 8KB is an artefact of the ALTER DATABASE statement; we get that whatever we increase the transaction log to:

drop database dbbig;
go

restore database dbbig from disk=N'c:\backup\dbbig_normal.bak' with recovery;
go

alter database dbbig modify file (Name=big_log, size=100MB);
go

backup database dbbig to disk=N'c:\backup\dbbig_1log_normal.bak';
go

backup database dbbig to disk=N'c:\backup\dbbig_1log_compressed.bak' with compression;
go

drop database dbbig;
go

restore database dbbig from disk=N'c:\backup\dbbig_normal.bak' with recovery;
go

alter database dbbig modify file (Name=big_log, size=500MB);
go

backup database dbbig to disk=N'c:\backup\dbbig_5log_normal.bak';
go

backup database dbbig to disk=N'c:\backup\dbbig_5log_compressed.bak' with compression;
go

drop database dbbig;
go

restore database dbbig from disk=N'c:\backup\dbbig_normal.bak' with recovery;
go

alter database dbbig modify file (Name=big_log, size=1000MB);
go

backup database dbbig to disk=N'c:\backup\dbbig_10log_normal.bak';
go

backup database dbbig to disk=N'c:\backup\dbbig_10log_compressed.bak' with compression;
go

exec xp_cmdshell 'dir c:\backup';
go

Which gives:

01/16/2013  05:30 PM         3,240,448 dbbig_10log_normal.bak
01/16/2013  05:29 PM         3,240,448 dbbig_1log_normal.bak
01/16/2013  05:29 PM         3,240,448 dbbig_5log_normal.bak

So we take that small 8KB hit for any size of transaction log. Now the eagle-eyed will have spotted that those are empty transaction logs, so what happens when we have a full transaction log?

We'll reset dbbig, increase the transaction log and then run something to fill it up. But then we'll empty the data file back to where it was:

use master;
go
drop database dbbig;
go

restore database dbbig from disk=N'c:\backup\dbbig_normal.bak' with recovery;
go

alter database dbbig modify file ( name = N'big_log', maxsize = 150MB , FILEGROWTH = 25MB );
go

create table dbbig.dbo.tbl_junk2
(junk varchar(max)
);
go

insert into dbbig.dbo.tbl_junk2 (junk) select REPLICATE('Junk',200) from sys.all_columns;
go 4
drop table dbbig.dbo.tbl_junk2;
go

backup database dbbig to disk=N'c:\backup\dbbig_full_log.bak';
go

backup database dbbig to disk=N'c:\backup\dbbig_compressed_full_log.bak' with compression;
go

exec xp_cmdshell 'dir c:\backup';

Which gives us:

01/16/2013  06:14 PM           375,808 dbbig_compressed_full_log.bak
01/16/2013  06:14 PM        29,585,920 dbbig_full_log.bak

Hang on a sec, that’s a much larger backup there. What’s going on?

Well, this is an example of the ghost cleanup process in action. The short version is that SQL Server doesn't instantly deallocate the pages in a data file when you drop/truncate a table. It just marks the records as deleted, and the ghost cleanup task physically removes them the next time it runs. (Lots of good detail from Paul Randal (who wrote the code) in this post: Inside the Storage Engine: Ghost cleanup in depth).
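If you want to watch this happening, ghost records show up in sys.dm_db_index_physical_stats: after a big DELETE you can see the counts fall back to zero as the cleanup task works through the pages. A sketch (note this scans surviving tables, so it's easiest to observe after a large DELETE; a dropped table's pages are released by a separate deferred-drop mechanism):

```sql
-- Sketch: count ghost records left behind by deletes in dbbig.
-- ghost_record_count falls to 0 once ghost cleanup has processed the pages.
SELECT OBJECT_NAME(ips.object_id, DB_ID('dbbig')) AS table_name,
       ips.index_id,
       ips.ghost_record_count,
       ips.version_ghost_record_count
FROM sys.dm_db_index_physical_stats(DB_ID('dbbig'), NULL, NULL, NULL, 'DETAILED') AS ips
WHERE ips.ghost_record_count > 0;
```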

So if we rerun the above SQL with this appended:

waitfor delay '00:02:00';
go

backup database dbbig to disk=N'c:\backup\dbbig_full_logaq.bak';
go

exec xp_cmdshell 'dir c:\backup';

Then we get:

01/16/2013  06:14 PM           375,808 dbbig_compressed_full_log.bak
01/16/2013  06:14 PM        29,585,920 dbbig_full_log.bak
01/16/2013  06:16 PM         3,240,448 dbbig_full_logaq.bak

Which is the same size as we've been seeing in all the other examples. BTW, I give a very generous 2 minutes for ghost cleanup to kick in just to make sure; it's usually much quicker than that, though the actual time will depend on how busy your system is.


More free SQL Server training in 2013

As well as SQLBits coming to Nottingham in May, there’s plenty of other chances for some free SQL Server training this year:

I’m presenting on SQL Server Extended Events at the Leicester SQL Server User group on February 20th. The event is free, and it’s only


Powershell to drop out create table scripts from a text file

So you’ve been handed a list of tables that need to be migrated


SQLBits coming to Nottingham

Excellent news for any SQL Server DBAs or Developers in Nottingham, SQLBits is coming to town in 2013. The East Midlands Conference Centre has been picked to host the event which is running 2nd-4th May 2013.

If you've been, you'll know what to expect. If you haven't, then you've a whole lot to look forward to. The traditional format is to have whole day Deep Dive sessions on the Thursday, and the traditional 1 hour conference sessions on the Friday and the Saturday. SQLBits really go out of their way to attract the cream of current SQL experts to present these sessions, so it's a great way to pick up some excellent knowledge.

Prices are to be announced shortly. But remember, the Saturday is community day, which means it’s FREE. Yes, that’s right FREE SQL Server training from some of the best in the world is coming to Nottingham, so make sure to keep an eye on the site for when booking opens.

If anyone's coming in from out of town then feel free to ask about local accommodation, restaurants, pubs, running or cycle routes. I'm sure we can find something for everyone.

Hopefully see lots of you there.


SQL Server, SFTP and SCP

A common requirement for SQL Server DBAs is sending data to, or fetching it from, remote unix/linux systems. And quite rightly, the customers want their data transmitted securely.


typeperf, logman and “Call to SQLAllocConnect failed with %1.”

Typeperf and Logman make automating performance counter data gathering much easier. Especially if you use them to log the counters directly to a SQL Server database, so you can then slice and dice, or combine stats across multiple boxes (a godsend for complete Dynamics CRM environments).

Generally I’ll use typeperf for checking I’ve got the basic settings right as it’s a bit quicker to check, and then move over to logman for the actual work.

But there are a couple of gotchas to watch out for when setting them up. These examples assume you’ve got a system DSN called perfmon pointed at the correct database, you’re using Trusted Connections and you’ve a list of counters in the file
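For orientation, the basic shape of the commands looks something like this (a sketch; `perfmon` is the DSN name assumed above, and `counters.txt` and the log set names are illustrative):

```shell
# typeperf: quick interactive check that the DSN and counter list work.
# SQL:<DSN>!<logset> names the ODBC system DSN and a log set within the database.
typeperf -cf counters.txt -f SQL -o SQL:perfmon!quicktest -si 15 -sc 10

# logman: create and start a persistent collector writing to the same DSN
logman create counter perfcollect -cf counters.txt -f sql -o SQL:perfmon!perfcollect -si 00:00:15
logman start perfcollect
```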


Generating a list of full text word breakers for SQL Server

This isn’t going to be a nice handy list, but it will show how to quickly get such a list.

The basic principle is going to rely on sys.dm_fts_parser. As described on the linked page this takes a string, and then breaks it down into ‘tokens’ based on the defined language, word breaker, thesaurus and stop list. As an example:

select * FROM sys.dm_fts_parser ('"the quick brown fox jumpsover"', 1033, 0, 0)

The typo is deliberate, for once! And this returns:

Splitting full text strings into tokens in SQL Server

So it's returned a row for every token it can find. This example was nice and simple as it only contains words and spaces. The typo was there to show that without a word breaker SQL Server will just return the string without interpreting it. So that was spaces; now to confirm for other characters. First of all, commas:

select * FROM sys.dm_fts_parser ('"quick,brown"', 1033, 0, 0)

Splitting full text strings with word breakers in SQL Server

So, that returns 2 rows as expected. How about hyphens?:

select * FROM sys.dm_fts_parser ('"quick-brown"', 1033, 0, 0)

Are hyphens word breakers in SQL Server full text searches?

Interesting, we get 3 rows this time. This makes sense when you think that hyphenation isn’t an exact science so some people will use them, and some won’t. So by combining both ‘sides’ of the hyphenated word SQL Server can hopefully match both uses.

So, that's the basic theory. This little piece of T-SQL is going to loop through all 255 ASCII character codes. For each one we use it to join 2 'words', and then run that string through sys.dm_fts_parser. If the function returns more than 1 row we know that it's found a word breaker, so we then output the character, and the character code, as not all the characters are printable. You'll also notice that code 34 throws an error; that's because it's ", which is a reserved character within Full-Text searches.

declare @i int
declare @cnt int
set @i = 0
while @i < 255
begin
  set @cnt = 0
  -- join two 'words' with the candidate character and count the tokens returned
  select @cnt = COUNT(1) FROM sys.dm_fts_parser ('"word1' + CHAR(@i) + 'word2"', 1033, 0, 0)
  -- more than one token means the character broke the word
  if @cnt > 1
  begin
    print 'this char - ' + char(@i) + ' - char(' + convert(varchar(3), @i) + ') is a word breaker'
  end
  set @i = @i + 1
end

Which gives a nice long list:

char(1) through char(32) – every control character plus the space – are all word breakers
this char – ! – char(33) is a word breaker
Msg 7630, Level 15, State 3, Line 7 Syntax error near ‘word2’ in the full-text search condition ‘"word1"word2"’.
this char – # – char(35) is a word breaker
this char – $ – char(36) is a word breaker
this char – % – char(37) is a word breaker
this char – & – char(38) is a word breaker
this char – ( – char(40) is a word breaker
this char – ) – char(41) is a word breaker
this char – * – char(42) is a word breaker
this char – + – char(43) is a word breaker
this char – , – char(44) is a word breaker
this char – - – char(45) is a word breaker
this char – . – char(46) is a word breaker
this char – / – char(47) is a word breaker
this char – : – char(58) is a word breaker
this char – ; – char(59) is a word breaker
this char – < – char(60) is a word breaker
this char – = – char(61) is a word breaker
this char – > – char(62) is a word breaker
this char – ? – char(63) is a word breaker
this char – @ – char(64) is a word breaker
this char – [ – char(91) is a word breaker
this char – \ – char(92) is a word breaker
this char – ] – char(93) is a word breaker
this char – ^ – char(94) is a word breaker
this char – { – char(123) is a word breaker
this char – | – char(124) is a word breaker
this char – } – char(125) is a word breaker
this char – ~ – char(126) is a word breaker
this char –  – char(127) is a word breaker
this char –
