Playing with OPTION KEEPFIXED PLAN hint in SQL Server

My previous article was on Compilation & Recompilation of Stored Procedures. After reading it, two of my readers emailed me asking for clarification on some of the terminology, and that is the basis of this article.
The SQL Server Query Optimizer is a really smart tool that finds or generates the best execution plan for a query or stored procedure and makes a DBA's task easier. It is a Cost Based Optimizer (CBO), but sometimes you want a specific execution plan for a query, so Microsoft provides many different hints at different levels.
The Query Optimizer (QO) saves the execution plan when it compiles a stored procedure for the first time and reuses that plan on subsequent calls, so you avoid the overhead of generating the execution plan again and again.
When index statistics change heavily, the QO recompiles the statement to get the best plan for your stored procedure. But if you are sure that the existing plan works fine, you can use OPTION (KEEPFIXED PLAN) on that statement to save resources and boost performance.
I will demonstrate this with a few queries run in SQL Server Management Studio (SSMS), capturing the results in Profiler.
I will select the following events in Profiler while creating a new trace for my server.
Stored Procedures
  • SP:Completed
  • SP:Recompile
  • SP:StmtCompleted
  • SP:StmtStarting

 

Now execute the following T-SQL in SSMS.
--create one database which you can delete after running this example
create database SQLHub
GO
USE SQLHub
GO
--if the orders table is already there, delete it and then create a new one named "Orders"
IF OBJECT_ID('orders', 'U') IS NOT NULL BEGIN
      DROP TABLE orders
END
GO
--creating table
CREATE TABLE orders (OrderID INT IDENTITY, OrderDate DATETIME, Amount MONEY, Refno INT)
GO
--creating nonclustered index
CREATE NONCLUSTERED INDEX IDX_ORD ON Orders(amount)
GO
--inserting only 1 record in our table
INSERT INTO orders VALUES(GETDATE(),1001,111)
GO
--creating SP which will SELECT all records from the Orders table where Amount is 1001
CREATE PROC spGet_ord
AS
SELECT * FROM orders WHERE Amount=1001
GO
--execute SP
exec spGet_ord
GO
If you look in Profiler, you will see the following events captured:



Now, change the index statistics by inserting a large number of records into the table, so that we can observe the effect in Profiler after executing the SP again.
--inserting 50000 fake rows into table
INSERT INTO orders (OrderDate, Amount, Refno)
SELECT TOP 50000
      DATEADD(minute, ABS(a.object_id % 50000), CAST('2010-02-01' AS DATETIME)),
      ABS(a.object_id % 10),
      CAST(ABS(a.object_id) AS VARCHAR)
FROM sys.all_objects a
CROSS JOIN sys.all_objects b
GO
--execute SP again
exec spGet_ord
GO

Now, if you look at Profiler, you will see the "SP:Recompile" event, because the statistics were updated and the statement needed recompilation. If you insert a few more thousand records into the same table and execute the same SP again, you will see a recompilation again.
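As an alternative to watching Profiler, you can also inspect the plan cache directly to see whether the procedure's plan is being reused. A minimal sketch using the standard plan-cache DMVs (run it between executions of the SP):

```sql
--check the cached plan for spGet_ord; usecounts grows on plan reuse,
--and a fresh (low) usecounts after heavy inserts indicates a recompile
SELECT p.usecounts, p.cacheobjtype, p.objtype, t.[text]
FROM sys.dm_exec_cached_plans p
CROSS APPLY sys.dm_exec_sql_text(p.plan_handle) t
WHERE t.[text] LIKE '%spGet_ord%'
  AND t.[text] NOT LIKE '%dm_exec_cached_plans%';
```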
Now, after dropping the table and the SP, we will make one small change in the stored procedure and use the "KEEPFIXED PLAN" hint so that we can avoid recompilation.
drop proc spGet_ord
GO
drop table orders
go
--creating table
CREATE TABLE orders (OrderID INT IDENTITY, OrderDate DATETIME, Amount MONEY, Refno INT)
GO
CREATE NONCLUSTERED INDEX IDX_ORD ON Orders(amount)
GO
INSERT INTO orders VALUES(GETDATE(),1001,111)
GO
CREATE PROC spGet_ord
AS
SELECT * FROM orders WHERE Amount=1001
--adding the statement below; other than that, same code as the previous SP
OPTION (KEEPFIXED PLAN);
GO
exec spGet_ord
GO
After recreating the table along with fresh records, let us see what happens while executing the SP above.

Since our table now has only 1 record, it is time to change the statistics with a bulk insert again, as follows.
--inserting 50000 fake rows into table (BTW, thanks to Jacob Sebastian; I copied this INSERT script from one of his demos to generate a big table)
INSERT INTO orders (OrderDate, Amount, Refno)
SELECT TOP 50000
      DATEADD(minute, ABS(a.object_id % 50000), CAST('2010-02-01' AS DATETIME)),
      ABS(a.object_id % 10),
      CAST(ABS(a.object_id) AS VARCHAR)
FROM sys.all_objects a
CROSS JOIN sys.all_objects b
GO
exec spGet_ord
GO
--if you wish, you can uncomment the code below and delete the SQLHub database
--use master
--go
--drop database sqlhub
Here is the screen capture of Profiler, which doesn't show a recompilation.

 

You can even add a few more thousand rows and execute the SP again; you won't get a recompilation.
Reference: Ritesh Shah
 
http://www.sqlhub.com
Note: Microsoft Books Online is the default reference for all articles, but the examples and explanations were prepared by Ritesh Shah, founder of http://www.SQLHub.com
Ask me any SQL Server related question at my "Ask Profile"




Some facts about Stored Procedure Compilation & Recompilation

I often ask one question in interviews: "If we have one complex query which JOINs five tables, what would work best from a .NET application, calling the query directly or calling a stored procedure?" 80% of candidates say "the SP works faster." My next question is always "Why?", and of those 80%, 60% say "Because an SP is compiled code." Then I fire my follow-up question: "I can see an option to compile a page or project in .NET, but I never see that kind of option in SSMS, so how do you compile your SP?" Once I EXECUTE this question, 90% of those 60% of candidates prefer to keep silent or divert the conversation.
Anyway, the intention of this article is to give you some facts about SP compilation & recompilation. Since it is a big topic and wouldn't fit in one article, I may come up with some more articles on it, but right now let me explain some basic facts only.
First of all, you don't need to compile a stored procedure manually. When you execute it, SQL Server compiles it for you and saves the execution plan for future use so that it doesn't need to compile again and again. That is the general behavior, but it doesn't hold all the time: there are a few factors that cause recompilation many times, or even every time. If you want to recompile your stored procedure manually, you can use the "sp_recompile" system stored procedure provided by SQL Server.
Now, you may think: if recompilation is that bad, why has Microsoft provided a facility to recompile? Well, let me break the ice: recompilation of a stored procedure is not always bad. It may be beneficial or harmful; it depends entirely on the situation.
Compilation of a stored procedure stores the execution plan the first time you execute it, and every subsequent call uses the same plan; but recompilation of the SP can be helpful if you have new statistics or a new index on the table. BTW, in SQL Server 2008+ there is built-in functionality to recompile at the statement level rather than recompiling the whole stored procedure, which is less resource intensive.
Following is a list of the basic causes which force a stored procedure to recompile:
  • Change in a SET option within the stored procedure
  • The execution plan is very old
  • A schema change in a table, index, view or temp table used in the stored procedure
  • "Deferred object resolution": an object was not available while compiling the stored procedure but was created later on, for example a temp table created inside the stored procedure
  • A call to the "sp_recompile" stored procedure
  • Use of the RECOMPILE clause in the stored procedure
  • Statistics are out of date
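The RECOMPILE clause mentioned above can be requested at two granularities, and the statement-level form is what SQL Server 2008+ favors. A minimal sketch of both (the procedure names are hypothetical; the orders table is the one from the previous article):

```sql
--procedure-level: WITH RECOMPILE discards the plan on every execution
CREATE PROC spDemo_proc_level
WITH RECOMPILE
AS
SELECT * FROM orders WHERE Amount = 1001;
GO

--statement-level: only this statement is recompiled; the rest of the
--procedure keeps its cached plan
CREATE PROC spDemo_stmt_level
AS
SELECT * FROM orders WHERE Amount = 1001
OPTION (RECOMPILE);
GO
```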
How to avoid Stored Procedure recompilations?
  • Avoid using temp tables or other DDL statements as far as possible
  • Use table variables in the stored procedure if needed
  • Avoid changing SET options in the stored procedure, like ARITHABORT, QUOTED_IDENTIFIER, ANSI_NULLS, ANSI_WARNINGS etc.
  • Avoid recompilation due to statistics changes by using the "KEEPFIXED PLAN" hint
  • Disable auto update statistics for your database
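For the second point, a table variable can often stand in for a temp table and avoid the DDL-driven recompilations that creating a temp table inside a procedure can trigger. A minimal sketch under that assumption (the procedure name is hypothetical; orders is the table from the previous article):

```sql
CREATE PROC spDemo_tablevar
AS
--a table variable avoids the temp-table DDL that can force recompilation
DECLARE @recent TABLE (OrderID INT, Amount MONEY);

INSERT INTO @recent (OrderID, Amount)
SELECT OrderID, Amount FROM orders WHERE Amount = 1001;

SELECT * FROM @recent;
GO
```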
Well, this is a very basic overview, and each point of this article could consume a separate dedicated article; I may come up with a series on this very soon.
Reference: Ritesh Shah
http://www.sqlhub.com

Do I need backup of my database?

If you are working as a DBA, or you are responsible for keeping your SQL Server database up & running, take my word for it: a full database backup is not optional.
Recently I came to know that one person was not taking backups just because he had RAID and mirroring set up, so he was least worried about backups. Believe me, no matter what redundant hardware/software and high availability setup you have, you MUST take backups regularly, because even a good and up-to-date disaster recovery system can fail. You never know when, and how badly, you will need your latest backup file.
Let us discuss why a database backup really is that important.
Suppose you have high availability solutions like RAID or server clustering, but your hardware gets corrupted, maybe even the complete disk array. You can only rebuild your database from a backup.
Sometimes catastrophic events or natural disasters like flooding, earthquakes etc. can damage your hardware, or even the complete data center, and if you have no backup at another location, you have nowhere to go.
Sometimes security issues come into the picture and somebody intentionally or unintentionally damages your data; in that situation you might look to the latest database backup to restore it.
Sometimes your OS, or even SQL Server itself, gets corrupted and you might need to restore data from a backup.
In short, I would HIGHLY recommend database backups, as "prevention is always better than cure." So keep strong backup policies with full database backups, differential database backups and transaction log backups, depending on your needs and business policies.
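The three backup types above can be sketched in T-SQL as follows; the database name and file paths here are hypothetical placeholders, not a recommended layout:

```sql
--full backup: the baseline every other backup type depends on
BACKUP DATABASE SQLHub
TO DISK = 'D:\Backup\SQLHub_full.bak';

--differential backup: only the changes since the last full backup
BACKUP DATABASE SQLHub
TO DISK = 'D:\Backup\SQLHub_diff.bak'
WITH DIFFERENTIAL;

--transaction log backup (FULL recovery model): enables point-in-time restore
BACKUP LOG SQLHub
TO DISK = 'D:\Backup\SQLHub_log.trn';
```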
BTW, I am not against high availability concepts like mirroring, clustering, replication or log shipping. You should implement those too as per your business needs, but you must also keep strong backup policies.
Reference: Ritesh Shah
http://www.sqlhub.com


Index Fill Factor in SQL Server

Today once again I felt like writing something about indexes, so I opened up the list of index articles I have written, found a topic still missing from the index series, and decided to write about fill factor.
Generally, when you create an index, your data is stored in B-Tree format, where you find a "root page", possibly "intermediate pages", and "leaf pages". The leaf pages hold your actual data sorted in index key order, and each leaf page is 8KB.
When you insert data into a table, SQL Server tries to place it in the proper data page according to the record you are inserting. For example, suppose you have an index on an SSN column and you are inserting a new row whose SSN starts with "2": SQL Server finds the page holding SSN numbers starting with "2" and inserts your new row into that page. If that 8KB page is full and has no room for the new row, SQL Server splits the page: the data is shared between two pages, so you now have two half-full pages, and your row is accommodated in one of them.
If the page already had space for the new row, you wouldn't incur the extra I/O overhead of waiting for the page split to finish before space for your row became available.
This is where fill factor comes into the picture. Fill factor decides how full each page is initially. Suppose you set a fill factor of 10: each data page is then filled to only 10% of its 8KB size, and once that 10% limit is reached, the remaining 90% is kept empty and a new page is created for further records. Now, when you insert new records, you don't need to worry about the I/O overhead of page splits, since 90% free space is available and the record is accommodated easily. So a lower fill factor decreases the I/O overhead generated by page splits, which helps you write data faster.
Now you might be thinking: why shouldn't I always use a low number like 5% or 10% for fill factor? Well, it decreases page splits, but it increases the number of data pages in your index, so every time you scan the index you have to read more pages; that is overhead while reading the data and decreases read speed. Suppose you have 10 records in a table totaling 8KB: you can fit all 10 records in one page, but with a fill factor of 50 those 10 records are stored in two pages rather than one when the index is created.
Uff, what to do now? If we decrease the fill factor, it helps writing but hurts reading; if we increase it, maybe to 100%, it helps reading but creates issues while writing to the database.
You have to be reasonable and make the decision based on the load on your database: first decide whether you have more writes or more reads.
Personally, I usually go for a figure somewhere between 80 and 90 for fill factor, so that some space remains for new records and page splits decrease, while at the same time we don't keep too much free space in the data pages and the page count doesn't grow. Even so, it is not a final figure I use for all my databases: if a database is read-only or inserts/updates are very rare, I would go even for a 100% fill factor, and if a database is write intensive, I might use even a 50% fill factor.
BTW, you can define the fill factor while creating an index or rebuilding an index, or you can set the server-wide default fill factor via the following T-SQL (by default it is 0 in SQL Server).
--turning on advanced configuration options
sp_configure 'show advanced options', 1
GO
RECONFIGURE
GO
--setting up fill factor
sp_configure 'fill factor', 90
GO
RECONFIGURE
GO
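The sp_configure setting above changes the server-wide default; a fill factor can also be set per index, which is usually the better-targeted choice. A minimal sketch, where the table and index names are hypothetical examples matching the SSN scenario above:

```sql
--set the fill factor when creating the index
CREATE NONCLUSTERED INDEX IDX_SSN ON Employees(SSN)
WITH (FILLFACTOR = 90);

--or apply it while rebuilding an existing index
ALTER INDEX IDX_SSN ON Employees
REBUILD WITH (FILLFACTOR = 90);
```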
Reference: Ritesh Shah
http://www.sqlhub.com


ISNULL, COALESCE or CONCAT_NULL_YIELDS_NULL in SQL Server

Before we jump into the core logic, let me explain the use of these functions and this property.
BTW, this is the table to be used in the examples:
CREATE TABLE EMPTEST
(
FirstName varchar(10)
,MiddleName varchar(10)
,LastName varchar(10)
)
INSERT INTO EMPTEST
SELECT 'Ritesh', 'A', 'Shah' UNION ALL
SELECT 'Roger', NULL, 'Federer' UNION ALL
SELECT 'Steffi', NULL, 'Graf'
GO
SELECT FirstName, LastName FROM EMPTEST
ISNULL: This function replaces a NULL value with the specified value given in the function. Suppose the last name may be NULL in the employee table; we could do something like the following T-SQL statement.
SELECT FirstName, ISNULL(LastName, '') AS LastName FROM EMPTEST
If LastName is NULL, it is replaced with a blank in the result set.
COALESCE: The COALESCE function returns the first non-null value from the given list, so we can use it in place of ISNULL too, like this:
SELECT FirstName, COALESCE(LastName, '') AS LastName FROM EMPTEST
NULL is nothing but the absence of a value; it doesn't even represent a blank, a space or zero. When you concatenate two or more strings and any of them is NULL, the result is NULL. See the following example:
 
SELECT FirstName + ' ' + MiddleName + ' ' + LastName AS FullName FROM EMPTEST
You will get "Ritesh A Shah" as the full name for the first record, and NULL for the remaining two records, because those two have a NULL value in the MiddleName field, so the concatenated result is NULL.
You can solve that with ISNULL or COALESCE:
SELECT FirstName + ' ' + ISNULL(MiddleName, '') + ' ' + LastName AS FullName FROM EMPTEST
SELECT FirstName + ' ' + COALESCE(MiddleName, '') + ' ' + LastName AS FullName FROM EMPTEST
In this solution, you have to wrap every field that might contain a NULL value with either ISNULL or COALESCE. If you have many such fields in one query, that wrapping is a bit tedious for you as a developer. This is where the CONCAT_NULL_YIELDS_NULL property comes to the rescue.
CONCAT_NULL_YIELDS_NULL: this property controls string concatenation, i.e. what happens when any string used in a concatenation is NULL.
You can set this property at the database level with the ALTER DATABASE command, or for your current session/connection.
SELECT FirstName + ' ' + MiddleName + ' ' + LastName AS FullName FROM EMPTEST
As I have already told you, the above query returns two records with a NULL FullName; you can do something like the following T-SQL to set CONCAT_NULL_YIELDS_NULL for your batch.
SET CONCAT_NULL_YIELDS_NULL OFF;
SELECT FirstName + ' ' + MiddleName + ' ' + LastName AS FullName FROM EMPTEST
SET CONCAT_NULL_YIELDS_NULL ON;
By default, CONCAT_NULL_YIELDS_NULL is set to TRUE (ON) for every database, but you can turn it OFF for your batch or for your database (via the ALTER DATABASE command). It saves you from writing ISNULL and COALESCE many times in your query.
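Setting it at the database level can be sketched like this (the database name is the hypothetical one from the earlier articles):

```sql
--turn the setting off for the whole database
ALTER DATABASE SQLHub SET CONCAT_NULL_YIELDS_NULL OFF;

--revert to the default behavior
ALTER DATABASE SQLHub SET CONCAT_NULL_YIELDS_NULL ON;
```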
If you want to check whether your database, or your session, has this property set to TRUE or FALSE, you can use the following T-SQL.
SELECT DATABASEPROPERTYEX('YourDatabaseName', 'IsNullConcat')
SELECT SESSIONPROPERTY('CONCAT_NULL_YIELDS_NULL')
Personally, I prefer to go for either ISNULL or COALESCE rather than CONCAT_NULL_YIELDS_NULL. There are a few reasons for that.
--> CONCAT_NULL_YIELDS_NULL is supported in SQL Server 2008 R2 and earlier versions, but per Microsoft it is deprecated: in a future release it will always be ON and you will not be able to set it OFF.
--> If you use this SET option in a stored procedure, it forces the stored procedure to recompile every time you execute it.
Apart from that, as far as possible we should avoid tampering with the default settings of SQL Server unless you know exactly what you are doing.
Reference: Ritesh Shah
http://www.sqlhub.com