Thursday, December 21, 2006

Difference between @@IDENTITY and SCOPE_IDENTITY()

A colleague asked me today whether he could use @@IDENTITY in an insert stored procedure to return the generated value for an identity column. I recalled there was a catch with this function, so I consulted the Books OnLine.
It states:

@@IDENTITY, SCOPE_IDENTITY, and IDENT_CURRENT are similar functions because they all return the last value inserted into the IDENTITY column of a table.

@@IDENTITY and SCOPE_IDENTITY return the last identity value generated in any table in the current session. However, SCOPE_IDENTITY returns the value only within the current scope; @@IDENTITY is not limited to a specific scope.

It took me a minute to find a good example that illustrates the difference between the two, but with a trigger I came up with the following:

CREATE TABLE tst ( a int identity(1,1), s varchar(10))
CREATE TABLE tst2 ( a int identity(1000,1), s varchar(10))
GO
CREATE TRIGGER trg_tst ON tst
AFTER INSERT
AS INSERT tst2 SELECT inserted.s FROM inserted

SCOPE_IDENTITY() gives you the value of tst.$IDENTITY, ignoring the identity value that is generated for table tst2 by the trigger after the insert into tst. @@IDENTITY gives you the value of tst2.$IDENTITY.
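To see the difference in action (assuming the tables and trigger above, with both tables still empty), a single insert is enough:

```sql
INSERT tst (s) VALUES ('test')

--SCOPE_IDENTITY() returns 1 (from tst), @@IDENTITY returns 1000 (from tst2)
SELECT SCOPE_IDENTITY() AS [SCOPE_IDENTITY], @@IDENTITY AS [@@IDENTITY]
```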

So the function we actually need in our case is not @@IDENTITY but SCOPE_IDENTITY().

Wednesday, November 22, 2006


We are in the middle of optimizing a big staging database that contains literally billions of uniqueidentifier-integer pairs spread over several tables. The database is used in a process that converts the nice little GUIDs of our production environment into keys that the Oracle-based reporting environment of our enterprise can more easily deal with. Don't ask why we do it - I guess we lost that strategic battle :(

Anyway, when I was looking at some of the generic legacy code that identifies the ROWGUID of a table, I remembered something about $ROWGUID, which automatically returns the ROWGUIDCOL of a table. Sadly, it's new in SQL Server 2005, and because we haven't had the chance to migrate that database yet, we still need to loop over the columns and check COLUMNPROPERTY to get the name of the column, which can then be used in dynamic SQL to mimic the behavior of $ROWGUID:

CREATE TABLE tst (ID uniqueidentifier ROWGUIDCOL DEFAULT NEWID(), s varchar(10))
INSERT tst (s) VALUES ('test')
--SQL Server 2005 only
SELECT $ROWGUID FROM tst
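On SQL Server 2000, a sketch of the COLUMNPROPERTY workaround could look like this (assuming a table tst with a ROWGUIDCOL column, as above):

```sql
--Find the ROWGUIDCOL column without $ROWGUID (works on SQL Server 2000)
SELECT c.name
FROM syscolumns c
WHERE c.id = OBJECT_ID('tst')
AND COLUMNPROPERTY(c.id, c.name, 'IsRowGuidCol') = 1
```

The resulting column name can then be plugged into dynamic SQL to build the actual SELECT statement.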

Thursday, October 19, 2006

SQL Server 2005 System Views Map

I always seem to forget the link to the great SQL Server 2005 System Views Map, so I might as well dedicate a little post to it. The map gives an overview that seems quite chaotic at first, but when you zoom in, you can find valuable information on how to combine several related DMVs.

Tuesday, September 12, 2006

Be creative with BCP

My favorite colleague - you know the one :) - left me with a simple task before he left on holiday. I had to finish an export for him that we had decided to rewrite: several quite large recordsets had to be exported to multiple XML files. We agreed to use FOR XML EXPLICIT instead of a custom-made SSIS component so we could deploy it on our current production environment, which is still SQL Server 2000.

When using FOR XML EXPLICIT, you don't get many options for exporting the result to a file because of the special nature of that result. But with BCP, everything is possible - at least I thought so. Because of the complexity of the export, some data had to be staged temporarily in the stored procedure that retrieved the data and formatted the XML. And that's where the trouble started.
The code below simulates the core of the problem - which isn't FOR XML, by the way.


USE TempDB
GO
CREATE PROC usp_bcp_test
AS
CREATE TABLE #stage (ColA int, ColB varchar(10))
IF 1=0
    INSERT #stage SELECT 1 as ColA, 'test' as ColB
SELECT 1 as ColA, 'test' as ColB
GO

DECLARE @Stmt varchar(1000)
SET @Stmt = 'BCP "EXEC TempDB.dbo.usp_bcp_test" QUERYOUT C:\TEMP\tst.txt -c -S ' + @@SERVERNAME + ' -T'
EXEC master.dbo.xp_cmdshell @Stmt
GO

DROP PROC usp_bcp_test

I try to use the output of the stored procedure as the 'query' for BCP, but this fails because of the INSERT statement in the stored procedure, even though it's unreachable code. BCP gives the following feedback:

SQLState = HY000, NativeError = 0
Error = [Microsoft][SQL Native Client]BCP host-files must contain at least one column

Using Profiler, I discovered that BCP issues the following statement to determine the resultset it can expect:

SET FMTONLY ON EXEC TempDB.dbo.usp_bcp_test SET FMTONLY OFF
In the messages pane of SSMS you can see the following:

(0 row(s) affected)

(0 row(s) affected)

Which explains why BCP complains about the stored proc: it expects the INSERT statement to produce a column list for its resultset, which of course it doesn't have. Moving the SELECT clause to the beginning of the stored proc makes this construct work, because BCP only looks at the first resultset.

The solution to this problem is straightforward: split the retrieval and the formatting over multiple stored procs so that the stored proc used for the BCP only contains a simple SELECT-statement.
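A minimal sketch of that split, with hypothetical names (the real procs obviously do the FOR XML EXPLICIT work):

```sql
--Staging table plus a preparation proc that does all the heavy lifting
CREATE TABLE BcpStage (ColA int, ColB varchar(10))
GO
CREATE PROC usp_bcp_prepare
AS
TRUNCATE TABLE BcpStage
INSERT BcpStage SELECT 1, 'test'
GO
--The proc that BCP points at contains nothing but a single SELECT
CREATE PROC usp_bcp_export
AS
SELECT ColA, ColB FROM BcpStage
GO
```

You run usp_bcp_prepare first and then point BCP's QUERYOUT at usp_bcp_export, so BCP only ever sees a proc with one clean resultset.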

Nice little detail: I was only able to simulate this behavior in SQL Server 2005, not with SQL Server 2000.

Tuesday, August 29, 2006

Plan cache gets cleared

"Blog this!", my colleague said to me, so I am. Just a simple note this time to mention that when you change the max degree of parallelism on an instance, the plan cache gets cleared. Kind of logic when you think about it because you are changing the rules for the stored plans, but nevertheless he thought it was useful info.

EXEC sp_configure 'max degree of parallelism', 0
RECONFIGURE
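A quick way to observe the effect on SQL Server 2005 (a sketch; run it before and after the change and the count drops right after the RECONFIGURE takes effect):

```sql
--Number of plans currently in the plan cache
SELECT COUNT(*) AS CachedPlans
FROM sys.dm_exec_cached_plans
```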

Tuesday, August 08, 2006

Behavior of Logins/sec

We are in the middle of migrating our production environment from SQL Server 2000 to SQL Server 2005, but during the stress tests this week I noticed something scary. The performance counter "SQL Server:General Statistics - Logins/sec" indicated a factor of 1000 more for SQL 2005. I suspected a severe problem with our connection pooling, but in spite of that very high number of logins, there were very few logouts/sec and the number of user connections stayed stable. Also, Profiler didn't capture those logins (actual or failed ones), and the familiar sp_reset_connection showed up whenever a connection was reused.

Luckily nothing was wrong with the connection pooling; it seems the behavior of the performance counter has changed between versions. It appears to count not only the actual logins, but also the reuse of connections from the connection pool. A colleague put the matter on a forum for confirmation. To be continued?

Thursday, July 06, 2006

Using fn_dblog

I like the fn_dblog function even though it is undocumented. It enables you to read from your transaction log, which contains very valuable information about what is happening in your database. It is, for example, the only way I know of at this time to determine which object experienced a page split: you search for the LOP_DELETE_SPLIT operation.

SELECT * FROM ::fn_dblog(NULL, NULL)
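For the page-split case, a sketch of such a search could look like this (on SQL Server 2005; the AllocUnitName column is what I use to tie the log record back to an object):

```sql
--Find page splits in the log; AllocUnitName points to the affected object
SELECT [Current LSN], Operation, AllocUnitName
FROM ::fn_dblog(NULL, NULL)
WHERE Operation = 'LOP_DELETE_SPLIT'
```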

This is the usage of the function you see everywhere: just passing NULLs for the @start and @end parameters, which could otherwise limit the range of LSNs (Log Sequence Numbers) to be searched. It took me some time to figure out how to use them, because just passing the output of the function back in as a parameter simply doesn't work. You can't use an LSN like 00000073:000020fa:0001 as a value for the parameters; you have to translate the hexadecimal parts into their decimal equivalents (115:8442:1).

Let's show this with some code and because there's no easy way to make this conversion, I came up with something quick & dirty. At last an example of the actual usage of the parameters of fn_dblog on the web ;-)

DECLARE @tbl TABLE (id INT identity(1,1), i VARCHAR(10))
DECLARE @LSN VARCHAR(22), @LSN_HEX VARCHAR(22), @stmt VARCHAR(100)

SET @LSN = (SELECT TOP 1 [Current LSN] FROM ::fn_dblog(NULL, NULL))

SET @stmt = 'SELECT CAST(0x' + SUBSTRING(@LSN, 1, 8) + ' AS INT)'
INSERT @tbl EXEC(@stmt)
SET @stmt = 'SELECT CAST(0x' + SUBSTRING(@LSN, 10, 8) + ' AS INT)'
INSERT @tbl EXEC(@stmt)
SET @stmt = 'SELECT CAST(0x' + SUBSTRING(@LSN, 19, 4) + ' AS INT)'
INSERT @tbl EXEC(@stmt)

SET @LSN_HEX =
    (SELECT i FROM @tbl WHERE id = 1) + ':' +
    (SELECT i FROM @tbl WHERE id = 2) + ':' +
    (SELECT i FROM @tbl WHERE id = 3)

SELECT * FROM ::fn_dblog(@LSN_HEX, NULL)

Wednesday, June 28, 2006

Missing checkpoints in Profiler

Earlier on, I reported on the new possibility to trace the start of a checkpoint from within Profiler. I tested that new event with manual CHECKPOINT statements, because that's exactly what happens in the production environment of our application, where we force them.

Anyway, we just discovered that for a 'normal' checkpoint, triggered by the database engine itself, the event isn't raised. You can easily verify this behavior by comparing a Profiler trace with the performance counter MSSQL:Buffer Manager - Checkpoint pages/sec on a database with some load; compare a periodic checkpoint with one you forced yourself. Weird and disappointing.

Tuesday, June 20, 2006

Deployed package version

You develop SSIS-packages from within the Business Intelligence Development Studio (or better, just call it Visual Studio) and you use Management Studio for deploying and managing packages on a certain environment. Simple question I got today from a colleague: how do you determine the version that is deployed on a server?

In the old days with DTS packages that was straightforward, but now with SSIS I couldn't find an elegant answer straightaway. Yes, you can export the package to the file system and open it in VS, but that's no decent solution. Querying msdb like the solution below is an option:

WITH Folder (FolderID, ParentFolderID, FolderName) AS
(
    SELECT folderid, parentfolderid, cast(foldername as varchar(max))
    FROM msdb.dbo.sysdtspackagefolders90
    WHERE parentfolderid IS NULL

    UNION ALL

    SELECT f.folderid, f.parentfolderid, cast(pf.FolderName + '\' + f.foldername as varchar(max))
    FROM msdb.dbo.sysdtspackagefolders90 f
    INNER JOIN Folder pf on pf.FolderID = f.parentfolderid
)
SELECT f.FolderName, p.name as PackageName, p.vermajor, p.verminor, p.verbuild
FROM msdb.dbo.sysdtspackages90 p
INNER JOIN Folder f on f.FolderID = p.folderid

That was the best I could come up with at the time, but at home I had more time to look into it. It's actually quite simple thanks to the great Summary window (F7) in SSMS: connect to SSIS and click on a package, and its General report shows all the info you could otherwise only consult in the system tables. Powerful stuff!

Monday, May 29, 2006

Monitoring checkpoints in Profiler

Back in the days when I was still in the stress-test team of our application, I tried to monitor as much as possible from within SQL Server Profiler. OK, you still need Perfmon to capture the most common Windows performance counters, but I find it very handy to have an indication of exactly when checkpoints occur, along with the statements that perform less well because of them.

In SQL Server 2000, I was forced to write a little routine that raises my own user-defined events for Profiler, as there are no standard events that can be monitored. I used the sysperfinfo view to give me an indication of when a checkpoint was in progress. I guess that reading the transaction log of the monitored database is a valid alternative, but I haven't actually tried that one (ah... the greatly undocumented ::fn_dblog()).
Either way, both methods require custom scripting and are not accurate.
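A sketch of the sysperfinfo polling I mean (SQL Server 2000; despite the /sec in its name, the raw counter is cumulative, so you compare two polls):

```sql
--Poll checkpoint activity; a non-zero delta between two polls
--indicates a checkpoint is (or was just) in progress
SELECT cntr_value
FROM master.dbo.sysperfinfo
WHERE object_name = 'SQLServer:Buffer Manager'
AND counter_name = 'Checkpoint pages/sec'
```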

Now, in SQL Server 2005, you of course get the option to correlate performance counters with events logged in Profiler, which is a very neat feature, but I like to store all the info in one SQL Trace file.
Luckily, you also get the Security Audit Database Operation Event, which is raised when a checkpoint begins. It's a pity that there's no counterpart for the end of a checkpoint :(.

Friday, May 12, 2006

Performance tuning guide for SQL Server

I'm no Google guru, so I actually had a lot of trouble finding the following documents. But because this blog is my long-term memory, I couldn't do without them.
So here it is, some indispensable old-school performance tuning stuff: Microsoft SQL Server 2000 RDBMS Performance Tuning Guide for Data Warehousing. And the even harder-to-find document version.

And it also has a nice equivalent for 2005: Troubleshooting Performance Problems in SQL Server 2005.

Sunday, April 30, 2006

Return types of IDENTITY-functions

I ran into a strange inconsistency a couple of days ago when I was reading about identity columns, functions, and properties in the SQL Server Books OnLine (BOL).
I was actually only looking for the correct syntax of DBCC CHECKIDENT for reseeding the identity column of a table, as I was wondering what the datatype of the new_reseed_value has to be.

DBCC CHECKIDENT ('table_name' , RESEED , new_reseed_value)
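For example (tst is a hypothetical table with an int identity column; after this reseed, the next generated identity value is 101):

```sql
DBCC CHECKIDENT ('tst', RESEED, 100)
```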

No big secret though: the datatype of an identity column can be int, bigint, smallint, tinyint, or decimal or numeric with a scale of 0. That explains why the return type of the functions @@IDENTITY and SCOPE_IDENTITY() is numeric, actually numeric(38,0), which is the datatype with the biggest precision of the whole list.
So far so good, but why does the similar function IDENT_CURRENT('table_name') have a return type of sql_variant according to the BOL? It just doesn't make sense, especially because I have no indication that the function actually returns a sql_variant. The way I see it, IDENT_CURRENT simply returns numeric data like the other functions, not sql_variant.


--Determining the datatype, precision and scale
CREATE TABLE tmpIdent (id int identity(1,1), s varchar(10))
INSERT tmpIdent (s) VALUES ('test')

DECLARE @sql_variant sql_variant
SELECT @sql_variant = SCOPE_IDENTITY()
SELECT
    SQL_VARIANT_PROPERTY(@sql_variant, 'BaseType') AS [BaseType],
    SQL_VARIANT_PROPERTY(@sql_variant, 'Precision') AS [Precision],
    SQL_VARIANT_PROPERTY(@sql_variant, 'Scale') AS [Scale]

--Determining the return type for the identity-functions
SELECT @@IDENTITY AS [@@IDENTITY],
       SCOPE_IDENTITY() AS [SCOPE_IDENTITY],
       IDENT_CURRENT('tmpIdent') AS [IDENT_CURRENT]
INTO tmpReturnType

EXEC sp_help 'tmpReturnType'
DROP TABLE tmpReturnType
DROP TABLE tmpIdent

Friday, April 28, 2006

We went to Devconnections Europe

Just back from Nice, where we attended Devconnections Europe! I was very enthusiastic about it beforehand, and the speakers and sessions didn't let me down, but the organizers of the event seemed much less pleased after noticing that attendance was rather low. Still, the beautiful city of Nice easily made up for the mix-ups with the sessions and the poor service during the breaks.

The session of Itzik Ben-Gan about row numbers (Discover the Power of Row Numbers) was very inspiring. It really made the attendees think about T-SQL and the reason why new additions in 2005 can help us to overcome everyday problems in a more efficient fashion.

Grant Dickinson had a talk about programming against SSIS (Integrating Integration Services into Your Custom Applications) which gave me some ideas. Very hands-on stuff that I tend to use as soon as I see the opportunity.

And then there was the SQL-guru Gert Drapers of whom we attended most sessions (SQL Server 2005 Memory Internals / SQL Server 2005 Bulk Loading / SQL-CLR: A Performance Evaluation / Database Mirroring Inside Out / Exploring and Optimizing SQL Server 2005 Using Dynamic Management Views). Really too much interesting stuff to sum it up, just follow the link ;-)

Last but not least, Matt Nunn gave us some interesting news about the future development on SQL Server and Visual Studio. His session about offline (Occasionally Connected Systems) gave us some input for our future development.

Sunday, April 23, 2006

We are going to Devconnections Europe

I had never thought they would allow us to go, but we are in fact going to Devconnections Europe! The SQL leads for development and operations and I are leaving for Nice tomorrow, hopefully for a very enriching experience.
The subjects of the SQL Server Magazine Connections sessions look very interesting, to say the least, and the abstracts promise some in-depth information on all the cool stuff in SQL Server 2005. I'm really looking forward to learning more about database mirroring, performance tuning, Service Broker, dynamic management views, memory internals, etc.
Experienced speakers, great sessions and a nice setting: Nice, here we come ;-)

Read more about the event and the sessions at

Wednesday, April 05, 2006

Everyone else is blogging, so why can't I?

Hey, if all my colleagues at work can start a technical blog, why can't I? Okay, I can imagine a million reasons why I shouldn't, but it's already too late: you're reading my first post :-).
All this blog stuff looks very funky to me right now, and I will have a hard time producing meaningful content. Like my 'favorite colleague', I consider this blog primarily as the long-term memory of my professional experiences at work and at home.

A quick introduction to conclude my first post: I'm a software engineer, more specifically a SQL Server database developer on a long-running, Microsoft-minded project at a large Belgian company. It's still my first job, for more than four years now already, and I still love it because of the dynamism we try to maintain here.