Last month, I showed how snapshot isolation can really mess up your triggers, promised you a workaround, and then went on holiday. Some events in my life kept me from posting the promised workaround for longer than intended, so I hope that you haven't been holding your breath for it!
The workaround is actually quite simple. Just remember the first part of this series, where I showed how SQL Server 2005 prevents violation of foreign key constraints when using snapshot isolation – it automatically and silently switches to a less concurrent isolation level. Of course, if Microsoft can do it, then so can I!
I'll use the same sample as in the previous part, so there's no need to repeat it here. You can copy the DDL from my last post. To mimic what Microsoft does when checking FOREIGN KEY constraints (i.e. automatically disable snapshot isolation), I added the command below as the very first command in both triggers:
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
This change is all I need to fix last month’s bad data. I don’t even have to reset the isolation level to snapshot at the end of the trigger code: since changing the isolation level in a trigger or stored procedure never affects the isolation level of the calling procedure, resetting the isolation level at the end of the trigger would do nothing but waste a few CPU cycles.
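For context, here is a minimal sketch of what the delete trigger might look like with this change in place. The exact column names and the integrity rule are assumptions on my part (the real DDL is in the previous post); the error message matches the output shown below:

```sql
CREATE TRIGGER Customers_d
ON Customers
AFTER DELETE
AS
BEGIN;
  -- Escape from snapshot isolation, mimicking what SQL Server
  -- does internally when it checks FOREIGN KEY constraints.
  SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

  -- Hypothetical integrity check: block the delete if the
  -- customer still has any type A orders.
  IF EXISTS (SELECT *
             FROM Orders AS o
             INNER JOIN deleted AS d
               ON d.CustomerID = o.CustomerID
             WHERE o.OrderType = 'A')
  BEGIN;
    RAISERROR('Customers with type A orders can''t be deleted', 16, 1);
    ROLLBACK TRANSACTION;
  END;
END;
```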
Now, when I rerun my tests, the second connection waits for the first one to finish, then aborts with an error message. Just as when I tested the FOREIGN KEY constraint, except that the message is now a lot more descriptive than the awfully non-descriptive one from my first tests:
Msg 50000, Level 16, State 1, Procedure Customers_d, Line 13
Customers with type A orders can’t be deleted
Msg 3609, Level 16, State 1, Line 9
The transaction ended in the trigger. The batch has been aborted.
That’s it. Just one simple modification to fix the problems, and no catch, right? No, of course not – there is a catch. There always is. You see, these triggers will also execute if customers are deleted and if orders are inserted or updated from a procedure that uses a higher isolation level. If I set my isolation level to repeatable read, I expect the code in the trigger to honour that isolation level, not to disregard it!
To ensure that the isolation level is only changed to read committed if it was snapshot isolation, I'll have to enclose the SET command in an IF statement that tests the current isolation level. Unfortunately, that's not as straightforward as it sounds. The current isolation level is available from DBCC USEROPTIONS, which doesn't return a parameter or return code, but produces a table. I'll have to catch that output into a temporary table, then find the row with the isolation level and use that in my IF statement. Here is the code that has to go at the beginning of both triggers, instead of the single line above:
CREATE TABLE #Options
    (OptName varchar(128) PRIMARY KEY,
     Value varchar(50));
INSERT INTO #Options (OptName, Value)
EXEC ('DBCC USEROPTIONS WITH NO_INFOMSGS;');
IF (SELECT Value
    FROM #Options
    WHERE OptName = 'isolation level') = 'snapshot'
BEGIN;
  SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
END;
Repeating my tests shows that this code still escapes from snapshot isolation to read committed isolation to ensure that my data integrity cannot be violated. But I now also know that the isolation level will not be changed if the trigger was invoked under any isolation level other than snapshot isolation.
The downside of this workaround is that improved data integrity goes hand in hand with reduced concurrency. The whole reason why snapshot isolation was added to the product is to ensure that people who have to read data and people who have to change data won't block each other. But since Microsoft automatically disables snapshot isolation when checking foreign key constraints, and I disable it in my triggers, there are not many situations left in which readers and writers can co-exist without blocking.
In the last part of this series, I'll look at some other, less obvious methods that can be used to guard data integrity, and check how they are affected by snapshot isolation.
4 Comments
Hugo, instead of using user options, can't you use the following code, basically checking for transaction_isolation_level 5 in the sys.dm_exec_sessions dynamic management view?
SELECT CASE transaction_isolation_level
         WHEN 0 THEN 'Unspecified'
         WHEN 1 THEN 'ReadUncommitted'
         WHEN 2 THEN 'ReadCommitted'
         WHEN 3 THEN 'Repeatable'
         WHEN 4 THEN 'Serializable'
         WHEN 5 THEN 'Snapshot'
       END AS TRANSACTION_ISOLATION_LEVELFROM
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID;
small typo
END AS TRANSACTION_ISOLATION_LEVELFROM
should be
END AS TRANSACTION_ISOLATION_LEVEL
Hi Hugo,
I have a feeling that because of the temporary table #Options the trigger will recompile every time it fires, and that is bad for performance. Makes sense?
Alex,
I don't know the exact details about what does and what doesn't trigger a recompilation, but you might well be right. I wasn't too happy with the temp table either, but I knew (note the use of past tense here) of no other way to catch the transaction isolation level.
Denis,
Great suggestion! I really should find the time to study the new dynamic management views. With your code, I don’t need the temp table anymore.
IF (SELECT transaction_isolation_level
    FROM sys.dm_exec_sessions
    WHERE session_id = @@SPID) = 5
BEGIN;
  SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
END;