AFTER Triggers

AFTER triggers fire after the firing statement has already taken place. You use these triggers to react to changes against the database server.

DML AFTER triggers can be created only on permanent tables. They cannot be created on views or temporary tables. You create such triggers on a specific table and for a specific DML statement or statement list, including INSERT, UPDATE, and DELETE.
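
For orientation, the minimal form of such a trigger looks like the following sketch (the trigger name is a placeholder, and dbo.T1 stands for whichever table you're targeting):

CREATE TRIGGER trg_T1_example ON dbo.T1 AFTER INSERT, UPDATE, DELETE
AS
PRINT 'A statement against dbo.T1 completed.';
GO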

If a constraint defined on a table rejects a modification, a trigger defined on that table will not fire.

AFTER triggers are fired per statement, not per row. Regardless of how many rows were affected by the firing statement (zero, one, or multiple), an AFTER trigger will fire only once.

You can create multiple AFTER triggers (both DML and DDL) on each object for each statement type. If you have multiple triggers on the same table for the same type of statement, they will fire synchronously (one after the other). SQL Server allows you to mark the trigger that will fire first and the one that will fire last by using the sp_settriggerorder stored procedure. The order in which triggers between the first and last are fired is undefined.
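
For example, assuming you had two INSERT triggers on the same table named trg_T1_i_audit and trg_T1_i_notify (hypothetical names), you could mark their firing order like this:

EXEC sp_settriggerorder
  @triggername = 'trg_T1_i_audit',  @order = 'First', @stmttype = 'INSERT';
EXEC sp_settriggerorder
  @triggername = 'trg_T1_i_notify', @order = 'Last',  @stmttype = 'INSERT';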

AFTER triggers are useful for automated reactive activities that you want to issue as a result of the firing statement–for example, enforcing integrity rules that you cannot enforce with constraints, auditing, maintaining denormalized data, and so on.

The inserted and deleted Special Tables

Within DML triggers you can access the old and new image of the affected rows through special tables called inserted and deleted. The inserted table contains the new image of the affected rows, and deleted contains the old image. Naturally, inserted will contain rows only for INSERT and UPDATE triggers, and it will be empty for DELETE triggers. Similarly, deleted will contain rows only for DELETE and UPDATE triggers, and it will be empty for INSERT triggers.

The inserted and deleted tables are structured the same as the table on which the trigger was defined–that is, they have the same columns as the base table. Note that these tables are not indexed; therefore, every time you query them, you’re scanning the whole thing. There are some exceptions, of course. For example, if you use the EXISTS predicate or a TOP query with no ORDER BY clause, SQL Server won’t need to scan the whole table.
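
For example, an existence check such as the following sketch can stop as soon as one row is found instead of scanning all of inserted:

IF EXISTS(SELECT * FROM inserted)
  PRINT 'At least one row was affected.';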

These special tables are implemented in completely different ways in SQL Server 2000 and SQL Server 2005.

In SQL Server 2000, these tables are actually views built against the section of the transaction log that contains the log records for the statement that fired the trigger. This fact has many performance implications. Whenever you query inserted or deleted, you’re scanning a portion of the transaction log. Remember that the transaction log is written sequentially, and that it is typically a bottleneck in online transaction processing (OLTP) systems. Any interference with the transaction log’s activity ultimately postpones the flushing of changes to the data portion of the database. Intensive trigger activity might very well cause the transaction log to become an even more serious bottleneck in the system.

Typically, it’s sufficient to place the transaction log on a RAID 1 disk system, which doesn’t provide striping of the data, because the transaction log writes sequentially and will not benefit from striping. However, if you have intensive activities (such as triggers or transaction log replication) that read from the log, it’s a good idea to stripe the log on multiple disk drives using a RAID 10 controller. By taking this approach, you can have different disk arms working on the different read/write activities.

In SQL Server 2005, inserted and deleted point to row-versioned data in tempdb. Row versioning is a new technology in SQL Server 2005 supporting various aspects of the product, including new snapshot isolation levels, triggers, online index operations, and multiple active result sets (MARS). This technology allows storing earlier versions of a row in tempdb. Regarding triggers, SQL Server stores row versions for both data changed by the statement firing the trigger and data changed by the trigger. The inserted and deleted tables now point to row-versioned data and not to a section in the transaction log.

When you upgrade your system from SQL Server 2000 to SQL Server 2005, you have an immediate change in internal activity for triggers. The load is removed from the transaction log, but it shifts to tempdb. This change in the architecture is very interesting because the transaction log writes sequentially and you remove interference with the log activity. Multiple activities against tempdb can be performed in parallel, enjoying striping of the data. However, it is important to be aware of this architectural change and be prepared with sufficient storage and a sufficient RAID system for tempdb before you upgrade to SQL Server 2005.

In both SQL Server 2005 and earlier versions of SQL Server, intensive trigger activity can cause performance issues. In both versions, inserted and deleted are not indexed, so you might end up scanning large portions of data when querying those. In SQL Server 2000, triggers place tension on the transaction log, and in SQL Server 2005, they place tension on tempdb. If you really need to scan inserted or deleted completely and you can obtain the data you need with a single scan (for example, a set-based join between inserted or deleted and the base table), there’s not much that you would need to do to improve performance. However, if you need to access inserted or deleted with multiple iterations–for example, if you need to treat each row individually–it would be wise to spool the data and put it aside in a temporary table that you can index, as in:

SELECT * INTO #I FROM inserted;
CREATE UNIQUE CLUSTERED INDEX idx_keycol ON #I(keycol);

Understanding the way inserted and deleted work is the key to developing robust and efficient triggers. On the other hand, misunderstanding them could lead to serious performance problems.

Identifying the Number of Affected Rows

Remember that triggers are fired per statement, not per row. This means that a trigger will fire once for zero, one, or multiple affected rows. In most cases, if a trigger was fired for zero affected rows, you don’t want it to react to the statement that fired it. If one row was affected, you can typically apply simple logic and query inserted or deleted directly, knowing that they contain only one row. For example, you can safely use an assignment SELECT like the following one to grab data from that row:

SELECT @var1 = col1, @var2 = col2, ... FROM inserted;

However, you will face logical issues with such a statement if zero or multiple rows were affected. With zero affected rows, such a statement does not perform any assignment at all, so the variables will have the values they had earlier. With multiple affected rows, this statement will perform multiple assignments–one for each row. However, the variables will have the values assigned to them in the last assignment performed. If you just assume that your triggers will fire only for single-row modifications, but they don’t, you’ll end up with logical bugs.

Suppose that you use a SET command to assign a value from inserted or deleted to a variable:

SET @var1 = (SELECT col1 FROM inserted);

If one row was affected, this assignment will work perfectly fine. With zero affected rows, the subquery will return NULL. With multiple affected rows, this code will break, generating an error.

You need to take these things into consideration when programming triggers so that you can avoid bugs and realize good performance. You can save the value returned by the @@rowcount function in a local variable. A DECLARE statement will not cause a change in the value that @@rowcount returns, so you have the opportunity to declare a local variable and assign it with the return value of the function. If you do so right at the beginning of the trigger, @@rowcount will return the number of rows affected by the firing statement. You can then inspect the value in the local variable and determine a course of action based on the number of rows affected by the firing statement.
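
The opening of such a trigger typically looks like the following sketch (the complete version appears in Example 8-1):

DECLARE @rc AS INT;
SET @rc = @@rowcount; -- capture first; DECLARE doesn't change @@rowcount

IF @rc = 0 RETURN; -- nothing to do for zero affected rows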

With zero affected rows, you typically just want to return from the trigger–there’s no point in wasting resources if you need not do anything. If more than zero rows were affected, the course of action you should take depends on the type of activity you want to issue from the trigger. If you can achieve the activity by using a set-based join between inserted or deleted and the base table, you can apply the same code for both single and multiple affected rows. For example, suppose that you’re writing an INSERT trigger for a table called T1 that is supposed to modify an attribute called col1 in the new rows. You can achieve this by joining inserted and T1:

WITH C AS
(
  SELECT T1.col1 AS T1_col1 -- plus other columns of interest
  FROM inserted AS I
    JOIN T1
      ON T1.keycol = I.keycol
)
UPDATE C
  SET T1_col1 = <expression>;

The purpose of the join is just to filter the new rows added to T1.

However, suppose that you need to deal with each new row individually using iterative logic. You need to retrieve values from each row individually and take some action. Remember that if only one row was affected, you can safely use an assignment SELECT to grab values from the row in inserted or deleted:

SELECT @var1 = col1, @var2 = col2, ... FROM inserted;

So you can check the number of affected rows, and if it is one, use such logic. Otherwise, you need to apply iterative logic. Remember that inserted and deleted are not indexed. You can copy the data from the special table into a temporary table, index it, and then iterate through the rows in the temporary table using a loop. A more efficient solution is to use a cursor based on a query against inserted or deleted. I feel more comfortable with the former approach, in which I copy the data to my own temporary table and index it; so if performance is not an issue I typically apply this approach (for example, with a small number of rows). Of course, you can benchmark both approaches for a given task before you decide which to use. In this section, I’ll demonstrate the approach using a temporary table, and later in the chapter, I’ll demonstrate the approach using a cursor.
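
For orientation, the cursor alternative amounts to a trigger fragment like the following sketch (using the keycol and datacol columns of the T1 example that follows):

DECLARE @keycol AS INT, @datacol AS VARCHAR(10);

DECLARE CI CURSOR FAST_FORWARD FOR
  SELECT keycol, datacol FROM inserted;
OPEN CI;
FETCH NEXT FROM CI INTO @keycol, @datacol;
WHILE @@fetch_status = 0
BEGIN
  -- handle the current row here
  FETCH NEXT FROM CI INTO @keycol, @datacol;
END
CLOSE CI;
DEALLOCATE CI;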

To demonstrate the flow handling based on the number of affected rows, first run the following code to create the table T1:

SET NOCOUNT ON;
USE tempdb;
GO
IF OBJECT_ID('dbo.T1') IS NOT NULL
  DROP TABLE dbo.T1;
GO

CREATE TABLE dbo.T1
(
  keycol  INT         NOT NULL PRIMARY KEY,
  datacol VARCHAR(10) NOT NULL
);

Run the code in Example 8-1 to create the trigger trg_T1_i.

Example 8-1. Creation script for trg_T1_i trigger

CREATE TRIGGER trg_T1_i ON dbo.T1 FOR INSERT
AS

DECLARE @rc AS INT;
SET @rc = @@rowcount;

IF @rc = 0 RETURN;

DECLARE @keycol AS INT, @datacol AS VARCHAR(10);

IF @rc = 1 -- single row
BEGIN
  SELECT @keycol = keycol, @datacol = datacol FROM inserted;
  PRINT 'Handling keycol: '
    + CAST(@keycol AS VARCHAR(10))
    + ', datacol: ' + @datacol;
END
ELSE -- multi row
BEGIN
  SELECT * INTO #I FROM inserted;
  CREATE UNIQUE CLUSTERED INDEX idx_keycol ON #I(keycol);

  SELECT @keycol = keycol, @datacol = datacol
  FROM (SELECT TOP (1) keycol, datacol
        FROM #I
        ORDER BY keycol) AS D;

  WHILE @@rowcount > 0
  BEGIN
    PRINT 'Handling keycol: '
      + CAST(@keycol AS VARCHAR(10))
      + ', datacol: ' + @datacol;

    SELECT @keycol = keycol, @datacol = datacol
    FROM (SELECT TOP (1) keycol, datacol
          FROM #I
          WHERE keycol > @keycol
          ORDER BY keycol) AS D;
  END
END
GO

Note

You can use either the keyword FOR or the keyword AFTER to define an AFTER trigger.

This trigger simply prints the values provided in each new row loaded by the firing INSERT statement. It demonstrates how to deal with each row individually.

The trigger first stores the value returned by the @@rowcount function in the local variable @rc. If zero rows were affected, the trigger returns. If one row was affected, the code uses an assignment SELECT to grab the values from the row inserted into local variables. If multiple rows were affected, the code copies the rows from inserted into the temporary table #I and creates a clustered index on #I.keycol. The code then grabs the values from the row with the minimum key by using a TOP (1) query. The code then enters a loop that continues for as long as the last statement found a row. In each iteration, the code prints the values from the current row and grabs the values from the next row (the row with the first key that is greater than the current one).

To test the trigger, first run the following code, which loads zero rows:

INSERT INTO dbo.T1 SELECT 1, 'A' WHERE 1 = 0;

As expected, you will get no output from the trigger.

Next load a single row:

INSERT INTO dbo.T1 SELECT 1, 'A';

You will get the following output:

Handling keycol: 1, datacol: A

Finally, load multiple rows:

INSERT INTO dbo.T1
  SELECT 2, 'B'
  UNION ALL
  SELECT 3, 'C'
  UNION ALL
  SELECT 4, 'D';

You will get the following output:

Handling keycol: 2, datacol: B
Handling keycol: 3, datacol: C
Handling keycol: 4, datacol: D

When you’re done, run the following code for cleanup:

IF OBJECT_ID('dbo.T1') IS NOT NULL
  DROP TABLE dbo.T1;

Identifying the Type of Trigger

There might be cases in which you’d rather create one trigger for multiple statement types but still be able to identify the type of statement that fired the trigger. For example, for auditing purposes, you want to use the same code for INSERT, UPDATE, and DELETE to audit the fact that a change took place, but you want to also audit the type of change.

To identify the type of statement that fired the trigger, you can inspect inserted and deleted. Of course, if zero rows were affected, both tables will be empty, but in such a case you’d typically rather do nothing. If at least one row was affected, you can check which table contains rows to determine the type of change. As a result of an INSERT statement, you will find rows only in inserted; for a DELETE statement, you will find rows only in deleted; for an UPDATE statement, you will find rows in both tables.

To demonstrate this, first create the table T1 by running the following code:

IF OBJECT_ID('dbo.T1') IS NOT NULL
  DROP TABLE dbo.T1;
GO

CREATE TABLE dbo.T1
(
  keycol  INT         NOT NULL PRIMARY KEY,
  datacol VARCHAR(10) NOT NULL
);

Run the code in Example 8-2 to create the trigger trg_T1_iud:

Example 8-2. Creation script for trg_T1_iud trigger

CREATE TRIGGER trg_T1_iud ON dbo.T1 FOR INSERT, UPDATE, DELETE
AS

DECLARE @rc AS INT;
SET @rc = @@rowcount;

IF @rc = 0
BEGIN
  PRINT 'No rows affected';
  RETURN;
END

IF EXISTS(SELECT * FROM inserted)
BEGIN
  IF EXISTS(SELECT * FROM deleted)
  BEGIN
    PRINT 'UPDATE identified';
  END
  ELSE
  BEGIN
    PRINT 'INSERT identified';
  END
END
ELSE
BEGIN
  PRINT 'DELETE identified';
END
GO

The trigger’s code first checks whether zero rows were affected. If that is the case, it simply returns. If at least one row was affected, the code uses the EXISTS predicate to determine which type of statement fired the trigger by examining which tables contain rows. Remember that the optimizer supports a short-circuiting capability when optimizing the EXISTS predicate. Once a row is identified, there’s no reason to scan other rows to determine whether the table contains rows or not.

Tip

In SQL Server 2000, if you used the COUNT(*) aggregate to check for the existence of rows—for example, IF (SELECT COUNT(*) FROM Table) > 0—the technique was typically expensive. The optimizer typically used a full scan of the leaf level of the narrowest index created on the table to determine that count. This meant that it was always a better idea to use the EXISTS predicate. The optimizer in SQL Server 2005 does a much better job of optimizing such logical expressions. It typically produces the same plan for IF (SELECT COUNT(*) ...) > 0 as the one it generates for a logically equivalent expression based on the EXISTS predicate, by applying short-circuiting.

To test the trigger, run the following INSERT statement that loads zero rows:

INSERT INTO T1 SELECT 1, 'A' WHERE 1 = 0;

You will get the output:

No rows affected

Insert one row:

INSERT INTO T1 SELECT 1, 'A';

You will get the output:

INSERT identified

Issue an UPDATE statement:

UPDATE T1 SET datacol = 'AA' WHERE keycol = 1;

You will get the output:

UPDATE identified

Finally, issue a DELETE statement:

DELETE FROM T1 WHERE keycol = 1;

You will get the output:

DELETE identified

When you’re done, run the following code for cleanup:

IF OBJECT_ID('dbo.T1') IS NOT NULL
  DROP TABLE dbo.T1;

Not Firing Triggers for Specific Statements

There’s no built-in way to suppress a trigger for a particular statement. You can only disable a trigger completely using an ALTER TABLE DISABLE TRIGGER command. If you want to prevent a trigger from firing for a particular statement, you have to develop your own programmatic solution. You need to somehow signal the trigger that you don’t want it to run its code.
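
For reference, disabling a trigger completely looks like the following sketch (using this section's object names; note that a disabled trigger is disabled for all sessions, not just yours):

ALTER TABLE dbo.T1 DISABLE TRIGGER trg_T1_i;
-- ... modifications for which the trigger should not fire ...
ALTER TABLE dbo.T1 ENABLE TRIGGER trg_T1_i;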

One way to achieve this is by creating a temporary table with a particular name in the calling batch. Remember that a local temporary table is visible only to the session that created it, at the level that created it and at all inner levels. The trigger can first check whether a temporary table with that particular name exists and, if it does, return. Otherwise, the code can continue running normally. Back in the calling batch, you drop the temporary table when you no longer need to prevent the trigger from running its code.

To demonstrate this solution, first run the following code, which creates the table T1:

USE tempdb;
GO
IF OBJECT_ID('dbo.T1') IS NOT NULL
  DROP TABLE dbo.T1;
GO
CREATE TABLE dbo.T1(col1 INT);

Next run the following code to create the trigger trg_T1_i:

CREATE TRIGGER trg_T1_i ON dbo.T1 FOR INSERT
AS

IF OBJECT_ID('tempdb..#do_not_fire_trg_T1_i') IS NOT NULL RETURN;

PRINT 'trg_T1_i in action...';
GO

The trigger uses the OBJECT_ID function to determine whether a temporary table called #do_not_fire_trg_T1_i exists. If the table does exist, it returns. If the table doesn’t exist, the code continues running normally.

Note

Remember that temporary tables are created in tempdb. Make sure that when using the OBJECT_ID function to determine whether a temporary table exists, you database-qualify the object name. If you are connected to another database and invoke the function without the tempdb prefix, you’ll always get a NULL back.
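
For example, when you're connected to a database other than tempdb and the temporary table exists, only the qualified form finds it (a sketch):

SELECT OBJECT_ID('#do_not_fire_trg_T1_i');         -- returns NULL
SELECT OBJECT_ID('tempdb..#do_not_fire_trg_T1_i'); -- returns the object ID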

If you don’t want to prevent the trigger’s code from running, just submit your usual modifications. For example, run the following INSERT statement:

INSERT INTO dbo.T1 VALUES(1);

You will get the output:

trg_T1_i in action...

This output tells you that the trigger’s code ran fully.

If you want to prevent the trigger’s code from running in full, signal it by creating the temporary table with the expected name:

-- Setting signal
CREATE TABLE #do_not_fire_trg_T1_i(col1 INT);
INSERT INTO T1 VALUES(2);
-- Clearing signal
DROP TABLE #do_not_fire_trg_T1_i;

This solution works, but it does have an impact on tempdb’s activity.

Another solution that you can implement uses the session’s context info. Logically, context info is a VARBINARY(128) variable owned by the session. At any point in the session, you can change it by using the SET CONTEXT_INFO command or query it by using the CONTEXT_INFO function.
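
In its simplest form, the mechanism looks like this sketch (the value is arbitrary):

DECLARE @ci AS VARBINARY(128);
SET @ci = 0xABCD;      -- arbitrary example value
SET CONTEXT_INFO @ci;  -- store it for the session
SELECT CONTEXT_INFO(); -- query it back, possibly from another nesting level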

Note

The CONTEXT_INFO function is new in SQL Server 2005. To acquire the session’s context info in SQL Server 2000, use the following query:

SELECT context_info FROM master.dbo.sysprocesses WHERE spid = @@spid

You can rely on the session’s context info to communicate between different levels of code–in our case, between the calling batch and the trigger. Think of context info as a global session variable. For example, a batch can store a specific GUID in a section of the session’s context info when it wants to send a signal to the trigger. The trigger will look for that particular GUID to determine whether or not to continue running the code.

To generate the GUID that you will use as your signal, you can use the NEWID() function, as in:

SELECT CAST(NEWID() AS BINARY(16));

The GUID is converted to a binary value because you want to store it in the session’s context info, which is binary. You invoke this code only once to acquire a GUID, and then specify it explicitly in your code. When I invoked this code, I got the value 0x7EDBCEC5E165E749BF1261A655F52C48. I will use this GUID in my examples; of course, you should use the one you get. If you’re using the session’s context info for multiple tasks (for example, to send signals to several different triggers), make sure that you dedicate a separate section within it to each task. Whenever you set the value of context info, make sure that you don’t overwrite it completely; rather, substitute only the section dedicated to the task at hand.

For encapsulation purposes, I’ll create three stored procedures: one that sets the signal, one that clears it, and one that returns it. I’ll create the stored procedures as special ones so that they can be used in all databases without database-qualifying the procedure name.

In our example, let’s assume that we dedicate the first 16 bytes of the context info (starting at position 1) for our trigger’s signal.

Run the following code to create the sp_TrgSignal_Set stored procedure, which sets the signal:

USE master;
GO
IF OBJECT_ID('dbo.sp_TrgSignal_Set') IS NOT NULL
  DROP PROC dbo.sp_TrgSignal_Set;
GO
CREATE PROC dbo.sp_TrgSignal_Set
  @guid AS BINARY(16),
  @pos  AS INT
AS

DECLARE @ci AS VARBINARY(128);
SET @ci =
  ISNULL(SUBSTRING(CONTEXT_INFO(), 1, @pos-1),
         CAST(REPLICATE(0x00, @pos-1) AS VARBINARY(128)))
  + @guid +
  ISNULL(SUBSTRING(CONTEXT_INFO(), @pos+16, 128-16-@pos+1), 0x);
SET CONTEXT_INFO @ci;
GO

The stored procedure accepts two inputs: @guid is the 16-byte GUID used as the signal, and @pos is the starting position (byte number) at which you want to store the signal in context info (in our case, 1). Notice that the procedure doesn’t overwrite the whole value stored in context info, just the relevant 16 bytes. It achieves this by querying the CONTEXT_INFO function. The code extracts the surrounding sections from the existing context info, concatenates the signal between the preceding and following sections, and then stores the result back in the session’s context info.

In a similar manner, the following sp_TrgSignal_Clear stored procedure clears the signal from the section dedicated to it by zeroing the relevant bits:

IF OBJECT_ID('dbo.sp_TrgSignal_Clear') IS NOT NULL
  DROP PROC dbo.sp_TrgSignal_Clear;
GO
CREATE PROC dbo.sp_TrgSignal_Clear
  @pos  AS INT
AS

DECLARE @ci AS VARBINARY(128);
SET @ci =
  ISNULL(SUBSTRING(CONTEXT_INFO(), 1, @pos-1),
         CAST(REPLICATE(0x00, @pos-1) AS VARBINARY(128)))
  + CAST(REPLICATE(0x00, 16) AS VARBINARY(128)) +
  ISNULL(SUBSTRING(CONTEXT_INFO(), @pos+16, 128-16-@pos+1), 0x);
SET CONTEXT_INFO @ci;
GO

And finally, the following sp_TrgSignal_Get stored procedure returns the signal by querying the CONTEXT_INFO function:

IF OBJECT_ID('dbo.sp_TrgSignal_Get') IS NOT NULL
  DROP PROC dbo.sp_TrgSignal_Get;
GO
CREATE PROC dbo.sp_TrgSignal_Get
  @guid AS BINARY(16) OUTPUT,
  @pos  AS INT
AS

SET @guid = SUBSTRING(CONTEXT_INFO(), @pos, 16);
GO

Now you can alter the trg_T1_i trigger to look for the signal in the session’s context info instead of using the temporary table technique:

USE tempdb;
GO
ALTER TRIGGER trg_T1_i ON dbo.T1 FOR INSERT
AS

DECLARE @signal AS BINARY(16);
EXEC dbo.sp_TrgSignal_Get
  @guid = @signal OUTPUT,
  @pos  = 1;
IF @signal = 0x7EDBCEC5E165E749BF1261A655F52C48 RETURN;

PRINT 'trg_T1_i in action...';
GO

To test the trigger, first issue an INSERT statement without setting the signal:

INSERT INTO dbo.T1 VALUES(1);

The trigger’s code will run in full, and you will get the output:

trg_T1_i in action...

To prevent the trigger’s code from firing, set the signal:

EXEC dbo.sp_TrgSignal_Set
  @guid = 0x7EDBCEC5E165E749BF1261A655F52C48,
  @pos = 1;

Now issue an INSERT statement:

INSERT INTO T1 VALUES(2);

You will get no output, telling you that the trigger got the signal and aborted.

When you want to clear the signal, issue the following code:

EXEC dbo.sp_TrgSignal_Clear @pos = 1;

Now that the signal is cleared, issue an INSERT statement:

INSERT INTO T1 VALUES(3);

The trigger’s code will run in full again, producing the output:

trg_T1_i in action...

When you’re done, run the following code for cleanup:

USE tempdb;
GO
IF OBJECT_ID('dbo.T1') IS NOT NULL
  DROP TABLE dbo.T1;
GO
USE master;
GO
IF OBJECT_ID('dbo.sp_TrgSignal_Set') IS NOT NULL
  DROP PROC dbo.sp_TrgSignal_Set;
GO
IF OBJECT_ID('dbo.sp_TrgSignal_Clear') IS NOT NULL
  DROP PROC dbo.sp_TrgSignal_Clear;
GO
IF OBJECT_ID('dbo.sp_TrgSignal_Get') IS NOT NULL
  DROP PROC dbo.sp_TrgSignal_Get;

Nesting and Recursion

SQL Server supports both nesting and recursion of triggers. Nesting of triggers takes place when a statement issued from one trigger causes another trigger to fire. Recursion takes place when a trigger ends up firing itself, either directly or through a series of other triggers. Nesting of triggers is controlled at the server level via the 'nested triggers' server configuration option and is turned on by default. Recursion of triggers is controlled at the database level via the RECURSIVE_TRIGGERS database option and is turned off by default. Suppose that you want to allow trigger recursion in a database called HR; you would use the following code:

ALTER DATABASE HR SET RECURSIVE_TRIGGERS ON;

Recursion in general needs a termination check to keep it from continuing indefinitely. With triggers, of course, recursion can’t truly continue indefinitely because SQL Server has a hard-coded limit of 32 nesting levels of routines. However, once you reach that limit, the attempt to fire the 33rd trigger instance will fail and all activity will be rolled back. Remember that triggers also fire for zero affected rows. If within the trigger you issue a modification that fires the trigger recursively, and you have no termination check, the trigger will continue firing recursively until it breaks the nesting limit. In short, make sure that you introduce a recursion termination check in which you first verify whether you really need to issue a modification.

For example, suppose that you have an Employees table in the HR database. You want to write a trigger that will, upon a deletion of employees, delete direct subordinates of the deleted employees. You want the trigger to fire recursively so that all subordinates of the deleted employees in all levels will be deleted. The code for such a trigger involves a simple join between deleted and the Employees table. As the recursion termination check, you can inspect the value of @@rowcount. If the value is zero, abort the procedure. The result of this approach is that as soon as the previous invocation of the trigger deletes no employees, the trigger will abort and recursion will stop. Here’s an example of what such a trigger might look like:

CREATE TRIGGER trg_Employees_d ON dbo.Employees FOR DELETE
AS

IF @@rowcount = 0 RETURN; -- recursion termination check

DELETE E
FROM dbo.Employees AS E
  JOIN deleted AS M
    ON E.mgrid = M.empid;
GO

Remember that if a constraint defined on the table rejects a modification, the trigger will not have a chance to fire. This would be the case for this trigger if you had a self-referencing foreign key defined on the mgrid column pointing to the empid column. Such a foreign key will reject any attempt to delete an employee who has subordinates, and the trigger will not have a chance to fire. To allow the trigger to fire, you would have to remove or disable the foreign key. But then, you will need to enforce referential integrity with your own code–for example, by using other triggers.
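
Such a self-referencing foreign key would look like the following sketch (the constraint name is made up; empid and mgrid are the columns used by the trigger above):

ALTER TABLE dbo.Employees ADD CONSTRAINT FK_Employees_mgrid
  FOREIGN KEY(mgrid) REFERENCES dbo.Employees(empid);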

UPDATE and COLUMNS_UPDATED

When writing a trigger for an UPDATE statement, you sometimes want to react to the change only if certain columns were modified. For example, if you want to reject an attempt to modify a primary key value, you want to react only if the primary key column was specified as the target column in the SET clause of the firing UPDATE statement. SQL Server gives you two tools that allow you to identify whether certain columns were modified–the UPDATE predicate and the COLUMNS_UPDATED function.

The UPDATE predicate accepts a column name as input and returns TRUE if the input column was specified in the SET clause of the firing UPDATE statement. For example, to check whether a column called empid was modified, you would use the following:

IF UPDATE(empid) ...

The UPDATE function will return TRUE for any column if you use it in an INSERT trigger.
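
Putting the predicate to work, a trigger that rejects updates to a primary key column might look like the following sketch (the table and trigger names are placeholders):

CREATE TRIGGER trg_T1_u_protect_key ON dbo.T1 FOR UPDATE
AS

IF @@rowcount = 0 RETURN;

IF UPDATE(keycol)
BEGIN
  RAISERROR('Updates to keycol are not allowed.', 16, 1);
  ROLLBACK;
END
GO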

The COLUMNS_UPDATED function returns a binary string with a bit for each column. You typically use it when you need to inspect multiple columns and you don’t want to specify the UPDATE predicate many times. A bit representing a column will be turned on (1) if the column was modified and off (0) if it wasn’t. The bytes within the string are organized from left to right–that is, the leftmost byte represents the first 8 columns (columns with ordinal positions 1 through 8), the second byte from the left represents the next 8 columns (columns with ordinal positions 9 through 16), and so on. Within each byte, the bits are organized from right to left–that is, the rightmost bit in the leftmost byte represents the first column, the second bit from the right represents the second column, and so on.

This organization of the bits might seem strange, but in practice, it makes a lot of sense. To check whether a certain column was modified, you need to use the bitwise AND (&) operator between the bitmap returned by COLUMNS_UPDATED and your own bitmask, which contains only the relevant bits turned on. However, bitwise operators in SQL Server require integer inputs (or inputs that can be implicitly converted to integers). COLUMNS_UPDATED might be longer than 8 bytes (the size of the largest supported integer–BIGINT). In that case, you would need to extract portions of the return value of COLUMNS_UPDATED using the SUBSTRING function. And for the SUBSTRING function, you specify an offset from the left of the input string. Thus, it’s convenient that the bytes are organized from left to right.

For example, suppose that you want to isolate the byte containing the bit that represents a column with an ordinal position @i. The byte number (from the left) holding the relevant bit is this: (@i - 1) / 8 + 1. To extract that byte, you would use the following expression:

SUBSTRING(COLUMNS_UPDATED(),(@i - 1) / 8 + 1, 1)

As for the mask that you need to prepare for checking whether a certain bit is turned on, you generate it by raising 2 to the power that is one less than the bit position within the byte (2^(bitpos - 1)). The expression calculating the bit position from the right within the byte (bitpos) for a column with an ordinal position @i is the following: (@i - 1) % 8 + 1. So the expression generating the mask would be this: POWER(2, (@i - 1) % 8). For example, for the column with ordinal position @i = 10, the relevant byte is (10 - 1) / 8 + 1 = 2, the bit position within that byte is (10 - 1) % 8 + 1 = 2, and the mask is POWER(2, 1) = 2. To check whether that bit is actually turned on, you perform a bitwise AND (&) operation between the relevant byte and your mask. If the result is greater than zero, that bit is turned on. Here’s the full test:

IF SUBSTRING(COLUMNS_UPDATED(),(@i - 1) / 8 + 1, 1) & POWER(2,(@i-1)%8) > 0

As a more tangible example, run the code in Example 8-3 to create a table called T1 with 100 columns in addition to the key column, and query that table.

Example 8-3. Create the table T1 with 100 columns

USE tempdb;
GO
IF OBJECT_ID('dbo.T1') IS NOT NULL
  DROP TABLE dbo.T1;
GO

DECLARE @cmd AS NVARCHAR(4000), @i AS INT;

SET @cmd =
  N'CREATE TABLE dbo.T1(keycol INT NOT NULL IDENTITY PRIMARY KEY';

SET @i = 1;
WHILE @i <= 100
BEGIN
  SET @cmd =
    @cmd + N',col' + CAST(@i AS nvarchar(10)) +
    N' INT NOT NULL DEFAULT 0';
  SET @i = @i + 1;
END

SET @cmd = @cmd + N');';

EXEC sp_executesql @cmd;

INSERT INTO dbo.T1 DEFAULT VALUES;

SELECT * FROM T1;
GO

The last query in Example 8-3 produces the output shown in abbreviated form in Table 8-1.

Table 8-1. Contents of T1 with 100 Columns (Abbreviated)

keycol  col1  col2  col3  col4  col5  ...  col100
------  ----  ----  ----  ----  ----  ---  ------
1       0     0     0     0     0     ...  0

Suppose that you need to write an UPDATE trigger that identifies which columns were modified (or more accurately, which columns were specified as the target for the modification in the SET clause of the UPDATE statement). You might need this information for auditing or other purposes. For the sake of our example, our trigger will simply return the set of modified columns to show that it could identify them. Run the code in Example 8-4 to create the trg_T1_u_identify_updated_columns trigger, which achieves this task.

Example 8-4. Trigger that identifies which columns were updated

CREATE TRIGGER trg_T1_u_identify_updated_columns ON dbo.T1 FOR UPDATE
AS
SET NOCOUNT ON;

DECLARE @i AS INT, @numcols AS INT;
DECLARE @UpdCols TABLE(ordinal_position INT NOT NULL PRIMARY KEY);

SET @numcols =
 (SELECT COUNT(*)
  FROM INFORMATION_SCHEMA.COLUMNS
  WHERE TABLE_SCHEMA = 'dbo'
    AND TABLE_NAME = 'T1');

SET @i = 1;
WHILE @i <= @numcols
BEGIN
  IF (SUBSTRING(COLUMNS_UPDATED(),(@i - 1) / 8 + 1, 1))
       & POWER(2, (@i - 1) % 8) > 0
    INSERT INTO @UpdCols VALUES(@i);
  SET @i = @i + 1;
END

SELECT COLUMN_NAME AS updated_column
FROM INFORMATION_SCHEMA.COLUMNS AS C JOIN @UpdCols AS U
  ON C.ORDINAL_POSITION = U.ordinal_position
WHERE TABLE_SCHEMA = 'dbo'
  AND TABLE_NAME = 'T1'
ORDER BY C.ORDINAL_POSITION;
GO

The trigger defines a few local variables first. The variable @i will be used as a loop iterator, and it also represents the ordinal position of the current column being handled. The variable @numcols will hold the number of columns in the table. And @UpdCols is a table variable that will hold the set of ordinal positions representing the modified columns.

Following the variable declaration section, the trigger’s code queries the INFORMATION_SCHEMA.COLUMNS view to acquire the number of columns in the table, and it then assigns that number to @numcols. The code then sets the variable @i to 1, enters a loop that iterates once per column, and increments @i by 1 in each iteration. Within the loop’s body, the code checks whether the column with the ordinal position represented by @i was modified by using the expression that I described earlier. If the column was modified, the code inserts a row with the current value of @i to the @UpdCols table variable.

Finally, the trigger’s code joins INFORMATION_SCHEMA.COLUMNS with @UpdCols based on ordinal position match, returning the names of the modified columns.

To test the trigger, issue the following UPDATE statement, and you will get the output shown in Table 8-2:

UPDATE dbo.T1
  SET col4 = 2, col8 = 2, col90 = 2, col6 = 2
WHERE keycol = 1;

Table 8-2. Set of Modified Columns

updated_column
--------------
col4
col6
col8
col90

When you’re done, run the following code for cleanup:

IF OBJECT_ID('dbo.T1') IS NOT NULL
  DROP TABLE dbo.T1;

Auditing Example

In the previous example, in which I discussed the COLUMNS_UPDATED function, I provided a technique to identify which columns appeared as the target for the modification in the SET clause of the firing UPDATE statement. I mentioned that you might want to use that technique for auditing. However, you might want to audit actual changes in column values and not just the fact that a column was a target of a modification. To demonstrate the technique that will allow you to achieve such auditing, first run the code in Example 8-5. This code creates a table called T1 and an audit table called T1Audit, in which you will store audit information about updates against T1.

Example 8-5. Creation script for the T1 table and audit table

SET NOCOUNT ON;
USE tempdb;
GO

IF OBJECT_ID('dbo.T1') IS NOT NULL
  DROP TABLE dbo.T1;
GO
IF OBJECT_ID('dbo.T1Audit') IS NOT NULL
  DROP TABLE dbo.T1Audit;
GO

CREATE TABLE dbo.T1
(
  keycol INT NOT NULL PRIMARY KEY,
  intcol INT NULL,
  varcharcol VARCHAR(10) NULL
);
GO

CREATE TABLE dbo.T1Audit
(
  lsn     INT         NOT NULL IDENTITY PRIMARY KEY, -- log serial number
  keycol  INT         NOT NULL,
  colname sysname     NOT NULL,
  oldval  SQL_VARIANT NULL,
  newval  SQL_VARIANT NULL
);
GO

Run the code in Example 8-6 to create the trigger trg_T1_u_audit, which records audit information about updates against T1 in T1Audit.

Example 8-6. Creation script for the trg_T1_u_audit trigger

CREATE TRIGGER trg_T1_u_audit ON dbo.T1 FOR UPDATE
AS

-- If 0 affected rows, do nothing
IF @@rowcount = 0 RETURN;

INSERT INTO dbo.T1Audit(keycol, colname, oldval, newval)
  SELECT *
  FROM (SELECT I.keycol, colname,
          CASE colname
            WHEN N'intcol' THEN CAST(D.intcol AS SQL_VARIANT)
            WHEN N'varcharcol' THEN CAST(D.varcharcol AS SQL_VARIANT)
          END AS oldval,
          CASE colname
            WHEN N'intcol' THEN CAST(I.intcol AS SQL_VARIANT)
            WHEN N'varcharcol' THEN CAST(I.varcharcol AS SQL_VARIANT)
          END AS newval
        FROM inserted AS I
          JOIN deleted AS D
            ON I.keycol = D.keycol
          CROSS JOIN
            (SELECT N'intcol' AS colname
             UNION ALL SELECT N'varcharcol') AS C) AS D
  WHERE oldval <> newval
     OR (oldval IS NULL AND newval IS NOT NULL)
     OR (oldval IS NOT NULL AND newval IS NULL);
GO

The trigger’s code first checks whether zero rows were affected by the firing UPDATE statement. If that is the case, it aborts. There’s nothing to audit if nothing changed.

The code then uses a query that joins inserted and deleted by matching their keycol values. The query uses an unpivoting technique to rotate each column value from both inserted and deleted to its own row.

More Info

For details about unpivoting techniques, please refer to my book Inside Microsoft SQL Server 2005: T-SQL Querying (Microsoft Press, 2006).

The query filters only rows in which the new value is different from the old value, taking NULLs into consideration as well. An INSERT statement loads the result of the query into the audit table.

To test the audit trigger, first load a few rows to T1:

INSERT INTO dbo.T1(keycol, intcol, varcharcol) VALUES(1, 10, 'A');
INSERT INTO dbo.T1(keycol, intcol, varcharcol) VALUES(2, 20, 'B');
INSERT INTO dbo.T1(keycol, intcol, varcharcol) VALUES(3, 30, 'C');

Then issue the following UPDATE:

UPDATE dbo.T1
  SET varcharcol = varcharcol + 'X',
      intcol = 40 - intcol
WHERE keycol < 3;

Query the tables T1 and T1Audit, and you will get the results shown in Table 8-3 and Table 8-4, respectively:

SELECT * FROM dbo.T1;
SELECT * FROM dbo.T1Audit;

Table 8-3. Contents of T1

keycol  intcol  varcharcol
------  ------  ----------
1       30      AX
2       20      BX
3       30      C

Table 8-4. Contents of T1Audit

lsn  keycol  colname     oldval  newval
---  ------  ----------  ------  ------
1    2       varcharcol  B       BX
2    1       intcol      10      30
3    1       varcharcol  A       AX

As you can see, only column values that actually changed were audited.

Tip

Suppose that your trigger performs an integrity check and issues a rollback if the integrity rule is violated. And suppose that you still want to audit the attempt even though you’re rolling it back. Of course, a rollback issued after the auditing activity will also roll back the auditing activity. On the other hand, if the rollback is issued before the auditing activity, inserted and deleted will be empty after the rollback, and you’ll be left with no data to audit. It’s sort of a Catch-22 situation.

Remember from Chapter 2 that table variables—like any other variables—are not affected by a rollback, as they are not considered part of an external transaction. You can use this behavior to your advantage. Upon detection of the integrity rule violation, copy the content of inserted and deleted into your own table variables; issue the rollback; and then, in a new transaction within the trigger, audit that data.
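
Here's a minimal sketch of that flow, reusing this section's T1 and T1Audit tables and assuming a hypothetical integrity rule that rejects negative intcol values:

CREATE TRIGGER trg_T1_u_check ON dbo.T1 FOR UPDATE
AS

IF @@rowcount = 0 RETURN;

IF EXISTS(SELECT * FROM inserted WHERE intcol < 0) -- hypothetical rule
BEGIN
  DECLARE @I TABLE(keycol INT, intcol INT, varcharcol VARCHAR(10));
  DECLARE @D TABLE(keycol INT, intcol INT, varcharcol VARCHAR(10));
  INSERT INTO @I SELECT keycol, intcol, varcharcol FROM inserted;
  INSERT INTO @D SELECT keycol, intcol, varcharcol FROM deleted;

  ROLLBACK; -- undo the modification; the table variables keep their rows

  -- audit the rejected attempt in a new transaction
  INSERT INTO dbo.T1Audit(keycol, colname, oldval, newval)
    SELECT I.keycol, N'intcol',
           CAST(D.intcol AS SQL_VARIANT), CAST(I.intcol AS SQL_VARIANT)
    FROM @I AS I
      JOIN @D AS D
        ON I.keycol = D.keycol;
END
GO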

When you’re done, run the following code for cleanup:

IF OBJECT_ID('dbo.T1') IS NOT NULL
  DROP TABLE dbo.T1;
GO
IF OBJECT_ID('dbo.T1Audit') IS NOT NULL
  DROP TABLE dbo.T1Audit;