APPENDIX B


DML Trigger Basics and Templates

In this appendix, I briefly introduce triggers and how they are written, and I present templates that you can use as the basis for creating DML triggers for all sorts of uses. There are two types of DML triggers; they are coded very similarly but serve different purposes:

  • The AFTER trigger fires after an INSERT, UPDATE, or DELETE statement and the constraints it references have completed. It is used to apply complex checks and to add side effects to a DML operation as needed. AFTER triggers are usually used for handling rules that won’t fit into the mold of a constraint, for example, rules that require data to be stored, such as a logging mechanism. You may have a virtually unlimited number of AFTER triggers that fire on INSERT, UPDATE, and DELETE, or any combination of them.
  • The INSTEAD OF trigger fires before a DML operation, even before constraints. The actual operation on the table is not performed, so if you want the data to go in the table, you have to re-perform the operation yourself. In this way, you can do whatever you want with the data, either doing exactly what was requested by the user or doing something completely different (you can even ignore the operation altogether). You can have a maximum of one INSTEAD OF trigger per action (INSERT, UPDATE, or DELETE) per table. It is allowed, but not generally a good idea, to combine all three into a single trigger that fires for all three operations. INSTEAD OF triggers can also be used on views to make any view updatable.
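To give a sense of the shape of an INSTEAD OF trigger before the templates are presented, here is a minimal sketch; the Example.SimpleTable table and its columns are hypothetical, invented just for this illustration:

```sql
--hypothetical table for illustration only
CREATE TABLE Example.SimpleTable
(
    SimpleTableId int PRIMARY KEY,
    Value         varchar(20) NOT NULL
);
GO
CREATE TRIGGER Example.SimpleTable$InsteadOfInsertTrigger
ON Example.SimpleTable
INSTEAD OF INSERT AS
BEGIN
   SET NOCOUNT ON;
   --the requested INSERT has NOT happened; re-perform it ourselves,
   --optionally reshaping the data first (here, forcing uppercase)
   INSERT INTO Example.SimpleTable (SimpleTableId, Value)
   SELECT SimpleTableId, UPPER(Value)
   FROM   inserted;
END;
```

If the trigger omitted the INSERT statement entirely, the operation would be silently ignored, which is one way to guarantee that no modifications reach a table.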

Coding DML triggers is very much like coding a stored procedure, with some important differences. Instead of having data passed in as parameters, you use two special in-memory tables that are instantiated for the life of the operation and are scoped specifically to code executing directly in the trigger. These tables are named inserted and deleted. The inserted table contains the new or updated rows for an INSERT or UPDATE operation, and the deleted table contains the affected rows as they existed before the operation, for DELETE and UPDATE statements. It is important to note that these tables will contain multiple rows if your DML statement modifies more than one row. Also, since they are scoped to the executing trigger, if you call a stored procedure, the tables will not be accessible there, and if your trigger causes another trigger to fire, that trigger sees only the rows for its own triggering operation.

This appendix introduces a set of templates that are used in several chapters of this book, as well as providing fairly exhaustive examples of using the template. (In Chapter 7, I present the more “realistic” uses of triggers, while here in this appendix I am trying to show some of the more esoteric ways triggers can be used.)

While triggers are similar to procedures, there are some fairly major differences that are important. In the following bulleted list are overviews of some of the concepts you need to be cognizant of when writing triggers, including some settings that can be very useful:

  • Multi-row considerations : Triggers fire once per DML statement, regardless of how many rows are affected. Hence, statements within triggers have to be coded with multiple rows in mind. This can be confusing because, unlike what seems natural, trigger code typically needs to look for rows that don’t meet your criteria, instead of rows that do. Unless you want to force users into entering one row at a time, you have to code your triggers in a way that recognizes that more than one row in the table might be modified.
  • Performance : When a reasonable number of rows are dealt with in a trigger (certainly any amount that is part of typical OLTP operations), triggers are usually quite fast, but as the number of modified rows climbs into the thousands, triggers can become tremendous performance drains. This is largely because the inserted and deleted tables aren’t “real” tables, so they don’t have indexes the optimizer can use; hence, the plans chosen for the queries can be overly optimistic about the number of rows in those tables. As said, because OLTP systems usually deal with small numbers of rows at a time, there’s rarely a major performance hit from using triggers, but it is something to be cognizant of any time you create a new trigger, and particularly when you have to do a large data load or purge.
  • Determining modified columns : For performance reasons, you may not want to validate data in a column that isn’t affected by a DML statement. You can tell which columns were part of the INSERT or UPDATE statement by using the UPDATE(<columnName>) function to check whether a column was involved in the DML operation. Note that this does not indicate that a value has changed, just that the column was referenced. For example, given the simple statement UPDATE tableName SET column1 = column1, the values would not change, but UPDATE(column1) would return true. (There is also the COLUMNS_UPDATED() function, which returns a bitmask identifying the columns by their position in the table; a value of 1 + 2 + 4 = 7 would mean the first three columns were updated. However, it’s generally a bad practice to address columns in a table positionally, for future maintenance purposes.)
  • Having multiple AFTER triggers for the same action: It’s possible to have many different triggers on a table, which gives you the ability to add your own triggers to third-party systems without touching the triggers the third party created. However, the order of triggers can be important, especially when you have to validate data that another trigger might modify. You do get some minor control over the order in which triggers fire: using the sp_settriggerorder system stored procedure, you can choose the first and the last trigger to fire. Usually this is all you need, because the cases that matter are typically the first trigger (often the third-party trigger) and the last trigger (such as a trigger to implement an audit trail, as we do in a later section).
  • Nesting triggers : Take care when building AFTER triggers that modify data (in the same table or other tables), because those modifications could in turn cause other triggers to fire. An INSTEAD OF trigger that re-performs its operation will always cause other triggers to fire. I am not going to go into deep detail about the concerns with nesting triggers, but it is important to know that, unlike plain DML statements, DML executed in triggers may or may not cause additional triggers to fire, based on the following settings. Make sure that you test your triggers and settings to ensure that what you expect to occur does occur. The important settings to be concerned with are:
  • Server option (set via sp_configure): nested triggers : When this setting is set to 1, a trigger that modifies a different table causes that table’s triggers to fire. This setting is usually set to 1, because it allows data validations to occur in the other tables without coding every business rule again.
  • Database option (set via ALTER DATABASE): RECURSIVE_TRIGGERS : When set to ON, an AFTER trigger that modifies data in its own table causes that table’s triggers to execute again. This setting is usually set to OFF: because it’s common practice to modify the same table in the trigger, it’s assumed that any modifications done in a trigger will meet all business rules for the same table.
  • Server option (set via sp_configure): disallow results from triggers : It is a very bad practice to return results from a trigger in production code, though when debugging a trigger it can be very useful. Turning this setting on ensures that any trigger that tries to return data to the client will get the following error message: Msg 524, Level 16, State 1, Procedure test$InsertTrigger, A trigger returned a resultset and the server option 'disallow results from triggers' is true.
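The following sketch pulls several of these points together; the table and trigger names are hypothetical, and the fragment using UPDATE() would live inside a trigger body:

```sql
--inside a hypothetical UPDATE trigger: skip validation when column1
--was not referenced by the statement (referenced, not necessarily changed)
IF UPDATE(column1)
   BEGIN
      --validate column1 here
      PRINT 'column1 was referenced by the DML statement';
   END;

--make a hypothetical audit trigger fire last for UPDATE operations
EXEC sp_settriggerorder @triggername = 'Example.SimpleTable$AuditTrigger',
                        @order = 'Last',
                        @stmttype = 'UPDATE';

--server-level setting: nested triggers (usually left at 1)
EXEC sp_configure 'nested triggers', 1;
RECONFIGURE;

--database-level setting: recursive triggers (usually left OFF);
--TriggerDemo is the sample database used later in this appendix
ALTER DATABASE TriggerDemo SET RECURSIVE_TRIGGERS OFF;
```

Note that sp_settriggerorder only lets you pin the first and last triggers; the order of any triggers in between is not guaranteed.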

Because multi-row operations are the most frequently botched aspect of trigger writing, it’s worth discussing this in more detail. If you insert a thousand rows, the inserted table will have a thousand rows, and the deleted table will remain empty. When you delete rows, the deleted table is filled, and the inserted table remains empty. For an UPDATE, both tables are filled: deleted holds the modified rows as they appeared before the update, and inserted holds them as they appear after.

Validations must take this into consideration. For example, the following all-too-typical approach wouldn’t be a good idea:

DECLARE @column1 int;

SELECT @column1 = column1 FROM inserted;

IF @column1 < 0

  BEGIN

    --handle the error

  END;

This is wrong because only a single row would be checked: in this case, the last row the SELECT statement happened to process. (There’s no guaranteed order; @column1 would be assigned every value in the inserted table and would end up holding the last one encountered.) Instead, the proper way to code this (assuming column1 does not allow NULLs) would be as follows:

IF EXISTS (SELECT *

           FROM inserted

           WHERE column1 < 0)

   BEGIN

       --handle the error

   END;

This works because every row in the inserted table is checked against the criteria. If any rows match the criteria, the EXISTS Boolean expression returns TRUE, and the error block is executed.

You’ll see this more in the example triggers. However, you need to make a conscious effort as you start to code triggers to consider what the effect of modifying more than one row would be on your code, because you certainly don’t want to miss an invalid value because of coding like the first wrong example.

If you need a full reference on the many details of triggers, refer to SQL Server Books Online. In the following section, we’ll look at the different types of triggers, the basics of coding them, and how to use them to handle the common tasks for which we use triggers. Luckily, for the most part triggers are straightforward, and the basic settings will work just fine.

Note I cannot stress enough the need to understand multi-row operations in triggers. Almost every time a question about triggers is raised on the forums, the posted code handles only one row.

Triggers often get a bad name because they can be pretty quirky, especially due to the fact that they can kill performance when you are dealing with large updates. For example, if you have a trigger on a table and try to update a million rows, you are likely to have issues. However, for most OLTP operations in a relational database, operations shouldn’t be touching more than a handful of rows at a time. Trigger usage does need careful consideration, but where they are needed, they are terribly useful. My recommendation is to use triggers when you need to do the following:

  • Perform cross-database referential integrity
  • Check inter-row rules, where just looking at the current row isn’t enough for the constraints
  • Check inter-table constraints, when rules require access to data in a different table
  • Introduce desired side effects to your data-modification queries, such as maintaining required denormalizations
  • Guarantee that no insert, update, or delete operations can be executed on a table, even if the user does have rights to perform the operation

Some of these operations could also be done in an application layer, but for the most part, these operations are far easier and safer, particularly for data integrity, when done automatically using triggers. When it comes to data protection, the primary advantage that triggers have over constraints is the ability to access other tables seamlessly and to operate on multiple rows at once. In a trigger, you can run almost every T-SQL command, except for the following:

ALTER DATABASE
CREATE DATABASE
DROP DATABASE
RESTORE DATABASE
RESTORE LOG
RECONFIGURE
LOAD DATABASE
LOAD LOG

Also, you cannot use the following commands on the table that the trigger protects:

CREATE INDEX
ALTER INDEX
DROP INDEX
DBCC DBREINDEX
ALTER PARTITION FUNCTION
ALTER TABLE
DROP TABLE

It wouldn’t be a very good design to change the schema of any table or do any of the things in this list in a trigger anyhow, much less the one that the trigger is built on, so these aren’t overly restrictive requirements at all.

When using triggers, it is important to keep them as lean as possible. Avoid using cursors, calling stored procedures, or doing any sort of looping operation; get the job done fast. If you need to do some extra processing, such as sending an e-mail for every row affected by the trigger, create a table that can be used as a queue for another process to work on. While your code is executing in a trigger, you can be holding locks, unnecessarily forcing other users to wait, and you cannot be certain that the modification that fired the trigger will actually be committed. If the transaction is rolled back, you probably won’t want something like an e-mail sent, but if you send the e-mail directly from the trigger, it may already be gone by the time the rollback occurs.

Note In SQL Server 2005 and later, the SQL-based mail feature uses Service Broker, so if you roll back the transaction, the mail command is rolled back as well, but you would still have to use a cursor to call the send-mail procedure rather than letting that work occur asynchronously.
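As a sketch of the queue-table approach suggested above (the Utility.EmailQueue table and its columns are my own invention, not part of the book’s template objects), the trigger only inserts a row, and a separately scheduled job reads the queue and sends the mail:

```sql
--hypothetical queue table; a scheduled background job sends the mail
--and deletes processed rows
CREATE TABLE Utility.EmailQueue
(
    EmailQueueId int IDENTITY
         CONSTRAINT PKEmailQueue PRIMARY KEY,
    Recipient    nvarchar(256) NOT NULL,
    Subject      nvarchar(256) NOT NULL,
    QueueTime    datetime2(0) NOT NULL
         CONSTRAINT DFLTEmailQueue_QueueTime DEFAULT (SYSDATETIME())
);
GO
--inside the trigger's [modification section], simply queue the work;
--if the transaction rolls back, the queued row disappears with it
--INSERT INTO Utility.EmailQueue (Recipient, Subject)
--SELECT EmailAddress, 'A row you care about was modified'
--FROM   inserted;
```

Because the queued row participates in the trigger’s transaction, a rollback removes it automatically, avoiding the premature-e-mail problem entirely.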

Error handling for triggers uses the simple error-logging procedure that is implemented in Chapter 12 as part of the “Database Template Objects” section, via the following code (repeated in the downloads for this appendix):

--[Error logging section]

DECLARE @ERROR_NUMBER int = ERROR_NUMBER(),

        @ERROR_PROCEDURE sysname = ERROR_PROCEDURE(),

        @ERROR_MESSAGE varchar(4000) = ERROR_MESSAGE();

EXEC Utility.ErrorLog$Insert @ERROR_NUMBER, @ERROR_PROCEDURE, @ERROR_MESSAGE;

It captures the error values from the ERROR_ system functions that are populated after an error occurs and then calls a procedure to write the data to a table named Utility.ErrorLog. This can be commented out if you don’t care to log errors, and it can also be used in any of your code to capture errors.

AFTER Triggers

AFTER triggers fire after all constraints have been checked and passed. This ordering makes sense: it wouldn’t be useful to insert rows in a child table, causing its entire trigger/constraint chain to fire, when the far cheaper operation of checking a foreign key reference or a check constraint might fail the operation. Equally, you wouldn’t want to check the status of all the rows in your table until you’ve completed all your changes to them; the same could be said for cascading delete operations.

All triggers I write in this book will use a common template that sets up the code that is used over and over again (and is pretty tedious to set up the first time anyhow). The template that I use follows. I have done my best to comment the code here and in all uses, so it will be easy for you to reuse. I personally use this for all triggers I create as well.

CREATE TRIGGER <schema>.<tablename>$<actions>[<purpose>]Trigger

ON <schema>.<tablename>

AFTER <comma delimited actions> AS

BEGIN

  SET NOCOUNT ON; --to avoid the rowcount messages

  SET ROWCOUNT 0; --in case the client has modified the rowcount


  DECLARE @msg varchar(2000), --used to hold the error message

  --use inserted for insert or update trigger, deleted for update or delete trigger

  --count instead of @@rowcount due to merge behavior that sets @@rowcount to a number

  --that is equal to number of merged rows, not rows being checked in trigger

      @rowsAffected int = (select count(*) from inserted);

  --    @rowsAffected int = (select count(*) from deleted);

  --no need to continue on if no rows affected

  IF @rowsAffected = 0 return;

  BEGIN TRY

     --[validation section]

     --[modification section]

  END TRY

  BEGIN CATCH

   IF @@trancount > 0

     ROLLBACK TRANSACTION;

     --[Error logging section]

   DECLARE @ERROR_NUMBER int = ERROR_NUMBER(),

           @ERROR_PROCEDURE sysname = ERROR_PROCEDURE(),

           @ERROR_MESSAGE varchar(4000) = ERROR_MESSAGE();

   EXEC Utility.ErrorLog$Insert @ERROR_NUMBER,@ERROR_PROCEDURE,@ERROR_MESSAGE;

   THROW; --will halt the batch or be caught by the caller's catch block

 END CATCH

END

Tip The AFTER keyword arrived in SQL Server 2000, at the same time as INSTEAD OF triggers. Prior to that, the keyword was FOR, since the trigger was for certain actions. Both are still quite common, but it is best to use AFTER in all new code.

I generally write triggers so that when the first error occurs, an error is raised and the transaction is rolled back to halt any further commands. In the trigger template, there are three areas where code is added, denoted by comment headers:

  • --[validation section ]: In this section you will add validation logic that will be executed after the DML has been performed on the table. This would be used instead of a constraint when you need to code some complex validation that doesn’t fit the mold of a constraint well.
  • --[modification section ]: Used for DML statements to modify the contents of tables or to do some other operation that has side effects of some sort. In this section, you might modify the same table as the triggering table or any other table.
  • --[Error logging section ]: This is the part of the trigger where you log any errors, either by inserting them into a table or by writing to the error log using xp_logevent. In the trigger templates, I use the Utility.ErrorLog$Insert procedure we created in Chapter 12, which writes to the Utility.ErrorLog table.

The form I use for almost every [validation section] I build is similar to the following code. I typically code different messages for the case when exactly one row was affected by the DML operation, by checking the @rowsAffected variable that is set earlier in the trigger from the number of rows in the inserted or deleted tables. This allows a better error message (one that can include the invalid value) for the typical singleton case and a more generic explanation when many rows were changed. (If you commonly get errors in multi-row operations, you can enhance the multi-row error message to use an aggregate to return a single invalid value, but this can be costly for large updates.)

IF EXISTS (<Boolean condition, commonly using inserted and/or deleted tables>)

   BEGIN

      IF @rowsAffected = 1 --custom error message for single row

             SELECT @msg = CONCAT('<reason>', inserted.value)

             FROM inserted; --and/or deleted, depending on action

      ELSE

         SELECT @msg = '<more generic reason>';

      --in the TRY . . . CATCH block, this will redirect to the CATCH

      THROW 50000, @msg, 16;

   END;

Thanks to the TRY...CATCH construct, the [modification section] in SQL Server 2005 and later can simply consist of straightforward INSERT, UPDATE, or DELETE statements. Any errors raised by the DML, such as from a constraint or another trigger, will be caught and sent to the CATCH block. In the CATCH block, I use the Utility.ErrorLog$Insert procedure mentioned earlier to log the error for later debugging.

It is necessary here to explain further how I capture the number of rows affected. In earlier editions of the book, I simply captured the output of @@rowcount, which was sufficient until SQL Server 2008, when the MERGE statement changed things (and in the last edition of the book I got it wrong): every action clause of a MERGE statement calls the trigger, and the @@rowcount value is the one for the entire MERGE statement, not the individual clause. So instead of @@rowcount, I use the following (comments from the trigger removed for clarity):

DECLARE @msg varchar(2000), --used to hold the error message

        @rowsAffected int = (SELECT COUNT(*) FROM inserted);

        --@rowsAffected int = (SELECT COUNT(*) FROM deleted);

For the insert and update triggers, I count rows in inserted; the delete trigger uses the deleted virtual table. As an example of the issue, take the following simple table (in the downloaded code, I put this code in a new database called TriggerDemo):

CREATE TABLE test

(

   testId int

);

Then I add the following simple trigger that covers all actions. All the trigger does is output a row showing the value of the @@rowcount global variable, the number of rows in each of the virtual tables, and a calculated value that tells you whether the operation was an INSERT, UPDATE, or DELETE.

CREATE TRIGGER test$InsertUpdateDeleteTrigger

ON test

AFTER INSERT, UPDATE, DELETE AS

BEGIN

         DECLARE @rowcount int = @@rowcount, --stores the number of rows affected

                 @rowcountInserted int = (SELECT count(*) FROM inserted),

                 @rowcountDeleted int = (SELECT COUNT(*) FROM deleted);

         SELECT  @rowcount as [@@rowcount],

                 @rowcountInserted as [@rowcountInserted],

                 @rowcountDeleted as [@rowcountDeleted],

                        CASE WHEN @rowcountInserted = 0 THEN 'DELETE'

                           WHEN @rowcountDeleted = 0 THEN 'INSERT'

                                 ELSE 'UPDATE' END AS Operation;

END;

Remember, if you have set the disallow results from triggers option, you will need to turn it off for this to work. Be sure to turn it back on after the demonstrations: while allowing results from triggers is an excellent way to test your code, in practice you do not want results returned from your triggers.

EXEC sp_configure 'show advanced options',1;

RECONFIGURE;

GO

EXEC sp_configure 'disallow results from triggers',0;

RECONFIGURE;

So now, let's add two rows to the table:

INSERT INTO test

VALUES (1),

       (2);

From the trigger, you will see the following output:

@@rowcount @rowcountInserted @rowcountDeleted Operation
---------- ----------------- ---------------- ---------
2          2                 0                INSERT

As expected, @@rowcount was 2, and the count of rows from the inserted table is also 2. Now, execute this simple MERGE that uses a CTE and will produce three operations: a delete for testId = 1, an update for testId = 2, and an insert for testId = 3.

WITH testMerge as (SELECT *

                   FROM (Values(2),(3)) as testMerge (testId))

MERGE test

USING (SELECT testId FROM testMerge) AS source (testId)

       ON (test.testId = source.testId)

WHEN MATCHED THEN

        UPDATE SET testId = source.testId

WHEN NOT MATCHED THEN

        INSERT (testId) VALUES (source.testId)

WHEN NOT MATCHED BY SOURCE THEN

    DELETE;

Now you will see the following output, which makes it appear as if nine rows were modified, even though what actually happened was that one row was inserted, one row was updated, and one row was deleted:

@@rowcount @rowcountInserted @rowcountDeleted Operation
---------- ----------------- ---------------- ---------
3          1                 0                INSERT

@@rowcount @rowcountInserted @rowcountDeleted Operation
---------- ----------------- ---------------- ---------
3          1                 1                UPDATE

@@rowcount @rowcountInserted @rowcountDeleted Operation
---------- ----------------- ---------------- ---------
3          0                 1                DELETE

In my normal triggers, I generally do not mix DML operations (so I have an insert trigger, an update trigger, and a separate delete trigger), and if I had used the @@rowcount value for anything (such as the early RETURN when zero rows are affected), each trigger would have behaved as if multiple rows had been modified. In this example, the trigger was called three times, each for a single-row modification, but @@rowcount indicated three rows every time.

In Chapter 7, AFTER triggers will be used to solve the following sorts of realistic problems:

  • Range checks on multiple rows: Make sure that a summation of the values of a column, usually over some grouping, stays within a specific range of values.
  • Maintaining summary values (only as necessary): Basically updating one value whenever one or more values change in a different table.
  • Cascading inserts: After a row is inserted into a table, one or more other new rows are automatically inserted into other tables. This is frequently done when you need to initialize a row in another table, quite often a status of some sort.
  • Child-to-parent cascades: Performing cascading operations that cannot be done using a typical foreign key constraint.
  • Maintaining an audit trail: Logging changes made to a table in the background.
  • Relationships that span databases and servers: Basic referential integrity only works within the confines of a database.

In this appendix, I will present an extended example that demonstrates some of the power and uses of triggers. I create a trigger that makes sure that data, grouped on a given value, always keeps a sum greater than 0. The triggers in this example are some of the most complex triggers I have written (since the advent of constraints, at least), and I made the trigger a bit complex on purpose to show some of the power of triggers that is seldom needed (but certainly interesting when needed).

I take the data from a table called Example.AfterTriggerExample, which has a simple integer key and a column called GroupingValue that serves as a kind of account to group on; a second table, Example.AfterTriggerExampleGroupBalance, holds the running balance.

CREATE SCHEMA Example;

GO

--this is the "transaction" table

CREATE TABLE Example.AfterTriggerExample

(

    AfterTriggerExampleId int CONSTRAINT PKAfterTriggerExample PRIMARY KEY,

    GroupingValue   varchar(10) NOT NULL,

    Value           int NOT NULL

);

GO

--this is the table that holds the summary data

CREATE TABLE Example.AfterTriggerExampleGroupBalance

(

     GroupingValue varchar(10) NOT NULL

          CONSTRAINT PKAfterTriggerExampleGroupBalance PRIMARY KEY,

     Balance int NOT NULL

);

Then I create the following insert trigger. The code is commented as to what is occurring. There are two major sections: one to validate data, which does a summation of the items in the table and makes sure the sums do not go negative, and another to write the denormalized summary data. I could have implemented the non-negative requirement by putting a constraint on Example.AfterTriggerExampleGroupBalance to require a Balance >= 0, but I am trying to show a trigger with both a validation and a cascading/modification action.

CREATE TRIGGER Example.AfterTriggerExample$InsertTrigger

ON Example.AfterTriggerExample

AFTER INSERT AS

BEGIN

  SET NOCOUNT ON; --to avoid the rowcount messages

  SET ROWCOUNT 0; --in case the client has modified the rowcount

  DECLARE @msg varchar(2000), --used to hold the error message

  --use inserted for insert or update trigger, deleted for update or delete trigger

  --count instead of @@rowcount due to merge behavior that sets @@rowcount to a number

  --that is equal to number of merged rows, not rows being checked in trigger

      @rowsAffected int = (SELECT COUNT(*) FROM inserted);

  --      @rowsAffected int = (SELECT COUNT(*) FROM deleted);

  --no need to continue on if no rows affected

  IF @rowsAffected = 0 RETURN;

  BEGIN TRY

        --[validation section]

        --use a WHERE EXISTS against inserted to make sure rows are not duplicated in the set

        --if > 1 row is modified for the same grouping value

        IF EXISTS (SELECT AfterTriggerExample.GroupingValue

             FROM Example.AfterTriggerExample

             WHERE EXISTS (SELECT *

             FROM Inserted

             WHERE AfterTriggerExample.GroupingValue =

                            Inserted.Groupingvalue)

                  GROUP BY AfterTriggerExample.GroupingValue

                  HAVING SUM(Value) < 0)

        BEGIN

           IF @rowsAffected = 1

              SELECT @msg = CONCAT('Grouping Value "', GroupingValue,

                   '" balance value after operation must be greater than 0')

              FROM inserted;

           ELSE

              SELECT @msg = CONCAT('The total for the grouping value must ',

                                   'be greater than 0');

           THROW 50000, @msg, 16;

        END;

      --[modification section]

      --get the balance for any Grouping Values used in the DML statement

      WITH GroupBalance AS

      (SELECT AfterTriggerExample.GroupingValue, SUM(Value) as NewBalance

       FROM Example.AfterTriggerExample

       WHERE EXISTS (SELECT *

                     FROM Inserted

                     WHERE AfterTriggerExample.GroupingValue = Inserted.Groupingvalue)

       GROUP BY AfterTriggerExample.GroupingValue )

      --use merge because there may not be an existing balance row for the grouping value

      MERGE Example.AfterTriggerExampleGroupBalance

      USING (SELECT GroupingValue, NewBalance FROM GroupBalance)

                           AS source (GroupingValue, NewBalance)

      ON (AfterTriggerExampleGroupBalance.GroupingValue = source.GroupingValue)

      WHEN MATCHED THEN --a grouping value already existed

          UPDATE SET Balance = source.NewBalance

      WHEN NOT MATCHED THEN --this is a new grouping value

          INSERT (GroupingValue, Balance)

          VALUES (Source.GroupingValue, Source.NewBalance);

   END TRY

   BEGIN CATCH

    IF @@trancount > 0

      ROLLBACK TRANSACTION;

    --[Error logging section]

      DECLARE @ERROR_NUMBER int = ERROR_NUMBER(),

              @ERROR_PROCEDURE sysname = ERROR_PROCEDURE(),

              @ERROR_MESSAGE varchar(4000) = ERROR_MESSAGE();

      EXEC Utility.ErrorLog$Insert @ERROR_NUMBER,@ERROR_PROCEDURE,@ERROR_MESSAGE;

    THROW; --will halt the batch or be caught by the caller's catch block

  END CATCH

END;

GO

After adding the trigger, we can test it. Let’s add two new rows, each in a single-row statement. Both inserts will succeed: the first causes the MERGE statement to add a new row to the Example.AfterTriggerExampleGroupBalance table, and the second updates that row:

INSERT INTO Example.AfterTriggerExample(AfterTriggerExampleId,GroupingValue,Value)

VALUES (1,'Group A',100);

GO

INSERT INTO Example.AfterTriggerExample(AfterTriggerExampleId,GroupingValue,Value)

VALUES (2,'Group A',-50);

Before we look at the summary table, let’s check the case where the balance would end up less than 0. Adding this row with a Value of -100 to the 100 and -50 from earlier would cause a negative balance:

INSERT INTO Example.AfterTriggerExample(AfterTriggerExampleId,GroupingValue,Value)

VALUES (3,'Group A',-100);

This will cause the following error. Note that it uses the single row error message we set up, telling us the group that caused the error to help make error tracking easier:

Msg 50000, Level 16, State 16, Procedure AfterTriggerExample$InsertTrigger, Line 39

Grouping Value "Group A" balance value after operation must be greater than 0

Next, to show a multi-row error, I try to add two new rows whose net effect takes the group’s sum below 0.

INSERT INTO Example.AfterTriggerExample(AfterTriggerExampleId,GroupingValue,Value)

VALUES (3,'Group A',10),

       (4,'Group A',-100);

This causes the following, far more generic, error:

Msg 50000, Level 16, State 16, Procedure AfterTriggerExample$InsertTrigger, Line 39

The total for the grouping value must be greater than 0

Of course, if you need better messages, you can build a more interesting error handler, but since almost all code that may cause an error in most systems operates one row at a time, it is probably not worth the effort. While it is fairly easy to build a single-row error message, the multi-row message is not so easy, because you would need to determine which rows were at fault, making the validation both more costly to execute and more costly to code.
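If the richer message is worth it to you, one possible approach is to rerun the validation query and concatenate the offending grouping values into the message. This is a sketch only, meant as a replacement for the ELSE branch of the INSERT trigger's validation section; the STUFF...FOR XML PATH trick is the usual way to aggregate strings in SQL Server 2012, and note that it repeats the validation work, which is part of why it may not be worth the cost:

```sql
--sketch: replaces the ELSE branch in the INSERT trigger's validation section,
--so it runs inside the trigger and can reference the inserted table
SELECT @msg = CONCAT('The following grouping values would have a negative ',
                     'balance after the operation: ',
          STUFF((SELECT ', "' + AfterTriggerExample.GroupingValue + '"'
                 FROM   Example.AfterTriggerExample
                 WHERE  EXISTS (SELECT *
                                FROM   inserted
                                WHERE  AfterTriggerExample.GroupingValue =
                                               inserted.GroupingValue)
                 GROUP  BY AfterTriggerExample.GroupingValue
                 HAVING SUM(Value) < 0
                 FOR XML PATH('')), 1, 2, ''));

THROW 50000, @msg, 16;
```

The inner SELECT produces one ', "GroupName"' fragment per failing group; STUFF strips the leading comma and space before the list lands in the message.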

For the error that was raised, you will be able to see a row in the Utility.ErrorLog table, if you implemented it:

SELECT *

FROM   utility.ErrorLog;

This will return at least the following row, depending on what other calls you have made:

ErrorLogId Number Location                          Message
---------- ------ --------------------------------- -----------------------------
22         50000  AfterTriggerExample$InsertTrigger Grouping Value "Group A"
                                                    balance value after operation
                                                    must be greater than 0

LogTime                ServerPrincipal
---------------------- --------------------
2012-02-21 00:06:27.35 DENALI-PC\AlienDrsql

Next, we will do another multi-row INSERT, this time one that does not fail validation:

INSERT INTO Example.AfterTriggerExample(AfterTriggerExampleId,GroupingValue,Value)

VALUES (5,'Group A',100),

    (6,'Group B',200),

    (7,'Group B',150);

Now, let’s look at the data that has been created:

SELECT *

FROM   Example.AfterTriggerExample;

SELECT *

FROM   Example.AfterTriggerExampleGroupBalance;

This returns:

AfterTriggerExampleId GroupingValue Value
--------------------- ------------- -----
1                     Group A       100
2                     Group A       -50
5                     Group A       100
6                     Group B       200
7                     Group B       150

GroupingValue Balance
------------- -------
Group A       150
Group B       350

Take a look at the data, and make sure that the numbers add up. Test your code in as many ways as you possibly can. One of the reasons we don’t generally write such messy code in triggers is that it needs a lot of testing. Also, remember that all of the triggers we are building simply maintain summary data that is nothing more than a precomputed version of a fairly simple query:

SELECT GroupingValue, SUM(Value) as Balance

FROM   Example.AfterTriggerExample

GROUP  BY GroupingValue;

image  Note Sometimes it is useful to work through an exercise like this to learn the difficulties of this kind of code, both to prepare you for when you do need to write something complex and to show you why you should avoid doing this in your own code when you can.

Next we move on to the UPDATE trigger. It is very similar in nature, and the validation section will be the same, with the slight change that the FROM clause of the subquery uses a UNION of queries against the inserted and deleted tables. You could make one validation trigger that handles INSERT, UPDATE, and DELETE, but I prefer one trigger per operation, accepting a bit of duplicated code, to make things easier to manage at the DBA level.


Logically, an update is a delete plus an insert: when you change a row, the old version of the row is represented in the deleted table and the new version in the inserted table. The MERGE statement for the UPDATE trigger therefore has to deal with one additional branch: a grouping value could be brand new (created by the update), or a group present in the deleted table may no longer exist in the base table, in which case its balance row must be deleted.

CREATE TRIGGER Example.AfterTriggerExample$UpdateTrigger

ON Example.AfterTriggerExample

AFTER UPDATE AS

BEGIN

  SET NOCOUNT ON; --to avoid the rowcount messages

  SET ROWCOUNT 0; --in case the client has modified the rowcount

  DECLARE @msg varchar(2000), --used to hold the error message

  --use inserted for insert or update trigger, deleted for update or delete trigger

  --count instead of @@rowcount due to merge behavior that sets @@rowcount to a number

  --that is equal to number of merged rows, not rows being checked in trigger

      @rowsAffected int = (SELECT COUNT(*) FROM inserted);

  --   @rowsAffected int = (SELECT COUNT(*) FROM deleted);

  --no need to continue on if no rows affected

  IF @rowsAffected = 0 RETURN;

  BEGIN TRY

      --[validation section]

      --Use a WHERE EXISTS to inserted to make sure not to duplicate rows in the set

      --if > 1 row is modified for the same grouping value

      IF EXISTS (SELECT AfterTriggerExample.GroupingValue

                 FROM Example.AfterTriggerExample

                 --need to check total on any rows that were modified, even if key change

                 WHERE EXISTS (SELECT *

                               FROM Inserted

                               WHERE AfterTriggerExample.GroupingValue =

                                          Inserted.Groupingvalue

                               UNION ALL

                               SELECT *

                               FROM Deleted

                               WHERE AfterTriggerExample.GroupingValue =

                                          Deleted.Groupingvalue)

                 GROUP BY AfterTriggerExample.GroupingValue

                 HAVING SUM(Value) < 0)

                 BEGIN

                       IF @rowsAffected = 1

                         SELECT @msg = CONCAT('Grouping Value "',

                           COALESCE(inserted.GroupingValue,deleted.GroupingValue),

                          '" balance value after operation must be greater than 0')

                         FROM inserted --only one row could be returned…

                             CROSS JOIN deleted;

                       ELSE

                         SELECT @msg = CONCAT('The total for the grouping value must ',

                                   'be greater than 0');

                       THROW 50000, @msg, 16;

                 END

      --[modification section]

      --get the balance for any Grouping Values used in the DML statement

      SET ANSI_WARNINGS OFF; --we know we will be summing on a NULL, with no better way

      WITH GroupBalance AS

      (SELECT ChangedRows.GroupingValue, SUM(Value) as NewBalance

       FROM Example.AfterTriggerExample

       --the right outer join makes sure that we get all groups, even if no data

       --remains in the table for a set

            RIGHT OUTER JOIN

                        (SELECT GroupingValue

                         FROM Inserted

                         UNION

                         SELECT GroupingValue

                         FROM Deleted ) as ChangedRows

            --the join make sure we only get rows for changed grouping values

               ON ChangedRows.GroupingValue = AfterTriggerExample.GroupingValue

       GROUP BY ChangedRows.GroupingValue )

      --use merge because the user may change the grouping value, and

      --That could even cause a row in the balance table to need to be deleted

      MERGE Example.AfterTriggerExampleGroupBalance

      USING (SELECT GroupingValue, NewBalance FROM GroupBalance)

                      AS source (GroupingValue, NewBalance)

      ON (AfterTriggerExampleGroupBalance.GroupingValue = source.GroupingValue)

      WHEN MATCHED and Source.NewBalance IS NULL --should only happen with changed key

           THEN DELETE

      WHEN MATCHED THEN --normal case, where an amount was updated

           UPDATE SET Balance = source.NewBalance

      WHEN NOT MATCHED THEN --should only happen with changed

                                 --key that didn't previously exist

              INSERT (GroupingValue, Balance)

              VALUES (Source.GroupingValue, Source.NewBalance);

      SET ANSI_WARNINGS ON; --restore proper setting, even if you don't need to

   END TRY

   BEGIN CATCH

    IF @@trancount > 0

      ROLLBACK TRANSACTION;

    --[Error logging section]

      DECLARE @ERROR_NUMBER int = ERROR_NUMBER(),

              @ERROR_PROCEDURE sysname = ERROR_PROCEDURE(),

              @ERROR_MESSAGE varchar(4000) = ERROR_MESSAGE()

      EXEC Utility.ErrorLog$Insert @ERROR_NUMBER,@ERROR_PROCEDURE,@ERROR_MESSAGE;

    THROW; --will halt the batch or be caught by the caller's catch block

  END CATCH

END;

As a reminder, this is where the balances stood after the INSERT statement section:

GroupingValue Balance
------------- -------
Group A       150
Group B       350

So now, we update one row, setting a row that used to be 100 down to 50:

UPDATE Example.AfterTriggerExample

SET    Value = 50 --Was 100

WHERE  AfterTriggerExampleId = 5;

Looking at the balance again:

SELECT *

FROM   Example.AfterTriggerExampleGroupBalance;

You will see that the balance has gone down to 100 for Group A.

GroupingValue Balance
------------- -------
Group A       100
Group B       350

The next thing to test is changing the value that is being grouped on. This will cause a row in the balance table to be deleted, and a new one to be added by the MERGE statement.

--Changing the key

UPDATE Example.AfterTriggerExample

SET    GroupingValue = 'Group C'

WHERE  GroupingValue = 'Group B';

Looking at the balance again:

GroupingValue Balance
------------- -------
Group A       100
Group C       350

You can see that the GroupingValue has now changed from B to C, because all of the source data was changed. To set up the next example, let’s change the Value of every row to 10:

--all rows

UPDATE Example.AfterTriggerExample

SET    Value = 10 ;

This changes our data to look like:

SELECT *

FROM   Example.AfterTriggerExample;

SELECT *

FROM   Example.AfterTriggerExampleGroupBalance;

Returning:

AfterTriggerExampleId GroupingValue Value
--------------------- ------------- -----
1                     Group A       10
2                     Group A       10
5                     Group A       10
6                     Group C       10
7                     Group C       10

GroupingValue Balance
------------- -------
Group A       30
Group C       20

Check to make sure a multi-row failure works:

--violate business rules

UPDATE Example.AfterTriggerExample

SET    Value = -10;

This returns:

Msg 50000, Level 16, State 16, Procedure AfterTriggerExample$UpdateTrigger, Line 45

The total for the grouping value must be greater than 0

Finally, we work on the DELETE trigger. It very much resembles the other triggers, except that we use the deleted table as our primary source, and since a DELETE introduces no new data, the MERGE statement needs one less matching criterion.

CREATE TRIGGER Example.AfterTriggerExample$DeleteTrigger

ON Example.AfterTriggerExample

AFTER DELETE AS

BEGIN

  SET NOCOUNT ON; --to avoid the rowcount messages

  SET ROWCOUNT 0; --in case the client has modified the rowcount

  DECLARE @msg varchar(2000), --used to hold the error message

  --use inserted for insert or update trigger, deleted for update or delete trigger

  --count instead of @@rowcount due to merge behavior that sets @@rowcount to a number

  --that is equal to number of merged rows, not rows being checked in trigger

  --   @rowsAffected int = (SELECT COUNT(*) FROM inserted);

         @rowsAffected int = (SELECT COUNT(*) FROM deleted);

  --no need to continue on if no rows affected

  IF @rowsAffected = 0 RETURN;

  BEGIN TRY

     --[validation section]

     --Use a WHERE EXISTS to inserted to make sure not to duplicate rows in the set

     --if > 1 row is modified for the same grouping value

     IF EXISTS (SELECT AfterTriggerExample.GroupingValue

                FROM Example.AfterTriggerExample

                WHERE EXISTS (SELECT * --delete trigger only needs check deleted rows

                              FROM Deleted

                              WHERE AfterTriggerExample.GroupingValue =

                                   Deleted.Groupingvalue)

                GROUP BY AfterTriggerExample.GroupingValue

                HAVING SUM(Value) < 0)

     BEGIN

       IF @rowsAffected = 1

               SELECT @msg = CONCAT('Grouping Value "', GroupingValue,

                      '" balance value after operation must be greater than 0')

               FROM deleted; --use deleted for deleted trigger

       ELSE

        SELECT @msg = 'The total for the grouping value must be greater than 0';

       THROW 50000, @msg, 16;

    END

    --[modification section]

    --get the balance for any Grouping Values used in the DML statement

     SET ANSI_WARNINGS OFF; --we know we will be summing on a NULL, with no better way

     WITH GroupBalance AS

     (SELECT ChangedRows.GroupingValue, SUM(Value) as NewBalance

      FROM Example.AfterTriggerExample

      --the right outer join makes sure that we get all groups, even if no data

      --remains in the table for a set

           RIGHT OUTER JOIN

                 (SELECT GroupingValue

                  FROM Deleted ) as ChangedRows

             --the join make sure we only get rows for changed grouping values

             ON ChangedRows.GroupingValue = AfterTriggerExample.GroupingValue

     GROUP BY ChangedRows.GroupingValue)

    --use merge because the delete may or may not remove the last row for a

    --group which could even cause a row in the balance table to need to be deleted

     MERGE Example.AfterTriggerExampleGroupBalance

     USING (SELECT GroupingValue, NewBalance FROM GroupBalance)

                                  AS source (GroupingValue, NewBalance)

     ON (AfterTriggerExampleGroupBalance.GroupingValue = source.GroupingValue)

     WHEN MATCHED and Source.NewBalance IS Null --you have deleted the last key

        THEN DELETE

     WHEN MATCHED THEN --there were still rows left after the delete

        UPDATE SET Balance = source.NewBalance;

    SET ANSI_WARNINGS ON; --restore proper setting

 END TRY

 BEGIN CATCH

  IF @@trancount > 0

    ROLLBACK TRANSACTION;

    --[Error logging section]

    DECLARE @ERROR_NUMBER int = ERROR_NUMBER(),

            @ERROR_PROCEDURE sysname = ERROR_PROCEDURE(),

            @ERROR_MESSAGE varchar(4000) = ERROR_MESSAGE();

    EXEC Utility.ErrorLog$Insert @ERROR_NUMBER,@ERROR_PROCEDURE,@ERROR_MESSAGE;

  THROW;--will halt the batch or be caught by the caller's catch block

 END CATCH

END;

To test this code, I set all of the balances to 0 by setting rows 2 and 5 to -5, balancing the 10 in Group A, and setting row 6 to -10, balancing the 10 in Group C.

UPDATE Example.AfterTriggerExample

SET    Value = -5

WHERE  AfterTriggerExampleId in (2,5);

UPDATE Example.AfterTriggerExample

SET    Value = -10

WHERE  AfterTriggerExampleId = 6;

This leaves the data in the following state:

AfterTriggerExampleId GroupingValue Value
--------------------- ------------- -----
1                     Group A       10
2                     Group A       -5
5                     Group A       -5
6                     Group C       -10
7                     Group C       10

GroupingValue Balance
------------- -------
Group A       0
Group C       0

First, we will try to delete the positive value from Group A:

DELETE FROM Example.AfterTriggerExample

WHERE  AfterTriggerExampleId = 1;

This gives you the following message:

Msg 50000, Level 16, State 16, Procedure AfterTriggerExample$DeleteTrigger, Line 40

Grouping Value "Group A" balance value after operation must be greater than 0

Next, we will try deleting both of the positive values:

DELETE FROM Example.AfterTriggerExample

WHERE  AfterTriggerExampleId in (1,7);

This returns the generic multi-row error message that we created:

Msg 50000, Level 16, State 16, Procedure AfterTriggerExample$DeleteTrigger, Line 38

The total for the grouping value must be greater than 0

Finally, we will systematically unload the table:

DELETE FROM Example.AfterTriggerExample

WHERE  AfterTriggerExampleId = 6;

Now you can see that the C group is 10 because we deleted the negative value:

GroupingValue Balance
------------- -------
Group A       0
Group C       10

Now, we add back a Group B row:

INSERT INTO Example.AfterTriggerExample

VALUES (8, 'Group B',10);

Now our data looks like the following:

AfterTriggerExampleId GroupingValue Value
--------------------- ------------- -----
1                     Group A       10
2                     Group A       -5
5                     Group A       -5
7                     Group C       10
8                     Group B       10

GroupingValue Balance
------------- -------
Group A       0
Group B       10
Group C       10

Delete the entire Group A:

DELETE FROM Example.AfterTriggerExample

WHERE  AfterTriggerExampleId in (1,2,5);

Now the summary table looks like:

GroupingValue Balance
------------- -------
Group B       10
Group C       10

Finally, just clear the table:

DELETE FROM Example.AfterTriggerExample;

Leaving the tables both empty:

AfterTriggerExampleId GroupingValue Value
--------------------- ------------- -----

GroupingValue Balance
------------- -------

image  Tip All of this code to test the structures may seem like overkill, but I definitely suggest that you perform very much the same tests on your own tables with triggers. I had to fix quite a few errors in my code as I worked through this example, because you have to deal with all of the particulars of the inserts, updates, and deletes that a user can perform.

At this point, we have a set of triggers to maintain a summary table, but we haven’t yet covered the entire case. To deal with it completely, you would need to add triggers to the Example.AfterTriggerExampleGroupBalance table to make sure a row couldn’t be modified unless it sums up to the values in the Example.AfterTriggerExample table, or find a way to lock the table down completely, although any measure that prevents the dbo from changing the data would also prevent the triggers we created from doing their job. I won’t present that work here, but it is basically the same problem covered in Chapter 7, in the AFTER trigger section on range checks over multiple rows.

INSTEAD OF Triggers

INSTEAD OF triggers are different from AFTER triggers in that they fire prior to the DML action being carried out by the SQL engine. In fact, when you have an INSTEAD OF trigger on a table, it’s the first thing that’s done when you INSERT, UPDATE, or DELETE from the table. These triggers are named INSTEAD OF because they fire instead of the native action the user executed. Inside the trigger, you perform the action manually—either the action that the user performed or some other action.

Instead of triggers have a fairly narrow set of use cases. The most typical use is to automatically populate a value, such as the last time a change to a row occurred (and since you perform the action in your code, no DML operation can override your setting).

A second value of instead of triggers is that they can be created on a view to make the view editable in a straightforward manner for the client. While some views are editable, you are only allowed to update a single underlying table at a time. By applying an instead of trigger to the view, you control the DML operation, and you can insert into multiple tables in the background. Doing this, you encapsulate calls to all the affected tables in the trigger, much as you would in a stored procedure, except now the view has all the properties of a physical table, hiding the actual implementation from users.
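As a sketch of the view technique, consider hypothetical Example.Parent and Example.Child tables (these are not part of this appendix's example schema) joined by a view. An INSTEAD OF INSERT trigger on the view can split each inserted row across the two base tables:

```sql
--hypothetical base tables and view, for illustration only
CREATE TABLE Example.Parent
(
    ParentId   int NOT NULL CONSTRAINT PKParent PRIMARY KEY,
    ParentName varchar(30) NOT NULL
);
CREATE TABLE Example.Child
(
    ChildId   int IDENTITY CONSTRAINT PKChild PRIMARY KEY,
    ParentId  int NOT NULL CONSTRAINT FKChild$ref$Parent
                     REFERENCES Example.Parent (ParentId),
    ChildName varchar(30) NOT NULL
);
GO
CREATE VIEW Example.ParentChild
AS
SELECT Parent.ParentId, Parent.ParentName, Child.ChildName
FROM   Example.Parent
          JOIN Example.Child
             ON Child.ParentId = Parent.ParentId;
GO
CREATE TRIGGER Example.ParentChild$InsteadOfInsertTrigger
ON Example.ParentChild
INSTEAD OF INSERT AS
BEGIN
   SET NOCOUNT ON;

   --add any parent rows that do not yet exist...
   INSERT INTO Example.Parent (ParentId, ParentName)
   SELECT DISTINCT ParentId, ParentName
   FROM   inserted
   WHERE  NOT EXISTS (SELECT *
                      FROM   Example.Parent
                      WHERE  Parent.ParentId = inserted.ParentId);

   --...then the child rows, so one INSERT to the view touches two tables
   INSERT INTO Example.Child (ParentId, ChildName)
   SELECT ParentId, ChildName
   FROM   inserted;
END;
```

With this trigger in place, an INSERT naming the view (for example, INSERT INTO Example.ParentChild (ParentId, ParentName, ChildName) VALUES (1, 'Parent One', 'Child One');) succeeds even though the view joins two tables. A production version would also need validation and error handling along the lines of the templates in this appendix.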

Probably the most obvious limitation of INSTEAD OF triggers is that you can have only one for each action (INSERT, UPDATE, and DELETE) on a table. You can combine actions into a single trigger just as you can for AFTER triggers, but I strongly advise against it for INSTEAD OF triggers. We’ll use pretty much the same trigger template that we used for the T-SQL AFTER triggers, with the one modification that you now have to add a step to perform the action that the user was trying to do, which I comment as <perform action>. This tag indicates where we will put the DML operations that make the modifications.

CREATE TRIGGER <schema>.<tablename>$InsteadOf<actions>Trigger

ON <schema>.<tablename>

INSTEAD OF <comma delimited actions> AS

BEGIN

  SET NOCOUNT ON; --to avoid the rowcount messages

  SET ROWCOUNT 0; --in case the client has modified the rowcount

  DECLARE @msg varchar(2000), --used to hold the error message

  --use inserted for insert or update trigger, deleted for update or delete trigger

  --count instead of @@rowcount due to merge behavior that sets @@rowcount to a number

  --that is equal to number of merged rows, not rows being checked in trigger

      @rowsAffected int = (SELECT COUNT(*) FROM inserted);

  --  @rowsAffected int = (SELECT COUNT(*) FROM deleted);

  --no need to continue on if no rows affected

  IF @rowsAffected = 0 RETURN;

  BEGIN TRY

     --[validation section]

     --[modification section]

     --<perform action>

  END TRY

  BEGIN CATCH

   IF @@trancount > 0

       ROLLBACK TRANSACTION;

   --[Error logging section]

   DECLARE @ERROR_NUMBER int = ERROR_NUMBER(),

           @ERROR_PROCEDURE sysname = ERROR_PROCEDURE(),

           @ERROR_MESSAGE varchar(4000) = ERROR_MESSAGE();

   EXEC Utility.ErrorLog$Insert @ERROR_NUMBER,@ERROR_PROCEDURE,@ERROR_MESSAGE;

   THROW; --will halt the batch or be caught by the caller's catch block

  END CATCH

END;

The most difficult part of the INSTEAD OF trigger is that you have to perform the operation yourself, meaning you have to maintain the triggers anytime the table needs to change. Technically, performing the action is optional, and in the examples in Chapter 7, I use INSTEAD OF triggers to prevent a DML operation from occurring altogether.

I most often use INSTEAD OF triggers to set or modify values in my statements automatically so that the values are set to what I want, no matter what the client sends in a statement. A good example is a column to record the last time the row was modified. If you record last update times through client calls, it can be problematic if one of the clients’ clocks is a minute, a day, or even a year off. (You see this all the time in applications. My favorite example was a system where phone calls appeared to take negative amounts of time, because the client was reporting when a call started and the server was recording when it stopped.) It’s generally a best practice not to use INSTEAD OF triggers for validations and to use them only to shape the way the data looks by the time it’s stored in the DBMS. In Chapter 7, I demonstrated four ways you can use INSTEAD OF triggers:

  • Automatically maintained columns: Automatically setting a value, like the point in time when a row was last updated
  • Formatting user input: Forcing a value in a table to meet a given format, like forcing values in a column to be all CAPS
  • Redirecting invalid data to an exception table: Taking values that are outside of a given tolerance, and instead of returning an error, pushing the error off to be looked at later
  • Forcing no action: Stopping a DML action from being performed on a table, even by someone who technically has proper rights

image  Note INSTEAD OF triggers tend to really annoy developers who want complete control over changing data, even more than an AFTER trigger does, though an AFTER trigger usually does something the non-data layer might not be able to do. I am not asserting that triggers are always the best way, though I do prefer having data-tier control over some functions, so that certain operations (such as capturing who modified a row, and when it was modified) are guaranteed to occur, no matter how many ways the developer ends up making changes to the table.

I will do an example very similar to the automatically maintained columns example. The table follows, with an integer key, a column that will be formatted, and the kind of columns included in some tables to record when a row was created and last modified.

CREATE TABLE Example.InsteadOfTriggerExample

(

    InsteadOfTriggerExampleId int NOT NULL

            CONSTRAINT PKInsteadOfTriggerExample PRIMARY KEY,

    FormatUpper varchar(30) NOT NULL,

    RowCreatedTime datetime2(3) NOT NULL,

    RowLastModifyTime datetime2(3) NOT NULL

);

This example is a lot simpler than the AFTER trigger example, simply replacing the insert statement with one that does a bit of formatting on the incoming data. Generally speaking, the operations you will need to do in INSTEAD OF triggers are going to be very simple and straightforward.

CREATE TRIGGER Example.InsteadOfTriggerExample$InsteadOfInsertTrigger

ON Example.InsteadOfTriggerExample

INSTEAD OF INSERT AS

BEGIN

  SET NOCOUNT ON; --to avoid the rowcount messages

  SET ROWCOUNT 0; --in case the client has modified the rowcount

  DECLARE @msg varchar(2000), --used to hold the error message

  --use inserted for insert or update trigger, deleted for update or delete trigger

  --count instead of @@rowcount due to merge behavior that sets @@rowcount to a number

  --that is equal to number of merged rows, not rows being checked in trigger

       @rowsAffected int = (SELECT COUNT(*) FROM inserted);

  --   @rowsAffected int = (SELECT COUNT(*) FROM deleted);

  --no need to continue on if no rows affected

  IF @rowsAffected = 0 RETURN;

  BEGIN TRY

        --[validation section]

        --[modification section]

        --<perform action> --this is all I change other than the name and table in the

                           --trigger declaration/heading

        INSERT INTO Example.InsteadOfTriggerExample

                    (InsteadOfTriggerExampleId,FormatUpper,

                     RowCreatedTime,RowLastModifyTime)

        --uppercase the FormatUpper column, set the %time columns to system time

         SELECT InsteadOfTriggerExampleId, UPPER(FormatUpper),

                SYSDATETIME(),SYSDATETIME()

         FROM inserted;

  END TRY

  BEGIN CATCH

   IF @@trancount > 0

       ROLLBACK TRANSACTION;

   --[Error logging section]

   DECLARE @ERROR_NUMBER int = ERROR_NUMBER(),

           @ERROR_PROCEDURE sysname = ERROR_PROCEDURE(),

           @ERROR_MESSAGE varchar(4000) = ERROR_MESSAGE();

   EXEC Utility.ErrorLog$Insert @ERROR_NUMBER,@ERROR_PROCEDURE,@ERROR_MESSAGE;

   THROW; --will halt the batch or be caught by the caller's catch block

  END CATCH

END;

Start out with a simple insert, referencing only two of the columns and not the row modification columns:

INSERT INTO Example.InsteadOfTriggerExample (InsteadOfTriggerExampleId,FormatUpper)

VALUES (1,'not upper at all');

Now, checking the data:

SELECT *

FROM   Example.InsteadOfTriggerExample;

You can see that the FormatUpper value has been set to all uppercase, and the RowCreatedTime and RowLastModify time values have been set:

InsteadOfTriggerExampleId FormatUpper      RowCreatedTime         RowLastModifyTime
------------------------- ---------------- ---------------------- ----------------------
1                         NOT UPPER AT ALL 2012-05-22 21:47:50.56 2012-05-22 21:47:50.56

Now add two rows at a time to make sure that multi-row operations work:

INSERT INTO Example.InsteadOfTriggerExample (InsteadOfTriggerExampleId,FormatUpper)

VALUES (2,'UPPER TO START'),(3,'UpPeRmOsT tOo!');

This will also be formatted as expected. Check the data:

InsteadOfTriggerExampleId FormatUpper      RowCreatedTime         RowLastModifyTime
------------------------- ---------------- ---------------------- ----------------------
1                         NOT UPPER AT ALL 2012-05-22 21:47:50.56 2012-05-22 21:47:50.56
2                         UPPER TO START   2012-05-22 21:48:18.28 2012-05-22 21:48:18.28
3                         UPPERMOST TOO!   2012-05-22 21:48:18.28 2012-05-22 21:48:18.28

Now, check the error handler. Unlike with AFTER triggers, you shouldn’t really expect many errors here, since almost any condition you might check in an instead of trigger would in almost all cases be better served by a constraint. However, when you have an instead of trigger, constraint errors are trapped by the trigger (thankfully, with the error re-throwing capabilities of THROW in SQL Server 2012, the error that reaches the caller is the native error that occurred):

--causes an error

INSERT INTO Example.InsteadOfTriggerExample (InsteadOfTriggerExampleId,FormatUpper)

VALUES (4,NULL) ;

This returns the following error, which you can see claims to be coming from InsteadOfTriggerExample$InsteadOfInsertTrigger, line 23, in case you need to debug:

Msg 515, Level 16, State 2, Procedure InsteadOfTriggerExample$InsteadOfInsertTrigger, Line 23

Cannot insert the value NULL into column 'FormatUpper', table 'tempdb.Example.InsteadOfTriggerExample'; column does not allow nulls. INSERT fails.

Finally, we write the UPDATE version of the trigger. In this case, the UPDATE statement again forces the values of the RowLastModifyTime column to ignore whatever is passed in, and ensures that the RowCreatedTime never changes.

CREATE TRIGGER Example.InsteadOfTriggerExample$InsteadOfUpdateTrigger

ON Example.InsteadOfTriggerExample

INSTEAD OF UPDATE AS

BEGIN

  SET NOCOUNT ON; --to avoid the rowcount messages

  SET ROWCOUNT 0; --in case the client has modified the rowcount

  DECLARE @msg varchar(2000), --used to hold the error message

  --use inserted for insert or update trigger, deleted for update or delete trigger

  --count instead of @@rowcount due to merge behavior that sets @@rowcount to a number

  --that is equal to number of merged rows, not rows being checked in trigger

       @rowsAffected int = (SELECT COUNT(*) FROM inserted);

  --   @rowsAffected int = (SELECT COUNT(*) FROM deleted);

  --no need to continue on if no rows affected

  IF @rowsAffected = 0 RETURN;

  BEGIN TRY

     --[validation section]

     --[modification section]

     --<perform action>

     --note, this trigger assumes non-editable keys. Consider adding a surrogate key

     --(even non-pk) if you need to be able to modify key values

     UPDATE InsteadOfTriggerExample

     SET FormatUpper = UPPER(inserted.FormatUpper),

        --RowCreatedTime is left out of the SET list so it cannot be changed

        RowLastModifyTime = SYSDATETIME()

     FROM inserted

         JOIN Example.InsteadOfTriggerExample

           ON inserted.InsteadOfTriggerExampleId =

                InsteadOfTriggerExample.InsteadOfTriggerExampleId;

  END TRY

  BEGIN CATCH

   IF @@trancount > 0

       ROLLBACK TRANSACTION;

   --[Error logging section]

   DECLARE @ERROR_NUMBER int = ERROR_NUMBER(),

           @ERROR_PROCEDURE sysname = ERROR_PROCEDURE(),

           @ERROR_MESSAGE varchar(4000) = ERROR_MESSAGE();

   EXEC Utility.ErrorLog$Insert @ERROR_NUMBER,@ERROR_PROCEDURE,@ERROR_MESSAGE;

   THROW;--will halt the batch or be caught by the caller's catch block

 END CATCH

END;

Simple, really; it’s just formatting data. Now update two rows, setting them to 'final test', and note that I set both time columns in the UPDATE statement, though those values won’t be honored in the final results:

UPDATE Example.InsteadOfTriggerExample

SET    RowCreatedTime = '1900-01-01',

       RowLastModifyTime = '1900-01-01',

       FormatUpper = 'final test'

WHERE  InsteadOfTriggerExampleId in (1,2);

Now check the data:

InsteadOfTriggerExampleId FormatUpper      RowCreatedTime         RowLastModifyTime
------------------------- ---------------- ---------------------- ----------------------
1                         FINAL TEST       2012-05-22 21:47:50.56 2012-05-22 21:50:23.70
2                         FINAL TEST       2012-05-22 21:48:18.28 2012-05-22 21:50:23.70
3                         UPPERMOST TOO!   2012-05-22 21:48:18.28 2012-05-22 21:48:18.28

The row created times are the same as they were, and the modify times for the two rows are set to the time when the rows were updated (at the time I wrote this example).

It’s important to note that if you use a column with the identity property for a surrogate key, using an instead of trigger makes the SCOPE_IDENTITY() function cease to work because the modification statement is not in the same scope. For example, take the following small table:

CREATE TABLE testIdentity

(

        testIdentityId int IDENTITY CONSTRAINT PKtestIdentity PRIMARY KEY,

        value varchar(30) CONSTRAINT AKtestIdentity UNIQUE

);

Without an instead of trigger, you can do the following:

INSERT INTO testIdentity(value)

VALUES ('without trigger');

SELECT SCOPE_IDENTITY() as scopeIdentity;

And this will return:

scopeIdentity

-------------

1

But add a trigger such as the following (which does nothing but insert the data as-is, for the example):

CREATE TRIGGER testIdentity$InsteadOfInsertTrigger
ON testIdentity
INSTEAD OF INSERT AS
BEGIN
  SET NOCOUNT ON; --to avoid the rowcount messages
  SET ROWCOUNT 0; --in case the client has modified the rowcount

  DECLARE @msg varchar(2000), --used to hold the error message
  --use inserted for insert or update trigger, deleted for update or delete trigger
  --count instead of @@rowcount due to merge behavior that sets @@rowcount to a number
  --that is equal to number of merged rows, not rows being checked in trigger
       @rowsAffected int = (SELECT COUNT(*) FROM inserted);
  --     @rowsAffected int = (SELECT COUNT(*) FROM deleted);

  --no need to continue on if no rows affected
  IF @rowsAffected = 0 RETURN;

  BEGIN TRY
        --[validation section]
        --[modification section]
        --<perform action>
        INSERT INTO testIdentity(value)
        SELECT value
        FROM inserted;
  END TRY
  BEGIN CATCH
    IF @@trancount > 0
       ROLLBACK TRANSACTION;

    --[Error logging section]
    DECLARE @ERROR_NUMBER int = ERROR_NUMBER(),
            @ERROR_PROCEDURE sysname = ERROR_PROCEDURE(),
            @ERROR_MESSAGE varchar(4000) = ERROR_MESSAGE();
    EXEC Utility.ErrorLog$Insert @ERROR_NUMBER,@ERROR_PROCEDURE,@ERROR_MESSAGE;

    THROW; --will halt the batch or be caught by the caller's catch block
  END CATCH
END;

And you will see that running very similar code:

INSERT INTO testIdentity(value)
VALUES ('with trigger');
SELECT SCOPE_IDENTITY() AS scopeIdentity;

Results in a NULL value:

scopeIdentity
-------------
NULL

My typical solution is to use my knowledge of the data structures and simply look the row back up by its natural key:

INSERT INTO testIdentity(value)
VALUES ('with trigger two');
SELECT testIdentityId AS scopeIdentity
FROM   testIdentity
WHERE  value = 'with trigger two'; --use an alternate key

And this returns:

scopeIdentity
-------------
3

Another option is a sequence-based key (see Chapter 6), which can be supplied by a DEFAULT constraint, or the caller can fetch the next value and pass it into the INSERT. In any case, the problem is often that the tool the developer uses expects the SCOPE_IDENTITY() function to work, which can rule out the ability to use an INSTEAD OF INSERT trigger.
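As a sketch of the sequence-based approach, the following is a hypothetical table (the object names here are invented for illustration, not part of the earlier example). Because the caller obtains the key value before the INSERT, no identity function is needed, and an INSTEAD OF trigger on the table does not interfere:

```sql
--hypothetical sequence and table for illustration only
CREATE SEQUENCE dbo.testSequenceId_seq AS int START WITH 1;

CREATE TABLE dbo.testSequence
(
    testSequenceId int NOT NULL
        --the key can be defaulted from the sequence...
        CONSTRAINT DFLTtestSequence_testSequenceId
            DEFAULT (NEXT VALUE FOR dbo.testSequenceId_seq)
        CONSTRAINT PKtestSequence PRIMARY KEY,
    value varchar(30) NOT NULL CONSTRAINT AKtestSequence UNIQUE
);

--...or the caller can fetch the next value and pass it in explicitly,
--so the key is known without calling SCOPE_IDENTITY()
DECLARE @newKey int = NEXT VALUE FOR dbo.testSequenceId_seq;
INSERT INTO dbo.testSequence(testSequenceId, value)
VALUES (@newKey, 'with sequence');
SELECT @newKey AS newKey;
```

Either way, the surrogate value is available to the caller directly, whether or not an INSTEAD OF trigger exists on the table.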
