CHAPTER  5


From SQL Trace to Extended Events

By Mladen Prajdić

In this chapter we’ll look at the tracing and diagnostic infrastructure that currently exists in SQL Server 2012. There are three areas we’ll cover: SQL Trace, Event Notifications, and Extended Events. SQL Trace is the oldest of the three and has been put on the deprecated features list in SQL Server 2012, which means it will be removed from SQL Server in a future version (a future version usually means the current version plus two). Because of that, I assume for this chapter that you’re familiar with the terminology and basic use of SQL Profiler. Event Notifications were introduced in SQL Server 2005 and are an asynchronous eventing system based on SQL Server Service Broker; the events that are sent can be database or server scoped. Extended Events were introduced in SQL Server 2008 and are a completely new diagnostic and tracing infrastructure built directly inside SQLOS.

SQL Trace

The SQL Trace infrastructure consists of two parts: the server side and the client side. The server side is the whole tracing infrastructure built into SQL Server: the extended stored procedures that actually control the tracing capabilities and the trace points in SQL Server code from which the trace event data is collected. SQL Server 2005 introduced SQLOS (SQL Operating System), the part of SQL Server that handles memory management, IO access, CPU scheduling, locking, exception handling, and so on. Because SQLOS exposes an API that the other components of SQL Server use, SQL Trace is built on top of SQLOS and uses this API. The client side includes the SQL Server Profiler tool for creating and analyzing traces, and the .NET object model that offers the same functionality as SQL Server Profiler but lets you automate the whole collect/analyze/store process that is usually present with continuous monitoring. Figure 5-1 shows an overview of the SQL Trace architecture.

Image

Figure 5-1. SQL Trace infrastructure: events fire, are sent to traces, filtered there and sent to the target provider (file or rowset)

There are about 180 trace events (the exact number depends on the SQL Server version you’re using) and 66 columns that contain various trace data. There are also 10 user-configurable events that you can fire from your T-SQL code and that are collected just like any other trace event. They are named UserConfigurable:0 to UserConfigurable:9 and are fired with the sp_trace_generateevent stored procedure. Because it requires the ALTER TRACE permission, we must grant that permission to any account executing the stored procedure. The sp_trace_generateevent stored procedure is called like this:

DECLARE @msg NVARCHAR(128);
SELECT @msg = N'Message that will be displayed in the SQL Trace or SQL Profiler';
-- user configurable events have IDs from 82 (UserConfigurable:0) to 91 (UserConfigurable:9)
EXEC sp_trace_generateevent 82, @msg;
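
If the account firing the event isn’t a sysadmin, granting the permission is a one-liner; here’s a minimal sketch (TraceUser is a hypothetical login name):

USE master;
GO
-- ALTER TRACE is a server-level permission,
-- so it is granted to a login, not to a database user
GRANT ALTER TRACE TO TraceUser;
GO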

There are two providers for collecting data: the file provider used by server-side tracing, and the rowset provider used by client-side tracing. That means there is no built-in direct-to-table provider. At first this can be confusing to people new to SQL Server tracing, but if you think about it, it is quite logical. A direct-to-table provider would have to follow the rules of SQL Server data writing: transaction logs, locking, blocking, resource usage, and so on. With a simple file or a memory buffer that the trace data gets written to, SQL Server is done with it. One important thing to note is that although you can use filters when creating traces, those filters are not applied in the data collection phase. Every single column for any event is fully collected and sent to the provider, where it gets filtered down to the columns we specified in the trace definition. It is also better to have a few smaller traces than one huge trace, because each trace has its own target destination and you can put these targets on different drives, spreading out the load.

All traces are managed inside SQL Server by a thread called the background management thread. This thread is started as soon as any trace starts running. You can view details about this thread in the sys.sysprocesses view, or in the sys.dm_exec_requests dynamic management view if you’re on SQL Server 2005 and up. Because a default trace is always running in SQL Server 2005 and up, the query will always return at least one row, unless you explicitly turn off the default trace. We’ll talk about the default trace later in the chapter. The following code shows how to query the view and the DMV. The results will vary across machines, and you can see what the columns contain in Books Online.

-- SQL Server 2000
SELECT  *
FROM    sys.sysprocesses
WHERE   cmd = 'TRACE QUEUE TASK';

-- SQL Server 2005 and up
SELECT  *
FROM    sys.dm_exec_requests
WHERE   command = 'TRACE QUEUE TASK';

The SQL Trace engine has what is known as an event send buffer. The trace event data is first collected into this buffer and then sent to the providers. The background management thread monitors the trace providers, and it flushes the file providers every 4 seconds. This is better for performance because it can then use large block writes (one large continuous write to disk instead of many small ones) for faster IO write speed. For rowset providers it closes every provider that has been dropping events for more than 10 minutes. We’ll talk more about event dropping shortly.
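
You can check how a running trace’s buffers are doing, and whether it has dropped any events, through the sys.traces catalog view; for example:

-- buffer_count and buffer_size show the event send buffers,
-- dropped_event_count shows how many events a rowset trace has dropped
SELECT  id,
        path,
        is_rowset,
        buffer_count,
        buffer_size,
        event_count,
        dropped_event_count
FROM    sys.traces;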

Trace rowset provider

The SQL Server rowset provider sends trace data to SQL Server Profiler and to the .NET SMO (Server Management Objects) object model. You’ve probably heard stories about running SQL Profiler on the production box bringing the whole server to a standstill. You haven’t? Well, they are true. This happens because the rowset provider is memory-based. It takes all the memory it needs, and if you run it on a production server, it takes that valuable memory and makes it unusable to SQL Server. This is the reason why it’s crucial that you run SQL Profiler or a .NET SMO collector service (this is coded by a programmer to suit specific tracing needs, and is not part of the SQL Server product) on a non-production machine.

The rowset provider uses internal buffers to process incoming trace data, and if those buffers get full, events can be dropped from the trace output. Events begin to be dropped after the internal buffers have been full for 20 seconds. After that, the rowset provider drops them to get things moving again. When this happens, SQL Server Profiler shows a special message that events have been dropped. Internal buffers waiting to be flushed also increment the TRACEWRITE wait type. This way you can see whether your rowset provider based traces are problematic for your system by using the sys.dm_os_wait_stats dynamic management view.
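
For example, a quick look at the accumulated TRACEWRITE waits can be taken like this:

-- waits accumulated because of the rowset provider's internal buffers
SELECT  wait_type,
        waiting_tasks_count,
        wait_time_ms
FROM    sys.dm_os_wait_stats
WHERE   wait_type = 'TRACEWRITE';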

As mentioned, there are two ways you can consume the rowset provider: SQL Profiler and the SMO object model. We’ll take a look at both, starting with SQL Profiler.

SQL Server Profiler

SQL Profiler is not available in SQL Server Express edition, but there is a free open source project called SQL Express Profiler that fills that gap. SQL Profiler comes with nine preinstalled templates that help with common tracing scenarios, like tracing statement duration or taken locks. The default trace output goes to the Profiler window, where you can see the trace rows as they are written. The more trace events you have, the more memory it will take. One thing to keep in mind is that even though the Profiler UI shows the Duration column in milliseconds, its value is actually in microseconds. This can be seen if you save the trace to a table. The Duration value in the SMO object model is also in microseconds. In my conversations with people there seems to be a misconception that the same thing happened to the CPU column. This is not so. The CPU column is still in milliseconds.

When creating traces with SQL Profiler there might be some confusion about various options you can choose. Figure 5-2 shows the new trace dialog.

Image

Figure 5-2. The New Trace Properties dialog box

At the top there are the standard input options, like naming your trace and choosing a template. The interesting part is in the middle of the dialog, with options like “Save to file” and “Save to table”. If you remember from earlier, there is no direct-to-table provider, and the rowset provider SQL Profiler uses is memory based. The catch here is that although those options do create a file and save to a table, the data first goes through the rowset provider memory buffer. Notice that there’s an option “Server processes trace data”. This is the worst thing you can choose. The name is a bit misleading, as it makes you think you’re actually creating a server-side trace. What actually happens is that you’re creating two exactly identical traces. One goes through the rowset provider and the other through the file provider. Figure 5-3 shows the chosen options, and the following code shows we really do have two traces running.

-- This query returns 2 rows.
-- The rowset provider has the
-- is_rowset column set to 1 and
-- NULL in the path column.
-- The file provider has the path column set
-- to the path we've set in options (E:\MyTrace.trc).
SELECT  *
FROM    sys.traces
WHERE   is_default != 1;
Image

Figure 5-3. Save to file options that you should avoid

If you choose the Save to table option, trace data still goes through the rowset provider and is then inserted into a table row by row. The Set maximum rows option lets you set the maximum number of rows the table will hold. After that number is reached, the trace data is no longer stored in the table; tracing, however, still continues. Another very useful and not well known feature of SQL Profiler is the ability to merge already saved traces with saved PerfMon binary (.blg) or text (.csv) data files. To do this, your trace must have the StartTime and EndTime columns included so that it can be synchronized with the PerfMon data. PerfMon merging doesn’t work on live traces, only on already saved ones. To merge them, open the SQL Trace file, go to File->Import Performance Data, and choose the PerfMon data file. Figure 5-4 shows what the correlated data looks like in SQL Profiler.

Image

Figure 5-4. A saved trace correlated with Performance Monitor data, displayed in SQL Profiler

.NET SMO objects

Now that we’ve looked at SQL Profiler, let’s take a quick look at the .NET SMO object model. Its capabilities exactly match those of SQL Profiler. Because of this, it can be used for automated tracing, filtering, and storing of the filtered traces into various tables or files. One example is to have all statements that last longer than 1 second saved into one separate file, and all the others into another one. Figure 5-5 shows what the object model looks like.

Image

Figure 5-5. The SMO Object Model

The following code example shows how to read an existing trace file called DurationTrace.trc with SMO and filter it so that only statements with a duration longer than 500 ms are saved into a new file. The DurationTrace.trc file can be created by running SQL Profiler with the Duration template for a few minutes. One thing to note is where the filtering takes place. The first time I used SMO for tracing I thought that, since we have a reader and a writer class, we have to read a trace line, perform the filtering there, and then either send it to the writer or not. However, this is not the case. You stream the reader directly into the writer without any filtering. The writer class has a WriteNotify event that gets called every time a row from the reader is read. In this event you have to perform any filtering and determine whether to skip the record or let it be written. This way of doing things still seems a bit backwards to me, but since this is the case, we have to live with it. The example is well documented and is not that hard to follow. By understanding how the classes fit together you can build powerful tracing solutions directly in .NET. However, be aware that this still uses the rowset provider and can still drop events.

static void Main(string[] args)
{
    // create the reader that reads from the existing trace file
    TraceFile traceFileReader = new TraceFile();
    string traceFileReaderPath = Path.Combine(
        Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location),
        "DurationTrace.trc");
    traceFileReader.InitializeAsReader(traceFileReaderPath);

    // create the trace writer class with the file destination
    string traceFileWriterPath = Path.Combine(
        Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location),
        "DurationTraceFiltered.trc");
    TraceFile traceFileWriter = new TraceFile();
    traceFileWriter.InitializeAsWriter(traceFileReader, traceFileWriterPath);
    // add the event where the filtering occurs
    traceFileWriter.WriteNotify += new WriteNotifyEventHandler(traceFileWriter_WriteNotify);

    // simply looping the writer until the end is enough.
    // Internally it reads from the reader and filters
    // through the WriteNotify event
    while (traceFileWriter.Write()) ;

    // close both to dispose of the resources
    traceFileReader.Close();
    traceFileWriter.Close();
}

static void traceFileWriter_WriteNotify(object sender, TraceEventArgs args)
{
    try
    {
        object durationInMicroSeconds = args.CurrentRecord["Duration"];
        if (durationInMicroSeconds == null ||
            (long)durationInMicroSeconds < (500 /* ms */ * 1000 /* to get to microseconds */))
        {
            args.SkipRecord = true;
            return;
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
    }
}

Trace file provider

Unlike the rowset provider we discussed earlier, the file provider writes directly to files and guarantees lossless delivery of trace data. Just like the rowset provider, it uses internal buffers to dispatch the trace data. The buffers get flushed every 4 seconds to maximize IO performance by using large block writes: it’s less expensive to write one large continuous block of data than many small ones. Because the file provider guarantees lossless delivery, it can’t drop events, and if there are too many of them the whole system can grind to a halt. That can be seen as increasing SQLTRACE_LOCK and IO_COMPLETION waits in the sys.dm_os_wait_stats dynamic management view. To avoid this, never write your trace file to the same disk subsystem where your databases are. Use either a separate local physical disk or a UNC share. Keep in mind that, whichever you choose, the account under which SQL Server runs has to have write permissions on that folder.

The only parts of SQL Trace that we can use directly here are the extended stored procedures and functions. Those take care of creating, altering, starting, stopping, and viewing trace files. Table 5-1 shows the extended stored procedure and function names and what they do.

Image

If you look at these stored procedures and functions in greater detail, you’ll notice they’re not the most intuitive things to work with. For example, trace events and columns are not referenced by their names but by their IDs. This is where SQL Profiler comes to the rescue. It has the ability to export a defined trace to a .sql file that you can use to start the SQL Trace on the server. This makes life much easier than having to code everything by hand. The created file does have to be checked, and things like the trace name and file path have to be entered manually. To do this, start a trace, stop it, then go to File->Export->Script Trace Definition and choose the desired SQL Server version.
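
To give you a feel for what such a script contains, here is a minimal hand-written sketch of a server-side trace that captures SQL:BatchCompleted (event ID 12) with the TextData (column 1), SPID (column 12), and Duration (column 13) columns; the file path and the 50 MB maximum file size are arbitrary values for illustration:

DECLARE @traceId INT;
DECLARE @maxFileSize BIGINT;
DECLARE @on BIT;
SET @maxFileSize = 50;  -- in MB
SET @on = 1;
-- create the trace definition; option 2 = TRACE_FILE_ROLLOVER,
-- the .trc extension is appended to the file name automatically
EXEC sp_trace_create @traceId OUTPUT, 2, N'E:\Traces\MyServerSideTrace', @maxFileSize, NULL;
-- add the SQL:BatchCompleted event (ID 12) with TextData (1), SPID (12) and Duration (13)
EXEC sp_trace_setevent @traceId, 12,  1, @on;
EXEC sp_trace_setevent @traceId, 12, 12, @on;
EXEC sp_trace_setevent @traceId, 12, 13, @on;
-- start the trace (0 = stop, 2 = close and delete the definition)
EXEC sp_trace_setstatus @traceId, 1;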

There are two out-of-the-box traces that come with SQL Server: a blackbox trace and the default trace. The blackbox trace is not running by default. As its name suggests, it’s a trace you can’t modify in any way: there is no adding of events, filtering, or changing the target file settings. You can either start it or stop it. You start it by using sp_trace_create with option 8. It contains only 4 events (RPC starting, Batch starting, Exception, Attention) and is saved to a 5 MB file at %SQLDIR%\MSSQL\DATA\blackbox.trc. When that file is full, a second rolled-over file called blackbox_1.trc is created. When the second file gets full, writing starts again in the first one, and so on. The blackbox trace was used for troubleshooting and auditing in SQL Server 2000, but now other functionality is used for that, such as the default trace and custom solutions.
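
A minimal sketch of starting and stopping the blackbox trace (option 8 means you can’t specify a file name, events, or columns yourself):

DECLARE @traceId INT;
-- option 8 = TRACE_PRODUCE_BLACKBOX; file name, events and columns are fixed
EXEC sp_trace_create @traceId OUTPUT, 8;
-- start it
EXEC sp_trace_setstatus @traceId, 1;
-- ... later: stop it and remove its definition
-- EXEC sp_trace_setstatus @traceId, 0;
-- EXEC sp_trace_setstatus @traceId, 2;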

The second built-in trace is called the default trace. It was introduced in SQL Server 2005 as a lightweight diagnostic trace and, unlike the blackbox trace, it runs by default unless you specifically turn it off through sp_configure (after enabling advanced options). Like the blackbox trace, it can’t be modified. It uses 5 files of 20 MB each that are rolled over. The following code shows how to see whether the default trace is running and how to enable it if it’s not.

-- See if the default trace is enabled
SELECT * FROM sys.configurations WHERE configuration_id = 1568
GO
-- See if the default trace is running
SELECT * FROM sys.traces WHERE is_default = 1
GO
-- if not start it
sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'default trace enabled', 1;
GO
RECONFIGURE;
GO

It contains quite a few more events than the blackbox trace, 32 to be exact. The events come from 6 categories: Server, Database, Errors and Warnings, Full Text, Objects, and Security Audit. The complete set of events can be found in Books Online. As useful as the default trace is, if you have an application that creates, alters, or drops a lot of objects (like temporary tables in tempdb), the needed information may be quickly lost. The following code shows three events that get written to the default trace: the Sort Warnings event, the Missing Join Predicate event, and the Audit Addlogin event with its Add and Drop subclasses. If you’re going to run this code, make sure to change the path to the default trace.
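
If you don’t know where the default trace files are on your instance, you can look the path up first, for example:

-- path of the currently active default trace file
SELECT path
FROM   sys.traces
WHERE  is_default = 1;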

-- make a suboptimal plan that gets reused and creates a sort warning.
-- creates the Sort Warning event in the default trace
USE tempdb
-- create test table
IF (OBJECT_ID('TestSortWarning', 'U') IS NOT NULL)
    DROP TABLE TestSortWarning
GO
CREATE TABLE TestSortWarning
 (
    c1 NVARCHAR(100) NOT NULL,
    c2 NVARCHAR(40) NOT NULL
 )
GO
-- insert 100000 random rows
INSERT INTO TestSortWarning(c1, c2)
SELECT TOP 100000 c1.name, CAST(NEWID() AS NVARCHAR(40))
FROM   sys.all_columns AS c1,
       sys.all_columns AS c2
GO
CREATE CLUSTERED INDEX CI_TestSortWarning_c1 ON TestSortWarning(c1)
GO
-- Clear the cache
DBCC freeproccache
GO
-- returns 0 rows
EXEC sp_executesql N'SELECT * FROM TestSortWarning WHERE c1 LIKE @p1 ORDER BY c2',
                    N'@p1 NVARCHAR(20)',
                    @p1 = '#%'
GO
-- Sort warning because of suboptimal plan (see row count estimation for sort operator)
EXEC sp_executesql N'SELECT * FROM TestSortWarning WHERE c1 LIKE @p1 ORDER BY c2',
                    N'@p1 NVARCHAR(20)',
                    @p1 = 'B%'
GO
USE master
GO
SELECT TE.name, T.DatabaseName, T.DatabaseID
FROM   sys.fn_trace_gettable('C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012\MSSQL\Log\log.trc', DEFAULT) T
        JOIN sys.trace_events TE ON T.EventClass = TE.trace_event_id
WHERE    TE.name = 'Sort Warnings'
ORDER BY T.StartTime DESC
-----------------------------------------------------------------------------
-----------------------------------------------------------------------------
-- Make a CROSS JOIN between 2 columns.
-- creates the Missing Join Predicate event in the default trace
USE master
GO
SELECT top 100 *
FROM   master..spt_values t1,
       master..spt_values t2
GO
SELECT TE.name, T.DatabaseName, T.DatabaseID
FROM   sys.fn_trace_gettable('C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012\MSSQL\Log\log.trc', DEFAULT) T
        JOIN sys.trace_events TE ON T.EventClass = TE.trace_event_id
WHERE   TE.name = 'Missing Join Predicate'
ORDER BY T.StartTime DESC
GO

-----------------------------------------------------------------------------
-----------------------------------------------------------------------------
-- Create a login and drop it.
-- This creates the Audit Addlogin Event: Add
-- and Audit Addlogin Event: Drop events in the default trace
USE master
CREATE LOGIN DT_LOGIN_TEST WITH PASSWORD = 'I_AM_PASSWORD!';
GO
DROP LOGIN DT_LOGIN_TEST
GO
SELECT TE.name AS [EventName], TSV.subclass_name, T.DatabaseName, T.DatabaseID
FROM   sys.fn_trace_gettable('C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012\MSSQL\Log\log.trc', DEFAULT) T
        JOIN sys.trace_events TE ON T.EventClass = TE.trace_event_id
        JOIN sys.trace_subclass_values TSV ON TSV.trace_event_id = TE.trace_event_id AND
TSV.subclass_value = t.EventSubClass
WHERE   te.name LIKE 'Audit Addlogin Event'
ORDER BY T.StartTime DESC

If you need to collect more data than the default trace provides, you should create another trace with just the extra data and correlate the two to reduce overhead. Even though the default trace is lightweight, one can never be too careful.

Event Notifications

Event notifications were introduced in SQL Server 2005. They are based on Service Broker, an asynchronous messaging addition to the database engine. Figure 5-6 shows the basic architecture of Service Broker. You can see the Service Broker building blocks: messages, services, queues, and dialogs. Service Broker can send messages between databases and servers in a transactional manner, keeping the ACID properties intact. The communication between two endpoints can be open or encrypted (transport security). Even the messages themselves can be encrypted (dialog security). We’ll cover just the very basics of Service Broker; for a deeper understanding you can read about it in Books Online.

Image

Figure 5-6. SQL Server Service Broker basic architecture

What you need to know about Service Broker to understand Event Notifications is that when an event you’re interested in fires, a message with that event data is asynchronously sent in a dialog to a service that is backed by a queue, where the actual message is stored until processed. The message is processed inside a custom stored procedure called an activation stored procedure, in which we RECEIVE the messages from the queue and do with them whatever we want. RECEIVE is a T-SQL keyword used to retrieve a message from the queue; it’s basically a SELECT statement for messages in a queue. A message is always stored in XML form that can also be validated with a schema if needed, and its size can be up to 2 GB. Because the messages are always in XML format, there is only a very small overhead of converting the event data into XML and sending the event notification. Because of this Service Broker dependency, you have to enable Service Broker on the databases in which you’re going to use Event Notifications.

The Event Notifications events fall into two categories: server-scoped and database-scoped. Server-scoped events are a subset of SQL Trace event classes, while database-scoped ones are all the DDL events that can be used for either an event notification or a DDL trigger. Both have a fixed XML schema that they conform to and are validated against when creating a message for sending.

There are a few differences between SQL Trace and Event Notifications, though. If a DDL statement is rolled back, the message is not fired; event notifications get sent only after a transaction is successfully committed. While a transaction is running, the messages are stored in internal Service Broker queues. The next difference is that, unlike SQL Trace traces (except the default trace), Event Notifications persist across server restarts. They can also be received on a different, remote server for processing.

You can monitor anything related to Event Notifications with the following catalog views. You can find their details in Books Online.

  • sys.event_notification_event_types: Contains all server- and database-scoped events. They are hierarchically grouped, which means you can subscribe to anything from a group of events to a single event. This kind of granularity provides a powerful tracing model.
  • sys.server_event_notifications and sys.server_events: Contain all server-level event notifications and the events each notification contains.
  • sys.event_notifications and sys.events: Contain all database-level event notifications and the events each notification contains.

Because Event Notifications are based on Service Broker, they fire an activation stored procedure each time an event fires. This way you can have different activation stored procedures for different events. I have found this very useful, and Event Notifications have become my number one way to get immediately notified about important events like deadlocks or blocks. You just have to subscribe to the DEADLOCK_GRAPH and BLOCKED_PROCESS_REPORT events, and when you get one, you can save the notification to a table and send an email to the person in charge. This sure beats constantly running a trace and inspecting events without immediate notifications. The following example shows how to subscribe to the DEADLOCK_GRAPH server-scoped event, along with a short stored procedure snippet to process it. The full code can be found in the book's code appendix. This is actually part of the code I use on my servers for immediate deadlock notifications. It works great and lets me investigate problems as soon as they happen.

-- INSTANT DEADLOCK NOTIFICATIONS
USE tempdb;
GO
-- this procedure will write our event data into the table and
-- send the notification email. In this example the stored procedure
-- is missing error and transaction handling!
CREATE PROCEDURE usp_ProcessNotification
AS
    DECLARE @msgBody XML;
    DECLARE @dlgId uniqueidentifier;
    -- you can change this to get all messages at once
    WHILE(1=1)
    BEGIN
            -- receive messages from the queue one by one
            ;RECEIVE TOP(1)
                    @msgBody = message_body,
                    @dlgId   = conversation_handle
            FROM    dbo.EventNotificationsQueue;
            -- insert event data into our table
            INSERT INTO TestEventNotification(eventMsg)
            SELECT @msgBody;
            -- we can also send an email after processing the event
            -- DECLARE @MailBody NVARCHAR(MAX)
            -- SELECT @MailBody = CAST(@msgBody AS NVARCHAR(MAX));
            -- send an email with the defined email profile.
            -- since this is async it doesn't halt execution
            -- EXEC msdb.dbo.sp_send_dbmail
            --         @profile_name = 'your mail profile', -- your defined email profile
            --         @recipients = '[email protected]', -- your email
--         @subject = 'occurred notification',
            --         @body = @MailBody;
    END
GO
-- create the notification queue that will receive the event notification messages
-- add the activation stored procedure that will process the messages in the queue
-- as they arrive
CREATE QUEUE EventNotificationsQueue
    WITH STATUS = ON,
    ACTIVATION (
        PROCEDURE_NAME = usp_ProcessNotification,
        MAX_QUEUE_READERS = 1,
        EXECUTE AS 'dbo' );
GO
-- create the notification service for our queue with the pre-defined message type
CREATE SERVICE NotificationsService
    ON QUEUE
EventNotificationsQueue(
[http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]);
GO
-- create the route for the service
CREATE ROUTE NotificationsRoute
    WITH SERVICE_NAME = 'NotificationsService',
    ADDRESS = 'LOCAL';
GO
-- create the event notification for the DEADLOCK_GRAPH event.
CREATE EVENT NOTIFICATION ServerNotificationEvent
ON SERVER
FOR DEADLOCK_GRAPH
TO SERVICE 'NotificationsService',
           'current database';
-- case-sensitive string that specifies use of the Service Broker in the current db
GO
-- check to see if our event notification has been created ok
SELECT * FROM sys.server_event_notifications WHERE name = 'ServerNotificationEvent';
GO
-- create the table that will hold our data
-- every time a deadlock happens its information will be written into this table
CREATE TABLE TestEventNotification(Id INT IDENTITY(1,1), EventMsg xml, EventDate datetime default(GETDATE()));

The other example shows how to subscribe to the database-scoped DDL_TABLE_EVENTS event group, which consists of the CREATE_TABLE, ALTER_TABLE, and DROP_TABLE events. Everything works exactly the same with regard to the activation stored procedure and message handling. The example runs in tempdb, which has Service Broker enabled by default. If you want to run it in another database, you have to enable Service Broker in it.
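
Enabling Service Broker on another database is a one-line ALTER DATABASE (YourDatabase is a placeholder name):

-- may need exclusive access to the database;
-- add WITH ROLLBACK IMMEDIATE to force existing connections out
ALTER DATABASE YourDatabase SET ENABLE_BROKER;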

-- SUBSCRIBE TO DDL_TABLE_EVENTS EVENT GROUP
-- view the full DDL_TABLE_EVENTS event group we're going to subscribe to
WITH cteEvents AS
(
    SELECT 0 AS LEVEL, *
    FROM    sys.event_notification_event_types
    WHERE    TYPE_NAME = 'DDL_TABLE_EVENTS'
    UNION ALL
    SELECT LEVEL + 1 AS LEVEL, EN.*
    FROM    sys.event_notification_event_types EN
            JOIN cteEvents C ON C.parent_type = EN.type
)
SELECT *
FROM    cteEvents
ORDER BY TYPE;

-- create the event notification for the DDL_TABLE_EVENTS events.
-- send it to the same service that we used for the Server Events
CREATE EVENT NOTIFICATION DatabaseNotificationEvent
ON DATABASE
FOR DDL_TABLE_EVENTS
TO SERVICE 'NotificationsService',
           'current database';
-- case-sensitive string that specifies use of the Service Broker in the current db
GO
-- Test if it works
CREATE TABLE testEN (id INT);
GO
INSERT INTO testEN VALUES (1), (2);
GO
ALTER TABLE testEN ADD VALUE VARCHAR(20);
GO
DROP TABLE testEN;
GO
SELECT * FROM TestEventNotification;

Extended Events

Extended Events were added to the SQL Server database engine in SQL Server 2008. They are a lightweight, high performance eventing system built directly inside SQLOS. The Extended Events engine itself is just a pool of dispatchers that send event data to their targets. It contains no metadata and is fully pluggable. This way Microsoft can extend it without the need to modify other code. In SQL Server 2012 the SQL Trace infrastructure for the database engine was deprecated (the SSAS part, however, is not yet deprecated). Extended Events are its replacement. As data sizes and loads increase, so does the need for tracing with as little overhead as possible. In SQL Server 2008, SQL Trace was still the main tracing engine and had more events. SQL Server 2012 fixed this, and now an exact mapping exists between SQL Trace events and Extended Events. Because they are more complete in SQL Server 2012, all code is based on that version. Since there have been various changes, not all of it will be runnable on SQL Server 2008.

Figure 5-7 shows how all the pieces of Extended Events fit together. A module is a dll or an exe file. Each module can contain multiple packages, and each package contains its own Extended Events objects. This way a grouping can be done based on event functionality, like auditing, SQLOS events, or general SQL Server events. There are 4 packages in SQL Server 2008 and 9 in SQL Server 2012. They can be viewed by querying the sys.dm_xe_packages catalog view.

Not all of the packages can be used, and not all of them have entirely public objects. For example, you can’t use anything from the SecAudit package, since it’s reserved for the built-in auditing functionality. In the following sections we’ll take a short look at the objects that live in a package.

Image

Figure 5-7. Extended events architecture
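
A quick look at the installed packages, and which of them are private, can be taken like this:

-- private packages (like SecAudit) can't be used in your own sessions
SELECT  name,
        description,
        capabilities_desc
FROM    sys.dm_xe_packages
ORDER BY name;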

Events

Events are points in code that we can trace. They are defined with the standard ETW (Event Tracing for Windows) model that includes channel, keyword, and payload attributes. The channel tells the target audience for the event, like an administrator or a developer. Keywords give us the ability to group events into extra categories. Events always execute synchronously, but the Extended Events engine can forward them to their targets synchronously or asynchronously. The payload is just a fancy name for the data collected for the event. The cool thing about events is that, unlike with SQL Trace, we can extend the collected data with actions. We’ll look at those shortly. Existing events and their payloads can be viewed with these SQL statements:

-- view all defined events in sql server
SELECT *
FROM sys.dm_xe_objects
WHERE object_type = 'event'
-- show only events that we can use
    AND (capabilities IS NULL OR capabilities & 1 = 0)
ORDER BY name;

-- view all columns for each event - the event payload
SELECT    *
FROM    sys.dm_xe_object_columns
WHERE    -- show only columns that we can use
        (capabilities IS NULL OR capabilities & 1 = 0)
ORDER BY object_name, name;

Predicates

Predicates are similar to filters in SQL Trace but much more powerful. They function like a WHERE clause in a SQL statement. They filter which events will be sent to the target, unlike SQL Trace, which sends an event to all targets and leaves the filtering to them. Predicates fully support short-circuiting, so it's a good idea to specify simple predicates first, followed by more complex ones. All predicates are processed synchronously, so be aware that heavy-duty conditions can cause some performance degradation. It's best to keep predicates simple, like filtering the event based on a transaction ID or an object ID or something similar. Existing predicates can be viewed with this SQL statement:

SELECT *
FROM sys.dm_xe_objects
WHERE object_type in ('pred_compare', 'pred_source')
-- show only predicates that we can use
    AND (capabilities IS NULL OR capabilities & 1 = 0)
ORDER BY name;

Actions

Actions are commands that provide additional information for an event. You can view them as environment variables that give information about the current database engine state. However, you can also perform a task with them inside the database engine, like inserting a breakpoint or collecting a mini dump. Just be careful with breakpoints, because they stop SQL Server execution. Actions are bound to the event in the session definition, and a few (such as database_name and event_sequence) are direct column mappings from SQL Trace. Any action can be linked to any event. They are processed synchronously, after all predicates have been evaluated and before the event is sent to its target. Actions reflect the global state of the database engine, and they are not hard-linked to an event. For example, there’s an action called sqlserver.sql_text that can be collected when an event fires. This action returns the same information as the DBCC INPUTBUFFER command. A first-time user will probably think (like I did) that it collects the SQL statement that fired the event. This is not the case.

Existing actions can be viewed with this SQL statement:

SELECT *
FROM sys.dm_xe_objects
WHERE object_type = 'action'
-- show only actions that we can use
    AND (capabilities IS NULL OR capabilities & 1 = 0)
ORDER BY name;

Types and Maps

A type is a simple or complex data type that is used in the event payload. Maps are dictionaries that map numeric values to human-readable text. Existing types and maps can be viewed with these SQL statements:

SELECT *
FROM sys.dm_xe_objects
WHERE object_type in ('map', 'type')
-- show only types and maps that we can use
    AND (capabilities IS NULL OR capabilities & 1 = 0)
ORDER BY name;

SELECT *
FROM sys.dm_xe_map_values;

Targets

Targets are the destinations where the extended event data is stored. A target can be a memory buffer or a file, and it can be either synchronous or asynchronous in the way it saves data. If a target is synchronous, the engine waits until the event data is successfully stored in the target. In both SQL Server 2008 and 2012 there are six targets. Because some target names and some of their properties have changed between versions, I’ll use the SQL Server 2012 names and properties.

The first target is called the Histogram. It is memory based and asynchronous. It counts the number of times each event fires. The basis for a histogram group can be event data or an action. Since this target only stores counts and has a low performance overhead, it's a good choice when we're trying to find which events fire the most in the database engine.
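
As an illustration, here’s a minimal sketch of a session (the session name is made up) that uses the histogram target to count completed statements per database, bucketing on the database_id action:

CREATE EVENT SESSION HistogramExampleSession
ON SERVER
    ADD EVENT sqlserver.sql_statement_completed
        (ACTION (sqlserver.database_id))
    ADD TARGET package0.histogram
        (SET filtering_event_name = 'sqlserver.sql_statement_completed',
             source               = 'sqlserver.database_id',
             source_type          = 1);  -- 1 = bucket on an action, 0 = on an event column
GO
ALTER EVENT SESSION HistogramExampleSession ON SERVER STATE = START;
-- the per-bucket counts can then be read from the target's XML in sys.dm_xe_session_targets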

The Event Counter target is a memory-based and synchronous target. It is similar to the Histogram, but only in that it counts events. The difference is that it can't be modified or filtered in any way; it simply counts every event in the session. Using this counter is a good way to find out the workload characteristics of your system.

The Pair Matching target is an interesting target because with it you can track events that have not yet completed. It takes an event pair, for example sql_statement_starting and sql_statement_completed, matches them on one or many columns in the event data, and discards them from further processing if they match up. For each event it captures the whole event data and all the actions that have been specified for that event. It stores all this data in a memory buffer until the paired event has fired. This memory buffer can't be configured; it is handled internally by the Extended Events engine and is based on available system resources. If there is memory pressure and the stored unpaired events have to be flushed, the incoming events that would have been matched with the flushed ones will appear as unmatched in the target. There's one thing you have to be careful about here: not all events can be paired up nicely. Lock Acquired is one of them. There is the Lock Released event that you might think fires for every Lock Acquired event, but that is not the case. One such occurrence is when lock escalation happens from row to page to table. The Lock Acquired event fires for each of those, but Lock Released fires only once, when the page or table lock is released.
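
A minimal sketch of a session with the pair_matching target (the session name is made up, and matching on session_id alone is a simplification) could look like this:

CREATE EVENT SESSION PairMatchingExampleSession
ON SERVER
    ADD EVENT sqlserver.sql_statement_starting
        (ACTION (sqlserver.session_id)),
    ADD EVENT sqlserver.sql_statement_completed
        (ACTION (sqlserver.session_id))
    ADD TARGET package0.pair_matching
        (SET begin_event            = 'sqlserver.sql_statement_starting',
             begin_matching_actions = 'sqlserver.session_id',
             end_event              = 'sqlserver.sql_statement_completed',
             end_matching_actions   = 'sqlserver.session_id');
GO
-- unmatched (still running) statements end up in the target's XML output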

The Ring Buffer is a memory based and asynchronous target. It stores events in a cyclical memory space that gets used over and over again. It can work in two modes. The first is strict FIFO (the default), in which the oldest event gets removed from the buffer when the buffer fills up. The second is per-event FIFO, in which, as the name suggests, a number of events of each type is kept in memory; in this mode the oldest event of each type is removed from the buffer. The ring buffer memory size and the number of events to keep (in total and per type) are all configurable options.

The Event File is a file based and asynchronous target. When the session buffer is full, its contents are written to a file. Like in SQL Trace, you can specify a file name, the file size, whether the files can be rolled over, and by how much the file will grow once it's full. To read the contents of the event file we have to use the sys.fn_xe_file_target_read_file function. When specifying the file name you can use the * character anywhere in the name to read all the files that match the partial name. It works in the same way as the % character in a SQL LIKE comparison. To read the event file in SQL Server 2008 we also needed to provide a metadata file (created at session start) to the sys.fn_xe_file_target_read_file function. In SQL Server 2012 this is no longer needed.

The last target is called the Event Tracing for Windows (ETW) target. It's a file based, synchronous target. To use this target successfully you have to know how ETW works in Windows, which is out of the scope of this chapter. If you want more information about it, search MSDN for the "Event Tracing for Windows Target" article (http://msdn.microsoft.com/en-us/library/ff878411.aspx). The target is a singleton in the SQL Server process. This means that the events arriving at it will be processed one at a time, no matter how many Extended Events sessions are sending the event data. Once you enable the ETW target, an ETW session is created on the machine that SQL Server runs on. This session is shared across all instances installed on that machine. To use the ETW target, the account the SQL Server service runs under must be a member of the Performance Log Users group.

After reading about the ETW target, if you think it's confusing, I would say you are completely right. In my opinion this target is too complex to use effectively in its current state.

Existing targets can be viewed with this SQL statement:

SELECT    *
FROM     sys.dm_xe_objects
WHERE    object_type = 'target'
    -- show only targets that we can use
    AND (capabilities IS NULL OR capabilities & 1 = 0)
ORDER BY name;

Sessions

Sessions are just a grouping of events, actions, and targets. There are no limits on which of those you can and cannot mix, except for the ones that are private to the database engine, like the objects from the SecAudit package. Sessions are not auto-started when created, and they can be changed while running.

Each session has quite a few configuration options. You can look them all up in Books Online, but we'll name the three most interesting ones. The first is EVENT_RETENTION_MODE, which lets us control how much event loss our session can handle. It has three options: ALLOW_SINGLE_EVENT_LOSS (the default setting), ALLOW_MULTIPLE_EVENT_LOSS, and NO_EVENT_LOSS. The names are pretty self-explanatory, but be careful if you set it to NO_EVENT_LOSS: for events that fire a lot, this can slow down the server considerably.

The second session option is TRACK_CAUSALITY. When turned ON, it adds the action package0.attach_activity_id to every event. This action is a combination of a GUID and a sequence number. With it we can track events that happen in a batch or across multiple threads in parallel query execution. This option is OFF by default.

The third option is MAX_DISPATCH_LATENCY, which sets the number of seconds before non-full session buffers are sent to the asynchronous targets. The default is 30 seconds.
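
A minimal sketch of setting these three options on a session (the session and event are just placeholders) looks like this:

CREATE EVENT SESSION OptionsExampleSession
ON SERVER
    ADD EVENT sqlserver.sql_statement_completed
    ADD TARGET package0.ring_buffer
WITH (
    EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS,  -- the default
    TRACK_CAUSALITY      = ON,                       -- adds attach_activity_id to every event
    MAX_DISPATCH_LATENCY = 10 SECONDS                -- flush non-full buffers every 10 seconds
);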

All information about Extended Events objects and running sessions can be found in the dynamic management views that start with sys.dm_xe_* and the catalog views starting with sys.server_event_session*. Figure 5-8, taken from Books Online, shows the life cycle of a single event.

Image

Figure 5-8. Event life cycle
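
Defined and currently running sessions can, for example, be listed with the sys.server_event_sessions catalog view and the sys.dm_xe_sessions DMV:

-- all defined sessions (the catalog view)
SELECT name, event_retention_mode_desc, max_dispatch_latency
FROM   sys.server_event_sessions;

-- only the sessions that are currently running (the DMV)
SELECT name, total_regular_buffers, dropped_event_count
FROM   sys.dm_xe_sessions;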

Now that we’ve familiarized ourselves with the Extended Events objects, let’s talk about the format of the data in the targets. All events are stored as schemaless XML that is read with XQuery. We have to get familiar with the following XML functions:

  • nodes() transforms XML to a relational form (rowset view of the XML)
  • query() makes an XQuery lookup and returns XML
  • value() makes an XQuery lookup for a scalar value of SQL data type and returns it

The following code example is a quick school of XML. Make sure to read the comments.

use tempdb;
GO
IF (OBJECT_ID('TextXML', 'U') IS NOT NULL)
    DROP TABLE TextXML;
GO
CREATE TABLE TextXML
(
    ID INT IDENTITY(1,1),
    XmlColumn XML
) ;
GO
INSERT INTO TextXML(XmlColumn)
SELECT
'<root>
    <event name="Fast Event Name 1" timestamp="2012-05-19T11:53:06.395Z">
        <data name="TestComponent 1">
          <type name="TestType 1" />
          <value>100</value>
          <text>This is some Test Text 1</text>
        </data>
    </event>
    <event name="Fast Event Name 2" timestamp="2012-05-20T12:53:06.395Z">
        <data name="TestComponent 2">
          <type name="TestType" />
          <value>200</value>
          <text>This is some Test Text 2</text>
        </data>
        <data name="TestComponent 2 a">
          <type name="TestType a" />
          <value>2001</value>
          <text>This is some Test Text 2 a</text>
        </data>
     </event>
</root>';
-- node depth setting in cross apply
-- Get 2 rows from the XML, one for each event, in XML form
SELECT Tbl.cols.query('.') as ShreddedXML
FROM    TextXML X
       -- we're saying to get all the event nodes
       CROSS APPLY X.XmlColumn.nodes('/root/event') Tbl(cols);
-- Get 2 rows from the XML, one for each event node.
-- Here we get the attributes for each event via .query().value() construct
-- notice the [1] indexer inside the .value(). It is one based, not zero based.
SELECT Tbl.cols.query('.') as EventXML
       , Tbl.cols.query('.').value('(event/@name)[1]', 'nvarchar(100)') as EventName
       , Tbl.cols.query('.').value('(event/@timestamp)[1]', 'datetime') as
EventTimestamp
FROM   TextXML X
       -- we're saying to get all the event nodes
       CROSS APPLY X.XmlColumn.nodes('/root/event') Tbl(cols);
GO
-- Get 2 rows from the XML, one for each event node.
-- We get the attributes from the data nodes and
-- the values of data nodes children via .query().value() construct
-- Event 1 has just one data node so the query().value() returns null
-- when looking for values at index 2.
-- This can be viewed as denormalization of the XML
SELECT Tbl.cols.query('./data') as DataXML
       , Tbl.cols.query('./data').value('(data/@name)[1]', 'nvarchar(100)') as Data1Name
       , Tbl.cols.query('./data').value('(data/@name)[2]', 'nvarchar(100)') as Data2Name
       , Tbl.cols.query('./data/type').value('(type/@name)[1]', 'nvarchar(100)') as
Type1Name
       , Tbl.cols.query('./data/type').value('(type/@name)[2]', 'nvarchar(100)') as
Type2Name
       , Tbl.cols.query('./data/value').value('(value)[1]', 'int') as Value1
       , Tbl.cols.query('./data/value').value('(value)[2]', 'int') as Value2
       , Tbl.cols.query('./data/text').value('(text)[1]', 'nvarchar(100)') as Text1
       , Tbl.cols.query('./data/text').value('(text)[2]', 'nvarchar(100)') as Text2
FROM   TextXML X
       -- we're saying to get all the event nodes
       CROSS APPLY X.XmlColumn.nodes('/root/event') Tbl(cols);
GO
-- Get 3 rows from the XML, one for each data node.
-- The first event has one data node and the second event has two data nodes.
-- We get the attributes from the data nodes and the values of their children
-- There are no nulls here since we're selecting all the values that exist.
-- If the above was denormalization then this is normalization of the XML values
SELECT Tbl.cols.query('.') as DataXML
       , Tbl.cols.query('.').value('(data/@name)[1]', 'nvarchar(100)') as DataName
       , Tbl.cols.query('./type').value('(type/@name)[1]', 'nvarchar(100)') as TypeName
       , Tbl.cols.query('./value').value('(value)[1]', 'int') as Value
       , Tbl.cols.query('./text').value('(text)[1]', 'nvarchar(100)') as [Text]
FROM    TextXML X
       -- we're saying to get all the data nodes from all the event nodes
       CROSS APPLY X.XmlColumn.nodes('/root/event/data') Tbl(cols);

Now that we know how to work with XML to parse the event data, let’s take a look at a concrete example. We’ll use the ring buffer target to trace all statements that take longer than 500 ms.

USE master;
GO
IF EXISTS(SELECT * FROM sys.server_event_sessions WHERE name =
'RingBufferExampleSession')
    DROP EVENT SESSION RingBufferExampleSession ON SERVER;

CREATE EVENT SESSION RingBufferExampleSession
ON SERVER
    ADD EVENT sqlserver.sql_statement_completed
       (
            ACTION (sqlserver.session_id, sqlserver.username)
            WHERE sqlserver.sql_statement_completed.duration > 500000
            -- duration is in microseconds
       )
    ADD TARGET package0.ring_buffer
        (SET max_memory = 4096)
WITH (max_dispatch_latency = 1 seconds);

ALTER EVENT SESSION RingBufferExampleSession ON SERVER STATE=START;
GO
-- this takes longer than half a second
SELECT    TOP 100000 *
FROM   master..spt_values t1
       CROSS JOIN master..spt_values t2;

GO
SELECT -- these are some of the event columns
        Cols.value('(data[@name="duration"]/value)[1]','BIGINT')/1000.0 as
duration_in_ms,
        Cols.value('(data[@name="cpu_time"]/value)[1]','BIGINT')/1000.0 as
cpu_time_in_ms,
        Cols.value('(data[@name="logical_reads"]/value)[1]','BIGINT') as logical_reads,
        Cols.value('(data[@name="row_count"]/value)[1]','BIGINT') as row_count,
        Cols.value('(data[@name="statement"]/value)[1]','NVARCHAR(MAX)') as
sql_statement,
        -- these are actions we specified for the event
        Cols.value('(action[@name="username"]/value)[1]','NVARCHAR(MAX)') as username,
        Cols.value('(action[@name="session_id"]/value)[1]','SMALLINT') as session_id
FROM    (
            SELECT CAST(t.target_data AS XML) AS RingBufferInfo
            FROM   sys.dm_xe_session_targets t
                   JOIN sys.dm_xe_sessions s ON s.address = t.event_session_address
            WHERE  t.target_name = 'ring_buffer' AND s.name = 'RingBufferExampleSession'
        ) RB
        CROSS APPLY RingBufferInfo.nodes('/RingBufferTarget/event') AS Tbl(Cols);
GO
ALTER EVENT SESSION RingBufferExampleSession ON SERVER STATE=STOP;

Built in Health Session

Like the default trace in SQL Server 2005, Extended Events provides an always-running session called system_health. This session holds the combination of events and actions that help us troubleshoot various problems, like deadlocks, which had no built-in automatic detection system until now. A few of the things the system_health session tracks (a query to inspect its definition follows this list):

  • sql_text, session_id for any error with severity >=20
  • sql_text, session_id for any session that encounters a memory error (not all memory errors have severity >=20)
  • A record of any "non-yielding scheduler" problems
  • Any deadlocks
  • Callstack, sql_text, session_id for sessions waiting on latches (or other interesting resources) for > 15 seconds
  • Callstack, sql_text, session_id for sessions waiting on locks for > 30 seconds
  • Callstack, sql_text, session_id for sessions waiting 5 seconds for "external" or "pre-emptive" waits
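
If you want to see exactly which events your instance’s system_health session collects, you can query the session catalog views, for example:

-- events collected by the system_health session
SELECT  S.name AS session_name,
        E.package AS event_package,
        E.name AS event_name
FROM    sys.server_event_sessions S
        JOIN sys.server_event_session_events E
             ON E.event_session_id = S.event_session_id
WHERE   S.name = 'system_health';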

In SQL Server 2008 the only target for the system_health session was the Ring Buffer, but in SQL Server 2012 the targets are both the Ring Buffer and the Event File. Because we’ve previously shown how to read from the Ring Buffer target, the following code example will read from the Event File.

-- just get a few columns from the XML report to show the principle of XML data retrieval
-- the whole XML deadlock report can be seen by viewing the XmlDeadlockReport column
SELECT    Col.query('.') as XmlDeadlockReport,
        Col.query('.').value('(/event/@timestamp)[1]', 'datetime') as EventTime,
        Col.query('.').value('(/event/data/value/deadlock/victim-
list/victimProcess/@id)[1]', 'NVARCHAR(100)') as VictimProcess,
        Col.query('.').value('(/event/data/value/deadlock/process-list/process/@id)[1]',
'NVARCHAR(100)') as Process1_id,
        Col.query('.').value('(/event/data/value/deadlock/process-list/process/@id)[2]',
'NVARCHAR(100)') as Process2_id,
        Col.query('.').value('(/event/data/value/deadlock/process-
list/process/@spid)[1]', 'NVARCHAR(100)') as Process1_spid,
        Col.query('.').value('(/event/data/value/deadlock/process-
list/process/@spid)[2]', 'NVARCHAR(100)') as Process2_spid,
        Col.query('.').value('(/event/data/value/deadlock/process-
list/process/@isolationlevel)[1]', 'NVARCHAR(100)') as Process1_isolationlevel,
        Col.query('.').value('(/event/data/value/deadlock/process-
list/process/@isolationlevel)[2]', 'NVARCHAR(100)') as Process2_isolationlevel,
        Col.query('.').value('(/event/data/value/deadlock/resource-
list/ridlock/@objectname)[1]', 'NVARCHAR(100)') as Resource1_objectName,
        Col.query('.').value('(/event/data/value/deadlock/resource-
list/ridlock/@objectname)[2]', 'NVARCHAR(100)') as Resource2_objectName
FROM    (
            SELECT    CAST(event_data AS XML) AS event_xml_data
            FROM    sys.fn_xe_file_target_read_file('C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012\MSSQL\Log\system_health*.xel', null, null, null)
            -- because the event file returns event name and full event
            -- XML in tabular form it's easier to filter by event
            WHERE    object_name = 'xml_deadlock_report'
       ) EventFile
       CROSS APPLY EventFile.event_xml_data.nodes('/event') as Tbl(Col);

Extended Events .NET provider

Just like SQL Trace has the .NET rowset provider, Extended Events has a .NET object model for creating and reading sessions. Unlike the rowset provider, the Extended Events reader is not really live, but rather asynchronous, with the session's MAX_DISPATCH_LATENCY set to 3 seconds while a reader is connected. After the reader disconnects from the session, MAX_DISPATCH_LATENCY is reset to its original value. If you have multiple readers connected to the same session, the first one that disconnects resets MAX_DISPATCH_LATENCY to the original value. That means all other connected readers will run with that setting, which is probably not the desired behavior. If the Extended Events engine detects any lag in the sending of data to the reader, it disconnects that reader. This is because it follows the so-called prime directive of diagnostic monitoring: the most important workload on the server is the customer workload; diagnostic systems should not prevent the customer workload from running.

Figure 5-9 shows the Extended Events .NET object model. We can see that it’s split into the metadata information objects and the actual running session objects.

Image

Figure 5-9. Extended Events .NET object model

The following code example shows how to create an Extended Events session in .NET with the ring buffer target, and how to read the built-in system_health session event file.

private static void CreateStatementTrackingSession()
{
    // The XEStore is a type of SFC store so we use the SqlStoreConnection
    // and a SqlConnection to pass to the XEStore.
    // Reference SFC and SqlServer.ConnectionInfo from SQL version specific directory,
    // eg: C:\Program Files (x86)\Microsoft SQL Server\110\SDK\Assemblies
    SqlConnectionStringBuilder conBuild = new SqlConnectionStringBuilder();
    // name of the SQL Server we're connecting to
    conBuild.DataSource = @"BALTAZAR\SQL2012";
    conBuild.InitialCatalog = "master";
    conBuild.IntegratedSecurity = true;
    // Create the actual SqlStoreConnection...
    Microsoft.SqlServer.Management.Sdk.Sfc.SqlStoreConnection server =
        new Microsoft.SqlServer.Management.Sdk.Sfc.SqlStoreConnection(
            new SqlConnection(conBuild.ConnectionString.ToString()));
    // ...and then create the XEStore
    XEStore xestore = new XEStore(server);
    // Create a session object and set session options.
    Microsoft.SqlServer.Management.XEvent.Session demoSession =
        xestore.CreateSession("XE_DOTNET_DEMO_SESSION");
    demoSession.MaxDispatchLatency = 20; // Set in seconds.
    demoSession.MaxMemory = 2048; // Set in KB
    demoSession.TrackCausality = true; // Turn on correlation tracking.
    // Add the sql_statement_completed event to the session and configure
    // the optional field to collect the statement
    Microsoft.SqlServer.Management.XEvent.Event evt =
        demoSession.AddEvent("sql_statement_completed");
    evt.EventFields["collect_statement"].Value = 1;
    // Add an action.
    evt.AddAction("session_id");
    // Add a predicate for this event.
    evt.PredicateExpression = @"duration > 5000";
    // Add a target to the session.
    Microsoft.SqlServer.Management.XEvent.Target targ =
        demoSession.AddTarget("ring_buffer");
    // set the max memory for the ring buffer
    targ.TargetFields["max_memory"].Value = 2048;
    // Add the session to the XEStore.
    demoSession.Create();
    demoSession.Start();

    // run some sql statements in your database here

    demoSession.Stop();
    // delete the session
    demoSession.Drop();
}

private static void ReadSystemHealthSessionFileWithAPIReader()
{
    // the example processes:
    // - event data where kernel_mode_time greater than 150000
    // - event action where session_id >= 5
    Microsoft.SqlServer.XEvent.Linq.QueryableXEventData events =
       new QueryableXEventData(@"C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012\MSSQL\Log\system_health*.xel");

    // Iterate through each event in the file.
    foreach (Microsoft.SqlServer.XEvent.Linq.PublishedEvent evt in events)
    {
       // do any event filtering by name here
       // if (evt.Name == "desiredFilter") ...

       // Iterate through each field in the current event and return its name and value.
       foreach (Microsoft.SqlServer.XEvent.Linq.PublishedEventField fld in evt.Fields)
       {
            if (fld.Name != "kernel_mode_time")
                continue; 
            if ((ulong)fld.Value < 150000)
                continue;

            // any processing of the event is done here
       }
       // Iterate through each action in the current event and
       // return its name and value.
       foreach (Microsoft.SqlServer.XEvent.Linq.PublishedAction action in evt.Actions)
       {
            if (action.Name != "session_id")
                continue;
            if ((UInt16)action.Value < 5)
                continue;
            // any processing of the action is done here
       }
    }
}
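
If you prefer to stay in T-SQL, you can read the same system_health files with the sys.fn_xe_file_target_read_file function. The following is a minimal sketch, assuming the same default Log directory used in the code above; each row's event_data column contains the full event XML, which you can shred further with the XML value() method.

SELECT
    x.event_xml.value('(/event/@name)[1]', 'nvarchar(128)')  AS event_name,
    x.event_xml.value('(/event/@timestamp)[1]', 'datetime2') AS event_timestamp,
    x.event_xml                                               AS event_data
FROM sys.fn_xe_file_target_read_file(
         N'C:\Program Files\Microsoft SQL Server\MSSQL11.SQL2012\MSSQL\Log\system_health*.xel',
         NULL, NULL, NULL) AS f
CROSS APPLY (SELECT CAST(f.event_data AS XML) AS event_xml) AS x;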

Extended Events UI

SQL Server 2008 had no UI for Extended Events, but SQL Server 2012 adds one for both creating sessions and profiling. Unlike SQL Server Profiler, this UI is built directly into SSMS.

Session Creation UI

To access the session creation UI, navigate to the Server node, then to the Management node, and finally to the Extended Events node. Right-click the Sessions node and choose one of two items from the context menu: New Session Wizard or New Session…. The wizard walks you through creating a new session, but it limits you to only the Ring Buffer and Event File targets. The New Session… option lets you unleash the full power of Extended Events. Because the wizard UI is a subset of the New Session… UI, we'll look at the latter. Four main option pages comprise the session creation UI.

The first page is the General page, displayed in Figure 5-10. On it you can name the session, choose an existing template to base the session on, set session scheduling options, and turn on causality tracking. There are nine built-in templates, and all the data you can see in the figures is based on the Connection Tracking template.

Image

Figure 5-10. Extended Events Session creation page: General

Figure 5-11 shows the first part of the Events page. This page is the heart of the session creation process. Here you choose the events you'll track, their fields, extra actions, and any event filters. You can filter the events by event names, their descriptions, their fields, or all of the above, which makes finding the correct event much simpler. In the left grid, the Category column tells you which category an event falls under, and the Channel column tells you which Event Tracing for Windows (ETW) channel the event goes into. If you click the Channel header you'll notice that the events in the Debug channel are not shown. This is because they aren't needed for normal DBA tracing; they are usually used when Microsoft support needs to troubleshoot a problem on your server. You can filter the events on both of these columns. Underneath the left grid are descriptions of the events and their fields.

The grid on the right shows which events from the grid on the left you have chosen to track. The column with a small lightning bolt lists the number of actions tracked for each event, and the Filter column indicates whether a filter is present for that event.

Image

Figure 5-11. Extended Events Session creation page: Events - part 1

Figure 5-12 shows the second part of the Events page. You get to it by clicking the Configure button in the upper right corner of the page shown in Figure 5-11. The events you selected are now shown in the grid on the left, and below that grid is the description of the selected event. The grid on the right has three tabs. The first, called Global Fields (Actions), lets you choose the actions you want to collect for the event. The second, called Filter (Predicate), lets you add various simple and complex filters. The third, named Event Fields, shows all the fields for the selected event; the fields that are not collected by default have a checkbox on their left side. In Figure 5-12 you can see that the options_text field will be collected but database_name won't be, since you already have the database_id.

Image

Figure 5-12. Extended Events Session creation page: Events - part 2

Figure 5-13 shows the Data Storage page. Data Storage is just a different name for a target. You can see that the ring_buffer target is chosen in this figure. I've kept the default limit of 1,000 events kept in the buffer and set the maximum memory for the ring_buffer to 10 MB. I've also created an all-event FIFO buffer rather than a per-event-type FIFO buffer. You can add as many targets as you want on this page.

Image

Figure 5-13. Extended Events Session creation page: Data Storage (Targets)

Figure 5-14 shows the Advanced session options: the event retention mode, the maximum dispatch latency, the minimum and maximum memory sizes, and the memory partition mode. For an explanation of these values, see the session part of the chapter. I've left all the options here at their default values.

Image

Figure 5-14. Extended Events Session creation page: Advanced

After you're done setting the session options, click OK to create the session. If you chose that scheduling option on the General page, the session starts automatically after the dialog closes.
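
To tie the four pages together, here is roughly the T-SQL that the choices described above correspond to. Treat it as a sketch, not the exact script the dialog generates: I'm assuming the Connection Tracking template maps to the login, logout, and existing_connection events and that the checkbox fields from Figure 5-12 are the customizable fields collect_options_text and collect_database_name; the target and WITH values are the ones mentioned above (10 MB ring buffer, 1,000 events, all-event FIFO, default advanced options).

CREATE EVENT SESSION [ApressBookSession] ON SERVER
ADD EVENT sqlserver.existing_connection
(
    ACTION (sqlserver.client_app_name, sqlserver.session_id)
),
ADD EVENT sqlserver.login
(
    SET collect_options_text = (1),    -- checked event field (Figure 5-12)
        collect_database_name = (0)    -- left unchecked; database_id is enough
    ACTION (sqlserver.client_app_name, sqlserver.session_id)
),
ADD EVENT sqlserver.logout
(
    ACTION (sqlserver.client_app_name, sqlserver.session_id)
)
ADD TARGET package0.ring_buffer
(
    SET max_memory = (10240),          -- 10 MB, as set on the Data Storage page
        max_events_limit = (1000),     -- default number of kept events
        occurrence_number = (0)        -- all-event FIFO, not per-event type
)
WITH
(
    EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS,  -- Advanced page defaults
    MAX_DISPATCH_LATENCY = 30 SECONDS,
    MAX_MEMORY = 4 MB,
    MEMORY_PARTITION_MODE = NONE,
    TRACK_CAUSALITY = ON,              -- only if you checked it on the General page
    STARTUP_STATE = OFF
);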

Extended Events Profiler UI

To see how the Profiler works, we'll look at a session called ApressBookSession that tracks connections. Because we set it to start watching live data right away, we get the same window we would get by right-clicking the ApressBookSession node and choosing the Watch Live Data menu item. Open a few query windows and close them; that should give you output similar to what is shown in Figure 5-15. Notice that there is also a new toolbar called Extended Events. The top Events grid shows live events as they are being generated. This grid is limited to showing one million events; any extra events that aren't shown in the grid are not removed from the target, though. The bottom Details grid shows each event's fields. You can add any of those fields to, or remove them from, the top grid.

If you stop watching live data via the Stop button in the toolbar (this does not stop the session), you'll see that some toolbar options are now enabled. The Grouping button opens the Grouping window shown in Figure 5-16, which you can use to add the columns you want to group the output by. Once the data is grouped, you can run various aggregations on those groups. Clicking the Aggregation toolbar button opens the window shown in Figure 5-17. Since we're already grouping by the client_app_name, session_id, and logical_reads columns, we can aggregate only the timestamp and name columns. Here we'll just COUNT the name column and sort by it in ascending order. The result of the grouped and aggregated data is shown in Figure 5-18.

Image

Figure 5-15. Extended Events Watch Live Data window in stopped state

Image

Figure 5-16. Extended Events column grouping window

Image

Figure 5-17. Extended Events column aggregation window

Image

Figure 5-18. Extended Events grouped and aggregated data
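
Everything the live data grid shows is also available outside of SSMS. The following query is a minimal sketch, assuming the ApressBookSession session above is running with a ring_buffer target: it reads the target XML from the dynamic management views and shreds out the event name, timestamp, and the client_app_name and session_id actions.

SELECT
    n.value('(@name)[1]', 'nvarchar(128)')                                  AS event_name,
    n.value('(@timestamp)[1]', 'datetime2')                                 AS event_timestamp,
    n.value('(action[@name="client_app_name"]/value)[1]', 'nvarchar(256)')  AS client_app_name,
    n.value('(action[@name="session_id"]/value)[1]', 'int')                 AS session_id
FROM
(
    SELECT CAST(t.target_data AS XML) AS target_xml
    FROM sys.dm_xe_sessions AS s
    JOIN sys.dm_xe_session_targets AS t
        ON s.address = t.event_session_address
    WHERE s.name = N'ApressBookSession'
      AND t.target_name = N'ring_buffer'
) AS src
CROSS APPLY src.target_xml.nodes('/RingBufferTarget/event') AS q(n);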

This was a quick overview of the new Extended Events Profiler. It is really easy to use once you get the hang of it, and hopefully this chapter has made getting there a bit easier.

Conclusion

In this chapter we've taken a look at the current state of the tracing and diagnostic infrastructure in SQL Server. Five years ago SQL Trace might have been sufficient to diagnose problems, but faced with the increased workloads of today's systems, a new infrastructure was needed. That infrastructure is Extended Events: a completely new, highly customizable engine built directly into SQLOS that should serve us well for years to come. By deprecating SQL Trace in SQL Server 2012, Microsoft made it clear that we have to learn Extended Events and start using them in our daily troubleshooting and performance tuning work.
