CHAPTER 13


Monitoring Indexes

Throughout this book, I’ve discussed what indexes are, what they do, patterns for building them, and many other aspects of determining how a SQL Server database should be indexed. All of that information is necessary for the final piece in indexing your databases: analyzing your databases to determine which indexes are required. To that end, this chapter and the two that follow pull together the information you need to implement an indexing methodology.

To start, in this chapter, I’ll discuss a general practice that can be used for monitoring indexes. You’ll look at the steps you can take to observe the behavior of indexes and understand how they impact your environment. This methodology can be applied to a single database, a server, or your entire SQL Server environment. Regardless of the type of operations or business that the database supports, similar monitoring processes can be used.

The main goal behind monitoring indexes is building your ability to collect information about the indexes. This information will come from a variety of sources. The sources for monitoring should be familiar because they are often used with tasks similar to indexing, such as performance tuning. For some sources, the information will be collected over time to provide an idea of general trends. For other sources, a snapshot at a specific point in time is sufficient. It is important to collect information over time to provide a baseline against which to compare performance; this will help you know when changes in indexing are required.

As mentioned, there are a number of sources from which information will be collected to monitor your indexes. The sources that will be discussed in this chapter are

  • Performance counters
  • Dynamic management objects
  • Event tracing

For each of these sources, the subsequent sections will describe what is to be collected and will provide guidance on how to collect this information. At the end of the chapter, you will have a framework that is capable of providing the information necessary to start the Analyze phase.

Image Note  All the monitoring information from this chapter will be collected in a database named IndexingMethod. The scripts can be run in that database or your own performance monitoring database.

Performance Counters

The first source of monitoring information for indexes is SQL Server performance counters. Performance counters are metrics provided by Microsoft to measure the rate of events or state of resources within applications and hardware on the server. With some of the performance counters, there are general guidelines that can be used to indicate when a problem with indexing may exist. For the others, changes in the rate or level of the performance counter may indicate a need to change the indexing on a server.

The primary issue with using performance counters is that they represent the server-level, or SQL Server instance–level, state of the counters. They do not indicate at a database or table level where possible indexing issues may be occurring. This level of detail, though, is acceptable and useful when considering the other tools available for monitoring your indexing and identifying potential indexing needs. One advantage to collecting counter information at this level is that you are forced to consider the whole picture and the effect of all the indexes on performance. In an isolated situation, a couple of poorly performing indexes on a table might be acceptable. However, in conjunction with a few tables with poor indexes, the aggregate performance may reach a tipping point where the indexes need to be addressed. With the server-level statistics provided by performance counters, you will be able to identify when this point has been reached.

There are a large number of performance counters available for both SQL Server and Windows Server. From the perspective of indexing, though, many of the performance counters can be eliminated. The performance counters that are most useful are those that map to operations related to how indexes operate or are accessed, such as forwarded records and index searches. For a definition of the performance counters that are most useful with indexing, see Table 13-1. The reasons for collecting each of the counters and how they impact indexing decisions will be discussed in the next chapter.

Table 13-1. Index-Related Performance Counters

  • Access Methods\Forwarded Records/sec: Number of records per second fetched through forwarded record pointers.
  • Access Methods\FreeSpace Scans/sec: Number of scans per second that were initiated to search for free space in which to insert a new record fragment.
  • Access Methods\Full Scans/sec: Number of unrestricted full scans per second. These can be either base-table or full-index scans.
  • Access Methods\Index Searches/sec: Number of index searches per second. These are used to start range scans and single index record fetches and to reposition an index.
  • Access Methods\Page Splits/sec: Number of page splits per second that occur as the result of overflowing index pages.
  • Buffer Manager\Page Lookups/sec: Number of requests to find a page in the buffer pool.
  • Locks(*)\Lock Wait Time (ms): Total wait time (in milliseconds) for locks in the last second.
  • Locks(*)\Lock Waits/sec: Number of lock requests per second that required the caller to wait.
  • Locks(*)\Number of Deadlocks/sec: Number of lock requests per second that resulted in a deadlock.
  • SQL Statistics\Batch Requests/sec: Number of Transact-SQL command batches received per second.

There are a number of ways to collect performance counters. For the monitoring in this chapter, you’ll use the DMV sys.dm_os_performance_counters. This DMV returns a row for all the SQL Server counters for an instance. The values returned are the raw values for the counters, so depending on the type of counter, the value can be a point-in-time state value or an ever-accumulating aggregate.

To begin collecting performance counter information for monitoring, you’ll first need to create a table for storing this information. The table definition in Listing 13-1 provides for this need. When collecting the performance counters, you will use a table that stores the counter name with the value and then datestamps each row to identify when the information was collected.
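
As a point of reference, a minimal version of such a table could look like the following; aside from the dbo.IndexingCounters name used in this chapter, the column names and data types here are illustrative and can be adjusted to fit your environment.

-- illustrative definition; adjust column names and types as needed
CREATE TABLE dbo.IndexingCounters
(
    counter_id int IDENTITY(1,1) NOT NULL,
    create_date datetime NOT NULL CONSTRAINT DF_IndexingCounters_create_date DEFAULT (GETDATE()),
    server_name nvarchar(128) NOT NULL,
    object_name nvarchar(128) NOT NULL,
    counter_name nvarchar(128) NOT NULL,
    instance_name nvarchar(128) NULL,
    calculated_counter_value decimal(19, 4) NULL,
    CONSTRAINT PK_IndexingCounters PRIMARY KEY CLUSTERED (counter_id)
);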

For the purposes of collecting information for monitoring indexing, you’ll take the information from sys.dm_os_performance_counters and calculate the appropriate values from the DMV. These are the same values that are available when viewing performance counter information from other tools, such as Performance Monitor. There are a few steps required to populate dbo.IndexingCounters. As mentioned, the DMV contains raw counter values. To calculate these values properly, it is necessary to take a snapshot of the values in the DMV and then wait a number of seconds before calculating the counter value. In Listing 13-2, the counter value is calculated after ten seconds. Once the time has expired, the counters are calculated and inserted into the dbo.IndexingCounters table. This script should be scheduled and executed frequently; ideally, you should collect this information every one to five minutes.
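
A simplified version of this collection pattern, using the illustrative table definition above, could look like the following sketch; the counter names come from Table 13-1, and the per-second counters are calculated from the delta over the ten-second wait.

DECLARE @delay_seconds decimal(9, 2) = 10;   -- keep in sync with the WAITFOR below

-- first sample of the raw counter values
SELECT object_name, counter_name, instance_name, cntr_value, cntr_type
INTO #counter_baseline
FROM sys.dm_os_performance_counters
WHERE counter_name IN (N'Forwarded Records/sec', N'FreeSpace Scans/sec',
                       N'Full Scans/sec', N'Index Searches/sec', N'Page Splits/sec',
                       N'Page lookups/sec', N'Lock Wait Time (ms)', N'Lock Waits/sec',
                       N'Number of Deadlocks/sec', N'Batch Requests/sec');

WAITFOR DELAY '00:00:10';   -- wait ten seconds before the second sample

INSERT INTO dbo.IndexingCounters
        (create_date, server_name, object_name, counter_name, instance_name, calculated_counter_value)
SELECT  GETDATE(), @@SERVERNAME, c.object_name, c.counter_name, c.instance_name,
        CASE c.cntr_type
            WHEN 272696576 THEN (c.cntr_value - b.cntr_value) / @delay_seconds   -- per-second counters: delta / elapsed seconds
            ELSE c.cntr_value                                                    -- point-in-time counters: raw value
        END
FROM sys.dm_os_performance_counters c
INNER JOIN #counter_baseline b
        ON b.object_name = c.object_name
       AND b.counter_name = c.counter_name
       AND b.instance_name = c.instance_name;

DROP TABLE #counter_baseline;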

Image Note  Performance counter information can be collected more frequently. For instance, Performance Monitor defaults to every 15 seconds. For the purposes of index monitoring, that frequency is not necessary.

The first time you collect performance counters for your indexes, you won’t be able to compare the counters to other reasonable values for your SQL Server. As time goes on, though, you can retain previous performance counter samples to make comparisons. As part of monitoring, you will be responsible for identifying periods in which values for the performance counters represent the typical activity for your environment. To store these values, insert them into a table similar to the one in Listing 13-3. This table has start and end dates to indicate the range that the baseline represents. Also, there are minimum, maximum, average, and standard deviation columns to store values from the collected counters. The minimum and maximum values allow you to understand the range in which the performance counters vary. The average value provides an idea of what the counter value will be when it is “good.” The standard deviation allows you to understand the variability of the counter values. The lower the number, the more closely the counter values cluster around the average value. Higher values indicate that the counter values vary more widely and are often nearer to the minimum and maximum values.
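
The baseline table could be defined along these lines; as with dbo.IndexingCounters, treat the column names and data types here as an illustrative starting point rather than a fixed definition.

CREATE TABLE dbo.IndexingCountersBaseline
(
    baseline_id int IDENTITY(1,1) NOT NULL,
    start_date datetime NOT NULL,
    end_date datetime NOT NULL,
    server_name nvarchar(128) NOT NULL,
    object_name nvarchar(128) NOT NULL,
    counter_name nvarchar(128) NOT NULL,
    instance_name nvarchar(128) NULL,
    minimum_counter_value decimal(19, 4) NULL,
    maximum_counter_value decimal(19, 4) NULL,
    average_counter_value decimal(19, 4) NULL,
    standard_deviation_counter_value decimal(19, 4) NULL,
    CONSTRAINT PK_IndexingCountersBaseline PRIMARY KEY CLUSTERED (baseline_id)
);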

When populating the values into dbo.IndexingCountersBaseline, there are two steps to the population process. First, you need to collect a sample from the performance counters that represents a typical week. If there are no typical weeks, pick this week and collect samples for it. Once you have the typical week, the next step is to aggregate the information into the baseline table. Aggregating the information is a matter of summarizing the information in the table dbo.IndexingCounters for a range of days. In Listing 13-4, the data is from March 1 to March 15, 2012. The next step is to validate the baseline. Just because the average for the past week states that the Forwarded Records/sec value is at 100 doesn’t mean that value is good for your baseline. Use your experience with your servers and databases to influence the values in the baseline. Make adjustments to the baseline as needed if there is a recent trend below or above what is normal.
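
Using the illustrative tables above, the aggregation for that date range could be expressed as follows; STDEV provides the standard deviation alongside MIN, MAX, and AVG.

INSERT INTO dbo.IndexingCountersBaseline
        (start_date, end_date, server_name, object_name, counter_name, instance_name,
         minimum_counter_value, maximum_counter_value, average_counter_value,
         standard_deviation_counter_value)
SELECT  '2012-03-01', '2012-03-15',
        server_name, object_name, counter_name, instance_name,
        MIN(calculated_counter_value), MAX(calculated_counter_value),
        AVG(calculated_counter_value), STDEV(calculated_counter_value)
FROM dbo.IndexingCounters
WHERE create_date >= '2012-03-01'
  AND create_date < '2012-03-16'   -- through the end of March 15
GROUP BY server_name, object_name, counter_name, instance_name;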

There are other ways to collect and view performance counters for your SQL Server instances. You can use the Windows application Performance Monitor to view performance counters in real time. It can also be used to log performance counters to a binary or text file. You can use the command-line utility Logman to interact with Performance Monitor to create data collectors and start and stop them as needed. Also, PowerShell is a possibility for assisting in the collection of performance counters.

All these alternatives are valid options for collecting performance counters on your databases and indexes. The key is that if you want to monitor your indexes, you must collect the information necessary to know when potential indexing issues may arise. Pick a tool that you are comfortable with and start collecting these counters today.

Dynamic Management Objects

Some of the best information for monitoring indexes is found in the dynamic management objects (DMOs). The DMOs contain information on the logical and physical use of indexes and on their overall physical structure. For monitoring, there are four DMOs that provide information on index usage and health: sys.dm_db_index_usage_stats, sys.dm_db_index_operational_stats, sys.dm_db_index_physical_stats, and sys.dm_os_wait_stats. In this section, you’ll walk through a process to monitor your indexes using each of these DMOs.

The first three sections that follow discuss the sys.dm_db_index_* DMOs. Chapter 3 defined and demonstrated the contents of these DMOs. One thing to remember is that they can be flushed by various operations on the server, such as restarting the service or re-creating an index. The fourth DMO, sys.dm_os_wait_stats, relates to index monitoring and provides information that can help during index analysis.

Image Warning  The indexing DMOs don’t have information at the row level to precisely indicate when the information collected for the index has been reset. Because of this, there can be situations where the statistics reported can be slightly higher or lower than they actually are. While this shouldn’t greatly affect the outcome during analysis, it is something to keep in mind.

Index Usage Stats

The DMO sys.dm_db_index_usage_stats provides information on how indexes are being used and when the index was last used. This information can be useful when you want to track whether indexes are being used and which operations are being executed against the index.

The monitoring process for this DMO, which is similar to the other DMOs, consists of the following steps:

  1. Create a table to hold snapshot information.
  2. Insert the current state of the DMO into the snapshot table.
  3. Compare the most recent snapshot to the previous snapshot and insert the delta between the rows in the output into a history table.

To build the process, you’ll first need to create the snapshot and history tables. The schema for these tables will be identical and will contain all the columns from the DMO and a create_date column (see Listing 13-5). For consistency with the source DMO, the columns for the table will match the schema of the DMO.
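
One compact way to build both tables is to let SELECT ... INTO derive the DMO’s columns. The snapshot table name dbo.index_usage_stats_snapshot used here is illustrative; the history table matches the name referenced with Listing 13-7.

-- snapshot table: all DMO columns plus create_date
SELECT CAST(GETDATE() AS datetime) AS create_date, *
INTO dbo.index_usage_stats_snapshot
FROM sys.dm_db_index_usage_stats
WHERE 1 = 0;

-- history table: identical schema
SELECT CAST(GETDATE() AS datetime) AS create_date, *
INTO dbo.index_usage_stats_history
FROM sys.dm_db_index_usage_stats
WHERE 1 = 0;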

The next piece in capturing a history of index usage stats is collecting the current values in sys.dm_db_index_usage_stats. Similar to the performance counter script, the collection query, shown in Listing 13-6, needs to be scheduled to run about every four hours. The activity in your environment and the rate at which indexes are modified should help determine the frequency at which the information is captured. Be certain to schedule a snapshot prior to any index defragmentation processes to capture information that might be lost when indexes are rebuilt.
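
With tables built as shown above, the snapshot collection itself can be a single statement; note that this sketch relies on the table’s column order matching the DMO.

INSERT INTO dbo.index_usage_stats_snapshot
SELECT GETDATE(), *
FROM sys.dm_db_index_usage_stats;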

After populating the snapshot for the index usage stats, the delta between the most recent and the previous snapshot needs to be inserted into the index_usage_stats_history table. Since there is nothing in the rows from sys.dm_db_index_usage_stats to identify when the stats for an index have been reset, the rule for building the delta is to discard a row whenever any of the calculated differences for the index would be negative. The resulting query, shown in Listing 13-7, implements this logic and also removes any rows where no new activity has occurred.
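
The following sketch shows the shape of that delta query for the user_* counters; the complete version also carries the system_* counters and inserts the results into dbo.index_usage_stats_history.

WITH snaps AS (
    SELECT *, DENSE_RANK() OVER (ORDER BY create_date DESC) AS snapshot_rank
    FROM dbo.index_usage_stats_snapshot
)
SELECT c.create_date,
       c.database_id, c.object_id, c.index_id,
       c.user_seeks   - p.user_seeks   AS user_seeks,
       c.user_scans   - p.user_scans   AS user_scans,
       c.user_lookups - p.user_lookups AS user_lookups,
       c.user_updates - p.user_updates AS user_updates,
       c.last_user_seek, c.last_user_scan, c.last_user_lookup, c.last_user_update
FROM snaps c
INNER JOIN snaps p
        ON p.snapshot_rank = 2
       AND p.database_id = c.database_id
       AND p.object_id = c.object_id
       AND p.index_id = c.index_id
WHERE c.snapshot_rank = 1
  -- a negative delta means the statistics were reset; skip those rows
  AND c.user_seeks >= p.user_seeks
  AND c.user_scans >= p.user_scans
  AND c.user_lookups >= p.user_lookups
  AND c.user_updates >= p.user_updates
  -- skip rows with no new activity since the previous snapshot
  AND (c.user_seeks + c.user_scans + c.user_lookups + c.user_updates)
    > (p.user_seeks + p.user_scans + p.user_lookups + p.user_updates);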

Index Operational Stats

The DMO sys.dm_db_index_operational_stats provides information on the physical operations that happen on indexes during plan execution. This information can be useful for tracking the physical plan operations that occur when indexes are used and the rates for those operations. One of the other things this DMO tracks is the success rate of page compression attempts.

As mentioned in the previous section, the process for monitoring this DMO involves a few simple steps. First, you’ll create tables to store snapshot and history information on the DMO output. Then, periodic snapshots of the DMO output are inserted into the snapshot table. After the snapshot is retrieved, the delta between the current and previous snapshot is inserted into the history table.

The process utilizes a snapshot and history table that is nearly identical to the schema of sys.dm_db_index_operational_stats. The chief variance in the schema is the addition of a create_date column, used to identify when the snapshot occurred. The code in Listing 13-8 provides the schema required for the snapshot and history tables.

With the tables in place, the next step is to capture a current snapshot of the information in sys.dm_db_index_operational_stats. The information can be populated using the script in Listing 13-9. Since the Indexing Method is geared toward capturing information on indexing for all databases on the server, the values for the parameters for sys.dm_db_index_operational_stats are set to NULL. This will return results for all partitions of all indexes on all tables in all databases on the server. Like the index usage stats, this information should be captured about every four hours, with one of the scheduled points being before the index maintenance on the server.
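
Assuming a snapshot table built with the same SELECT ... INTO pattern shown for the usage stats tables, and named dbo.index_operational_stats_snapshot here for illustration, the collection can be as simple as this:

INSERT INTO dbo.index_operational_stats_snapshot
SELECT GETDATE(), *
FROM sys.dm_db_index_operational_stats(NULL, NULL, NULL, NULL);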

The step after populating the snapshot is populating the history table. As before, the purpose of the history table is to store statistics on the deltas between two snapshots. The deltas provide information on which operations occurred, and they also help to timebox those operations so that, if needed, more focus can be placed on operations during core versus noncore hours. The business rule identifying when the statistics have been reset is similar to index usage stats: if any of the statistics on the index return a negative value, the row from the previous snapshot will be ignored. Also, any rows that return all zero values will not be included. Listing 13-10 shows the code used to generate the history delta.

Index Physical Stats

The next DMO for monitoring indexes is sys.dm_db_index_physical_stats. This DMO provides statistics on the current physical structure of the indexes in your databases. The value of this information is in determining the fragmentation of an index, which is discussed in more detail in Chapter 6. From a monitoring perspective, you are collecting the physical statistics to aid with later analysis. The goal is to identify issues in how an index is physically stored that may be reducing its efficiency and, in turn, impacting query performance.

With the physical stats DMO, the statistics are collected a bit differently than with the other DMOs. The main difference between this DMO and the others is the impact it can place on the database while collecting the information. While the other two reference in-memory tables, index_physical_stats reads the pages in the index to determine the actual fragmentation and physical layout of the indexes. You’ll learn more about the impact of using sys.dm_db_index_physical_stats in Chapter 3. To accommodate this difference, the statistics are stored only in a history table; the deltas between the points at which the history is retrieved are not calculated. Also, because of the nature of the statistics contained in the DMO, there would be little value in calculating delta values.

The first piece needed to begin collecting statistics on index physical stats is the previously mentioned history table. This table, shown in Listing 13-11, uses the same schema as the DMO, with the addition of the create_date column.

Image Tip  When generating the table schema needed for the DMOs, a table-valued function first introduced in SQL Server 2012 was utilized. The function sys.dm_exec_describe_first_result_set can be used to identify the column names and data types for a query.
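
For example, the column names and data types needed for the usage stats tables could be listed with a query along these lines:

SELECT name, system_type_name, is_nullable
FROM sys.dm_exec_describe_first_result_set(
         N'SELECT * FROM sys.dm_db_index_usage_stats', NULL, 0)
ORDER BY column_ordinal;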

The collection of the history for index_physical_stats differs from the previous two DMOs. Since it’s just history, there is no need to capture the snapshot information to build the delta between two snapshots. Instead, the current statistics are inserted directly into the history table, as shown in Listing 13-12. Also, since index_physical_stats performs physical operations on the index while collecting the statistics, there are a few things to keep in mind when generating the history information. First, the script will collect information from each database independently from the other databases through a cursor-driven loop. This provides a batched separation between the collections of statistics for each database and limits the impact of the DMO. Second, you should be certain that the query is executed during noncore hours. The start of the daily maintenance window would be ideal. It is important that this information is collected prior to defragmentation or re-indexing since these operations will change the information provided by the DMO. Usually, this information is collected as a step in the defragmentation process, which is discussed in Chapter 6.
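
A sketch of that cursor-driven collection follows; the history table name dbo.index_physical_stats_history is illustrative, and the LIMITED scan mode is used here to keep the impact low (a more detailed mode can be substituted if needed).

DECLARE @database_id int;

DECLARE database_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT database_id
    FROM sys.databases
    WHERE state_desc = 'ONLINE';

OPEN database_cursor;
FETCH NEXT FROM database_cursor INTO @database_id;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- one database per pass limits the I/O impact of reading index pages
    INSERT INTO dbo.index_physical_stats_history
    SELECT GETDATE(), *
    FROM sys.dm_db_index_physical_stats(@database_id, NULL, NULL, NULL, 'LIMITED');

    FETCH NEXT FROM database_cursor INTO @database_id;
END

CLOSE database_cursor;
DEALLOCATE database_cursor;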

Wait Statistics

One other DMO that provides information related to indexing is sys.dm_os_wait_stats. This DMO collects information related to resources that SQL Server is waiting for in order to start or continue executing a query or other request. Most performance tuning methodologies include a process for collecting and analyzing wait statistics. From an indexing perspective, there are a number of wait resources that can indicate that there may be indexing issues on the SQL Server instance. By monitoring these statistics, you can be informed when these issues may exist. Table 13-2 provides a short list of wait types that most often indicate that indexing issues may exist.

Table 13-2. Index-Related Wait Statistics

  • CXPACKET: Synchronizes the threads involved in a parallel query. This wait type only means that a parallel query is processing, with either an unbalanced workload or a worker blocked by a preceding request.
  • IO_COMPLETION: Indicates a wait for (typically synchronous) I/O operations, such as sorts and other situations where the engine needs to perform synchronous I/O. This wait type represents nondata page I/Os.
  • LCK_M_*: Occurs when a task is waiting to acquire a lock on an index or table.
  • PAGEIOLATCH_*: Occurs when a task is waiting on a latch for a buffer that is in an I/O request. Long waits may indicate problems with the disk subsystem.

Similar to performance counters, wait statistics are general indicators of health that reflect the state of the SQL Server instance as a whole. They do not point directly to the databases or objects involved; instead, they record when there was a wait for a specific resource on the SQL Server instance.

Image Note  Many performance monitoring tools from third-party vendors collect wait statistics as a part of their monitoring. If there is a tool already installed in your environment, check to see whether wait statistics information can be retrieved from that tool.

The process for collecting wait statistics follows the pattern of using snapshot and history tables. To do this, the data will be collected first in a snapshot table with the deltas between snapshots stored in a history table. The snapshot and history tables, shown in Listing 13-13, contain the columns needed to support the snapshot and history patterns.

To collect the wait statistics information, the output from sys.dm_os_wait_stats is queried. Unlike the other DMOs discussed in this chapter, some summarization of the information needs to occur prior to inserting the data. In previous versions of SQL Server, the wait_stats DMO contained two rows for the wait type MISCELLANEOUS. To accommodate this variance, the sample script in Listing 13-14 uses aggregation to work around the issue. Another difference between wait_stats_snapshot and the other snapshots is the frequency at which the information should be collected. Wait_stats reports on when requested resources were not available, and being able to tie this information to specific times of the day can be critical. As such, wait_stats information should be collected about once every hour.
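
Assuming a dbo.wait_stats_snapshot table with a create_date column plus the columns of the DMO, the snapshot with the aggregation could look like this sketch:

INSERT INTO dbo.wait_stats_snapshot
        (create_date, wait_type, waiting_tasks_count, wait_time_ms,
         max_wait_time_ms, signal_wait_time_ms)
SELECT  GETDATE(), wait_type,
        SUM(waiting_tasks_count), SUM(wait_time_ms),
        MAX(max_wait_time_ms), SUM(signal_wait_time_ms)
FROM sys.dm_os_wait_stats
GROUP BY wait_type;   -- collapses duplicate rows, such as the MISCELLANEOUS wait type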

With each snapshot collected, the delta between it and the previous snapshot needs to be added in the wait_stats_history table. For determining when the information in sys.dm_os_wait_stats has been reset, the column waiting_tasks_count is utilized. If the value in the column is lower than the previous snapshot, the information in the DMO is reset. Listing 13-15 provides the code for populating the history table.
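
A sketch of the history population follows, assuming a dbo.wait_stats_history table with the same columns as the snapshot table; rows where waiting_tasks_count has not increased are skipped, which covers both resets and wait types with no new activity.

WITH snaps AS (
    SELECT *, DENSE_RANK() OVER (ORDER BY create_date DESC) AS snapshot_rank
    FROM dbo.wait_stats_snapshot
)
INSERT INTO dbo.wait_stats_history
        (create_date, wait_type, waiting_tasks_count, wait_time_ms,
         max_wait_time_ms, signal_wait_time_ms)
SELECT  c.create_date, c.wait_type,
        c.waiting_tasks_count - p.waiting_tasks_count,
        c.wait_time_ms - p.wait_time_ms,
        c.max_wait_time_ms,
        c.signal_wait_time_ms - p.signal_wait_time_ms
FROM snaps c
INNER JOIN snaps p
        ON p.wait_type = c.wait_type
       AND p.snapshot_rank = 2
WHERE c.snapshot_rank = 1
  AND c.waiting_tasks_count > p.waiting_tasks_count;   -- lower means the DMO was reset; equal means no new activity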

Data Cleanup

While all the information for monitoring is needed for the index analysis, this information is not needed indefinitely. The process for monitoring would not be complete without tasks in place to clean up the information collected after a reasonable amount of time. A generally acceptable schedule for cleaning up information is to purge snapshots after 3 days and history information after 90 days.

The snapshot information is used simply to prepare the history information and is not needed after the delta is created. Since SQL Server Agent jobs can fail and a collection point may end up a day apart from the previous one, a three-day window generally provides the leeway needed to support the process and accommodate any issues that may arise.

The data in the history tables is more crucial than the snapshot information and needs to be kept longer. This information feeds the activities during index analysis. The window for retaining this information should match the amount of time that it generally takes to go through the Indexing Method three or more times. This way, the information retained can be used for reference in a few cycles of the process.

When scheduling the cleanup process, it should be at least daily and during noncore processing hours. This will minimize the amount of information deleted in each execution and reduce the possible contention of the delete with other activity on the server. The delete script, shown in Listing 13-16, covers each of the tables discussed throughout this section.
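
The cleanup follows a simple delete pattern along these lines; the table names below match the ones used in the sketches earlier in the chapter, so adjust them to whatever names you settled on.

-- snapshots are only needed long enough to build the next delta
DELETE FROM dbo.index_usage_stats_snapshot
WHERE create_date < DATEADD(DAY, -3, GETDATE());

DELETE FROM dbo.wait_stats_snapshot
WHERE create_date < DATEADD(DAY, -3, GETDATE());

-- history feeds the index analysis and is retained for 90 days
DELETE FROM dbo.index_usage_stats_history
WHERE create_date < DATEADD(DAY, -90, GETDATE());

DELETE FROM dbo.index_physical_stats_history
WHERE create_date < DATEADD(DAY, -90, GETDATE());

-- repeat the same pattern for the remaining snapshot and history tables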

Event Tracing

The last set of information that should be collected for monitoring indexes is event tracing. Event tracing captures the SQL statements that represent activity in your production environment; during index analysis, these statements help identify indexes that could be useful given the queries being run and the data being stored. While the statistics collected so far show the effect of activity on indexes and other resources on the SQL Server instance, event tracing collects the activity that causes those statistics. With SQL Server, there are two methods that can be used to collect event tracing data.

  • SQL Trace
  • Extended Events

For the purposes of completeness, both methods will be discussed. In my view, only Extended Events should be used to collect event tracing data in SQL Server.

SQL Trace

SQL Trace, and by extension SQL Profiler, is the original tracing tool for SQL Server. It’s one of the most common tools that DBAs have in their back pockets and can easily collect events in SQL Server. With SQL Trace, there are a number of areas to be careful of when collecting information. First, SQL Trace will likely collect a lot of information, and this will need to be accommodated. In other words, the more active the server and the databases, the larger the trace (.trc) files will be. Along these same lines, don’t collect the trace information on drives that are already heavily used or dedicated to data or transaction log files. Doing this can, and likely will, impact the performance of I/O on those files. The end goal for monitoring is to improve the performance of the system; care needs to be taken to minimize the impact of monitoring.

Finally, SQL Trace and SQL Profiler are deprecated in SQL Server 2012. This doesn’t mean that these tools no longer function, but they are slated for removal in a future SQL Server release. While SQL Trace is deprecated, it is still the ideal tool for collecting trace information, which will later be used with the Database Engine Tuning Advisor.

Image Note  It is always advisable to keep apprised of deprecated features within SQL Server. For more information on deprecated features, see Books Online at http://technet.microsoft.com/en-us/library/ms143729(v=sql.110).aspx.

There are four basic steps to creating a SQL Trace session.

  1. Build the trace session.
  2. Assign the events and columns to the session.
  3. Add filters to the session.
  4. Start the SQL Trace session.

The next few pages will cover these steps and describe the components used in creating the SQL Trace session.

To begin monitoring with SQL Trace, a trace session must first be created. Sessions are created using the sp_trace_create stored procedure. This procedure accepts a number of parameters that configure how the session will collect information. In the example session, shown in Listing 13-17, the SQL Trace session creates files that automatically roll over when they reach the 50MB file size limit. The file size is limited to allow for better file management; it’s easier to copy 50MB files than files that are 1GB or more. Also, the trace files are created in c:\temp with the file name IndexingMethod. Note that this name can be changed to anything that suits the needs of the server and databases you are monitoring.
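
A sketch of that session creation could look like the following; the options value 2 enables file rollover, and the variable names are illustrative.

DECLARE @trace_id int,
        @max_file_size bigint;

SET @max_file_size = 50;   -- in MB; a new rollover file is created at this size

-- @options = 2 enables TRACE_FILE_ROLLOVER; the .trc extension is appended automatically
EXECUTE sp_trace_create @trace_id OUTPUT, 2, N'c:\temp\IndexingMethod', @max_file_size;

SELECT @trace_id AS trace_id;   -- the remaining steps reference this trace id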

After creating the SQL Trace session, the next step is to add events to the session. There are two events that will collect the information that is of most value to index monitoring: RPC:Completed and SQL:BatchCompleted. RPC:Completed returns results whenever a remote procedure call completes; the best example of this is the completion of a stored procedure. The other event, SQL:BatchCompleted, occurs when ad hoc and prepared batches are completed. Between these two events, all the completed SQL statements on the server will be collected.

To add events to the SQL Trace session, you use the sp_trace_setevent stored procedure. The stored procedure adds one event and one column from that event to the trace with each execution. For two events with 15 columns each, the stored procedure will need to be executed 30 times. For the example session, shown in Listing 13-18, the following columns are being collected for each of the events (a brief sketch of the pattern follows the list):

  • ApplicationName
  • ClientProcessID
  • CPU
  • DatabaseID
  • DatabaseName
  • Duration
  • EndTime
  • HostName
  • LoginName
  • NTUserName
  • Reads
  • SPID
  • StartTime
  • TextData
  • Writes
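
The following sketch shows the pattern for a few of the columns; the remaining event and column combinations follow the same form, with the numeric IDs taken from sys.trace_events and sys.trace_columns.

DECLARE @trace_id int = 1,   -- replace with the id returned by sp_trace_create
        @on bit = 1;

-- event 10 = RPC:Completed, event 12 = SQL:BatchCompleted (sys.trace_events)
-- column 1 = TextData, 13 = Duration, 16 = Reads, 18 = CPU (sys.trace_columns)
EXECUTE sp_trace_setevent @trace_id, 10, 1, @on;
EXECUTE sp_trace_setevent @trace_id, 10, 13, @on;
EXECUTE sp_trace_setevent @trace_id, 10, 16, @on;
EXECUTE sp_trace_setevent @trace_id, 10, 18, @on;
EXECUTE sp_trace_setevent @trace_id, 12, 1, @on;
EXECUTE sp_trace_setevent @trace_id, 12, 13, @on;
EXECUTE sp_trace_setevent @trace_id, 12, 16, @on;
EXECUTE sp_trace_setevent @trace_id, 12, 18, @on;
-- ...continue the same pattern for the remaining columns in the list above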

You can find the codes for the events and columns in the system catalog views. Events are listed in the view sys.trace_events. The columns available are listed in sys.trace_columns. The columns view also includes an indicator to identify whether the values from the column can be filtered, which is useful in the next step in creating SQL Trace sessions.

The next step is to filter out unneeded events from the SQL Trace session. There is no need to collect all statements all the time for all databases and all applications with every SQL Trace session. In fact, in Listing 13-19, events from the system databases, those with a database ID less than 5, are removed from the session. The stored procedure for filtering SQL Trace sessions is sp_trace_setfilter. The stored procedure accepts the ID for columns from sys.trace_columns. Columns not included in the events can be filtered, and filters apply to all events.
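
Such a filter could be set up as in the following sketch; the codes come from the comparison and logical operator values documented for sp_trace_setfilter.

DECLARE @trace_id int = 1;   -- replace with the id returned by sp_trace_create

-- column 3 = DatabaseID; logical operator 0 = AND; comparison operator 4 = greater than or equal;
-- this keeps only events from databases with an ID of 5 or higher
EXECUTE sp_trace_setfilter @trace_id, 3, 0, 4, 5;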

The last step in setting up the monitoring for SQL Trace is to start the trace. This task is accomplished using the sp_trace_setstatus stored procedure, shown in Listing 13-20. Through this procedure, SQL Trace sessions can be started, paused, and stopped. Once the trace is started, it will start to create .trc files in the file location provided, and the configuration for SQL Trace monitoring will be complete. When the collection period for the SQL Trace session completes, this script will be used with the status code 2 instead of 1 to terminate the session. Listing 13-21 provides this script.
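
A sketch of starting the trace follows, with the stop and close calls included as comments for later use.

DECLARE @trace_id int = 1;   -- replace with the id returned by sp_trace_create

EXECUTE sp_trace_setstatus @trace_id, 1;   -- 1 = start the trace

-- when the collection window ends:
-- EXECUTE sp_trace_setstatus @trace_id, 0;   -- 0 = stop the trace
-- EXECUTE sp_trace_setstatus @trace_id, 2;   -- 2 = close the trace and remove its definition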

Image Note  SQL Server experts often find it unfashionable to use the Database Engine Tuning Advisor, instead preferring to manually analyze the database and determine the indexes needed. This preference misses the opportunity to uncover low-hanging fruit or situations where changing the location of the clustered index can improve performance.

The SQL Trace session example in this section is fairly basic. In your environment, you may need to have a more intelligent process that collects information in each trace file for a specified amount of time instead of using a file size to control the file rollover rate. These types of changes to collecting information from SQL Trace for monitoring indexes should have no impact on your ability to use the SQL Trace information for the purposes intended later in this chapter. There is one last item to consider with the SQL Trace information. Trace information does not need to constantly be gathered, like performance counter and DMO information. Instead, the SQL Trace information is often better suited to being collected for a four- to eight-hour period that represents a regular day of activity on your database platform. With SQL Trace, you can collect too much information, which can overwhelm the analyze phase and delay indexing recommendations.

Extended Events

Extended Events, introduced in SQL Server 2008, is an alternative tracing tool in SQL Server; it’s more functional but less popular than SQL Trace. Given a choice, you should create your traces with Extended Events rather than SQL Trace. There are two ways to create Extended Events sessions. The first is through T-SQL, which will be demonstrated in this chapter. The second uses a GUI in SQL Server Management Studio, introduced in SQL Server 2012, that includes wizards for building a new session. The best practices for session creation are largely the same as for SQL Trace; for instance, be sure to write session output to files on drives other than those where data and transaction log files are stored.

The trace you’ll create in Extended Events will collect the same general information as SQL Trace. The main differences will be how the session is created and some of the names of events and columns. Instead of RPC:Completed and SQL:BatchCompleted, the events to capture in Extended Events are rpc_completed and sql_batch_completed, respectively. Each of these events captures its own set of columns, or data elements, which are listed in Table 13-3.

Table 13-3. Extended Events Columns

  • rpc_completed: connection_reset_option, cpu_time, data_stream, duration, logical_reads, object_name, output_parameters, physical_reads, result, row_count, statement, writes
  • sql_batch_completed: batch_text, cpu_time, duration, logical_reads, physical_reads, result, row_count, writes

You’ll also include some additional data in the Extended Events session that is available as global fields, or actions, which can be used to extend the default information included in each event. These are similar to the columns included in the SQL Trace session from the previous section. The global fields to be included are

  • client_app_name
  • client_hostname
  • database_id
  • database_name
  • nt_username
  • process_id
  • session_id
  • sql_text
  • username

With the session defined, the next step is to create the session. Extended Events uses T-SQL data definition language (DDL) statements instead of stored procedures to create sessions. The code in Listing 13-22 provides the DDL for the session and starts it. For each event added, the ADD EVENT syntax is used, and the ACTION clause includes the global fields. For convenience, the session is configured to store its output in the default log folder for SQL Server in the file EventTracingforIndexTuning.
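
A sketch of such a session follows. The action sqlserver.client_pid stands in for the process ID field in the list above; action names can vary slightly between versions, so verify them against sys.dm_xe_objects on your instance.

CREATE EVENT SESSION EventTracingforIndexTuning
ON SERVER
ADD EVENT sqlserver.rpc_completed
(
    ACTION (sqlserver.client_app_name, sqlserver.client_hostname,
            sqlserver.client_pid, sqlserver.database_id, sqlserver.database_name,
            sqlserver.nt_username, sqlserver.session_id,
            sqlserver.sql_text, sqlserver.username)
),
ADD EVENT sqlserver.sql_batch_completed
(
    ACTION (sqlserver.client_app_name, sqlserver.client_hostname,
            sqlserver.client_pid, sqlserver.database_id, sqlserver.database_name,
            sqlserver.nt_username, sqlserver.session_id,
            sqlserver.sql_text, sqlserver.username)
),
ADD TARGET package0.event_file
(
    SET filename = N'EventTracingforIndexTuning'   -- written to the instance's default LOG folder
);
GO

ALTER EVENT SESSION EventTracingforIndexTuning ON SERVER STATE = START;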

Similar to SQL Trace sessions, Extended Events sessions can be started and stopped. There is no need to pause them since the metadata for a session exists independent from whether the session is running. Listing 13-22 includes the syntax for starting the trace. Listing 13-23 shows the code to stop the trace.
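
Stopping the session later is a single statement, shown here for the illustrative session above.

ALTER EVENT SESSION EventTracingforIndexTuning ON SERVER STATE = STOP;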

This Extended Events session is pretty simple. The nice thing about it is its ability to easily capture workloads from your SQL Server instances. Using the workloads from tracing, you can begin to understand how the SQL Server instance is being queried and the types of indexes that will help improve the performance of your environment.

Summary

In this chapter, you walked through the steps to monitor your indexes. Monitoring indexes is an extension of general platform monitoring, but it is an important part of providing the foundation for determining whether you have the right indexes and for analyzing them. Through the monitoring, I reviewed how to gather performance counters, dynamic management data, and event traces. In the next chapter, you’ll look at how you can apply this information to analyze whether you have the right indexes.
