Skill 4.3: Manage performance for database instances

SQL Server 2016 and Azure SQL Database include many features that help you monitor and manage the performance of database instances. In this section, we review your options for allocating and optimizing server resources. In addition, we introduce dynamic management views (DMVs) and SQL Server performance counters that you can use to monitor and troubleshoot database performance over time.

Manage database workload in SQL Server

The SQL Server Resource Governor helps you manage database workloads by setting limits for the amount of CPU, IO, and memory that incoming requests can consume. Within Resource Governor, a workload is a set of queries or requests for which SQL Server should consistently allocate a specific set of resources. This capability is useful when you manage multiple tenants on the same server and need to minimize the impact of one tenant’s workload on the other tenants’ workloads, or when you need to track resource consumption by workload for chargeback purposes, to name two examples.


Important Editions supporting Resource Governor

Resource Governor is supported only in the Enterprise, Developer, or Evaluation editions of SQL Server.


Figure 4-38 shows the relationship between several components managed by Resource Governor. A resource pool defines the physical resources of the server and behaves much like a virtual server. SQL Server creates an internal pool and a default pool during installation, and you can add user-defined resource pools. You associate one or more workload groups, a set of requests having common characteristics, to a resource pool. As SQL Server receives a request from a session, the classification process assigns it to the workload group having matching characteristics. You can fine-tune the results of this process by creating classifier user-defined functions.

An illustration shows the components managed by the Resource Governor. At the top of the diagram is a rectangle labeled Session with an arrow connecting it to a circle labeled Classification. To the left of Classification is a rectangle labeled Classifier Function with an arrow connecting the rectangle to the circle. Arrows point from Classification to the following five rectangles on a single row below the circle: Default Group, Default Group External Resource, Internal Group, Group A, and Group B external resource. Each of these rectangles has an arrow connecting it to one of the following corresponding rectangles displayed in the bottom row: Default Pool, Default Pool External Resource, Internal Pool, Pool A, and Pool B External Resource.

FIGURE 4-38 Resource Governor components

You must enable Resource Governor to start using it. You can do this in SQL Server Management Studio by expanding the Management node in Object Explorer, right-clicking Resource Governor, and selecting Enable. As an alternative, you can execute the following T-SQL statement:

ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
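
You can confirm that Resource Governor is enabled, and see which classifier function is registered, by querying the sys.resource_governor_configuration catalog view:

SELECT classifier_function_id, is_enabled
FROM sys.resource_governor_configuration;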


Image Exam Tip

For the exam, you should be able to review a Resource Governor configuration and identify which workload group will execute a specified T-SQL statement. To correctly answer this type of question, you should have a thorough understanding of Resource Governor’s architecture and configuration.


Resource pools

You distribute the memory, CPU, and IO available to SQL Server among resource pools as a means of reducing contention between workloads. Except for the external resource pool described later in this section, each resource pool is configured with the following settings: Minimum CPU%, Maximum CPU%, Minimum Memory %, and Maximum Memory %. The sum of Minimum CPU% across all resource pools cannot exceed 100, and the same is true of Minimum Memory %. These values represent the guaranteed average amount of the respective resource that each resource pool can use to respond to requests. The Maximum CPU% and Maximum Memory % reflect the maximum average amount for the respective resources. SQL Server can use more than the maximum percentage defined for a resource if it is available. To prevent this behavior, you can configure a hard cap on CPU (CAP_CPU_PERCENT) for the resource pool.

After you enable Resource Governor, SQL Server has the following types of resource pools:

Image Internal SQL Server uses the internal resource pool for resources required to run the database engine. You cannot change the resource configuration for the internal resource pool. SQL Server creates one when you enable the Resource Governor.

Image Default In SQL Server 2016, there is one resource pool for standard database operations and a separate resource pool for external processes such as R script execution. These two resource pools are created when you enable the Resource Governor.

Image External An external resource pool is a new type for SQL Server 2016 that was added to support R Services. Because the execution of R scripts can be resource-intensive, the ability to manage resource consumption by using Resource Governor is necessary to protect normal database operations. In addition, you can add an external resource pool to allocate resources for other external processes. The configuration for an external resource pool differs from the other resource pool types and includes only the following settings: Maximum CPU%, Maximum Memory %, and Maximum Processes. (See the example following this list.)

Image User-defined resource pool You can add a resource pool to allocate resources for database operations related to a specific workload.
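
For reference, the following sketch shows how you might create a user-defined external resource pool to constrain R script execution. The pool name and values are hypothetical; as with other Resource Governor changes, the configuration takes effect after you run ALTER RESOURCE GOVERNOR RECONFIGURE:

CREATE EXTERNAL RESOURCE POOL poolExamBookRScripts
WITH (
    MAX_CPU_PERCENT = 50,
    MAX_MEMORY_PERCENT = 25,
    MAX_PROCESSES = 4
);
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO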


Note Maximum number of supported resource pools per instance

SQL Server supports a maximum of 64 resource pools per instance.


You can use the Resource Governor node in Object Explorer to open a dialog box and add or configure resource pools as needed, although this interface does not include all of the settings that you can configure by using T-SQL. When you create a resource pool by using T-SQL, as shown in Listing 4-33, you specify any or all arguments for CPU, scheduler affinity, memory, and I/O operations per second (IOPS).

Listing 4-33 Create user-defined resource pools


CREATE RESOURCE POOL poolExamBookDaytime
WITH (
    MIN_CPU_PERCENT = 50,
    MAX_CPU_PERCENT = 80,
    CAP_CPU_PERCENT = 90,
    AFFINITY SCHEDULER = (0 TO 3),
    MIN_MEMORY_PERCENT = 50,
    MAX_MEMORY_PERCENT = 100,
    MIN_IOPS_PER_VOLUME = 20,
    MAX_IOPS_PER_VOLUME = 100
);
GO
CREATE RESOURCE POOL poolExamBookNighttime
WITH (
    MIN_CPU_PERCENT = 0,
    MAX_CPU_PERCENT = 50,
    CAP_CPU_PERCENT = 50,
    AFFINITY SCHEDULER = (0 TO 3),
    MIN_MEMORY_PERCENT = 5,
    MAX_MEMORY_PERCENT = 15,
    MIN_IOPS_PER_VOLUME = 45,
    MAX_IOPS_PER_VOLUME = 100
);
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO



Need More Review? Additional information regarding resource pool creation

For more information about using T-SQL to create a resource pool, see “CREATE RESOURCE POOL (Transact-SQL)” at https://msdn.microsoft.com/en-us/library/bb895329.aspx.


Workload groups

Resource Governor monitors the resources consumed in aggregate by the sessions in a workload group to ensure consumption does not exceed the thresholds defined for both the workload group and the resource pool to which it is assigned. The predefined resource pools each have a predefined workload group, but you can also add workload groups to the default, external, and user-defined resource pools.

When you configure a workload group, as shown in Listing 4-34, you can specify the relative importance of a workload group as compared only to other workload groups in the same resource pool. You can also specify the maximum amount of memory or CPU time that a request in the workload group can acquire from the resource pool, the maximum degree of parallelism (DOP) for parallel requests, and the maximum number of concurrent requests.

Listing 4-34 Create workload groups


CREATE WORKLOAD GROUP apps
WITH (
    IMPORTANCE = HIGH,
    REQUEST_MAX_MEMORY_GRANT_PERCENT = 35,
    REQUEST_MAX_CPU_TIME_SEC = 0, --0 = unlimited
    REQUEST_MEMORY_GRANT_TIMEOUT_SEC = 60, --seconds
    MAX_DOP = 0, -- uses global setting
    GROUP_MAX_REQUESTS = 1000 --0 = unlimited
)
USING "poolExamBookNighttime";
GO
CREATE WORKLOAD GROUP reports
WITH (
    IMPORTANCE = LOW,
    REQUEST_MAX_MEMORY_GRANT_PERCENT = 25,
    REQUEST_MAX_CPU_TIME_SEC = 0, --0 = unlimited
    REQUEST_MEMORY_GRANT_TIMEOUT_SEC = 60, --seconds
    MAX_DOP = 0, -- uses global setting
    GROUP_MAX_REQUESTS = 100 --0 = unlimited
)
USING "poolExamBookNighttime";
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO


Classifier user-defined functions

Resource Governor assigns a request to the default group if no criteria match the request to a workload group. You must create a user-defined function to provide the criteria necessary to assign a request to a specific workload group. If the user-defined function assigns the request to a non-existent workload group, or if the classification process fails for any reason, Resource Governor assigns the request to the default group.

Let’s say that you want to establish a classification function to assign a request to a workload group based on the time of day. Furthermore, you want to use a lookup table for the start and end times applicable to a workload group. Let’s start by creating and adding rows to the lookup table, as shown in Listing 4-35. Note that you must create this table in the master database because classifier functions are schema-bound and must be created in master.

Listing 4-35 Create lookup table


USE master
GO
CREATE TABLE tblClassificationTime  (
    TimeOfDay SYSNAME NOT NULL,
    TimeStart TIME NOT NULL,
    TimeEnd   TIME NOT NULL
) ;
GO
INSERT INTO tblClassificationTime
VALUES('apps', '8:00 AM', '6:00 PM');
GO
INSERT INTO tblClassificationTime
VALUES('reports', '6:00 PM', '8:00 AM');
GO


Next, you create the classifier function that uses the lookup table to instruct the Resource Governor which workload group to use when classifying an incoming request. An example of such a classifier function is shown in Listing 4-36.

Listing 4-36 Create and register classifier function


USE master;
GO
CREATE FUNCTION fnTimeOfDayClassifier()
RETURNS sysname
WITH SCHEMABINDING  AS
BEGIN
    DECLARE @TimeOfDay sysname
    DECLARE @loginTime time
    SET @loginTime = CONVERT(time,GETDATE())
    SELECT
        TOP 1 @TimeOfDay = TimeOfDay
    FROM dbo.tblClassificationTime
    -- Match ranges that cross midnight (like the reports row) as well
    WHERE (TimeStart <= TimeEnd AND @loginTime BETWEEN TimeStart AND TimeEnd)
        OR (TimeStart > TimeEnd AND (@loginTime >= TimeStart OR @loginTime <= TimeEnd))
    IF(@TimeOfDay IS NOT NULL)
        BEGIN
            RETURN @TimeOfDay
        END
    RETURN N'default'
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnTimeOfDayClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO



Note Resources for checking configuration of Resource Governor components

You can confirm the configuration of resource pools, workload groups, and user-defined classifier functions by querying the sys.resource_governor_resource_pools, sys.resource_governor_workload_groups, and sys.resource_governor_configuration catalog views, respectively. For more information, see “Create and Test a Classifier User-Defined Function” at https://msdn.microsoft.com/en-us/library/cc645892.aspx.


Resource Governor management queries

After you configure all the components necessary for Resource Governor, you can monitor resource consumption by using any of the queries shown in Listing 4-37.

Listing 4-37 Monitor resource consumption


--Current runtime data
SELECT * FROM sys.dm_resource_governor_resource_pools;
GO

SELECT * FROM sys.dm_resource_governor_workload_groups;
GO

--Determine the workload group for each session
SELECT
    s.group_id,
    CAST(g.name as nvarchar(20)) AS WkGrp,
    s.session_id,
    s.login_time,
    CAST(s.host_name as nvarchar(20)) AS Host,
    CAST(s.program_name AS nvarchar(20))  AS Program
FROM sys.dm_exec_sessions s
INNER JOIN sys.dm_resource_governor_workload_groups g
    ON g.group_id = s.group_id
ORDER BY g.name ;
GO

SELECT
    r.group_id,
    g.name,
    r.status,
    r.session_id,
    r.request_id,
    r.start_time,
    r.command,
    r.sql_handle,
    t.text
FROM sys.dm_exec_requests r
INNER JOIN sys.dm_resource_governor_workload_groups g
    ON g.group_id = r.group_id
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
ORDER BY g.name;
GO

-- Determine the classifier running the request
SELECT
    s.group_id,
    g.name,
    s.session_id,
    s.login_time,
    s.host_name,
    s.program_name
FROM sys.dm_exec_sessions s
INNER JOIN sys.dm_resource_governor_workload_groups g
    ON g.group_id = s.group_id  AND
        s.status = 'preconnect'
ORDER BY g.name;
GO

SELECT
    r.group_id,
    g.name,
    r.status,
    r.session_id,
    r.request_id,
    r.start_time,
    r.command,
    r.sql_handle,
    t.text
FROM sys.dm_exec_requests r
INNER JOIN sys.dm_resource_governor_workload_groups g
    ON g.group_id = r.group_id
        AND r.status = 'preconnect'
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
ORDER BY g.name;
GO


Design and implement Elastic Scale for Azure SQL Database

Elastic Scale is a feature in SQL Database that you use to adjust the database capacity to match the scalability requirements for different applications. In other words, you can grow or shrink the database by using a technique known as sharding, which partitions your data across identically structured databases. Sharding is useful when the application data in aggregate exceeds the maximum size supported by SQL Database or when you need to separate data by geography for compliance, latency, or geopolitical reasons.

Although sharding is not a new concept, it requires the use of custom code to create and manage sharded applications and adds complexity to your solution architecture. Elastic Scale provides an elastic database client library and a Split-Merge service that help simplify the management of your applications. That way you can adapt the capacity of SQL Database to support varying workloads and ensure consistent performance without manual intervention.

Elastic database client library

You must use the elastic database client library to implement standard sharding patterns in SQL Database by calling its features in your elastic scale application. You use it to perform operations across all shards as a unit or to perform operations on individual shards, as shown in Figure 4-39. The elastic database client library provides the following features:

Image Shard map management You first register each database as a shard, and then define a shard map manager that directs connection requests to the correct shard by using a sharding key or a key range. A sharding key is data such as a customer ID number that the database engine uses to keep related transactions in one database.

Image Data-dependent routing Rather than define a connection in your application, you can use this feature to automatically assign a connection to the correct shard.

Image Multishard querying The database engine uses this feature to process queries in parallel across separate shards and then combine the results into a single result set.

Image Shard elasticity This feature monitors resource consumption for the current workload, dynamically allocates more resources as necessary, and shrinks the database to its normal state when those resources are no longer required.

An illustration shows the relationship between an elastic scale application and Elastic Scale for Azure SQL Database. On the right side of the diagram is a rectangle labeled Elastic Scale app. It contains two rectangles, each labeled Client Library. On the left side of the diagram is a set of four SQL Database icons, labeled from Shard 1 through Shard n. Lines connect these four icons to a rectangle above them labeled Shard Map Manager. An arrow between Shard Map Manager and one Client Library rectangle is labeled Cross-shard Operations. An arrow between the set of SQL Databases and the other Client Library rectangle is labeled Local Shard Operations.

FIGURE 4-39 Sharding management with elastic database client library


Note Learning more about working with the elastic database client library

To use the elastic database client library, you must use Visual Studio 2012 (or higher), C#, and NuGet 2.7 (or higher). You can learn more about working with this client library in “Get started with Elastic Database tools” at https://azure.microsoft.com/en-us/documentation/articles/sql-database-elastic-scale-get-started/.


Split-Merge service

You use the Split-Merge service to add or remove databases as shards from the shard set and redistribute the data across more or fewer shards, as shown in Figure 4-40. As demand increases, you can split the data out across a greater number of shards. Conversely, you can merge the data into fewer shards as demand lowers.

An illustration shows the effect of using the split-merge service. At the top of the diagram is a rectangle labeled Split-Merge Service with a line connecting it to a rectangle labeled Shard Map Manager. Below the Shard Map Manager is a set of four SQL Database icons labeled as a group as Before and individually as Shard 1, Shard 2, Shard 3, and Shard n. Below this set of icons is another set of three SQL Database icons labeled as a group as After and individually as Shard x, Shard y, and Shard z. Lines connect Shard 1 to Shard x and Shard y, below which appears the label Split. Lines connect Shard 2, Shard 3, and Shard n to Shard z, below which appears the label Merge.

FIGURE 4-40 Examples of splitting and merging shards


Note A tutorial for using the split-merge service

A tutorial that includes a link for the split-merge tool and instructions for using it is available at “Deploy a split-merge service,” https://docs.microsoft.com/en-us/azure/sql-database/sql-database-elastic-scale-configure-deploy-split-and-merge.



Need More Review? Elastic Scale resources online

For the exam, you should be familiar with the architecture, features, and tools of Elastic Scale and understand use cases. For more details about Elastic Scale, refer to “Scaling out with Azure SQL Database” at https://docs.microsoft.com/en-us/azure/sql-database/sql-database-elastic-scale-introduction.


Select an appropriate service tier or edition

Microsoft uses editions for SQL Server 2016 and service tiers for Azure SQL Database to provide combinations of product features, performance, and price levels so that you can select one that best meets your application’s requirements. You should be familiar with the general features and limitations of each edition and understand the differences between each edition for the exam.

SQL Server 2016 is available in the following editions:

Image Express This edition is a free version of SQL Server with limited features that you can use for small applications and Web sites. The maximum database size supported by this edition is 10 GB. It uses up to 1 GB of memory and the lesser of 1 physical processor or 4 cores. There are three types of SQL Server 2016 Express from which to choose:

Image LocalDB You use LocalDB for a simple application with a local embedded database that runs in single-user mode.

Image Express You use Express when your application requires a small database only and does not require any other components packaged with SQL Server in the Standard edition. You can install this edition on a server and then enable remote connections to support multiple users.

Image Express with Advanced Services This edition includes the database engine as well as Full Text Search and Reporting Services.

Image Web This edition is scalable up to 64 GB of memory and the lesser of 4 physical processors or 16 cores with a maximum database size of 524 PB. It includes the database engine, but without support for availability groups and other high-availability features. It also does not include many of the advanced security and replication features available in Standard or Enterprise edition, nor does it include the business intelligence components such as Analysis Services and Reporting Services, among others. Web edition is intended for use only by Web hosters and third-party software service providers.

Image Standard This edition scales to 128 GB of memory and the lesser of 4 physical processors or 24 cores. The maximum database size with Standard edition is 524 PB. This edition includes core database and business intelligence functionality and includes basic high-availability and disaster recovery features, new security features such as row-level security and dynamic data masking, and access to non-relational data sources by using JSON and PolyBase.

Image Enterprise This edition includes all features available in the SQL Server platform and provides the highest scalability, greatest number of security features, and the most advanced business intelligence and analytics features. Like Standard edition, Enterprise edition supports a database size up to 524 PB, but its only limits on memory and processor sizes are the maximums set by your operating system.

To support higher availability, this edition supports up to 8 secondary replicas, with up to two synchronous secondary replicas, in an availability group, online page and file restore, online indexing, fast recovery, mirrored backups, and the ability to hot add memory and CPU.

For greater performance, Enterprise edition supports in-memory OLTP, table and index partitioning, data compression, Resource Governor, parallelism for partitioned tables, multiple file stream containers, and delayed durability, among other features.

Enterprise edition includes many security features not found in Standard edition. In particular, Always Encrypted protects data at rest and in motion. Additional security features exclusive to Enterprise edition include more finely-grained auditing, transparent data encryption, and extensible key management.

Features supporting data warehouse operations found only in Enterprise edition include change data capture, star join query optimizations, and parallel query processing on partitioned indexes and tables.

Image Developer This edition is for developers who create, test, and demonstrate applications using any of the data platform components available in Enterprise edition. However, the Developer edition cannot be used in a production environment.

Image Evaluation This edition is a free trial version of SQL Server 2016 that you can use for up to 180 days to explore all of the features available in Enterprise edition before making a purchasing decision.


Need More Review? Features and limitations of SQL Server editions

A complete list of the features and limitations of each edition is available at https://technet.microsoft.com/en-us/windows/cc645993(v=sql.90).


When you select the type of SQL Database to implement, you choose a service tier along with a performance level. The service tier sets the maximum database size, while the performance level determines the CPU, memory, and IO thresholds, which collectively are measured as a database transaction unit (DTU). When you create a new SQL Database, you can choose from the following service tiers:

Image Basic This service tier has a maximum database size of 2 GB and performance level of 5 DTUs. You use this option when you need a small database for an application or website with relatively few concurrent requests. The benchmark transaction rate is 16,600 transactions per hour.

Image Standard This service tier has a maximum database size of 250 GB and performance levels ranging from 10 to 100 DTUs. You can use this option when a database needs to support multiple applications and multiple concurrent requests for workgroup and web applications. With 50 DTUs, the benchmark transaction rate is 2,570 transactions per minute.

Image Premium This service tier has a maximum database size of 1 TB and performance levels ranging from 125 to 4,000 DTUs. You use this option for enterprise-level database requirements. With 1,000 DTUs, the benchmark transaction rate is 735 transactions per second.
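
When you create or scale a database by using T-SQL, you specify the service tier with the EDITION option and the performance level with the SERVICE_OBJECTIVE option. The following sketch, which uses a hypothetical database name, creates a Standard S2 database and later scales it to Premium P1; execute these statements in the master database of your logical server:

CREATE DATABASE ExamBookDB
    (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S2', MAXSIZE = 250 GB);
GO
ALTER DATABASE ExamBookDB
    MODIFY (SERVICE_OBJECTIVE = 'P1');
GO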

Optimize database file and tempdb configuration

One option that you have for improving the performance of read and write operations is to optimize the configuration of files that SQL Server uses to store data and log files. Your optimization goal is to reduce contention for storage and IO of files used not only by your database, but also by tempdb.

Database file optimization

When your application must support a high volume of read/write operations, you should consider taking the following steps to mitigate disk contention:

Image File placement Data and log files should be placed on separate physical disks for better performance. Recall from the explanation of transactions in Chapter 3, “Managing database concurrency,” that SQL Server writes each transaction to the log before updating the data file. By separating the two file types, the read/write head for the disk with the log file can work more efficiently without frequent interruptions by the writes to the data file. Furthermore, consider using a disk with high write performance for the log file. This recommendation is less applicable when the bulk of SQL Server activity is read operations.

By default, the data and log files for a new database are placed on the same drive, and normally in the same directory as the system databases: Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA. You can move the data and log files to a new location by using the ALTER DATABASE command as shown in Listing 4-38, replacing <databasename> and <drive:filepath> with the appropriate names for your database and folder structures.

Listing 4-38 Relocate data and log files


ALTER DATABASE <databasename>
SET OFFLINE;
GO
ALTER DATABASE <databasename>
MODIFY FILE (NAME = <databasename>_Data,
    FILENAME = '<drive:filepath>\Data\<databasename>_Data.mdf');
GO
ALTER DATABASE <databasename>
MODIFY FILE (NAME = <databasename>_Log,
    FILENAME = '<drive:filepath>\Log\<databasename>_Log.ldf');
GO
ALTER DATABASE <databasename>
SET ONLINE;


Another benefit of separating data and log files on separate drives is mitigating a potential failure. If the drive containing the data files fails, you can still access the log file from the other disk and recover data up to the point of failure.

Image Filegroups and secondary data files By default, each database has a primary filegroup that contains the primary data file, which holds the system tables and any data files created without a filegroup specification. SQL Server uses the primary filegroup as the default for new indexes and tables that you create without placing them in a specific filegroup, but you can create a new filegroup and designate it as the default. There can be only one default filegroup, but you can create as many additional filegroups as you need as containers for one or more data files. Files within a single filegroup can be spread across multiple disks to increase the parallelism of data access. Furthermore, you can separate tables or indexes that are heavily accessed from lesser-used tables or indexes by assigning them to different filegroups, which in turn are each placed on separate disks. Listing 4-39 shows how to create a database with multiple filegroups and how to add a filegroup to an existing database to isolate an index on its own disk:

Listing 4-39 Create and alter database with multiple filegroups


--Create a database on 4 drives
CREATE DATABASE <databasename> ON
PRIMARY
    (NAME = <databasename>, FILENAME = '<drive1:filepath>\<databasename>.mdf'),
FILEGROUP FGHeavyAccess1
    (NAME = <databasename>_1, FILENAME = '<drive2:filepath>\<databasename>_1.ndf')
LOG ON
    (NAME = <databasename>_1_log, FILENAME = '<drive3:filepath>\<databasename>_1_log.ldf'),
    (NAME = <databasename>_2_log, FILENAME = '<drive4:filepath>\<databasename>_2_log.ldf');
-- Add filegroup for index
ALTER DATABASE <databasename>
    ADD FILEGROUP FGIndex;
-- Add data file to the new filegroup on its own disk
ALTER DATABASE <databasename>
ADD FILE (
    NAME = <databasename>_ix,
    FILENAME = '<drive5:filepath>\<databasename>_ix.ndf',
    SIZE = 1024MB,
    MAXSIZE = 10GB,
    FILEGROWTH = 10%)
TO FILEGROUP FGIndex;
-- Add index to filegroup
CREATE NONCLUSTERED INDEX ix_Example
    ON Examples.BusyTable(TableColumn)
    ON FGIndex;


Image Partitioning You can use partitioning to place a table across multiple filegroups. Each partition should be in its own filegroup to improve performance. To map a value in a partition function to a specific filegroup, you use a partition scheme. Let’s say you have four filegroups—FGYear1, FGYear2, FGYear3, and FGYear4—and a partition function PFYearRange that defines four partitions for a table. You can create a partition scheme to apply the partition function to these filegroups as shown in Listing 4-40.

Listing 4-40 Create partition scheme to map partition function to filegroups


CREATE PARTITION SCHEME PSYear
    AS PARTITION PFYearRange
    TO (FGYear1, FGYear2, FGYear3, FGYear4);
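
Listing 4-40 assumes that the partition function PFYearRange already exists. As a point of reference, the following sketch shows a minimal definition with hypothetical boundary values; RANGE RIGHT with three boundaries yields the four partitions that PSYear maps to the four filegroups:

CREATE PARTITION FUNCTION PFYearRange (int)
    AS RANGE RIGHT FOR VALUES (2014, 2015, 2016);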


tempdb optimization

Because so many operations, such as cursors, temp tables, and sorts, to name a few, rely on tempdb, configuring tempdb properly is critical to the performance of the database engine. Consider performing the following steps to optimize tempdb configuration:

Image SIMPLE recovery model By using the SIMPLE recovery model, which is the default, SQL Server reclaims log space automatically so that the space required for the database is kept as low as possible.

Image Autogrowth You should keep the default setting, which allows tempdb files to grow automatically as needed.

Image File placement The tempdb data and log files should be placed on different disks than your production database data and log files. Do not place the tempdb data files on the C drive; otherwise, unexpected tempdb growth can fill the drive and prevent the server from starting. In addition, be sure to place the tempdb log file on its own disk. Regardless of placement, put tempdb files on fast drives.

Image Files per core In general, the number of data files for tempdb should be a 1:1 ratio of data files to CPU cores. In fact, in SQL Server 2016, the setup wizard now assigns the correct number based on the number of logical processors that it detects on your server, up to a maximum of 8, as shown in Figure 4-41.

Image File size When you configure the database engine at setup, the default initial file size of 8 MB and autogrowth setting of 64 MB recommended by the setup wizard are conservative and too small for most implementations. Instead, consider starting with an initial size of 4,096 MB and an autogrowth setting of 512 MB to reduce contention and minimize the impact of uncontrolled tempdb growth on performance. If you dedicate a drive to tempdb, you can presize the data files to fill the drive evenly and avoid performance issues caused by SQL Server pausing user activity while it grows the files.
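
To apply these recommendations to an existing instance, you can resize the tempdb files directly. The following sketch assumes the default logical file names, tempdev and templog; check sys.master_files for the actual names on your server:

ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 4096MB, FILEGROWTH = 512MB);
GO
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, SIZE = 4096MB, FILEGROWTH = 512MB);
GO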

A screen shot shows the TempDB configuration page in the SQL Server 2016 Setup wizard. The screenshot shows default settings for the author’s computer. For the TempDB data files, Number of Files is 4. Initial Size (MB) is 8. Total Initial Size (MB) is 32 and is not editable. Autogrowth (MB) is 64. Total Autogrowth (MB) is 256 and is not editable. For TempDB log file, Initial Size (MB) is 8 and Autogrowth (MB) is 64.

FIGURE 4-41 TempDB configuration in SQL Server 2016 Setup wizard


Need More Review? Optimization and capacity planning for tempdb

For specific size and placement recommendations, see “Optimizing tempdb Performance” at https://technet.microsoft.com/en-us/library/ms175527.aspx and “Capacity Planning for tempdb” at https://technet.microsoft.com/en-us/library/ms345368.aspx.


Optimize memory configuration

SQL Server’s memory manager dynamically allocates memory according to the workloads on the host computer and in the database engine. However, you can use the following server configuration options to optimize SQL Server memory:

Image min server memory Use this option to prevent SQL Server from releasing memory to the operating system when its memory usage drops to this threshold.

Image max server memory Use this option to ensure that other applications running on the same computer as SQL Server have adequate memory. When an application requests memory only as needed, you do not need to configure this option. It applies only when an application uses the memory available when it starts and does not later request more memory when necessary. You should configure this option to prevent SQL Server from taking the memory that the application might need.

Image max worker threads Use this configuration to define the number of threads available to user operations. If you keep the default value of 0, SQL Server configures the number of worker threads each time the service restarts.

Image index create memory Use this option to set the maximum amount of memory that SQL Server initially allocates for index creation. SQL Server will allocate more memory later if necessary, but only if it is available. Typically, you do not need to configure this option, but if SQL Server is experiencing performance delays related to indexing, you can increase the value of this option.

Image min memory per query Use this option to improve performance of memory-intensive queries by establishing the minimum amount of memory allocated for query execution. SQL Server can use more memory than the configured minimum if it is available.

To configure memory, right-click the server instance in Object Explorer and select Properties. Click the Memory page, and then in the Server Memory Options, type the appropriate values for any property except max worker threads. You can also use T-SQL to adjust a property value like this:

EXEC sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
EXEC sp_configure 'min memory per query', 512 ;
GO
RECONFIGURE;
GO

Monitor and diagnose scheduling and wait statistics using dynamic management objects

One of the best ways to determine which SQL Server resource is a bottleneck on performance is to review wait statistics. To better understand why waits occur, it is helpful first to understand how SQL Server manages incoming requests. SQL Server assigns each authenticated connection to a session and then uses a pseudo-operating system called the SQL Operating System (SQLOS) Scheduler to schedule CPU time for each session’s requests. There is one SQLOS Scheduler per logical CPU core on the server to manage the worker threads performing operations necessary to complete a request. These worker threads must work cooperatively by running for only 4 milliseconds, a period known as a quantum, before yielding the CPU to another worker thread and waiting in a runnable queue for another turn. A worker thread might also yield the CPU voluntarily before its quantum expires when it cannot complete its task because a resource it needs is unavailable. In this case, the worker thread is moved to a waiter list and then moves back to the runnable queue when the needed resource becomes available.

Wait statistics allow you to analyze the time a worker thread spends in various states before it completes a request by providing the following key pieces of information:

Image Wait type The cause of the wait. For example, the disk IO is slow, resources are locked, CPU is under pressure, an index is missing, or many other reasons. There are hundreds of wait types that SQL Server tracks.

Image Service time The amount of time that a thread runs on the CPU.

Image Wait time The amount of time that a thread is not running because it is in the waiter list.

Image Signal wait time The amount of time that a thread is in the runnable queue ready to run, but waiting for CPU time.

Image Total wait time The sum of wait time and signal wait time.

You can access wait statistics through the following DMVs:

Image sys.dm_os_wait_stats View information about completed waits at the instance level.

Image sys.dm_exec_session_wait_stats View information about waits at the session level.

Image sys.dm_os_waiting_tasks View information about requests in the waiter list.

sys.dm_os_wait_stats

In Chapter 3 we explored how to use this DMV for troubleshooting lock issues, but it is also useful for discovering the most frequently occurring waits since the last reset of the cumulative values. The cumulative wait time in this DMV includes the cumulative signal wait time, so subtract signal wait time from wait time when you want to determine the cumulative time threads spend in the waiter list.
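
For example, the following query returns the top waits by resource wait time (total wait time less signal wait time) and computes the average wait per waiting task. In practice, you would also filter out benign system waits, as in the query referenced in the Note that follows:

SELECT TOP 10
    wait_type,
    waiting_tasks_count,
    wait_time_ms,
    signal_wait_time_ms,
    wait_time_ms - signal_wait_time_ms AS resource_wait_time_ms,
    wait_time_ms * 1.0 / waiting_tasks_count AS avg_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE waiting_tasks_count > 0
ORDER BY resource_wait_time_ms DESC;
--Reset the cumulative values if needed:
--DBCC SQLPERF ('sys.dm_os_wait_stats', CLEAR);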


Note Isolating top waits

You can find a useful query for isolating top waits to help focus your troubleshooting efforts in Paul Randal’s blog post, “Wait statistics, or please tell me where it hurts” at http://www.sqlskills.com/blogs/paul/wait-statistics-or-please-tell-me-where-it-hurts/.


By reviewing this DMV for specific characteristics, you can uncover some of the following potential issues on your server:

Image CPU pressure Compare the signal wait time to the total wait time to determine the relative percentage of time that a thread has to wait for its turn to run on the CPU. When this value is relatively high, it can be an indicator that the CPU is overwhelmed by queries that require tuning or your server needs more CPU. You can confirm whether the issue is related to CPU by checking the runnable_tasks_count column in the sys.dm_os_schedulers DMV to see if there is a high number of tasks in the runnable queue. You might also see a higher occurrence of the SOS_SCHEDULER_YIELD wait type if the CPU is under pressure. In addition, you can monitor CPU-related performance counters as described in Skill 4.4.

Image IO issues If tasks are waiting for the IO subsystem, you will see waits that contain IO in the name. In particular, monitor the trend in average wait time, which is calculated by dividing wait_time_ms by waiting_tasks_count. If it starts trending upward, investigate IO using performance counters.

Two wait types that will appear frequently in this DMV when IO issues exist are ASYNC_IO_COMPLETION and IO_COMPLETION. Check physical disk performance counters to confirm this diagnosis, which we describe in the next section, “Troubleshoot and analyze storage, IO, and cache issues.” Consider adding indexes to reduce IO contention.

You might also see PAGEIOLATCH waits when a thread is waiting for latches to release after writing a data page in memory to disk or WRITELOG waits when the log management system is waiting to flush to disk. These wait types can indicate either an IO subsystem problem or a memory problem. To narrow down the possibilities, you need to check IO statistics by using sys.dm_io_virtual_file_stats and by reviewing IO-related performance counters.

Image Memory pressure The PAGEIOLATCH wait might also indicate memory pressure instead of an IO subsystem problem. It appears when SQL Server does not have enough free memory available for the buffer pool. Check the Page Life Expectancy performance counter to see if it is dropping as compared to a baseline value to confirm whether memory is the reason for this wait type. If you see an increase in CXPACKET waits in conjunction with PAGEIOLATCH waits, the culprit could be a query plan using large table or index scans.

Another indicator of a memory pressure issue is the RESOURCE_SEMAPHORE wait. It occurs when a query requests more memory than is currently available. You can check the sys.dm_exec_query_memory_grants DMV and combine it with the sys.dm_exec_sql_text and sys.dm_exec_query_plan DMVs to find the memory-intensive queries and review their query plans.
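
For example, a sketch along these lines lists active memory grants together with the associated query text so that you can identify the memory-intensive queries:

SELECT
    mg.session_id,
    mg.requested_memory_kb,
    mg.granted_memory_kb,
    t.text
FROM sys.dm_exec_query_memory_grants AS mg
CROSS APPLY sys.dm_exec_sql_text(mg.sql_handle) AS t
ORDER BY mg.requested_memory_kb DESC;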

sys.dm_exec_session_wait_stats

This DMV is new in SQL Server 2016 and is identical in structure to sys.dm_os_wait_stats, but has an additional column for session ID. However, it is important to note that this new DMV only includes information for sessions that are currently connected. When a session disconnects, its wait statistics are cleared from the DMV. Nonetheless, it can be helpful when you need to diagnose the workload for a specific session.

sys.dm_os_waiting_tasks

We introduced this DMV in Chapter 3 as a tool for finding blocked sessions, but you can also use it to find the requests currently waiting for a resource and why. As one example, you might filter the DMV by using the wait_duration_ms column to find tasks that have been waiting longer than a threshold that you specify.
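
For example, the following query returns user tasks that have been waiting longer than a hypothetical threshold of five seconds, along with the blocking session, if any, and the affected resource:

SELECT
    session_id,
    wait_duration_ms,
    wait_type,
    blocking_session_id,
    resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_duration_ms > 5000
    AND session_id > 50; --user sessions typically have IDs above 50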


Note Additional resource for using the sys.dm_os_waiting_tasks DMV

Paul Randal has published a script that demonstrates how to use sys.dm_os_waiting_tasks with other DMVs to gather information about blocked sessions, the wait type, the resource affected, the query text, and the query plan in his blog post “Updated sys.dm_os_waiting_tasks” at http://www.sqlskills.com/blogs/paul/updated-sys-dm_os_waiting_tasks-script/.



Need More Review? Learning more about analyzing wait statistics

You can learn more about analyzing wait statistics in Jonathan Kehayias and Erin Stellato’s SQL Server Performance Tuning Using Wait Statistics: A Beginner’s Guide, a free whitepaper at https://www.simple-talk.com/free-stuff/sql-server-performance-tuning-using-wait-statistics-a-beginners-guide/.


Troubleshoot and analyze storage, IO, and cache issues

Troubleshooting and analyzing storage, IO, and cache issues is a huge topic to which many books, blog posts, and workshops are dedicated because there are many different ways to configure the disk and IO subsystem and many different issues that can arise. For the exam, you should understand how to get information about storage, IO, and cache performance from SQL Server and the Microsoft Windows operating system.

Storage and IO

Storage bottlenecks occur when your data or log files are stored on slow disks or when the RAID is not configured appropriately for your workload. Unless your application is using memory-optimized tables, SQL Server is frequently reading data from disk in response to queries or writing new or changed data to disk. Meanwhile, tempdb is disk-based and uses a lot of IO for grouping and sorting operations. Conditions within SQL Server that create IO bottlenecks include frequent index scans, inefficient queries, and outdated statistics. Although SQL Server is requesting the reads and writes, the operating system controls the system bus, disk controller cards, disks, and other IO devices to physically perform the disk IO. Another factor to consider is the demand that other applications running on the server can place on the IO subsystem.

Your first indication that you might have an IO problem might come from an analysis of waits as we described in the previous section. Your next step is to use the sys.dm_io_virtual_file_stats DMV in combination with the sys.master_files catalog view to analyze cumulative metrics related to each database, including its data and log files. These views help you find the busiest files and provide IO stall information that tells you how long users had to wait for IO operations to finish.
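
A minimal sketch of such an analysis joins the two views and sorts by total IO stalls to surface the busiest files:

SELECT
    DB_NAME(vfs.database_id) AS database_name,
    mf.name AS logical_file_name,
    mf.type_desc,
    vfs.num_of_reads,
    vfs.num_of_writes,
    vfs.io_stall_read_ms,
    vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
INNER JOIN sys.master_files AS mf
    ON mf.database_id = vfs.database_id
        AND mf.file_id = vfs.file_id
ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;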


Note Example of IO subsystem latency analysis

You can find an example of analyzing these DMVs in Paul Randal’s blog post, “How to examine IO subsystem latencies from within SQL Server” at http://www.sqlskills.com/blogs/paul/how-to-examine-io-subsystem-latencies-from-within-sql-server/.


Another way to find these issues is to use the sys.dm_os_performance_counters DMV. The availability of the DMV means you can easily get SQL Server-related performance counter information without first setting up Windows Performance Monitor.


Note An online resource for querying SQL Server performance counters

Interpreting the information found in sys.dm_os_performance_counters can sometimes be challenging because counter values can be an average of operations executed within a sample interval or the last observed value for a counter, to name only two types of values. Jason Strate has produced a comprehensive explanation of the types of values and how to derive important calculations from the counter values available in the DMV in his blog post, “Querying Performance Counters in SQL Server,” at http://www.jasonstrate.com/2012/11/querying-performance-counters-in-sql-server/.


The following performance counters, accessible by executing the statements shown in Listing 4-41, provide insight into the amount of IO that SQL Server is directly contributing to the server:

Image SQLServer:Buffer Manager: Page lookups/sec Average requests per second at which SQL Server finds a page in the buffer pool. This value should be lower than SQLServer:SQL Statistics: Batch Requests/sec multiplied by 100.

Image SQLServer:Buffer Manager: Page reads/sec Average rate at which SQL Server reads from disk. This value should be lower than the hardware specifications for the IO subsystem’s read operations.

Image SQLServer:Buffer Manager: Page writes/sec Average rate at which SQL Server writes to disk. This value should be lower than the hardware specifications for the IO subsystem’s write operations.

Listing 4-41 Review SQLServer:Buffer Manager performance counters


SELECT
    object_name,
    counter_name,
    instance_name,
    cntr_value,
    cntr_type
FROM sys.dm_os_performance_counters
WHERE object_name = 'SQLServer:Buffer Manager' AND
    counter_name IN
        ('Page lookups/sec', 'Page reads/sec', 'Page writes/sec')


If these performance counters are too high, consider one or more of the following solutions:

Image Tune database performance by adding new indexes, improving existing indexes, normalizing tables, or partitioning tables.

Image Replace the IO subsystem hardware with faster components.

Cache issues

SQL Server is self-tuning and manages memory dynamically. It will use as much memory as you can give it, and it will not release memory until the operating system sets the low memory resource notification flag. Cache bottlenecks occur when SQL Server does not have enough memory to manage. To diagnose cache issues, start by checking the physical memory of the server and identifying how other applications on the server are using memory. You might need to analyze and tune specific queries as well. For example, if an index is missing on a large table, SQL Server must perform a table scan which reads a significant amount of data into memory.

The following DMVs are useful for understanding memory usage on your server:

Image sys.dm_os_memory_cache_counters View the current state of the cache.

Image sys.dm_os_sys_memory View resource usage information for the server, including total physical and available memory and high or low memory state.

Image sys.dm_os_memory_clerks View usage information by memory clerk processes that manage memory for SQL Server.
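
For example, a quick check of the server-level memory state might look like this:

SELECT
    total_physical_memory_kb,
    available_physical_memory_kb,
    system_memory_state_desc
FROM sys.dm_os_sys_memory;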


Note Learning more about using DMVs to tune performance

You can learn more about using these and related DMVs to investigate memory usage in Louis Davidson and Tim Ford’s free book, Performance Tuning with SQL Server Dynamic Management Views, which you can download from https://assets.red-gate.com/community/books/performance-tuning-with-dmvs.pdf.


You should also use the following performance counters to monitor whether SQL Server has adequate memory:

Image SQLServer:Buffer Manager: Free List Stalls/Sec Number of requests per second that SQL Server waits for a free page in the buffer cache. If this value is greater than zero on a frequent basis, the server is experiencing memory pressure.

Image SQLServer:Buffer Manager: Lazy Writes/Sec Number of times per second that SQL Server flushes pages to disk. If this number is rising over time, and Free List Stalls/Sec is also greater than zero, you likely need more memory on the server.

Image SQLServer:Memory Manager: Memory Grants Outstanding Number of processes that have acquired a memory grant successfully. A low value might signify memory pressure.

Image SQLServer:Memory Manager: Memory Grants Pending Number of processes that are waiting for a memory grant. If this value is greater than zero, consider tuning queries or adding memory to the server.
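
You can retrieve these memory counters directly from the sys.dm_os_performance_counters DMV rather than from Performance Monitor, as shown in this sketch:

SELECT
    counter_name,
    cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Memory Manager%'
    AND counter_name IN
        ('Memory Grants Outstanding', 'Memory Grants Pending');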

Monitor Azure SQL Database query plans

To monitor Azure SQL Database query plans, you use many of the same techniques that you use to monitor SQL Server query plans. Specifically, you can choose any of the following methods:

Image T-SQL statements You can use the Showplan SET options in SQL Server Management Studio to capture query plans, just as you can for SQL Server. You can also use the Display Estimated Execution Plan and Include Actual Execution Plan buttons in the toolbar to generate the respective graphical query plan.

Image Extended Events You can use Extended Events to capture query plans, much like you can for SQL Server. There are some slight differences, however. Instead of using the ON SERVER clause in the CREATE EVENT SESSION, ALTER EVENT SESSION, and DROP EVENT SESSION commands, you must use ON DATABASE. If you want to save a query plan to a file, you must write the file to an Azure Storage container. In addition, there are several DMVs for Extended Events that are unique to SQL Database.


Note Understanding differences in Extended Events between SQL Server and SQL Database

For more information about the differences in usage of Extended Events between SQL Server and SQL Database, see “Extended events in SQL Database” at https://docs.microsoft.com/en-us/azure/sql-database/sql-database-xevent-db-diff-from-svr.


Image Query Store Query Store is enabled by default for V12 databases. You can access the Query Store views in SQL Server Management Studio in the same way that you do for SQL Server. Also, you can use the same Query Store DMVs.


Important SQL Database lacks support for SQL Trace

The use of SQL Trace is not supported in SQL Database.


Skill 4.4: Monitor and trace SQL Server baseline performance metrics

One of the most important responsibilities of a DBA is to ensure that SQL Server runs smoothly and performs optimally. To fulfill this responsibility, you should be familiar with the array of tools available to help you uncover and diagnose problems occurring on the server. We have introduced several of these tools in the preceding pages of this chapter, but we did so with specific contexts in mind. Now let’s step back and survey the tools again in terms of how you can use them to baseline server performance at the operating system and SQL Server instance levels. In addition, we introduce some new tools that you can use towards this same goal.

Monitor operating system and SQL Server performance metrics

At any time, without any elaborate setup requirements, you can access the following tools to check performance metrics for the operating system and SQL Server:

Image Dynamic management objects (DMOs) DMVs and DMFs provide insight into the historical and current state of SQL Server. You can query DMOs on an ad hoc basis, or you can export the data into tables for long-term storage and trend analysis.

Image Performance Monitor Operating system and SQL Server performance counters provide useful information to corroborate metrics obtained from other sources and to help narrow down the range of possible causes when investigating specific problems.

Image SQL Trace You can set up server-side or SQL Server Profiler tracing for a variety of events as a method of investigating problematic workloads and poorly performing queries.

Image Extended Events You can create Extended Events sessions as a more lightweight approach to server-side tracing.

Dynamic management objects

In Chapter 3 and earlier in this chapter, we explored a wide variety of DMOs that you can use to troubleshoot locking and blocking issues, collect execution statistics on natively compiled stored procedures, review index usage, access Query Store information, review wait statistics, and troubleshoot storage, IO, and cache issues. Did you happen to notice any patterns in the naming of these DMOs? By understanding the naming conventions, you can get a high-level view of the range of information that DMOs supply. For the exam, you do not need to be familiar with the complete list of DMO categories supported in SQL Server. However, given the name of a specific DMO, you should be able to identify the type of information it provides at a general level by its association with one of the following categories:

Image sys.dm_exec_* Connections, sessions, requests, and query execution

Image sys.dm_os_* Information for the operating system on which SQL Server runs

Image sys.dm_tran_* Details about transactions

Image sys.dm_io_* IO processes

Image sys.dm_db_* Database-scoped information


Note Additional information about DMOs

For more in-depth information about dynamic management objects, refer to “Dynamic Management Views and Functions (Transact-SQL)” at https://msdn.microsoft.com/en-us/library/ms188754.aspx.


Most DMOs provide information about the current state, such as currently blocked sessions in sys.dm_os_waiting_tasks, or information accumulated since SQL Server last restarted, such as sys.dm_os_wait_stats. SQL Server retains the information accessible through DMOs in memory only and does not persist it to disk. Therefore, the information is reset when SQL Server restarts.

Performance Monitor

Performance Monitor, also known as PerfMon, is a tool provided with the Windows operating system that you can use to monitor operating system, application, and hardware performance in real time. You can even establish thresholds and receive alerts when thresholds are crossed. As an alternative, you can capture performance data in logs that you can review in the graphical interface or save to SQL Server tables for trend analysis.

Typically, you use performance counters to confirm suspicions of a problem that you uncover by using wait statistics rather than as a starting point. Like DMOs, performance counters are available at any time without elaborate setup requirements.

For real-time analysis, open Performance Monitor, and click the Add button in the toolbar. In the Add Counters dialog box, select the server to monitor in the Select Counters From Computer drop-down list (or use the Browse button to locate a server on your network), and then scroll through the Available Counters list to locate the set of counters to monitor, such as PhysicalDisk. When you monitor PhysicalDisk counters, you can select a specific disk to monitor or all disks. Select a specific counter, such as % Disk Time, and click the Add button. Continue adding counters as needed. When finished, click OK. You can then view real-time metrics for the selected counters. If you are monitoring multiple counters at the same time, right-click a counter and select Properties to change Color, Width, Style values to more easily distinguish between counters, as shown in Figure 4-42. You might also need to reset the scale for a counter.

A screen shot shows the Performance Monitor in which real-time values for the Processor: % Processor Time and PhysicalDisk: % Disk Time counters display. The latter performance counter displays with a dashed line to distinguish it from the other performance counter.

FIGURE 4-42 Real-time monitoring of performance counters in Performance Monitor

To identify issues in disk IO activity, start by reviewing the following performance counters (replacing PhysicalDisk with the LogicalDisk if you have multiple logical partitions on the same disk):

Image PhysicalDisk: % Disk Time Percentage of time the disk is active with reads and writes. If this percentage is greater than 90 percent, review the Physical Disk: Current Disk Queue Length counter.

Image PhysicalDisk: Avg. Disk sec/Read Average read latency in seconds. This value should be less than 0.020.

Image PhysicalDisk: Avg. Disk sec/Transfer Average latency of IO requests to the disk in seconds. This value should be less than 0.020.

Image PhysicalDisk: Avg. Disk sec/Write Average write latency in seconds. This value should be less than 0.020.

Image PhysicalDisk: Current Disk Queue Length Number of IO requests waiting for access to the disk. This value should be no more than two times the number of spindles for the disk. Most disks have a single spindle, but a redundant array of independent disks (RAID) typically have more than one spindle, even though the RAID device appears as a single physical disk in System Monitor.

Image Memory: Page Faults/sec This performance counter increases when processes on the server are consuming too much memory and SQL Server must page to disk. Paging itself is not necessarily a cause of an I/O bottleneck, but can slow the performance of the IO subsystem.

Another important resource to monitor is CPU usage, which you can watch by using the following performance counters:

Image Processor: % Privileged Time Percentage of time that the operating system spends processing SQL Server IO requests. If this value is high at the same time that the disk-related performance counters described in the “Troubleshoot and analyze storage, IO, and cache issues” section are also high, your server likely needs a faster disk.

Image Processor: % Processor Time Percentage of time that each processor spends executing a thread that is not idle. If this value is consistently between 80 and 90 percent, you should upgrade the CPU or add more processors.

Image Processor: % User Time Percentage of time that the operating system executes user processes, including IO requests from SQL Server. If this value approaches 100%, it might indicate that the CPU is under pressure.

Image System: Processor Queue Length Number of threads waiting for processor time. An increase in this counter means the CPU is not keeping up with demand and a faster processor is necessary.
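
To corroborate these processor counters from within SQL Server, one option is to check how many tasks are waiting for CPU time on each scheduler. The following minimal sketch uses the sys.dm_os_schedulers DMV; a runnable_tasks_count that stays high across samples suggests CPU pressure:

SELECT
    scheduler_id,
    current_tasks_count,
    runnable_tasks_count  -- tasks in the runnable queue waiting for CPU time
FROM sys.dm_os_schedulers
WHERE status = N'VISIBLE ONLINE';  -- exclude hidden system schedulers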

To monitor memory, use the following performance counters:

Image Memory: Available Bytes Amount of memory available for processes on the server. If this value is too low, check to see if another application on the server is failing to release memory or if memory on the server is adequate for your requirements.

Image Memory: Pages/sec Frequency with which pages are retrieved from or written to disk due to hard page faults. When this value is consistently high, the server might be paging excessively. If it is, the Memory: Page Faults/sec performance counter will also be high.

Image Process: Working Set Amount of memory used by a process. If this number is consistently lower than the minimum server memory option, SQL Server is configured to use too much memory.
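
You can also retrieve SQL Server’s own memory counters with T-SQL by querying the sys.dm_os_performance_counters DMV. The following sketch reads a few commonly watched counters; the counter list here is one reasonable starting point rather than a definitive set:

SELECT
    object_name,
    counter_name,
    cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN (
    N'Page life expectancy',      -- seconds a page stays in the buffer pool
    N'Memory Grants Pending',     -- queries waiting for a workspace memory grant
    N'Total Server Memory (KB)',  -- memory currently committed by SQL Server
    N'Target Server Memory (KB)'  -- memory SQL Server would like to commit
);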


Note Customizing the collection of performance counters

Refer to “Create a Data Collector Set to Monitor Performance Counters” at https://technet.microsoft.com/en-us/library/cc722414.aspx to customize a set of performance counters to collect and to configure alerts. Step-by-step instructions for saving performance counter data to SQL Server tables are available in “Capture PerfMon Statistics to SQL Server for Benchmarking, Baselining and Analysis,” a blog post by David Ames at http://www.amescode.com/capture-perfmon-statistics-to-sql-server-for-benchmarking-baselining-and-analysis/.


SQL Trace

As we described in Skill 4.2, “Analyze and troubleshoot query plans,” SQL Trace is useful for server-side or client-side tracing when you want to capture query plans. However, you can use tracing to monitor other aspects of SQL Server performance. Unlike DMOs that allow you to capture activity as it occurs, SQL Trace requires that a monitored event is completed before its data is added to the trace.


Important Considerations for query plan impact on performance

Remember that capturing a query plan places some overhead on the server, so use it with caution on a busy production server. In that case, the better approach is a server-side trace that stores its output on a local drive. In addition, try to filter the trace as much as possible to limit the impact of tracing on your server.


Besides using SQL Trace to get query plans, consider using it as a diagnostic tool when you need to monitor SQL Server’s behavior during query processing. For example, you might use SQL Trace in the following situations:

Image Lock escalation You can find correlations between lock escalation events and queries running when lock escalation occurs by creating a trace that includes the Lock:Escalation, SP:Starting, SQL:StmtStarting, SP:Completed, and SQL:StmtCompleted events.

Image Deadlocks As we described in Chapter 3, you can capture a deadlock graph by creating a trace that includes the Deadlock Graph, Lock:Deadlock, and Lock:Deadlock Chain events.

Image Slow queries You can identify and capture information about slow queries in a trace containing the RPC:Completed, SP:StmtCompleted, SQL:BatchStarting, SQL:BatchCompleted, and Showplan XML events.
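
To illustrate server-side tracing for the slow-query scenario, the following is a minimal sketch; the output path C:\Traces\slow_queries is an assumption for this example, and sp_trace_create appends the .trc extension to the file name automatically:

DECLARE @TraceID int, @on bit = 1;
-- Option 2 = TRACE_FILE_ROLLOVER; maximum file size is 50 MB
EXEC sp_trace_create @TraceID OUTPUT, 2, N'C:\Traces\slow_queries', 50, NULL;
-- Capture TextData (column 1) and Duration (column 13) for the
-- RPC:Completed (event 10) and SQL:BatchCompleted (event 12) events
EXEC sp_trace_setevent @TraceID, 10, 1, @on;
EXEC sp_trace_setevent @TraceID, 10, 13, @on;
EXEC sp_trace_setevent @TraceID, 12, 1, @on;
EXEC sp_trace_setevent @TraceID, 12, 13, @on;
EXEC sp_trace_setstatus @TraceID, 1;  -- 1 = start the trace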


Note A resource for learning more about SQL Server Profiler

You can find in-depth information about using SQL Server Profiler, the graphical interface, in Brad M. McGehee’s free ebook “Mastering SQL Server Profiler” at http://www.red-gate.com/library/mastering-sql-server-profiler.


Extended Events

We introduced Extended Events in Chapter 3 as a method for finding deadlocks after they occur and explored this feature again as a lightweight alternative to SQL Trace for capturing query plans. The Extended Events feature in SQL Server provides even more functionality than deadlock and query plan monitoring.

As the replacement for SQL Trace, which is slated for deprecation in a future version of SQL Server, Extended Events not only allows you to perform the same tasks with greater flexibility and better performance, but also allows you to monitor more events. In SQL Server 2016, you can monitor 180 events by using SQL Trace, whereas you can monitor 1209 events by using Extended Events. Later in this chapter, we provide specific examples of how you might use Extended Events, compare the performance impact of Extended Events and SQL Trace, and explain the architecture of this event-handling infrastructure.

Compare baseline metrics to observed metrics while troubleshooting performance issues

When using dynamic management objects to gain insight into SQL Server performance, you can only see current or recent historical information. Some problems become apparent only when you view how this information changes over time. Rather than wait for users to complain about application or query performance, you can proactively monitor the health of SQL Server by collecting baseline metrics and then periodically comparing current system behavior to that baseline. That way, you can identify negative trends as early as possible.

SQL Server includes a data collector to help you gather data and store it in the management data warehouse. This is a database that you use as centralized storage for many types of data, not just performance data. When you set up the management data warehouse, you specify which DMOs to query, which performance counters from the operating system and SQL Server to collect, and which SQL Trace events to capture. You can also use this infrastructure to capture other types of data that you might want to store centrally.

To set up the management data warehouse in SQL Server Management Studio, expand the Management node in Object Explorer, right-click Data Collection, point to Tasks, and select Configure Management Data Warehouse. In the Configure Management Data Warehouse Wizard, click Next, and click New to open the New Database dialog box. Type a name for the database, click OK, and then click Next. On the Map Logins And Users page of the wizard, select the login for your SQL Server service account, select mdw_admin, and then click OK. Next click Finish, and then click Close.

To set up data collection, right-click Data Collection again, point to Tasks, and select Configure Data Collection. In the Configure Data Collection Wizard, click Next, and then select the server and database hosting the management data warehouse. Select the System Data Collection Sets checkbox, click Next, click Finish, and then click Close.

To view information collected in the management data warehouse, right-click Data Collection once more, point to Reports, point to Management Data Warehouse, and select one of the following reports:

Image Server Activity History This report, a portion of which is shown in Figure 4-43, displays data collected from DMVs and performance counters, such as waits, locks, and latches, among other SQL Server statistics, as well as CPU, memory, disk, and network usage. By default, this information is gathered every 60 seconds, uploaded into the Management Data Warehouse every 15 minutes, and retained for 14 days. The report is interactive. When you click a chart, a new report page displays. For example, when you click the SQL Server Waits chart, you can view a table and chart containing wait statistics for the selected period of time.

A screen shot shows the Server Activity Report. At the top of the page is a timeline navigator. In the center section are four charts: % CPU, Memory Usage, Disk I/O Usage, and Network Usage. Only Memory Usage displays data as a line chart. The bottom section of the report displays a stacked column chart labeled SQL Server Waits. The vertical axis is Wait Time (ms/secs) and the horizontal axis is time periods in intervals of 5 minutes. A legend below the chart contains labels for each color displayed in a stacked column, such as SQLCLR and Network I/O.

FIGURE 4-43 Server Activity History report

Image Disk Usage Summary This report displays a table of databases listing the starting and current size of the database and log files, the average growth measured as megabytes per day, and a sparkline chart to show the growth trend over time. You can click on a database name to view disk usage details for the database and log files as pie charts or click on the sparkline to view the disk space growth trends and collection details for each size metric, as shown in Figure 4-44.

A screen shot shows a subreport of the Disk Usage Summary Report. The top section is an area chart that displays disk space over time for the following categories: Reserved, Used Size, Database Size, Unallocated Space, and Index. The bottom section is a table displaying the collection times and the following metrics for each time: Database Size, Used Size, Reserved, Unallocated Size, and Index Size.

FIGURE 4-44 Disk Usage Summary subreport for a selected database

Image Query Statistics History This report displays a column chart of the top 10 queries within a specified interval, ranked by CPU usage, duration, total IO, physical reads, or logical writes, and a table that includes the following query execution statistics: Executions/min, CPU ms/sec, Total Duration (sec), Physical Reads/sec, and Logical Writes/sec. The data is cached on the local file system and then uploaded to the Management Data Warehouse every 15 minutes.

A screen shot shows the Query Statistics History report displaying a column chart of duration for two queries and a table showing the following statistics for each of the two queries: Executions/min, CPU ms/sec, Total Duration (sec), Physical Reads/sec, and Logical Writes/sec. Between the chart and the table is a set of links to rank queries by one of the following criteria: CPU, Duration, Total I/O, Physical Reads, or Logical Writes.

FIGURE 4-45 Query Statistics History report

You can click one of these queries to view its details in a subreport, including the query text and more detailed execution statistics, such as Average Duration (ms) per Execution or Average Executions Per Min. The subreport also displays a column chart of query plans that you can rank by CPU, duration, physical reads, or logical writes. Statistics by query plan are also available. Click a query plan to view its details, much like the details for a query. Other links on this page allow you to view sampled waits or the graphical query plan.

Identify differences between performance monitoring and logging tools

Now that we have reviewed the various tools available for performance monitoring and logging, let’s compare and contrast them to better understand their differences. That way, you can select the right tool for the particular task and objectives at hand. Table 4-1 provides a comparison of the tools by capability.

Image

TABLE 4-1 Comparison of performance monitoring and logging tools

Monitor Azure SQL Database performance

There are several diagnostic tools available to you for monitoring SQL Database performance. For a quick view of performance metrics, use the Azure portal. For more targeted analysis, you can use DMVs, Query Store, or Extended Events to monitor SQL Database performance just as you do for SQL Server.

Azure Portal

In the Azure portal, you can select the metrics that you want to monitor and display a chart that shows average utilization for selected metrics. To configure the chart, open the blade for your database in the Azure portal, and then click Edit inside the Monitoring chart. In the Edit Chart blade, select one of the following time ranges: Past Hour, Today, Past Week, or Custom. If you select Custom, you can type in a date range or use calendar controls to set a beginning and end date for the range. Next, select one of the following chart types: Bar or Line. Then select one or more of the following metrics:

Image Blocked by firewall

Image CPU percentage

Image DTU limit

Image DTU percentage

Image DTU used

Image Data IO percentage

Image Database size percentage

Image Deadlocks

Image Failed connections

Image In-memory OLTP storage percent

Image Log IO percentage

Image Sessions percentage

Image Successful connections

Image Total database size

Image Workers percentage

After you add the metrics to monitor, you can view the results in the Monitoring chart in addition to details in the Metric window. When you select metrics, you can select compatible metrics only. For example, you can select metrics that count connections, as shown in Figure 4-46, or you can select metrics that measure the percentage of resources consumed, but you cannot mix those two types of metrics in the same chart.

A screen shot shows the Monitoring chart for SQL Database in the Azure portal. The line chart is labeled Blocked By Firewall, Failed Connections And One More Metric Today. The vertical axis is a numeric count and the horizontal axis is time in intervals of 6 hours. The selected metrics, corresponding values, and a color matching the metric in the line chart display below the chart.

FIGURE 4-46 Monitoring chart for SQL Database in the Azure portal

You can also use the Azure portal to configure an email alert when a performance metric exceeds or drops below a specified threshold. Click the Monitoring chart to open the Metric blade. Then click Add Alert and follow the instructions in the Add Alert Rule blade. For example, you can add an alert to send an email when the number of connections blocked by the firewall exceeds 1, as shown in Figure 4-47. Not shown in this figure are the name of the resource, the name of the alert rule, and the options that control whether the alert is emailed to owners, contributors, and readers and, optionally, to additional administrators.

A screen shot shows a portion of the Add An Alert Rule blade. At the top of the screenshot is the Metric drop-down list in which Blocked By Firewall is selected. Below the drop-down list is a line chart showing the current metric values over time. Next is a Condition drop-down list in which Greater Than is selected. Below this is the Threshold text box which contains the value 1. Last is the Period drop-down list in which the current selection is Over The Last 24 Hours.

FIGURE 4-47 Add An Alert Rule blade for SQL Database

DMVs for SQL Database

Use the following DMVs to monitor SQL Database performance:

Image sys.database_connection_stats Count successful and failed connections. The count of failed connections is the sum of login failures, terminated connections, and throttled connections.

Image sys.dm_db_resource_stats Get the resource consumption percentages for CPU, data IO, and log IO. It returns one row for every 15-second interval, even when there is no activity in the database, and retains roughly one hour of history (a sample query follows Listing 4-42). For less granular data, you can use sys.resource_stats in the logical master database for your server.

Image sys.dm_exec_query_stats In combination with sys.dm_exec_sql_text, find queries that use a lot of resources, such as CPU time or IO.

Image sys.dm_tran_locks Discover blocked queries.

Image sys.event_log Find issues such as deadlocking and throttling over the last 30 days. You must have permission to read the master database on the Azure server. As an example, you can search for specific types of events, such as deadlocks or throttle events, and when they occurred, as shown in Listing 4-42.

LISTING 4-42 Find deadlock and throttling events in sys.event_log


SELECT
    event_category,
    event_type,
    event_subtype_desc,
    event_count,
    description,
    start_time
FROM sys.event_log
WHERE event_type = 'deadlock'
    OR event_type LIKE 'throttling%';
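
As referenced earlier in this list, the following minimal sketch reviews the most recent resource-consumption snapshots from sys.dm_db_resource_stats, which retains roughly one hour of history:

SELECT TOP (20)
    end_time,
    avg_cpu_percent,
    avg_data_io_percent,
    avg_log_write_percent,
    avg_memory_usage_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;  -- most recent 15-second snapshots first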



Note Permission required for using DMVs in SQL Database

To use a DMV in SQL Database, you must be granted the VIEW DATABASE STATE permission.


Extended Events in SQL Database

You can use Extended Events to troubleshoot performance in SQL Database. It is similar to using Extended Events in SQL Server, although the set of extended events in SQL Database is smaller than the set available for SQL Server. In addition, there are some slight syntax differences when creating, altering, or dropping an event session as we noted in the “Monitor Azure SQL Database query plans” section earlier in this chapter.

Determine best practice use cases for extended events

Extended Events is a robust, yet lightweight tracing infrastructure that you can use to monitor and analyze SQL Server activity by collecting as much or as little information as you need. At minimum, you can use Extended Events for any diagnostic task that you can perform by using SQL Trace, but it can do so much more. In particular, Extended Events offers greater flexibility because you can filter events with more granularity. For this exam, you should be familiar with the following types of use cases that Extended Events supports:

Image System health By default, an Extended Events session dedicated to system health information starts automatically when SQL Server starts. System health information includes the session_id and sql_text for sessions that encounter an error with a severity greater than or equal to 20 or a memory-related error, as well as non-yielding scheduler problems, deadlocks, long latch and lock waits, connectivity and security errors, and more. For complete details, see https://msdn.microsoft.com/en-us/library/ff877955.aspx.

Image Query performance diagnostics Find historical deadlocks or queries that did not end, troubleshoot waits for a particular session or query, capture queries that match a specific pattern, and get query plans, to name a few. You can count the number of occurrences of a specific event to determine if a problem is recurring frequently.

Image Resource utilization monitoring and troubleshooting You can create a session to capture information about server resources, such as CPU, IO, or memory utilization. You can filter events for specific utilization thresholds to fine-tune the diagnostic process. Furthermore, you can correlate SQL Server events with Event Tracing for Windows (ETW) logs that capture details about operating system activities.

Image Security audits Capture login failures by filtering on error 18456 with severity 14, together with the client application name, to find malicious login attempts.
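
For example, to review what the built-in system health session has captured, you can read its event_file target. The following is a minimal sketch; it assumes the default configuration, in which the session writes system_health*.xel files to the SQL Server log directory so that no explicit path is required:

SELECT CAST(event_data AS xml) AS event_data
FROM sys.fn_xe_file_target_read_file(N'system_health*.xel', NULL, NULL, NULL);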

Distinguish between Extended Events targets

An Extended Event target receives information about an event. For example, in the “Capture query plans using extended events and traces” section earlier in this chapter, the target for the example Extended Events session is a file in which the session stores a query plan. In the last section of this chapter, we explain targets in greater detail. For now, be sure you understand the difference between the following targets for an Extended Events session:

Image etw_classic_sync_target Unlike the other targets that receive data asynchronously, this is an ETW target that receives data synchronously. You use it to monitor system activity.

Image event_counter This target counts the number of times that a specific event occurred.

Image event_file This target writes the event session output to a file on disk in binary format. You use the sys.fn_xe_file_target_read_file function to read the contents of the file.

Image histogram Like the event_counter target, the histogram target counts the occurrences of an event, but it can count occurrences separately for multiple items, using either event fields or actions as the bucketing source (see the sketch after this list).

Image pair_matching This target helps you find start events that do not have a corresponding end event. For example, you can discover when a lock_acquired event occurred without a matching lock_released event within a reasonable time.

Image ring_buffer This target holds data in memory using a first-in, first-out method in which the oldest data is removed when the memory allocation is reached.
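
As a sketch of the histogram target, the following session (the name stmt_by_db is hypothetical) counts completed statements bucketed by database; source_type = 1 indicates that the bucketing source is an action rather than an event field:

CREATE EVENT SESSION [stmt_by_db]
ON SERVER
ADD EVENT sqlserver.sql_statement_completed(
    ACTION (sqlserver.database_id))
ADD TARGET package0.histogram(
    SET filtering_event_name = N'sqlserver.sql_statement_completed',
        source = N'sqlserver.database_id',
        source_type = 1);  -- 1 = action, 0 = event field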

Compare the impact of Extended Events and SQL Trace

As we have explained throughout this chapter, you can use Extended Events and SQL Trace interchangeably for many similar diagnostic functions. However, there is a considerable difference in their respective impact on the observed server. Both tools by necessity add overhead to the server, which is measurable by observing performance counters for CPU processor time and batch requests per second. The use of SQL Server Profiler for client-side tracing is the most intrusive option, whereas using SQL Trace stored procedures for server-side tracing is less intrusive. With this in mind, you should try to limit the number of events and the number of columns captured to minimize the overhead as much as possible.

The least intrusive option is Extended Events, which was developed as a lightweight replacement for SQL Trace. Nonetheless, because it does incur overhead, you should take care to create events that collect the minimum amount of data necessary for troubleshooting. In particular, be aware that the query_post_execution_showplan event is expensive and should be avoided on a production server. If you must use it to troubleshoot a specific issue, take care to restrict its use to a limited time only.

Define differences between Extended Events Packages, Targets, Actions, and Sessions

Throughout this chapter, we have discussed specific capabilities of Extended Events as well as the benefits of this feature, but we have yet to explore its components in detail. In this final section of the chapter, we review Extended Events concepts and objects that you should understand.

Events in Extended Events correspond to events in SQL Trace, but many more are supported in Extended Events to give you better diagnostic capabilities. An example of an event is sp_statement_starting.

Packages

A package is the top-level container for the various types of Extended Events objects: events, targets, actions, types, predicates, and maps. Of all the available packages, you can only use the following three packages in an event session:

Image package0 Contains Extended Events system objects

Image sqlserver Contains objects related to SQL Server

Image sqlos Contains SQLOS objects

Targets

A target is the destination for the data collected about an event. For short-term diagnostics, you can use a memory-resident target. To persist the data, you can use the event_file target. Review the section “Distinguish between Extended Events targets” earlier in this chapter to see the full list of available targets. After you create an Extended Event session, you can add one or more targets, like this:

ADD TARGET package0.event_file(SET filename=N'C:\ExamBook762Ch4\query.xel',
    max_file_size=(5),max_rollover_files=(4)),
ADD TARGET package0.ring_buffer
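
While a session is running, you can inspect what its memory-resident targets have collected by querying the target data as XML, as in this minimal sketch; for event_file targets, the sys.fn_xe_file_target_read_file function described earlier serves the same purpose:

SELECT
    s.name AS session_name,
    t.target_name,
    CAST(t.target_data AS xml) AS target_data  -- counts, buckets, or buffered events
FROM sys.dm_xe_sessions AS s
JOIN sys.dm_xe_session_targets AS t
    ON s.address = t.event_session_address;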

Actions

An action is a response or series of responses that you bind to an event. For example, you can use an action to detect an execution plan or calculate run-time statistics. Or you can add information from the global state, such as session_id, to the firing event to aid in troubleshooting, like this:

ADD EVENT sqlserver.sql_statement_completed(
    ACTION (sqlserver.session_id,
        sqlserver.sql_text))

Sessions

An Extended Events session is the equivalent of a trace. When you create a session, you add an event and actions to fire with that event, define one or more targets for the data collected about the event, and optionally create predicates that define filters for the event. Listing 4-43 provides an example of an event session that captures session_id and sql_text when a stored procedure executes on the server and sends the data to a file on the local drive and to the ring buffer.

LISTING 4-43 Create event session


CREATE EVENT SESSION [stored_proc]
ON SERVER
ADD EVENT sqlserver.sp_statement_completed(
    ACTION (sqlserver.session_id,
        sqlserver.sql_text))
ADD TARGET package0.event_file(SET filename=N'C:\ExamBook762Ch4\query.xel',
    max_file_size=(5),max_rollover_files=(4)),
ADD TARGET package0.ring_buffer;
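
A newly created event session is stopped by default. You can start and stop the session from Listing 4-43 like this:

ALTER EVENT SESSION [stored_proc]
ON SERVER
STATE = START;

-- After reproducing the workload you want to observe:
ALTER EVENT SESSION [stored_proc]
ON SERVER
STATE = STOP;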



Note Learning more about Extended Events packages

Other objects can also exist in a package: types, predicates, and maps. However, these topics are not covered in the exam. If you’d like to learn more, refer to “SQL Server Extended Events Packages” at https://technet.microsoft.com/en-us/library/bb677278.aspx.


Chapter summary

Image SQL Server relies on statistics to select an optimal query plan. For this reason, it’s important to keep statistics current. Statistics can become obsolete when a table is the target of high volume inserts or deletions because the data distribution can change significantly. You can use the DBCC SHOW_STATISTICS command to check the histogram and index densities for an index.

Image SQL Server updates statistics automatically by default, but you can disable the automatic update and instead rely on a maintenance plan to update statistics when you need greater control over the timing of the update. When you create a maintenance plan, SQL Server creates a SQL Server Agent job that you can schedule to run at a convenient time.

Image There are several DMOs available to help you manage indexes. Use sys.dm_db_index_usage_stats to review current index usage or, in combination with sys.indexes and sys.objects, to find indexes that are never used. Use sys.dm_db_index_physical_stats to find fragmented indexes. To find missing indexes, use sys.dm_db_missing_index_details, sys.dm_db_missing_index_groups, and sys.dm_db_missing_index_group_stats.

Image The existence of overlapping indexes adds unnecessary overhead to SQL Server. You should periodically review index columns to ensure an index was not inadvertently added that overlaps with a similar index.

Image Use the query_pre_execution_showplan or query_post_execution_showplan Extended Events as a lightweight method to capture the estimated or actual query plans, respectively. As an alternative, you can use SQL Trace system stored procedures for server-side tracing or use SQL Server Profiler to capture the Showplan XML or Showplan XML For Query Compile events for an estimated query plan or the Showplan XML Statistics Profile event for an actual query plan.

Image When reviewing a query plan, you should review whether the following operators exist in the plan: Table Scan, Clustered Index Scan, Key Lookup (Clustered) or RID Lookup (Clustered) in conjunction with Index Seek (NonClustered), Sort, or Hash Match. Although the presence of these operators is not bad in all cases, you might consider options for tuning your query such as adding indexes to tables or adding filters to the query if operations are performed on large tables and your goal is to improve query performance.

Image Query Store is a new feature in SQL Server 2016 and SQL Database that you can use to capture query plans and associated runtime execution statistics over time. Several built-in views are available to help you find queries that consume a lot of resources or have regressed from a previous point in time. You can also force a query plan when necessary.

Image Usually an estimated query plan and an actual query plan are the same plan, except that an actual query plan includes calculations and information that is available only after the query executes. You can create an estimated query plan to assess the impact of potential changes to a query or to table structures without executing the query. The actual query plan can vary from the estimated query plan if statistics are out-of-date or if the data structures change after the estimated query plan is generated.

Image Query Performance Insight is an Azure SQL Database feature that graphically shows you which queries are consuming the most CPU, memory, IO, and DTU resources over time.

Image The Resource Governor (available only in Enterprise, Developer, or Evaluation editions) allows you to define how SQL Server allocates CPU, memory, and IOPS resources on the server. You specify this allocation by creating one or more resource pools and then create workload groups that you assign to a resource pool. A workload group can have further resource consumption limits imposed and can be configured with relative importance within a resource pool to prioritize access to server resources when workload groups run concurrently. SQL Server assigns a session to a workload group by using criteria that you define in a classifier function. If it cannot match the session to a workload group, Resource Governor assigns the session to the default workload group in the default resource pool.

Image By using Elastic Scale for Azure SQL Database, you can manage your data in separate shards for which resource requirements can dynamically grow or shrink as needed to match the current workload. To use this feature, you must use the elastic database client library in your application to perform operations across all shards or in individual shards. You can also use the Split-Merge service to separate data in one shard into multiple shards or to combine data from multiple shards into a single shard.

Image When choosing one of the following SQL Server 2016 editions, you must consider the scalability and features each supports: Express, Web, Standard, Enterprise, Developer, and Evaluation. For SQL Database, you choose the Basic, Standard, or Premium service level based on the maximum size of your database and the range of performance levels required. Performance levels are measured as DTUs which represent CPU, memory, and IO thresholds.

Image One aspect of query performance is the efficiency of your storage and IO subsystems which you can optimize by managing file placement for system, data, and log files. Use filegroups as separate containers that can be spread across separate disks and optionally use secondary data files and partitioning to improve data access.

Image The setup wizard for SQL Server 2016 makes it easier than in previous versions to optimize the configuration of tempdb by allowing you to define its file placement, the number of files to create, the initial file size, and autogrowth settings.

Image As long as SQL Server is the only application running on a server, it can manage memory dynamically without intervention. When you must run other applications on the server, you can optimize SQL Server’s memory configuration by specifying minimum and maximum memory thresholds, maximum worker threads, maximum memory for index creation, and minimum memory for query execution.

Image You use the sys.dm_os_wait_stats, sys.dm_exec_session_wait_stats, or sys.dm_os_waiting_tasks DMVs to gather information about the amount of time that threads must wait on resources, determine whether the server is experiencing CPU, memory, or IO pressure, or find out which resources are causing excessive waits.

Image You can troubleshoot IO issues by analyzing IO subsystem latencies captured in the sys.dm_io_virtual_file_stats and sys.master_files DMVs or by reviewing performance counters for SQL Server’s buffer manager in the sys.dm_os_performance_counters DMV.

Image You can troubleshoot cache issues by using the following DMVs: sys.dm_os_memory_cache_counters, sys.dm_os_sys_memory, or sys.dm_os_memory_clerks. You can also use performance counters for SQL Server’s buffer manager and memory manager in the sys.dm_os_performance_counters DMV.

Image SQL Database query plans are accessible by using Showplan SET options, Extended Events, or Query Store, but not by using SQL Trace.

Image You have a variety of tools that you can use to monitor operating system and SQL Server performance metrics: DMOs, Performance Monitor, SQL Trace, and Extended Events. In many cases, you are likely to use a combination of tools to diagnose SQL Server behavior.

Image Management Data Warehouse is a SQL Server feature that allows you to capture performance-related information over time in contrast to other available monitoring tools which primarily provide point-in-time or cumulative information. Management Data Warehouse provides the following reports for your review: Server Activity History, Disk Usage Summary, and Query Statistics History.

Image To monitor SQL Database performance, you can monitor specific metrics in the Azure portal. In addition, you can use the following DMVs with SQL Database for monitoring: sys.database_connection_stats, sys.dm_db_resource_stats, sys.dm_exec_query_stats, sys.dm_tran_locks, and sys.event_log. Extended Events are also available for SQL Database, although the set of supported events is smaller than the set that supports SQL Server.

Image You should be familiar with the following best practice use cases for Extended Events: system health, query performance diagnostics, resource utilization monitoring and troubleshooting, and security audits.

Image You should also understand how Extended Events uses the following targets to store information about an event: etw_classic_sync_target, event_counter, event_file, histogram, pair_matching, and ring_buffer.

Image Although Extended Events and SQL Trace can often be used interchangeably, Extended Events has a much lower impact on performance. You can measure the difference in impact by observing CPU processor time and batch requests per second.

Image The Extended Events architecture comprises several components. A package is the top-level container for the other objects. As data is collected about an event, it is sent to a target. In addition, you can configure an action to occur in response to an event. A session is a set of events, actions, and targets that you configure and enable or disable as a group.

Thought experiment

In this thought experiment, demonstrate your skills and knowledge of the topics covered in this chapter. You can find answers to this thought experiment in the next section.

You recently started as a new database administrator at Consolidated Messenger. Because you are unfamiliar with this SQL Server environment as a new employee, you decide to perform some analysis to determine if there are any significant problems to fix.

1. Which query or command do you use to find the most recent update for statistics on tables or indexed views?

A. DBCC SHOW_STATISTICS(‘ConsolidatedMessengerDB’, All Indexes);

B. SELECT name, STATS_DATE(object_id, stats_id) FROM sys.stats WHERE object_id IN (SELECT object_id FROM sys.objects WHERE type = ‘U’);

C. SELECT name, auto_created (object_id, stats_id) FROM sys.stats WHERE object_id IN (SELECT object_id FROM sys.objects WHERE type = ‘U’);

D. SELECT name, auto_created (object_id, stats_id) FROM sys.stats WHERE object_id IN (SELECT object_id FROM sys.objects WHERE type = ‘U’);

2. When you query the sys.dm_db_index_physical_stats DMV, you see the output shown below. What problem do you see and what step should you take to resolve it?

idx_id  ObjectName                 index_type_desc     pg_ct  AvgPageSpacePct   frag_ct  AvgFragPct
------  -------------------------  ------------------  -----  ----------------  -------  -----------------
1       PK_SalesOrders_OrderD      CLUSTERED INDEX     2037   54.9851742031134  237      34.02189781021898
2       IX_SalesOrders_CustomerID  NONCLUSTERED INDEX  685    98.4313442055844  2        0

3. You enabled Query Store on the main corporate database several weeks ago. Which DMV do you use to locate the top 5 queries with the longest duration?

A. sys.query_store_plan

B. sys.query_store_query

C. sys.query_store_query_text

D. sys.query_store_runtime_stats

E. sys.query_store_runtime_stats_interval

4. When you use Query Store to examine the query plans for a frequently executed query, you notice that one plan uses an Index Scan operator and a second plan uses an Index Seek operator. If the normal query pattern is to retrieve a small number of rows, which is the more optimal query plan, and how can you require SQL Server to use it?

5. You have been monitoring wait statistics in the sys.dm_os_wait_stats DMV for several weeks and notice that the ratio of signal_wait_time_ms to wait_time_ms has increased from 10% to 30%. What type of problem is this likely to indicate?

A. CPU pressure

B. Memory pressure

C. Network bandwidth issues

D. IO subsystem failures

6. As another method to confirm your diagnosis for the scenario in the previous question, which performance counters should you check?

A. Physical disk counters: % Disk Time, Avg. Disk sec/Read, Avg. Disk sec/Transfer, Avg. Disk sec/Write, Current Disk Queue Length.

B. Processor counters: % Privileged Time, % Processor Time, % User Time, Processor Queue Length.

C. Memory counters: Available bytes, Pages/sec, Working set.

D. SQL Server counters: Page lookups/sec, Page reads/sec, Page writes/sec, Free List Stalls/Sec, Lazy Writes/Sec, Memory Grants Outstanding, Memory Grants Pending.

7. Which monitoring tools can you use to get information about SQL Server memory usage without writing any code?

A. DMVs or SQL Profiler

B. Server-side SQL Trace or Extended Events

C. Performance Monitor or Management Data Warehouse

D. Client-side SQL Trace or Resource Governor

Thought experiment answer

This section contains the solution to the thought experiment.

1. The answer is B. The sys.stats catalog view contains both the stats_id and object_id columns necessary to use the STATS_DATE system function that returns the most recent update date for an object’s statistics. The DBCC SHOW_STATISTICS command requires you to include a specific index name as the second argument, therefore A is incorrect due to the syntax. Similarly, C and D are examples of incorrect syntax because T-SQL does not include an auto_created function.

2. The clustered index has a page count greater than 500 and fragmentation is 34%. In this case, you should rebuild the index.
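
A minimal sketch of the fix, assuming the table name dbo.SalesOrders implied by the index names in the output:

-- Rebuilding the clustered index removes fragmentation and compacts pages
ALTER INDEX PK_SalesOrders_OrderD
ON dbo.SalesOrders
REBUILD;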

3. The answer is D. You can order by the avg_duration column in descending order in the sys.query_store_runtime_stats DMV to find the queries that run the longest. The sys.query_store_plan DMV in A is incorrect because it includes only information about estimated query plans. Sys.query_store_query in B is incorrect because this DMV collects aggregated information about a query’s compilation, but does not collect execution statistics. On its own, the sys.query_store_query_text DMV in C is not a correct answer because it includes the query text without duration information, although you can join it to sys.query_store_runtime_stats to get more complete information about long-running queries. Sys.query_store_runtime_stats_interval in E stores information about the intervals of time during which statistics are captured, but does not report duration information.

4. When multiple query plans exist for a selected query, Query Store allows you to select a query plan and then force that plan for all subsequent executions of the same query. In this example, the more optimal query plan is the one that includes the Index Seek because it incurs a lower cost by selecting specific rows from the index rather than scanning the entire index to find the rows to select.

5. The answer is A. The signal wait time is the amount of time that a thread is ready to run a task but is waiting for its turn on the CPU. To confirm this diagnosis, check for an increasing number of SOS_SCHEDULER_YIELD wait types in the sys.dm_os_wait_stats DMV and check the sys.dm_os_schedulers DMV for a high value in the runnable_tasks_count column. Answer B is incorrect because memory pressure is indicated by PAGEIOLATCH waits in combination with a Page Life Expectancy performance counter value dropping over time. Answer C is incorrect because you generally use network-related wait types and performance counters to confirm a diagnosis of network bandwidth issues. Answer D is incorrect because a failure in the IO subsystem becomes evident when there are many waits containing IO in the name and average wait time begins to increase.

6. The answer is B. You can monitor CPU usage by using the Processor performance counters. In particular, if % Processor Time is between 80 and 90 percent consistently, % User Time nears 100% consistently, and the Processor Queue Length value is increasing over time, the CPU is experiencing pressure. You should consider upgrading the CPU or adding more processors. The counters listed for Answers A, C, and D are useful performance counters. However, they do not provide any confirmation of whether the observations in Question 5 are related to CPU pressure.

7. The answer is C. Both Performance Monitor and Management Data Warehouse provide graphical interfaces that you can use to monitor SQL Server’s memory usage. In Performance Monitor, you can set up the data collection of performance counters for the SQL Server Memory Manager, such as Memory Grants Outstanding or Memory Grants Pending, among others. In Management Data Warehouse, you can use the Server Activity History report to drill into Memory Usage details for the server. To access information from DMVs, you must write a T-SQL query; therefore, A is incorrect. Similarly, B is incorrect because a server-side SQL Trace requires you to write code. Answer D is incorrect because Resource Governor is not a performance monitoring tool.
