Chapter 14
Automating SQL Server administration

This chapter reviews the common forms of automating Microsoft SQL Server instance administration, which includes an exploration of the automation tools of choice for SQL Server administration: SQL Server Agent; basic “care and feeding” maintenance, including Maintenance Plans built in to SQL Server Management Studio; strategies for administering multiple SQL Servers, including Master/Target Agent servers (MSX/TSX); event forwarding and Policy-Based Management (PBM); and, finally, an introduction to PowerShell with realistic sample code. This chapter varies little for SQL Server instances on Windows or Linux except in the case of PowerShell, for which you must run these commands from a Windows server. Where there are exceptions for Linux, we point them out.

Little in this chapter applies to Microsoft Azure SQL Database because the Azure SQL platform automates many of the activities in this chapter, including performance tuning and backups. No initial configuration is needed. If you need more control, many of these features are being released through a new Azure feature, Managed Instances. As the Azure SQL Database platform as a service (PaaS) offering has matured, it has become a powerful cloud-based and complementary platform to SQL Server, neither fully replacing nor overlapping with the feature set or purpose of on-premises SQL Server instances.

Components of SQL Server automated administration

Database Mail makes it possible for SQL Server to send emails to notify DBAs of the outcome of SQL Server Agent Jobs, server performance and error alerts, or custom notifications with Transact-SQL (T-SQL) calls to the sp_send_dbmail stored procedure. The SQL Server Agent is the automation engine available in all editions of SQL Server except for Express. Let’s review these two key features. Both features are fully supported on Linux as well as Windows.

Database Mail

Database Mail uses Simple Mail Transfer Protocol (SMTP) to send email. By design, this process runs outside of SQL Server in a separate executable, DatabaseMail.exe, which is started asynchronously using Service Broker. This isolates both the process and any potential performance impact from the SQL Server instance.

Setting up Database Mail

To begin receiving automated emails, you need to configure Database Mail and then configure SQL Server Agent to use the database mail profile you create. First, in SQL Server Management Studio, use the Database Mail Configuration Wizard, which you can find in the Management folder. You’ll need to set up a profile and then an associated account.

The wizard will turn on the Database Mail feature in the Surface Area facet of the SQL Server instance. You need to do this only once. Database Mail is among a select few Surface Area facets that you should turn on for most SQL Server instances.

For more information on Surface Area facets in each SQL instance, see Chapter 4.

A Database Mail Profile can be public or private. With a private profile, only specific associated principals (users or roles in the msdb database) are given access. A public profile can be used by any principal that is a member of the built-in database role DatabaseMailUsersRole in the msdb database.

Ideally, all Database Mail profiles are private, and only those principals that need to send emails are given access. In a multitenant environment, or an environment that allows access to external developers or vendors, this is crucial, but even in internal environments it provides protection against malicious use of SQL Server to send emails.

You can configure a Database Mail Profile to use almost any SMTP configuration, including nonstandard ports and Secure Sockets Layer (SSL). You also can configure it with Windows Authentication (common for SMTP servers in the same domain), basic authentication (common for web authentication), or no authentication (common for anonymous relay in the local network, usually with an IP allow list).

You can configure Database Mail to use any SMTP server that it can reach, including web-based SMTP servers. You can even use Hotmail or another web-based email account if you’re configuring for testing purposes or have no other viable internal SMTP options. An internal SMTP server with Windows authentication using a service account is preferred. For Azure infrastructure as a service (IaaS) environments without an internal SMTP presence, SendGrid is a common and supported SMTP solution.

For more information, visit https://docs.microsoft.com/azure/sendgrid-dotnet-how-to-send-email.

After configuring your account’s SMTP settings (you’ll need to test them later), the Database Mail account has a number of options that you can adjust:

  • Account Retry Attempts. Defaults to 1, which you should probably leave as is to avoid excessive retries that could lock out an account or trigger spam detection.

  • Account Retry Delay (seconds). Defaults to 60. Again, you should leave this for the same reasons as for Account Retry Attempts.

  • Maximum File Size (Bytes). Defaults to roughly 1 MB. You should change this only if necessary.

  • Prohibited Attachment File Extensions. Specifies which file extensions cannot be sent, commonly set if third-party or multitenant development occurs on the SQL instance. This is a comma-delimited list that by default is “exe,dll,vbs,js.”

  • Database Mail Executable Minimum Lifetime (seconds). Defaults to 600 seconds (10 minutes). This counter starts after an email message is sent; if no other messages are sent in that time frame, the Database Mail executable stops. If stopped, the Database Mail process is started again any time a new email is sent. You’ll see messages indicating “Database Mail process is started” and “Database Mail process is shutting down” in the Database Mail Log when this happens.

  • Logging Level. Defaults to Extended, which includes basic start/stop and error messages that should be kept in the Database Mail log. Change to Verbose if you are troubleshooting Database Mail and need more information, or Normal to suppress informational messages and see errors only.

After you’ve set up a Database Mail profile and account, you can send a test email via SQL Server Management Studio. Right-click Database Mail, and then, on the shortcut menu that opens, click Send Test E-Mail. Or, you can send a plain-and-simple test email via T-SQL by using the following code:

EXEC msdb.dbo.sp_send_dbmail
@recipients = 'yournamehere@domain.com',
@subject = 'test';

This code does not specify a @profile_name parameter, so the command uses the default profile for the current user: the default private profile if one exists, or else the default public profile.
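If multiple profiles exist, you can name one explicitly and include a message body. The following is a minimal sketch; the profile name and recipient address are hypothetical:

EXEC msdb.dbo.sp_send_dbmail
@profile_name = 'DBA Mail Profile', --hypothetical profile name
@recipients = 'dbateam@domain.com',
@subject = 'Nightly maintenance summary',
@body = 'All maintenance tasks completed.';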

This is all that is necessary for developers and applications to send emails using Database Mail. To allow SQL Server Agent to send emails based on job outcomes and alerts, you will need to create an Operator in SQL Server Agent, and then configure SQL Server Agent’s Alert System to use a Database Mail profile. We look at SQL Server Agent and its initial configuration in depth later in this chapter.

Maintaining email history in the msdb database Finally, the email messages sent by Database Mail are retained in a table in the msdb database named dbo.sysmail_mailitems. As you might suspect, data in the msdb tables for Database Mail will grow, potentially to an unmanageable size. This can cause queries to the msdb’s Database Mail tables to run for a long time. There is no automated process in place to maintain a retention policy for these tables, though there is a stored procedure to delete older messages, as well as a lengthy reference article to guide you through creating a set of archive tables and a SQL Server Agent job to maintain data over time. You can find both of these at https://docs.microsoft.com/sql/relational-databases/database-mail/create-a-sql-server-agent-job-to-archive-database-mail-messages-and-event-logs.
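The stored procedure in question is msdb.dbo.sysmail_delete_mailitems_sp. A minimal sketch of a retention step follows, assuming a 30-day window, which you should adjust to your own retention requirements:

--Retention window of 30 days is an assumption; adjust to your policy
DECLARE @cutoff datetime = DATEADD(DAY, -30, SYSDATETIME());
EXEC msdb.dbo.sysmail_delete_mailitems_sp @sent_before = @cutoff;
--Optionally prune the Database Mail event log on the same schedule
EXEC msdb.dbo.sysmail_delete_log_sp @logged_before = @cutoff;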

Troubleshooting Database Mail

We’ve already mentioned the Database Mail log; now let’s go over the other diagnostics available for Database Mail.

Reading email logging in the msdb database If the SMTP server or the Database Mail process becomes unavailable, messages are queued in a table in the msdb database named dbo.sysmail_mailitems.

The msdb database contains metadata tables for the Database Mail feature, including msdb.dbo.sysmail_allitems, which tracks all outbound email activity. Look for items for which the sent_status doesn’t equal sent for signs of messages that weren’t successfully sent; for example:

--Find recent unsent emails
SELECT m.send_request_date, m.recipients, m.copy_recipients, m.blind_copy_recipients
, m.[subject], m.send_request_user, m.sent_status
FROM msdb.dbo.sysmail_allitems m
WHERE
-- Only show recent day(s)
m.send_request_date > dateadd(day, -3, sysdatetime())
-- Possible values are sent (successful), unsent (in process),
-- retrying (failed but retrying), failed (no longer retrying)
AND m.sent_status <> 'sent'
ORDER BY m.send_request_date DESC;

There is also a view provided in the msdb database, dbo.sysmail_unsentitems, that filters on (sent_status = 'unsent' OR sent_status = 'retrying'). There are four possible values for sent_status in sysmail_allitems: sent, unsent, retrying, and failed.

Allow anonymous relay for anonymous authentication If you’re using anonymous authentication internally with Microsoft Exchange, verify that the internal SMTP anonymous relay has a dedicated Receive connector that allows for anonymous relay. By design, a Receive connector just for anonymous relay should allow only a small list of internal hosts, your SQL Server(s) among them.

Authentication with the SMTP server is likely the problem if you observe errors in the Database Mail log after attempting to send email, such as

Cannot send mails to mail server. (Mailbox unavailable. The server response was: 5.7.1 Unable to relay…

or:

Cannot send mails to mail server. (Mailbox unavailable. The server response was: 5.7.1 Service unavailable…

Enabling Service Broker on the msdb database After restoring the msdb database or setting up Database Mail for the first time, the Service Broker feature might not be turned on for the msdb database. You can check the is_broker_enabled field in the system catalog view sys.databases; if it is 0, Service Broker is disabled, and you must remedy this. Until then, attempts to send mail will receive the following self-explanatory error message:

Msg 14650, Level 16, State 1, Procedure msdb.dbo.sp_send_dbmail, Line 73 [Batch Start
Line 18] Service Broker message delivery is not enabled in this database. Use the ALTER
DATABASE statement to enable Service Broker message delivery.
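You can confirm the setting with a quick query; for instance:

--Returns is_broker_enabled = 1 if Service Broker is enabled for msdb
SELECT name, is_broker_enabled
FROM sys.databases
WHERE name = N'msdb';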

To turn on Service Broker on the msdb database, you must stop the SQL Server Agent service and close any connections active to the msdb database prior to running the following code:

ALTER DATABASE msdb SET ENABLE_BROKER;

SQL Server Agent

The SQL Server Agent is the native automation platform for internal task automation, maintenance, log and file retention, and even backups. SQL Server Agent is similar to the Windows Task Scheduler, but it has a number of advantages for automating SQL Server tasks, including integration with SQL Server security, authentication, logging, and native T-SQL programming. SQL Server Agent can accomplish many of the same tasks as Windows Task Scheduler, including running cmdexec and PowerShell commands. Metadata, configuration, and history data for the SQL Server Agent are kept in the msdb database.

Configuring SQL Server Agent jobs

A job contains a series of steps. Each job step is of a type that allows for different actions to take place. A job can be automatically started based on a number of conditions, including a predefined schedule or schedules; in response to an alert; as a result of running the sp_start_job stored procedure in the msdb database; when SQL Server Agent starts; or even when the host computer is idle.
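For example, to start a job from T-SQL (the job name here is hypothetical):

--Starts the job and returns immediately; the job runs asynchronously
EXEC msdb.dbo.sp_start_job @job_name = N'Nightly ETL';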

You can script jobs in their entirety through SQL Server Management Studio, providing script-level recoverability, migration to other servers, and the possibility of source control for SQL Server Agent jobs. Jobs are backed up and restored via the msdb database, or scripted for backup and migration.

As you can see in the Job Properties dialog box, job steps do not necessarily need to run linearly. You can set a job to default to start at any job step, and additionally, when starting a job, you can manually change the start step for the job. Each job step reports back whether it succeeded or failed, and you can configure it to move to another step or fail based on the job step outcome. These step completion actions are defined on the Advanced page of the Job Step Properties dialog box. However, for ease of management in the future, we recommend that you create job steps that run as linearly as possible.

You can assign jobs to categories; in fact, many system-generated jobs (such as replication) are assigned to categories. You can create your own categories in SQL Server Management Studio by right-clicking the Jobs folder under SQL Server Agent, and then, on the shortcut menu, clicking Manage Categories. This should aid your efforts to report on, maintain, redeploy, and migrate jobs in the future.

Understanding job step security

Although creating SQL Server Agent jobs themselves is easy to do through SQL Server Management Studio, a critical step that many developers and administrators skip is the use of Credentials and Proxies in SQL Server Agent job steps. Using a proxy to run a job step instead of the SQL Server Agent service account or another named user is the most secure way to run jobs. Proxies make it possible for administrators to set job steps to run under a specific credential, rather than giving the SQL Server Agent service account access to everything that each job needs.

Proxies are used for all job step types but one. It is not possible to run a T-SQL script job step using a proxy. A T-SQL step will run in the security context of the owner of the job if the owner is not a sysadmin. If the owner of the job is a member of the sysadmin server role, the job will run as the SQL Server Agent service account.

For all other job step types, there is a proxy. On the Job Step Properties page, you can select the job step to “Run as” a proxy. SQL Server Agent checks for access to the subsystem each time the job step is run to verify that the security has not changed.

You can associate each proxy with one or more subsystems, though to make best use of them, you should consider not creating one proxy for all subsystems; instead, you should create many proxies for different job step security requirements and subsystems.

Without a proxy specified, jobs must be owned by a member of the sysadmin role to run job steps other than the T-SQL step type. These job steps will then run as the SQL Server Agent service account. This isn’t ideal, for two reasons:

  • The SQL Server Agent service account should not have local administrator privileges on the server. This reduces the risk to the operating system (OS) from potential misuse for SQL Server Agent jobs. Service accounts are discussed in Chapter 4 and Chapter 6.

  • Remember also that the SQL Server Agent service account must be a member of the sysadmin server role, so it might have far more privileges inside SQL Server than are necessary to safely run SQL Server Agent jobs.

Further, the owner of the job must also have permissions to use any proxy subsystem that the job’s steps use. Proxies are also important because job steps often need to access other servers, and proxies give you the ability to assign pinpoint rights to those resources. You will not be able to create or modify a job step for a subsystem if the job owner is not listed as a principal who has access to the proxy. Sysadmins automatically have access to all proxies.

Proxies map to credentials on the SQL Server; you’ll find a Credentials subfolder in the Security folder at the server level in Object Explorer in SQL Server Management Studio. Each proxy is linked to a credential in SQL Server. The credential stores the account’s user name and password, which means that if the password changes, the proxy and the SQL Server Agent job steps that depend on it will not be able to authenticate and will fail. Therefore, you should use service accounts, not individuals’ named accounts, in credentials that will be used by proxies. Credential account passwords shouldn’t be widely known, and administrators shouldn’t regularly use the accounts interactively, so that the accounts cannot accidentally become locked out.

You can create a credential for a local Windows account or a domain account. You also can create credentials for accounts on Extensible Key Management (EKM) modules, including the Azure Key Vault service. The Windows account of the credential must have the “Log on as a batch job” permission on the server. As a local administrator, you can grant this permission in the Local Security Policy dialog box.
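Creating a credential and an associated proxy is scriptable, as in the following minimal sketch. The account, credential, and proxy names are hypothetical, and the PowerShell subsystem is just one example:

USE master;
--Store the service account's credentials (use a strong, managed password)
CREATE CREDENTIAL AgentPowerShellCred
WITH IDENTITY = N'DOMAIN\svc_sqlagent_ps',
SECRET = N'<strong password here>';

--Create a proxy mapped to the credential
EXEC msdb.dbo.sp_add_proxy
@proxy_name = N'PowerShellProxy',
@credential_name = N'AgentPowerShellCred',
@enabled = 1;

--Allow the proxy to run job steps in the PowerShell subsystem
EXEC msdb.dbo.sp_grant_proxy_to_subsystem
@proxy_name = N'PowerShellProxy',
@subsystem_name = N'PowerShell';

--Grant a non-sysadmin login permission to use the proxy in its job steps
EXEC msdb.dbo.sp_grant_login_to_proxy
@login_name = N'DOMAIN\developer',
@proxy_name = N'PowerShellProxy';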

Securing permissions to interact with jobs

Your login must be a member of the sysadmin server role or one of the SQL Server Agent database roles in the msdb database to set up a SQL Server Agent job in SQL Server Management Studio.

The SQLAgentOperatorRole, SQLAgentReaderRole, and SQLAgentUserRole each have permissions to create jobs, start jobs, view job history, and view and edit job properties, though mostly only for jobs they own. For granular details on the limitations and overlap of each role, visit https://docs.microsoft.com/sql/ssms/agent/sql-server-agent-fixed-database-roles.

The SQLAgentUserRole is the least privileged of the three roles, but the other two roles are members of the SQLAgentUserRole. Typically, membership in these roles is limited to service accounts and third-party developers. Do not grant permissions directly to the SQLAgentUserRole database role, including the ability to use proxies. Instead, grant permission on proxies to individuals or service accounts.

Scheduling and monitoring jobs

A job is run based on one or more schedules assigned to it. You give schedules a name upon creation and can assign them to multiple jobs, which can be especially useful for uncommon or esoteric job schedules, or for centralized management of jobs that should run simultaneously. To view and select schedules from other jobs, in the Job Properties dialog box, on the Schedules tab, click the Pick button. You will see only the job schedules to which you have access.

There are four schedule types:

  • Start automatically when SQL Server Agent starts

  • Start whenever the CPUs become idle

  • Recurring

  • One time (for running a schedule manually, often during testing)

Jobs run asynchronously when they are started by SQL Server Agent or manually. A dialog box with a spinning progress icon appears, but you can close this, and the job will continue to run until completion. You can monitor the progress of jobs in SQL Server Management Studio by viewing the Job Activity Monitor, or you can observe the job’s current request in sys.dm_exec_requests.

You can also use T-SQL to query the status of jobs with the undocumented stored procedure master.dbo.xp_sqlagent_enum_jobs, which you can join to msdb.dbo.sysjobs, as shown here:

--jobs still running
declare @xp_sqlagent_enum_jobs table (
id int not null IDENTITY(1,1) PRIMARY KEY,
Job_ID uniqueidentifier not null,
Last_Run_Date int not null,
Last_Run_Time int not null,
Next_Run_Date int not null,
Next_Run_Time int not null,
Next_Run_Schedule_ID int not null,
Requested_To_Run int not null,
Request_Source int not null,
Request_Source_ID varchar(100)  null,
Running int not null,
Current_Step int not null,
Current_Retry_Attempt int not null,
[State] int not null);

INSERT INTO @xp_sqlagent_enum_jobs
EXEC master.dbo.xp_sqlagent_enum_jobs 1,'';

SELECT j.name
, state_desc = CASE ej.state
    WHEN 0 THEN 'not idle or suspended'
    WHEN 1 THEN 'Executing'
    WHEN 2 THEN 'Waiting for thread'
    WHEN 3 THEN 'Between retries'
    WHEN 4 THEN 'Idle'
    WHEN 5 THEN 'Suspended'
    WHEN 7 THEN 'Performing completion actions'
  END
, *
FROM msdb.dbo.sysjobs j
LEFT OUTER JOIN @xp_sqlagent_enum_jobs ej
    ON j.job_id = ej.Job_ID
ORDER BY j.name;

You can download this script, CH14_xp_sqlagent_enum_jobs_query.sql, and other accompanying scripts for this book from https://aka.ms/SQLServ2017Admin/downloads.

Configuring and viewing job history

Every time a job is run, a record is kept in the msdb database, in the msdb.dbo.sysjobhistory table. To review a job’s history, in SQL Server Management Studio, right-click the job, and then, on the shortcut menu, select Job History. History is stored for each job. You can expand a given job run to view the output for each step, including any errors.

With jobs that run frequently (for example, transaction log backup jobs), a large amount of job history will be created and stored in the msdb database. Retention is initially governed by two very low and likely unrealistic row caps: 1,000 rows of history for all jobs, and at most 100 rows of history for any one job. A job that runs once per hour would lose visibility into its history after just four days, a likely unrealistic window for troubleshooting and diagnostic information.

In SQL Server Management Studio, in Object Explorer, right-click SQL Server Agent, and then click Properties. Click the History page. As Figure 14-1 demonstrates, this page is not intuitive and can be confusing. The first option, Limit Size Of Job History Log, is a rolling job history retention setting. You might find it a good start to simply add a 0 to each value, increasing the maximum log history size from the default of 1,000 rows to 10,000 or more, and increasing the maximum job history per job from the default of 100 rows to 1,000 or more. These settings would store just more than 41 days of history for a job that runs hourly, if it were the only job on the server. You might find these numbers insufficient on a SQL Server instance with many frequently running jobs, and you should increase them until you have a comfortable retention of job run history.
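If you prefer to script the change, the same values can be set with an undocumented procedure. This is a sketch only; because the procedure is undocumented, the parameter names shown are an assumption based on common usage and are subject to change:

--Undocumented procedure; verify behavior before relying on it
EXEC msdb.dbo.sp_set_sqlagent_properties
@jobhistory_max_rows = 10000,
@jobhistory_max_rows_per_job = 1000;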

Figure 14-1 The two options to retain SQL Agent job history on the History page of the SQL Server Agent Properties dialog box. The top check box configures a rolling cap on row history; the bottom check box is a one-time manual pruning.

The job history log can be useful, but there are two limitations of which you should be aware:

  • The message text in this Job History viewer is truncated after 1,024 characters. If you need to view the full results of the output, you need to query the msdb.dbo.sysjobhistory table, where the message column is considerably larger at nvarchar(4000); see the sample query after this list.

  • The history of SQL Server Integration Services package execution in the SQL Server Agent job history is extremely limited, reduced to the facts that the package started, completed, and/or errored, without further detail. More verbose detail appears in the thorough history available in the SSISDB. To access and view that history, in SQL Server Management Studio, use the Integration Services Catalogs menu and then go to the project and packages that failed.
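Here is the sample query referenced above, a minimal sketch with a hypothetical job name. Note that msdb.dbo.agent_datetime is an undocumented helper function that combines the integer run_date and run_time columns:

--Full job step output, most recent first
SELECT j.name, h.step_id, h.step_name, h.run_status, h.message
, msdb.dbo.agent_datetime(h.run_date, h.run_time) AS run_datetime
FROM msdb.dbo.sysjobhistory h
INNER JOIN msdb.dbo.sysjobs j
    ON j.job_id = h.job_id
WHERE j.name = N'Nightly ETL' --hypothetical job name
ORDER BY h.instance_id DESC;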

You can also configure additional logging for each job step in order to capture the full step output and, more commonly, the error text. The following options are available on the Advanced page of the Job Step Properties dialog box:

  • The option to write the step output to a text file, and additionally to append output to that log file. Be careful of keeping the Append Output To Existing File option turned on long term; the output file can grow to a significant size in a short amount of time.

  • The Log To Table option, which writes to the sysjobstepslogs table in the msdb database. That table has an nvarchar(max) data type for the Log field, allowing for more output data to be captured per step if needed. Be careful of this option as well; the table can grow to a significant size in a short amount of time. You should schedule the stored procedure sp_delete_jobsteplog to remove old records from the table over time, as sketched after this list.

  • Finally, there is the Include Step Output In History option, which adds a row to the job history log to include the output of the job step. This should contain valuable information, and, unlike the other two options, job history is automatically maintained over time by SQL Server Agent.
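Here is the sp_delete_jobsteplog sketch referenced above, with a hypothetical job name and an assumed 30-day retention window:

--Prune job step log rows older than 30 days for one job
DECLARE @cutoff datetime = DATEADD(DAY, -30, GETDATE());
EXEC msdb.dbo.sp_delete_jobsteplog
@job_name = N'Nightly ETL',
@older_than = @cutoff;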

Administering SQL Server Agent operators

Operators are aliases in SQL Server Agent that list contact information, typically an email address. (The Net Send and Pager Email Contact options are deprecated, and Net Send no longer appears in SQL Server Management Studio.)

Operators usually should not point at individuals (even though you can provide a semicolon-delimited list of email addresses), but instead at a distribution group (even if that group initially contains only one person). In most situations, you will create an operator to notify SQL Server first responders in your environment. You should maintain your environment’s list of DBA personnel in distribution lists, for example, not inside the Operator lists of each SQL Server instance.

To set up an operator, in SQL Server Management Studio, in Object Explorer, expand the SQL Server Agent folder, right-click Operators, and then, on the shortcut menu, click New Operator.
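You can also create operators in T-SQL; the operator name and distribution list address below are hypothetical:

EXEC msdb.dbo.sp_add_operator
@name = N'SQL Server DBA Team',
@enabled = 1,
@email_address = N'dbateam@domain.com';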

Alerts

Alerts define conditions that, when met, prompt email notifications or start SQL Server Agent jobs in response. Alerts are versatile and can look for SQL Server events in the error log, performance conditions that you would view in the Performance Monitor application, or even Windows Management Instrumentation (WMI) queries.

As recommended in Chapter 4, you should set up alerts for high-severity SQL Server errors. However, do not overwhelm your personal inbox with alerts, and do not set an inbox rule to Mark As Read and file away emails from SQL Server. By careful selection of the alerts that generate emails, you can assure yourself and your team that emails from SQL Server will be actionable concerns that rarely arrive.

With a large number of SQL Server instances under your purview, email alerts for even severe issues can become too numerous. In that case, we recommend gathering and queuing actionable errors in a system that provides for aggregation, dashboarding, and team assignment. There are a number of third-party “log collection” software applications that perform the task of log aggregation and centralized alerting.

You might also configure the Delay Between Responses setting in each alert to prevent an unchecked flood of emails arriving from a repeating error. Consider a delay of up to five minutes between responses, as your environment deems appropriate.

You can specify only a single error message or severity per alert, so we recommend scripting the mass creation of a standard batch of alerts, to be created consistently on all your SQL Server instances. See the script CH14_add_standard_error_alerts.sql for an example that includes the alerts we examine in just a moment. You can download this script and others from https://aka.ms/SQLServ2017Admin/downloads.

Next, we review the three types of alerts that you can set up: SQL Server events, performance conditions, and WMI events.

SQL Server event You should set up alerts on actual error messages that are important enough for you to receive emails. SQL Server generates a lot of informational-only events, such as successful backup messages, for which you would not want to receive messages.

You can set up alerts based on the actual error number (samples follow shortly) or any error of a certain severity (1 to 25). You can optionally filter the alert to a single database, or for a specific message text.

It’s a good idea to set up alerts for severity 16 through 19 and 21 through 25, because these tend to be actionable errors. Severities 21 and above are severe and unrecoverable errors.

The most common Severity 20 errors are nuisance authentication errors that are transient (e.g., the user tried, experienced an error, tried again, and succeeded). An alert for Severity 20 might send out a large number of unactionable alerts to the SQL Server DBA team. You will still see Severity 20 issues in the SQL Server Error Log and should make note of them as they appear, especially if they appear in large numbers, given that this can be a sign of greater authentication or domain issues, or malicious intrusion attempts. The goal of alerts is to send out actionable errors or performance conditions worth investigating.

You might also want to configure alerts to send out error messages for these SQL Server error numbers that are not already covered by the severity 16 through 19 and 21 through 25 alerts. The following errors are rare but immediately actionable (see the sample alert script after this list):

  • 825 (Severity 10). The dreaded “read-retry” error, raised when a read from a file succeeded only after failing x number of times. This is often a harbinger of potential database integrity failure and should prompt immediate action.

  • 854, 855, 856 (Severity 10). These indicate uncorrectable hardware memory corruption detected via the operating system’s memory diagnostics, a potentially immediate stability threat to the system.

  • 3624 (Severity 20). This is an internal SQL Server error called an “assertion failure” that is typically a software bug, though it could indicate internal data corruption. This is oftentimes addressed via a SQL Server service pack or patch.
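Here is the sample alert script referenced above: a minimal sketch of one severity-based alert and one error-number alert, each notifying an operator by email. The operator name is hypothetical, and the downloadable CH14_add_standard_error_alerts.sql script covers the full set:

EXEC msdb.dbo.sp_add_alert
@name = N'Severity 21 - Fatal Error',
@severity = 21,
@delay_between_responses = 300, --seconds
@include_event_description_in = 1; --include the error text in the email

EXEC msdb.dbo.sp_add_notification
@alert_name = N'Severity 21 - Fatal Error',
@operator_name = N'SQL Server DBA Team', --hypothetical operator
@notification_method = 1; --1 = email

EXEC msdb.dbo.sp_add_alert
@name = N'Error 825 - Read-Retry',
@message_id = 825,
@severity = 0, --must be 0 when @message_id is specified
@delay_between_responses = 300,
@include_event_description_in = 1;

EXEC msdb.dbo.sp_add_notification
@alert_name = N'Error 825 - Read-Retry',
@operator_name = N'SQL Server DBA Team',
@notification_method = 1;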

Performance conditions You can set up performance condition alerts for any performance counter in the SQLServer category, the same set of counters you would see in the Windows Performance Monitor application with the prefix “SQLServer” or “MSSQL$instancename”.

For example, if you want to receive an email when the SQL Server’s Page Life Expectancy (PLE) drops below a certain value, you would select PLE in the same way that you would find it in Performance Monitor. Choose the object Buffer Manager, the counter Page Life Expectancy, the comparison operator Falls Below, and a comparison value. In the case of PLE, the value is measured in seconds.
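The same alert can also be scripted. This sketch assumes a default instance (the “SQLServer” counter prefix; named instances use “MSSQL$InstanceName”) and a threshold of 300 seconds, both assumptions you should adjust for your environment:

EXEC msdb.dbo.sp_add_alert
@name = N'PLE below 300 seconds',
@performance_condition = N'SQLServer:Buffer Manager|Page life expectancy||<|300',
@delay_between_responses = 300;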

WMI event alert conditions The third option for SQL Server Agent alerts allows for custom WMI queries to be run. WMI queries can gather and prompt alerts on a variety of Data Definition Language (DDL) events in SQL Server, and they are written in WMI Query Language (WQL), which follows a basic syntax similar to T-SQL queries. The FROM clause of the WMI query references a WMI event class, not an object in a SQL Server database. You can reference the WMI provider classes and properties at https://docs.microsoft.com/sql/relational-databases/wmi-provider-server-events/wmi-provider-for-server-events-classes-and-properties.

This type of alert is not as straightforward. In general, you might find better results, more flexibility, and less complexity by using extended events, SQL Server Agent jobs, SQL Server Audit, and/or third-party monitoring tools than by using WMI alert queries.

Setting up an email recipient for a WMI event alert does not send over any useful or actionable information in the email aside from the alert’s name. This does little more than let you know that a WMI event occurred (observed asynchronously, so there might be some delay).

To view the information regarding the event—for example, the T-SQL command associated with the event—you must turn on Token Replacement in the SQL Server Agent Properties dialog box. On the Alert System page, at the bottom, select the Token Replacement check box. This allows for the tokenization (replacement at runtime) of WMI commands in a T-SQL job step. For more on the tokens that you can use in a T-SQL job step, reference https://docs.microsoft.com/sql/ssms/agent/use-tokens-in-job-steps.

We have prepared a sample WMI event alert to capture the CREATE DATABASE DDL event. For a simple but lengthy working example of the creation of a sample table, SQL Server Agent job, and alert, see the script CH14_WMI_Alert_data_capture.sql, which you can download along with others for this book from https://aka.ms/SQLServ2017Admin/downloads.

SQL Server Agent job considerations when using availability groups

If you are running SQL Server Agent jobs in an availability group environment, you will still need to configure your maintenance plans on each SQL Server instance, some of which will cover databases not included in availability groups and the system databases master and msdb, for example. You should ensure that your maintenance plans, regardless of platform, are consistently updated on all replicas and also are aware of their local replica role, so that maintenance plans do not need to be turned on, turned off, or reconfigured when a failover occurs.

Your SQL Server Agent jobs must exist on all replicas of the availability group and be aware of whether the script is running on the primary replica for a database. You will need multiple versions of any custom maintenance task in order to separate scripts for databases in each availability group and one more for databases not in an availability group (including any system databases that you intend to maintain with custom scripts).

To avoid having SQL Server Agent jobs fail with errors when their local replica is not the primary replica for a database, you can add a T-SQL step to the start of the job to detect the role and raise a failure. The goal of the first step is to prevent subsequent job steps from running and failing against secondary replica databases, which will not be writeable. Name the first step “Am I Primary?” or something similar, and then add the following script:

--add as step 1 on every AAG-aware job
IF NOT EXISTS (
SELECT @@SERVERNAME, *
   FROM sys.dm_hadr_availability_replica_states  rs
   inner join sys.availability_databases_cluster dc
   on rs.group_id = dc.group_id
   WHERE is_local = 1
   and role_desc = 'PRIMARY'
--Any databases in the same Availability Group
   and dc.database_name in (N'databasename1', N'databasename2'))
  BEGIN
      print 'local SQL instance is not primary, skipping';
      throw 50000, 'Do not continue', 1;
  END;

This code causes Step 1 to fail when it is not run on a primary replica for the specified database(s). In the Advanced settings of the “Am I Primary?” job step, the On Success Action should be Go To The Next Step, as usual, but the On Failure Action should be Quit The Job Reporting Success, which will not register as a job failure. Instead of a green check mark or a red “X” next to the job, SQL Server Job History displays a yellow triangle.

This script is available in the CH14_Am_I_Primary.sql script in the accompanying downloads for this book, which are available at https://aka.ms/SQLServ2017Admin/downloads.

For more about availability groups, see Chapter 12.

Maintaining SQL Server

In this section, we review what you should be doing as a day-to-day database administrator of a SQL Server instance, how to accomplish these tasks, and the built-in tools that SQL Server provides. You can accomplish within SQL Server all of the major maintenance objectives that we cover without the use of third-party tools, or even well-respected free options. SQL Server editions above Express edition (because Express has no Agent) ship fully featured and ready for you to configure to perform basic maintenance.

For the most part, the tasks in this section are built in to Azure SQL Database. In some cases, the maintenance tasks are completely automated, especially in the case of disaster recovery, or partially automated, in the case of index maintenance. We’ll be focusing on SQL Server instances in this section because the fast evolution of Azure SQL Database reduces the hands-on maintenance required by DBAs on the PaaS platform.

For more information on Azure SQL databases and Managed Instances, see Chapter 5.

Basic “care and feeding” of SQL Server

You can carry out the regular proactive maintenance of a SQL Server instance by applying one or more of the following strategies:

  • SQL Server Maintenance Plans, including the option to use a Maintenance Plan Wizard

  • Custom scripting using DMVs and T-SQL or PowerShell commands

  • Third-party tools

Each has advantages and disadvantages, and each makes different compromises between ease of setup and customizability.

You can run these strategies via SQL Server Agent jobs, except for some third-party software packages that would utilize an external scheduling apparatus. You can configure each to provide customized activity logging, retention, and the ability to view history in different ways.

Regardless of the strategy or strategies adopted, with built-in or third-party tools, you should accomplish the following as a bare minimum on a regular schedule, in no particular order:

  1. Backup system and user databases.

    1. Full backups for all databases.

    2. Transaction log backups for databases not in SIMPLE recovery mode.

    3. To save space, differential backups between less frequent full backups.

  2. Retention policy for database backups, if backups are stored locally, by deleting backups after a business-approved amount of time. In the case of tape backups for which retention isn’t necessary, a rotation policy, instead.

  3. Retention policy for maintenance plan log files, backup and restore history records in msdb, old database mail row entries.

    1. SQL Server Error Log files are already maintained by SQL Server to a configurable number of log files. (The default is 6, which should likely be increased.)

    2. SQL Server Agent job history is also maintained automatically by settings in the SQL Server Agent properties.

    3. Backup history is kept in the msdb database and should be pruned over time.

  4. Reduce fragmentation in SQL Server indexes.

    1. There are different strategies to reduce fragmentation in clustered and nonclustered indexes.

    2. Columnstore indexes require maintenance via REORGANIZE steps as well, and their fragmentation is measured differently.

  5. Update statistics.

    1. This should accompany INDEX REORGANIZE steps, but not INDEX REBUILD steps. Remember that the INDEX REBUILD command also updates the index statistics.

  6. Check database integrity via DBCC CHECKDB.

  7. Getting off-premises copies of any backups stored physically local to the SQL Server.

    1. Storage-Area Network (SAN) replication or another file-level backup system can accomplish this, as can integration with Azure Storage for easy cloud-based backup.

    2. Remember that your Data Loss Tolerance isn’t defined by how often you take backups, but by how often those backups get off-premises!

What will vary is how often these tasks need to run on each database and even what type of backups you need. This section of the chapter walks you through creating T-SQL scripts in SQL Server Agent, using the Maintenance Plan designer and the Maintenance Plan Wizard.

Even though we won’t make any recommendations regarding third-party tools, we should note that many third-party tools do not provide an end-to-end solution for item 7 in the previous list, which typically involves coordination with the storage administrators and/or cloud hosting such as Azure Storage. SQL Server Managed Backup to Azure is a full-featured SQL Server backup solution (though not free, it is relatively cheap). It is the Microsoft-recommended backup solution for SQL Server instances running on Azure virtual machines (VMs).

If you want to maintain direct control of item 7, backing up off-premises, you can use BACKUP DATABASE ... TO URL statements to write backups directly to Azure Storage, often to complement local backup storage. (Scripting that yourself is free, of course, but Azure Storage is not.) To meet your Recovery Time Objective (RTO) goals, or in the event of external network failure, you should always maintain local backups within your network for a time.
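A minimal sketch of such a statement follows, assuming a shared access signature credential for the container URL already exists; the storage account, container, and database names are hypothetical:

BACKUP DATABASE WideWorldImporters
TO URL = N'https://youraccount.blob.core.windows.net/backups/WideWorldImporters_20170901.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;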

For more details about backups, schedules, T-SQL command parameters, and backup strategy, refer to Chapter 11.

Using SQL Server Maintenance Plans

SQL Server Maintenance Plans are a free, low-complexity, visually built option for implementing SQL Server maintenance and disaster recovery. The drag-and-drop tasks built in to the Maintenance Plan design surface have some distinct shortcomings that we’ll review, although they’re much more capable starting with SQL Server 2016, especially for index maintenance, which can now be aware of fragmentation levels, different maintenance strategies, and more. You will see differences when creating Maintenance Plans in SQL Server Management Studio from version to version of SQL Server.

The Maintenance Plan Wizard is a step-by-step tour through most of the steps necessary for basic SQL Server maintenance. The wizard guides you through an easy process of creating a Maintenance Plan with most of the basics, prepopulating objects for you in the design surface, which you’ll then be able to review in SQL Server Management Studio. To begin instead with a fresh slate, click New Maintenance Plan and build the plan yourself on the design surface, with which we recommend you become familiar. If you have any experience with SQL Server Integration Services, the design surface interface will feel very familiar. Behind the scenes, Maintenance Plans actually create and store SQL Server Integration Services packages internally.

The tasks in a Maintenance Plan and the choices in the Maintenance Plan Wizard translate directly to the options you’re already familiar with in SQL Server Agent jobs or the options for backup and index maintenance T-SQL commands. For example, the Run As option on the first screen of the wizard, or in the Subplan properties of the design surface, provides a list of proxies just as a SQL Server Agent job step does. Instead of using the SQL Server Agent service account, ideally you should choose a proxy that has access to the SQL Server Integration Services Package Execution subsystem.

Also on the first page of the Maintenance Plan Wizard, you have the option to run each task on a separate schedule or the entire plan on a single schedule. We recommend that you choose the Separate Schedules For Each Task option here, or if you’re designing the maintenance plan in the design surface, break activities into multiple subplans, each with its own schedule. This is because some tasks, such as index maintenance or database integrity checks, can take a long time to run, and you do not want your backups to run serially behind them, delayed and occurring inconsistently. To work the maintenance plan into your nonbusiness hours or after-hours windows, you will want more scheduling flexibility than a single start time for all tasks to run serially.

On the Select Maintenance Tasks page of the Maintenance Plan Wizard, there is a list of all of the base built-in maintenance tasks. In the graphical design surface, you have one additional tool, the Execute T-SQL Statement Task, which you can use to run your custom maintenance scripting or other administrative scripts. We review that later in this section.

The following subsections present the available tasks that you can select from, along with descriptions of what they do.

Check Database Integrity task

The Check Database Integrity task runs DBCC CHECKDB to check for database corruption, a necessary task that you should run periodically. You should run DBCC CHECKDB at least as often as your backup retention plan. For example, if you keep local backups around for one month, you should make sure that you perform a successful DBCC CHECKDB no less than once per month. More often, if possible, is recommended. On large databases, this task could take hours and can block other user queries.

For more about data corruption and checking database integrity, see Chapter 13.

The options available in the Maintenance Plan task match the common parameters you would use in the DBCC CHECKDB command. The Physical Only check box uses the PHYSICAL_ONLY parameter of DBCC CHECKDB, which limits the potential disruptive impact of DBCC CHECKDB and is less comprehensive as a result. However, using PHYSICAL_ONLY takes significantly less time to complete while still detecting the signs of common storage hardware failure.
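A sketch of the corresponding T-SQL, using a hypothetical database name:

--Lighter-weight integrity check; run a full CHECKDB when the window allows
DBCC CHECKDB (N'WideWorldImporters') WITH PHYSICAL_ONLY, NO_INFOMSGS;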

Shrink Database task

There is no sound reason to ever perform a Shrink Database task on a schedule. A Shrink Database step removes free space from a file and returns it to the OS, causing the file to experience an autogrowth event the next time data is written to it. Do not ever include the Shrink Database task in a Maintenance Plan.

Reorganize Index task

A reorganize task runs an ALTER INDEX ... REORGANIZE statement, which reduces index fragmentation but does not update statistics. On large databases, this could take hours, but it will have less overhead, cause much less query disruption, and finish faster than a Rebuild Index task.

Because Reorganize Index is an online operation, it will not take long-term table locks and might block other user queries for only a very short period of time. Online index operations will consume server resources and generate large amounts of logged transactions.

Inside OUT

I want to maintain indexes only if they are above a certain percentage of fragmentation. Can I do that with Maintenance Plans?

You can, with improvements to the SQL Server Maintenance Plans first released with SQL Server 2016. Older versions of maintenance plans probably drew your ire with an “everything” approach to reorganizing or rebuilding indexes in a database.

You will now see options to intelligently limit index maintenance, starting with the options to select between Fast (LIMITED), Sampled, and Detailed scan modes. These correspond to the parameters provided to the structural statistics dynamic management function (DMF), sys.dm_db_index_physical_stats.

You can configure the Reorganize and Rebuild tasks to maintain only indexes filtered by percentage of fragmentation and page count, both from sys.dm_db_index_physical_stats, and/or actual index usage (based on sys.dm_db_index_usage_stats). This is a significant improvement in the tooling for Maintenance Plans, which before these improvements were mostly unusable on larger databases. The fragmentation threshold is 15% by default in the Reorganize task and 30% in the Rebuild task, as illustrated in Figure 14-2.

Figure 14-2 The options to maintain indexes available in the Maintenance Plan Index Reorganize and Index Rebuild tasks.

Other options added to the Reorganize and Rebuild tasks match the options for the ALTER INDEX ... REORGANIZE and ALTER INDEX ... REBUILD T-SQL commands, which we covered in detail in Chapter 13.

Rebuild Index task

More thorough than a Reorganize step at removing index fragmentation, this task runs an ALTER INDEX ... REBUILD statement and does update index statistics. The options available in the Rebuild Index dialog box correspond to the options for the ALTER INDEX ... REBUILD syntax.

You can perform a Rebuild step as an online operation, in which case it is not likely to block other user queries. Not all indexes and data types support an online rebuild, so the Maintenance Plan dialog box for the Rebuild Index task will ask you what you want to happen in those cases. The ONLINE option for index rebuilds is an Enterprise edition–only feature.

Rebuilding indexes without the ONLINE option has the potential to block other user queries and will consume server resources. On large databases, a rebuild could take hours to finish, and even longer with the ONLINE option.
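A sketch of the equivalent T-SQL, using a hypothetical table name:

--Online rebuild (Enterprise edition) of all indexes on one table
ALTER INDEX ALL ON dbo.Sales REBUILD WITH (ONLINE = ON);

--Reorganize instead: always an online operation, available in all editions
ALTER INDEX ALL ON dbo.Sales REORGANIZE;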

Update Statistics task

The Update Statistics task runs an UPDATE STATISTICS statement, rebuilding index statistics objects, which we reviewed in Chapter 10. Do not follow an Index Rebuild task with an Update Statistics task for the same objects; this is redundant work. Updating statistics is an online operation, so it will not block other user queries, but it will consume server resources. This task should finish faster than either a REORGANIZE or REBUILD step.
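A sketch of the corresponding T-SQL; the table name is hypothetical, and FULLSCAN is one choice among several sampling options:

--Update all statistics on one table, reading every row
UPDATE STATISTICS dbo.Sales WITH FULLSCAN;

--Or update statistics databasewide using default sampling
EXEC sys.sp_updatestats;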

Inside OUT

Is an Update Statistics task necessary in a Maintenance Plan if auto_update_stats is turned on for the database?

Yes, you should still update statistics with regularity, even if not via a Maintenance Plan. When auto_update_stats is turned on, statistics are updated periodically based on usage. Statistics are considered out of date by the query optimizer when a ratio of data modifications to rows in the table has been reached. The query optimizer checks for and updates the out-of-date statistic before running a query plan. Therefore, auto_update_stats has some small runtime overhead, though the performance benefit of updated statistics usually outweighs this cost. We also recommend turning on the auto_update_stats_async option, which helps minimize this runtime overhead by updating the statistics after the query is run, instead of before.

We recommend that you turn on the auto_update_stats and auto_update_stats_async options, as discussed in Chapters 4 and 10, on all user databases, unless the application specifically requests that it be turned off, such as is the case with Microsoft SharePoint.

You can also manually identify the date on which any statistics object was last updated by using the sys.dm_db_stats_properties DMF. In your databases, you might see that there are statistics that are quite old. This means that they might not have been accessed in a way that prompts the auto_update_stats update and have not had an INDEX REBUILD, which would also update the statistics.
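For example, here is a sketch of a query to list statistics by age; the filter on user tables is an assumption to adjust for your environment:

--List user-table statistics, oldest first
SELECT o.name AS table_name, s.name AS stats_name
, sp.last_updated, sp.rows, sp.modification_counter
FROM sys.stats s
INNER JOIN sys.objects o
    ON o.object_id = s.object_id
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
WHERE o.is_ms_shipped = 0
ORDER BY sp.last_updated ASC;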

Updating both column and index statistics for a database regularly, if your maintenance window time allows, will definitely not hurt, and will certainly help. By updating statistics regularly, you can reduce the number of statistics updates that happen automatically during transactions in regular business hours.

History Cleanup task

This task deletes older rows in msdb tables that contain database backup and restore history, prunes the SQL Server Agent log file, and also removes older Maintenance Plan log records. These are accomplished by running three stored procedures in the msdb database: dbo.sp_delete_backuphistory, dbo.sp_purge_jobhistory, and dbo.sp_maintplan_delete_log, respectively (sketched below). You should run this task to prevent excessively old data from being retained, according to your environmental data retention requirements. This will save space and prevent large table sizes from degrading the performance of maintenance tasks. This step should finish quickly and should not disrupt user queries. This step does not delete backup files or Maintenance Plan log files; that is the job of the Maintenance Cleanup task.
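A minimal sketch of calling the same three procedures directly, assuming a 30-day retention window:

DECLARE @cutoff datetime = DATEADD(DAY, -30, GETDATE());
--Backup and restore history
EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @cutoff;
--SQL Server Agent job history
EXEC msdb.dbo.sp_purge_jobhistory @oldest_date = @cutoff;
--Maintenance Plan log records
EXEC msdb.dbo.sp_maintplan_delete_log @oldest_time = @cutoff;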

Maintenance Cleanup task

The Maintenance Cleanup task deletes files from folders and is commonly used to delete old database backup files, using the system stored procedure master.dbo.xp_delete_file. You also can use it to clean up the .txt files that Maintenance Plans write their history to in the SQL Server instance’s Log folder. You can configure the task to look for and delete files by extension and by folder, and optionally include subdirectories. The date filter uses the Date Modified file attribute (not the Date Created attribute). Combined with the option to create a subdirectory for each database, this means that you can create and remove backup files in a folder structure for each database.

Note that in the case of Maintenance Plans, by default logs are kept in a table, msdb.dbo.sysmaintplan_log, as well as in text files in the SQL Server instance default Log folder. Deleting one does not delete the other. You should maintain a retention policy on both sources of Maintenance Plan run history.

The Maintenance Cleanup task deletes files only from folders, and thus isn’t an option to enforce a retention policy for backups to URL in Azure Storage. And currently, Maintenance Plans in SQL Server Management Studio are not supported at all for SQL Server on Linux, although this capability might be added in the near future.

Inside OUT

How do I delete old backups in Azure Blob storage?

The stored procedure sp_delete_backup specifically exists to clean up file-snapshot based backups, which are continuous chains of backups starting from a single FULL backup.

To clean up old backups taken to Blob storage using the BACKUP ... TO URL syntax, you shouldn’t try to delete the base blob of the backup using Microsoft Azure Storage Explorer or the Azure Storage viewer in SQL Server Management Studio, for example. Aside from the files, there are pointers to the file-snapshots in a file-snapshot backup set that must be deleted as well.

Note also that SQL Server Managed Backup to Azure has its own retention plan, which is currently limited to a maximum of 30 days.

Execute SQL Server Agent Job task

Using this task, you can orchestrate the asynchronous start of another SQL Server Agent job during the Maintenance Plan, perhaps to start another middle-of-the-night process as soon as possible after maintenance is complete.

Back Up Database (Full, Diff, Transaction Log) task

This is the most obvious and common of all Maintenance Plan tasks; with it, you can take a backup of any type of the specified databases. The options in the Maintenance Plan dialog box for the backup are similar to the SQL Server Management Studio Database Backup dialog box, plus some minor extra options, including an option to ignore replica priority in an availability group database.

The Back Up Database task affords you multiple strategies for backups, including backing up to disk or to Azure Storage via URL as well as to legacy tape backup support.

Backing up to URL writes files directly to Azure Storage natively, without the need to install any software or network connections. This was a fairly limited feature prior to SQL Server 2016 but now can be accomplished via a shared access signature credential for secure access to Azure Blob storage. A step-by-step walkthrough is available at https://docs.microsoft.com/sql/relational-databases/tutorial-use-azure-blob-storage-service-with-sql-server-2016.

You can configure disk backups to append multiple database backups multiple times to the same file or files, or create a backup file and a subdirectory for each database per backup. For backups to disk, we recommend that each database have a subdirectory in the folder location you select, to separate the backup files of databases with potentially different retention plans or recovery strategies. The Maintenance Plan backup will automatically create subdirectories for new databases, and when performing backups, append a time stamp and a unique string to backup names in the following format: databasename_backup_yyyy_mm_dd_hhmmss_uniquenumber.bak|dif|trn.

We recommend that you select the options for Verify Backup Integrity, CHECKSUM, and Compress Backup for all backup types, for all databases. This is supported even for backups to URL. Keep in mind that the Verify Backup step performs a RESTORE VERIFYONLY statement to examine the backup file and verify that it is valid, complete, and should be restorable. “Should be” is key because the only way to truly test whether the backup was valid is to test a restore. The RESTORE VERIFYONLY does not actually restore the database backup, but it could give you an early heads-up on a potential drive or backup issue, and is always recommended when time permits. The verify step could significantly increase the duration of the backup, scaling with the size of the database backup, but it is time well spent in your regular maintenance window.
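You can run the same verification manually; a sketch, with a hypothetical backup file path:

--Verify the backup is readable and complete, without restoring it
RESTORE VERIFYONLY
FROM DISK = N'X:\Backup\WideWorldImporters\WideWorldImporters_backup_2017_09_01_000000_1.bak'
WITH CHECKSUM;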

Execute T-SQL Statement task (not available in the wizard)

This task can run T-SQL statements against any SQL Server connection, with a configurable time-out. A simple text box accepts T-SQL statements; because of its simplicity, we recommend that instead of pasting lengthy commands, you reference a stored procedure. A stored procedure is easier to maintain, and you can keep it in source control by developing it in other tools.

Maintenance Plan report options

By default, Maintenance Plans create a report in two places to record the history for each time a subplan runs. Logs are kept in a table, msdb.dbo.sysmaintplan_log, as well as in .txt files in the SQL Server instance default Log folder. You can also choose the Email Report option in the Maintenance Plan Wizard, which adds a Notify Operator Task.

Covering databases with the Maintenance Plan

After you select the maintenance tasks that you want the wizard to configure, you’ll be able to select the databases you want to run the tasks against. The options are: all databases, system databases, all user databases, or you can specify a list of databases. You also have the option to ignore databases for which the status is not online, which we also recommend.

To isolate the configuration, maintenance, and logging from one another, it is common to create two Maintenance Plans: one for system databases (master, model, and msdb) and one for all user databases. The system plan handles only system database backups; the user plan handles everything else. This ensures that if there are any issues with ongoing changes to the user Maintenance Plan, the crucial system database backups are unaffected.

Inside OUT

Will a SQL Server Maintenance Plan automatically detect a new database created on my SQL Server?

Yes, a Maintenance Plan can accomplish this if you configure it correctly, which could be invaluable to you when applications are configured to procedurally create new databases, such as SharePoint.

You should try to configure Maintenance Plan tasks to use either the All Databases or All User Databases (assuming that you have another task that covers system databases). When you select either of these, new databases are automatically included in the maintenance plan. This makes your job as an administrator easier. If you choose a specific fixed list of databases using the These Databases option and list, new databases will be ignored, and you will need to remember to add the databases to the Maintenance Plan.

If you have databases that are no longer in use that you no longer want to cover with Maintenance Plans, consider taking the database offline, and then using the option in many Maintenance Plan tasks to ignore databases where the status is not online.

There is one caveat regarding transaction log backup tasks using either of the two All options for databases. After you create a new database in the FULL recovery model, the Maintenance Plan task that takes transaction log backups will attempt a transaction log backup and will fail. This is because a database must first have a full backup taken before a transaction log backup can succeed. When you create a new database, take a manual full backup, or your Maintenance Plan will report errors until a full backup is taken of that database. Other databases' transaction log backups will continue to run as usual, even if one or more databases fail.
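The manual full backup can be as simple as the following sketch; the database name and backup path are placeholders:

--Take the first full backup of a new database so that
--subsequent transaction log backup tasks can succeed
BACKUP DATABASE NewDatabase
TO DISK = N'F:\Backup\NewDatabase\NewDatabase_initial.bak'
WITH CHECKSUM, COMPRESSION;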

Building Maintenance Plans by using the design surface in SQL Server Management Studio

Due to the nature of wizards, there are some inherent issues with configuring a robust maintenance solution that covers all the needs of your databases. The Maintenance Plan design surface in SQL Server Management Studio gives you the ability to set up your task run order and precedence constraints as well as to maintain multiple subplan schedules within a maintenance plan.

The maintenance plan designer has three main sections: the subplans list, the design surface, and the Maintenance Plan tasks toolbox. When you open a maintenance plan, the first two will be obvious, but the toolbox might not be docked. To display the toolbox and pin it to the side of SQL Server Management Studio, press Ctrl+Alt+X, or, on the View menu, click Toolbox. Figure 14-3 displays a sample Maintenance Plan.

Image

Figure 14-3 A sample Maintenance Plan for user databases has been created, with multiple subplans, each running on a different schedule.

When you save the maintenance plan, each of its subplans becomes a SQL Server Agent job named with the convention maintenance plan name.subplan name.

At the top of the Maintenance Plan window is where the subplans are listed; initially, there will be just one plan called Subplan_1. You should break down the tasks that will be accomplished in the subplan by the schedules they will follow, and name them accordingly. You can add subplans and manage their schedules in the Maintenance Plan window. Note that you should not make changes to the SQL Server Agent jobs after they’ve been created—the next time you edit the Maintenance Plan, your changes might be overwritten.

The large gray area beneath the subplan list is the design surface for Maintenance Plans, a graphical, drag-and-drop interface. To add tasks to a subplan, simply drag a task from the toolbox to the design surface. To serialize the running of multiple tasks, click one, and then click the green arrow beneath the box, dragging it to the task that should follow. You can create a long string of sequential activities or a wide set of parallel-running activities, similar to designing SQL Server Integration Services packages. In fact, Maintenance Plans are SQL Server Integration Services packages.

When not to use SQL Server Maintenance Plans

Personal preference is, of course, a perfectly valid reason not to use the built-in Maintenance Plans. You can write your own, as long as your library of scripts or third-party tools accomplishes the necessary maintenance tasks with consistency, configurability, and good logging for review.

For SQL Server instances with manageable maintenance windows, SQL Server Maintenance Plans will meet your needs if the schedules are set up appropriately. You can create a variety of Maintenance Plans to cover databases with various levels of importance or availability based on business requirements (with the caveat of not being able to detect new databases [see the previous Inside OUT box]). For very large databases or databases with 24x7 availability requirements, more granularity for maintenance operations will likely be necessary.

Not every business has the luxury of having all night and/or all weekend to perform maintenance outside of business hours. When you become familiar with the T-SQL commands and their various options, you can be creative to overcome tight scheduling, crowded maintenance windows, very large databases, or other Maintenance Plan complications.

After reviewing the capabilities of the Back Up Database task and the Rebuild Index task, you should find Maintenance Plans full-featured and capable of handling the bulk of maintenance, even on larger databases with tight schedules. Ultimately, the success of Maintenance Plans or custom scripts depends on your understanding of the various options available for the seven core maintenance tasks listed earlier in this chapter.

Backups on secondary replicas in availability groups

One of the many useful features in Always On availability groups is the ability to utilize readable secondary replicas for remote backups. Performing backups on a secondary replica, including a geographically separated replica, introduces complexity but has a big advantage. Backups do not take locks and will never block a user query, but they do incur significant CPU, memory, and I/O overhead. Backups can slow database response, so on servers with large databases and/or busy 24x7 utilization, it might be necessary to find an alternative backup strategy. Taking database backups on secondary replicas is one such alternative and moves the resource expense of backups off the primary replica.

Understanding backup priority values

In SQL Server Management Studio, in the Availability Group Properties dialog box, review the Backup Preferences page. It’s important to understand the priority values and how they interact with various backup tasks.

The default option is Prefer Secondary, which specifies that backups occur on a secondary replica first or, if none is available, on the primary replica. You can then provide priority values (0 to 100, where 100 is highest) to decide which of multiple secondary replicas should be the preferred backup location. The values apply to both full and transaction log backups.

Other self-explanatory options include Primary, Secondary Only, and Any Replica, each of which uses the priority values to decide which replica is preferred for the backups. When failing over to another replica, you will need to review the backup priority values; your planned failover scripts should include their reassignment.
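For example, a planned failover script might reassign priorities with ALTER AVAILABILITY GROUP, as sketched here; the availability group name, replica name, and priority value are placeholders:

--Run on the primary replica: make a replica the preferred backup location
ALTER AVAILABILITY GROUP [WWI2017-AG]
MODIFY REPLICA ON N'SQLSERVER-1\SQL2K17'
WITH (BACKUP_PRIORITY = 100);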

It’s also important to note that this Backup Preferences page affects only backup systems or scripts that are aware of the backup preferences. For example, in SQL Server Management Studio, in Object Explorer, right-click a database, and then, on the shortcut menu, click Tasks, and then Backup. The dialog box that opens takes a backup of a database but does not include any availability groups–aware settings. On the other hand, the Back Up Database task in SQL Server Maintenance Plans is aware of availability group backup priority value settings.

You are limited to taking copy-only full backups and transaction log backups on readable secondary replica databases. When you include databases in an availability group, the Maintenance Plan Back Up Database task warns you if you attempt to configure a full backup without the Copy-Only Backup option or a differential backup, or if you select the "For availability databases, ignore replica priority for backup and backup on primary settings" check box. If this is misconfigured, it is possible to create a Maintenance Plan that runs but does not take backups of databases in an availability group.

Image For more information on using replica backup priorities, visit https://docs.microsoft.com/sql/database-engine/availability-groups/windows/active-secondaries-backup-on-secondary-replicas-always-on-availability-groups and https://docs.microsoft.com/sql/database-engine/availability-groups/windows/configure-backup-on-availability-replicas-sql-server.

Use replica backup priority in your backup schedules

If you attempt to configure a full backup without copy-only or a differential backup, you will see the warning in the Back Up Database task “This backup type is not supported on a secondary replica and this task will fail if the task runs on a secondary replica.” If you select the Ignore Replica Priority check box, the warning will read “Note: Ignoring the backup priority and availability group settings may result in simultaneous parallel backups if the maintenance plan is cloned on all replicas.”

Maintenance Plans should run on a schedule on all availability group replicas. The priority values for backups can cause a backup not to be taken on the primary or a nonpreferred secondary replica, but the Maintenance Plan backup task will still start and complete as usual. Because the Maintenance Plans use the backup priority values, they do not need to be reconfigured when a different replica in the availability group becomes the primary.

You can still take a manual backup of any type of the databases and bypass the availability group backup preferences, and, in fact, your backup strategy might include intentionally taking FULL backups in more than one node of a geographically dispersed availability group.

If you are not using SQL Server Maintenance Plans or scripting to take backups, be aware that not all third-party backup solutions are aware of availability group backup preferences, or even of availability groups in general. Maintenance Plans in SQL Server Management Studio are aware of the backup preferences, as can be your custom scripting via the function master.sys.fn_hadr_backup_is_preferred_replica, which returns 0 or 1 based on whether the current SQL Server instance is the preferred backup replica.
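A common pattern, sketched here with a placeholder database name and path, is to make this check the first step of a backup job that is scheduled identically on every replica:

--Proceed only if this instance is the preferred backup replica
IF sys.fn_hadr_backup_is_preferred_replica(N'WideWorldImporters') = 1
BEGIN
    BACKUP DATABASE WideWorldImporters
    TO DISK = N'F:\Backup\WideWorldImporters\WideWorldImporters.bak'
    WITH COPY_ONLY, CHECKSUM, COMPRESSION;
END;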

Inside OUT

How do I prevent a broken recovery chain when taking backups on secondary replicas?

Taking backups of the same database on multiple servers could lead to some parts of a backup recovery chain being stored on different servers. The solution to this is rather obvious: You should always ensure that backups are copied off-premises, from one datacenter or secured site to another, which is likely where your availability group replicas are.

Just as you would copy your backups of a standalone SQL Server instance to another location, you must copy your backups of availability group databases off-premises, ideally to each replica's location. You can accomplish this with the following strategy.

Copy the backups taken on the secondary node to the primary regularly, and make sure you maintain a chain of transaction log backups together with their root full and/or differential backups, regardless of where the backups were taken. You should keep a complete chain intact in multiple locations.
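To audit the chain on any replica, you can query the backup history that SQL Server records in msdb. This sketch lists recent backups for a placeholder database so that you can compare first_lsn and last_lsn values across servers:

SELECT database_name, type, backup_start_date,
       first_lsn, last_lsn, server_name
FROM msdb.dbo.backupset
WHERE database_name = N'WideWorldImporters'
ORDER BY backup_start_date DESC;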

Strategies for administering multiple SQL Servers

There are several options for orchestration in SQL Server that allow you to multiply and standardize DBA effort across multiple servers. The ability to set up SQL Server Agent jobs that run simultaneously on multiple servers is powerful, especially for custom-developed scripts that gather and report information back to a central SQL Server.

If you live in an environment with more than a few SQL Server instances, you should be aware of the Registered Servers and Central Management Server features of SQL Server Management Studio that we reviewed in Chapter 1.

Master and Target servers for SQL Agent jobs

The Master and Target servers (known as the MSX/TSX) feature is built in to SQL Server Agent to aid DBAs who want to manage identical jobs across multiple SQL Server instances. This feature has been in the product since SQL Server 7.0, but many DBAs are unaware of the convenience that it can deliver. The feature shows its age—some of the critical configuration settings are registry keys—but there is no doubt that the feature is useful and works seamlessly with technologies it could not have foreseen, including availability groups (more on that later).

You can designate one SQL Server as a Master server (MSX), set up multiserver jobs on that server, and configure each other instance as a Target server (TSX) whose SQL Server Agent jobs are remotely managed. The MSX cannot be a TSX of itself, so a separate server to orchestrate multiserver administration of SQL Server Agent jobs is necessary. The MSX server should be a production-environment server that does not host performance-sensitive production workloads.

Other considerations for the MSX server include the following:

  • Of the servers you have available, choose the most recent version of SQL Server for the MSX server. You can communicate with up to two previous versions for TSX servers.

  • Each TSX can have only one MSX.

  • Before changing the name of a TSX, first defect it from the MSX and then reenlist it.

  • Do not use a built-in account for the SQL Server Agent service account on any server; instead, use a domain service account, as recommended earlier in this chapter.

Creating MSX and TSX Servers by using SQL Server Management Studio

In SQL Server Management Studio, in Object Explorer, right-click SQL Server Agent, point to Multi Server Administration, and then click Make This A Master, as illustrated in Figure 14-4.

Image

Figure 14-4 The often-overlooked Multi Server Administration options available in SQL Server Management Studio Object Explorer.

The Master Server Wizard launches and first sets up a special operator just for running multiserver jobs, called MSXOperator. You can specify only one operator to run multiserver jobs, so think carefully about who should be notified about these jobs, and specify that email address. As always with operators, it's best not to use an individual's email address but an email distribution group instead.

Next, the wizard presents locally registered and central-management registered servers so that you can select them as targets for the MSX. Select the target servers from the list or, at the bottom, click Add Connection and add servers not registered in your list.

Image For more information about locally registered servers and Central Management Servers, see Chapter 1.

When you are finished with the wizard, the labels in Object Explorer will be different for both Master and Target SQL Server Agents, as shown in Figure 14-5. SQLSERVER-0\SQL2K17 has been configured as the MSX, and SQLSERVER-1\SQL2K17 has been configured as a TSX.

Image

Figure 14-5 With Multi Server Administration configured, the SQL Server Agent in SQL Server Management Studio Object Explorer is labeled, denoting which server is an MSX and which server is a TSX.

Similarly, on the MSX, under SQL Server Agent in Object Explorer, you will see two new subfolders under Jobs: Local Jobs and Multi-Server Jobs.

By default, SSL encryption is used between the servers, but you can change that through a registry setting; see https://docs.microsoft.com/sql/ssms/agent/set-encryption-options-on-target-servers for the exact entries. You should not need to change this, because encrypted communication between the servers is recommended. An SSL certificate must be installed on each server before you enlist it through the wizard.

Though we do not recommend it, you can turn off the encryption of Master-Target server communication by changing a registry value. The registry value HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\<instance>\SQLServerAgent\MsxEncryptChannelOptions is 2 by default, which means all communication is encrypted. Changing this to 0 on all servers removes encryption.

Image For more information, visit https://docs.microsoft.com/sql/ssms/agent/set-encryption-options-on-target-server.

Managing multiserver (MSX/TSX) administration

To manage SQL Server Agent jobs in a multiserver environment, set up the job in the same way you would for any other job on the MSX server. Then, navigate to the Targets page on the job, which outside of a multiserver environment has no use. From there, select Target Multiple Servers and then select the servers on which the job should run. In the background, code is run to send the jobs to the TSX servers.
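Behind the scenes, this is roughly equivalent to calling the documented msdb procedure below for each target; the job and server names here are placeholders:

--Associate a multiserver job with a target server
EXEC msdb.dbo.sp_add_jobserver
    @job_name = N'Multiserver maintenance job',
    @server_name = N'SQLSERVER-1\SQL2K17';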

Any jobs with steps that use proxies will need the proxy accounts to have access to the TSX, and a proxy by that same name on the TSX. Otherwise, the job will not find a proxy on the TSX. However, by default, matching proxy names from MSX to TSX isn’t allowed, because of the potential for malicious action. This must be turned on via the registry on the TSX.

Image For more information reference https://docs.microsoft.com/sql/ssms/agent/troubleshoot-multiserver-jobs-that-use-proxies.

On Target servers, jobs will appear in the SQL Server Agent, in the Jobs folder, but you can’t edit them.

Sometimes, the synchronization of job definitions to a TSX will not be queued up and will not post to the server. In this case, issue the following command, but be aware that it cancels any running jobs on the TSX:

EXEC msdb.dbo.sp_resync_targetserver '<Target Server Name>';

Most other issues with multiserver jobs are solved by defecting the TSX and then adding it again.

Managing multiserver administration in availability groups

The MSX/TSX feature works with availability groups and can be quite useful for ensuring that jobs stay synchronized across servers. The MSX/TSX data is stored in system tables in the msdb database, which cannot be part of an availability group.

You should not place the MSX on a SQL Server instance in the availability group, because this would limit your ability to fail over and still use SQL Server Agent Multi Server Administration. You would lose the ability to orchestrate jobs across the Target servers if one of the nodes in your availability group were unreachable, compromising your failover state. Instead, consider other high availability solutions for the server that functions as the master.

Note that using MSX/TSX for availability group SQL Server Agents doesn’t change the need to set up the “Am I Primary?” logic in step 1 of any job that should run only on a SQL Server instance that currently hosts the primary replica.

Image For a code sample, refer to the section “SQL Agent Job considerations when using availability groups” earlier in this chapter.

SQL Server Agent event forwarding

Event forwarding refers to having one central Windows server receive the SQL Server events of many others. The server that is the destination of many servers' event forwarding might handle a heavy workload, especially network traffic. The destination server should be a production-environment server that does not host performance-sensitive production workloads. You can refer to this server as the alerts management server.

Event forwarding allows the Windows Event Viewer to serve as a single-pane view of events on many instances of SQL Server. Further, it allows alerts on the alerts management server to prompt a response on the originating server, via SQL Server Agent alerts. Forwarded errors arrive in the Windows Application event log, not the SQL Server Error Log, and because of this, the SQL Server Agent service account needs to be a local Windows administrator.

Typically, the alerts management server is the same server as your Multi Server Administration (MSX) server; in fact, the features work together. If your MSX server and alerts management server are on separate SQL Server instances, you will lose the ability to run jobs when events happen.

Setting up event forwarding

You configure event forwarding in the SQL Server Agent Properties dialog box. In SQL Server Management Studio, open the SQL Server Agent Properties dialog box, and then go to the Advanced page. In the SQL Server Event Forwarding section, select the Forward Events To A Different Server check box. In the text box, type the name of the alerts management server. Be aware that the alerts management server that receives forwarded events must be the SQL Server default instance of a server.

You can also choose whether to send all events or only unhandled events (those not handled by local alerts on each SQL Server instance). You can then specify a minimum error severity to be forwarded. We recommend that you select errors of Severity 16 and above, keeping in mind the caveat of nuisance Severity 20 errors.
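On the alerts management server, you can then define alerts and notifications for the forwarded errors. The following sketch uses the documented msdb procedures sp_add_alert and sp_add_notification; the alert and operator names are placeholders:

--Alert on forwarded Severity 16 errors and email an operator
EXEC msdb.dbo.sp_add_alert
    @name = N'Forwarded severity 16 errors',
    @severity = 16,
    @delay_between_responses = 60;
EXEC msdb.dbo.sp_add_notification
    @alert_name = N'Forwarded severity 16 errors',
    @operator_name = N'DBAs',
    @notification_method = 1; --1 = email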

Image For information about alerts and error severity, see the section “Components of SQL Server automated administration” earlier in this chapter.

Policy-Based Management

Policy-Based Management (PBM) is a powerful tool for enforcing rules for configuration settings, options for databases and servers, security principals, table design, even database object naming conventions. As of this writing, this feature does not yet apply to Azure SQL Database.

PBM is structured around policies. A policy contains a single condition, which is a Boolean expression. The condition expression is evaluated against properties and settings of target objects, such as the server itself, a database, a table, or an index.

For example, you might set up a condition around the AdHocRemoteQueries server-level setting. SQL Server has a large list of facet properties built in, such as AdHocRemoteQueriesEnabled in the Surface Area Configuration facet. (As we covered in Chapter 4, the Surface Area Configuration facet contains a number of security-sensitive features, many of which—but not all—we recommend that you turn off unless needed.) To check that this Surface Area Configuration option is always turned off, create an expression that verifies that AdHocRemoteQueriesEnabled evaluates to False.

A policy can contain only one condition, but a condition can contain many different expressions, any one of which can cause the condition to fail. You could, for example, create a PBM policy called "Configuration Settings" and a PBM condition called "Settings That Should Be Disabled" with a list of expressions, each of which evaluates a different Surface Area Configuration option.

In fact, in the View Facets dialog box of SQL Server Management Studio, you can click Export Current State As Policy to ease the implementation of many configuration options into policy. You can apply the resulting settings to the local server as a policy right away, or export them as .xml, which then can be imported as a new policy. To export or import a policy, in SQL Server Management Studio, in Object Explorer, in the Policy Management folder, right-click a policy.

Evaluating policies and gathering compliance data

After you create a policy, you have multiple options for when to evaluate it: on demand (manually); on a schedule; and continuously, which either logs or blocks policy violations.

PBM is built in to SQL Server Management Studio, accessible via the Policy Management subfolder within the Management folder. SQL Server maintains a history of all policy evaluations in a Policy History log, which is available within SQL Server Management Studio. To access it, right-click Policy Management, and then, on the shortcut menu, click View History. In the System Policies subfolder, you'll find 14 prebuilt policies for checking the health of availability groups, and two more for the SQL Server Managed Backup feature, first introduced in SQL Server 2014.
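The same history is also queryable. Assuming the documented msdb policy views, a query along these lines returns recent evaluation results:

SELECT p.name, h.start_date, h.result
FROM msdb.dbo.syspolicy_policy_execution_history AS h
INNER JOIN msdb.dbo.syspolicy_policies AS p
    ON h.policy_id = p.policy_id
ORDER BY h.start_date DESC;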

For example, after configuring the sample policy in the previous section (see the code example at the end of this section) with the On Demand Evaluation Mode, test it by turning on the AdHocRemoteQueries setting in the Surface Area Configuration facet. A message will immediately appear in the SQL Server Error Log stating: "Policy 'Disable AdHocRemoteQueriesEnabled' has been violated by target 'SQLSERVER:\SQL\servername\instancename'", accompanied by a Severity 16 error 34053 message.

The On Demand Evaluation Mode gives the administrator an immediate report of all policy compliance. From the Evaluate Policies window, you can view any number of policies; should any expression fail, the policy displays a red "X" beside it. In this example, evaluating the policy with the AdHocRemoteQueries facet property turned on displays an error message and provides you with the ability to apply a change to bring the servers in line with policy.

In the Target Details pane of the Evaluate Policies window, you can click Details to start an analysis of all expression evaluations in the condition.

The On Schedule Evaluation Mode generates only SQL Server Error Log activity.

Not all facet expressions allow for the On Change: Prevent Evaluation Mode, but this option makes it possible for you to create a policy that prevents developers or other administrators from making DDL changes that violate condition expressions. This is accomplished by rolling back the transaction that contains the violating statement. As with all rollbacks, this transaction could contain other statements and could cause a database change deployment to fail in a manner that complicates your change process. This is a heavy-handed way to enforce policy, especially for easily reversible matters such as naming conventions or table designs. Administrators and database developers should be aware of the potential impact to database deployments. You should limit the use of the On Change: Prevent Evaluation Mode to security-related or stability-related properties, such as the following:

  • Server Security: @CrossDBOwnershipChainingEnabled

    Enforce that this evaluates to False, unless it is part of the application's security design

  • Server Security: @PublicServerRoleIsGrantedPermissions

    Enforce that this evaluates to False, in any circumstance

  • Login: @PasswordPolicyEnforced

    Enforce that this evaluates to True for all logins

  • Certificate: @ExpirationDate

    Enforce that this date is not within a certain time frame in the future (6 months), in every database (sample code to follow)

  • Database: @AutoShrink

    Enforce that this evaluates to False, in all databases

  • Database: @AutoClose

    Enforce that this evaluates to False, in all databases

Samples

You can script policies to T-SQL for application to multiple servers, though be wary of applying policies for production systems to development systems, and vice versa. Following are the T-SQL scripts to create the two aforementioned samples. (This script is available in the CH14_PBM_samples.ps1 file in the accompanying downloads for this book, which are available at https://aka.ms/SQLServ2017Admin/downloads.)

Ad Hoc Remote Queries Enabled Use the following T-SQL code to create the sample policy to keep the Surface Area Configuration option AdHocRemoteQueries turned off (it will be evaluated on demand):

--Create policy to evaluate status of AdHocRemoteQueries option
--This script is generated by SSMS
Declare @condition_id int
--Create Condition
EXEC msdb.dbo.sp_syspolicy_add_condition
@name=N'AHRQE Disabled'
, @description=N'Keep AdHocRemoteQueries disabled.'
, @facet=N'ISurfaceAreaFacet'
, @expression=N'<Operator>
  <TypeClass>Bool</TypeClass>
  <OpType>EQ</OpType>
  <Count>2</Count>
  <Attribute>
    <TypeClass>Bool</TypeClass>
    <Name>AdHocRemoteQueriesEnabled</Name>
  </Attribute>
  <Function>
    <TypeClass>Bool</TypeClass>
    <FunctionType>False</FunctionType>
    <ReturnType>Bool</ReturnType>
    <Count>0</Count>
  </Function>
</Operator>', @is_name_condition=0, @obj_name=N'', @condition_id=@condition_id OUTPUT
Select @condition_id;
GO
--Create internal object set and target sets to support condition
Declare @object_set_id int;
EXEC msdb.dbo.sp_syspolicy_add_object_set
@object_set_name=N'Disable AdHocRemoteQueriesEnabled_ObjectSet'
, @facet=N'ISurfaceAreaFacet'
, @object_set_id=@object_set_id OUTPUT
Select @object_set_id;
Declare @target_set_id int;
EXEC msdb.dbo.sp_syspolicy_add_target_set
@object_set_name=N'Disable AdHocRemoteQueriesEnabled_ObjectSet'
, @type_skeleton=N'Server', @type=N'SERVER'
, @enabled=1, @target_set_id=@target_set_id OUTPUT
Select @target_set_id;
GO
--Create Policy
Declare @policy_id int;
EXEC msdb.dbo.sp_syspolicy_add_policy
@name=N'Keep AdHocRemoteQueries Disabled'
, @condition_name=N'AHRQE Disabled', @policy_category=N''
, @description=N'', @help_text=N'', @help_link=N''
, @schedule_uid=N'00000000-0000-0000-0000-000000000000'
, @execution_mode=2, @is_enabled=1
, @policy_id=@policy_id OUTPUT, @root_condition_name=N''
, @object_set=N'Disable AdHocRemoteQueriesEnabled_ObjectSet'
Select @policy_id;
GO

The result sets returned by these queries contain only the integer IDs of new policy objects that have been created.

User-generated certificate expiration This policy checks whether any nonsystem certificates expire within the next six months, and fails if so. It ignores any certificates with "##" in the name, because these are built in to SQL Server and are for internal use only. Certificates that begin with "##" are generated when SQL Server is installed and cannot be modified. Keep in mind also that certificates used for TDE continue to work after expiration.

--Create policy to evaluate certificate expiration
--This script is generated by SSMS
Declare @condition_id int
--Create Condition
EXEC msdb.dbo.sp_syspolicy_add_condition @name=N'Expiration'
, @description=N'Check to verify that any non-system certificates
have an expiration in the next six months, fails policy if so.
Ignores any certificates with "##" in the name.'
, @facet=N'Certificate', @expression=N'<Operator>
  <TypeClass>Bool</TypeClass>
  <OpType>AND</OpType>
  <Count>2</Count>
  <Operator>
    <TypeClass>Bool</TypeClass>
    <OpType>GT</OpType>
    <Count>2</Count>
    <Attribute>
      <TypeClass>DateTime</TypeClass>
      <Name>ExpirationDate</Name>
    </Attribute>
    <Function>
      <TypeClass>DateTime</TypeClass>
      <FunctionType>DateAdd</FunctionType>
      <ReturnType>DateTime</ReturnType>
      <Count>3</Count>
      <Constant>
        <TypeClass>String</TypeClass>
        <ObjType>System.String</ObjType>
        <Value>m</Value>
      </Constant>
      <Constant>
        <TypeClass>Numeric</TypeClass>
        <ObjType>System.Double</ObjType>
        <Value>6</Value>
      </Constant>
      <Function>
        <TypeClass>DateTime</TypeClass>
        <FunctionType>GetDate</FunctionType>
        <ReturnType>DateTime</ReturnType>
        <Count>0</Count>
      </Function>
    </Function>
  </Operator>
  </Operator>
  <Operator>
    <TypeClass>Bool</TypeClass>
    <OpType>LIKE</OpType>
    <Count>2</Count>
    <Attribute>
      <TypeClass>String</TypeClass>
      <Name>Name</Name>
    </Attribute>
    <Constant>
      <TypeClass>String</TypeClass>
      <ObjType>System.String</ObjType>
      <Value>##%</Value>
    </Constant>
  </Operator>
</Operator>', @is_name_condition=0, @obj_name=N''
, @condition_id=@condition_id OUTPUT
Select @condition_id;
GO
--Create internal object set and target sets to support condition
Declare @object_set_id int;
EXEC msdb.dbo.sp_syspolicy_add_object_set
@object_set_name=N'Cert Expiration_ObjectSet'
, @facet=N'Certificate', @object_set_id=@object_set_id OUTPUT
Select @object_set_id;
GO
Declare @target_set_id int;
EXEC msdb.dbo.sp_syspolicy_add_target_set
@object_set_name=N'Cert Expiration_ObjectSet'
, @type_skeleton=N'Server/Database/Certificate'
, @type=N'CERTIFICATE', @enabled=1
, @target_set_id=@target_set_id OUTPUT
Select @target_set_id;
EXEC msdb.dbo.sp_syspolicy_add_target_set_level
@target_set_id=@target_set_id
, @type_skeleton=N'Server/Database/Certificate'
, @level_name=N'Certificate', @condition_name=N''
, @target_set_level_id=0;
EXEC msdb.dbo.sp_syspolicy_add_target_set_level
@target_set_id=@target_set_id, @type_skeleton=N'Server/Database'
, @level_name=N'Database', @condition_name=N''
, @target_set_level_id=0;
GO
--Create Policy
Declare @policy_id int;
EXEC msdb.dbo.sp_syspolicy_add_policy
@name=N'Cert Expiration', @condition_name=N'Expiration'
, @policy_category=N'', @description=N''
, @help_text=N'', @help_link=N''
, @schedule_uid=N'00000000-0000-0000-0000-000000000000'
, @execution_mode=2, @is_enabled=1
, @policy_id=@policy_id OUTPUT, @root_condition_name=N''
, @object_set=N'Cert Expiration_ObjectSet'
Select @policy_id;
GO

The result sets returned by these queries contain only the integer IDs of new policy objects that have been created.

Using PowerShell to automate SQL Server administration

SQL Server has supported close integration with Windows PowerShell for a decade. PowerShell is a robust scripting shell language with which you can script many administrative tasks. PowerShell was first released in 2006, has integrated with SQL Server since 2008, and was made an open-source, cross-platform language in 2016.

The goal of this section is not to list every possible interaction of PowerShell with SQL Server, Azure, or availability groups, but to provide instructional, realistic samples that will help you to learn the PowerShell language and add it to your DBA tool box.

We reviewed a number of useful PowerShell scripts for Azure SQL Database interaction in Chapter 5. All the scripts in this section are available in the accompanying downloads for this book, which are available at https://aka.ms/SQLServ2017Admin/downloads.

IT professionals of all walks of life are learning PowerShell to ease their administrative tasks on various technologies, not just Windows Server and SQL Server, but Active Directory, Machine Learning, Azure, Microsoft Office 365, SharePoint, Exchange, even Office products like Microsoft Excel. There is very little in the Microsoft stack for which PowerShell cannot help. Developers have created third-party modules, available in the PowerShell Gallery (http://powershellgallery.com), to further enhance PowerShell's ability to interact with non-Microsoft platforms such as Amazon Web Services, Slack, Internet of Things (IoT) devices, Linux, and more.

If you’d like to adopt more PowerShell for administration in your database, you’ll need more than the samples in this chapter, though we selected these because we feel them to be good learning examples. Consider adding the https://dbatools.io/ open-source suite of PowerShell cmdlets to your repertoire, which is available to download from the PowerShell Gallery. Though we won’t be looking at any of its highly regarded cmdlets here, this suite has furthered the development of helpful PowerShell cmdlets for automating high availability, security migrations, backups, and more.

There are even scenarios for which PowerShell fills feature gaps in configuration panels and UI, necessitating some basic literacy for PowerShell on the part of the modern system administrator, DBA, or developer. PowerShell is especially useful when building code to automate the failover of Windows Server Failover Clusters or for interacting with DNS. You also can use PowerShell remoting to manage multiple Windows servers and SQL Server instances from a single command prompt.

PowerShell also supports running the Windows Command Prompt statements you're familiar with as a Windows OS user; for example, ping, ipconfig, telnet, and net start. And it can start programs just as the Command Prompt can; for example, regedit, notepad, SQLCMD, even shutdown.

Instead of the Windows Command Prompt, you should consider getting in the habit of using the PowerShell console window or Visual Studio Code (the replacement for the original PowerShell scripting environment, PowerShell ISE). You can download and start using Visual Studio Code for free at https://code.visualstudio.com. A separate installation of Visual Studio is not required.

You can also start a PowerShell console from within SQL Server Management Studio. Right-click most folders, and then, on the shortcut menu, select Launch PowerShell. You might find the Visual Studio Code environment more conducive to authoring multiline PowerShell scripts, especially if you have any prior familiarity with Visual Studio or Visual Studio Code.

PowerShell basics

Cmdlets for PowerShell follow a pattern of Verb–Noun. This helps provide ease and consistency when trying to find cmdlets to run your desired task.

For database administration tasks, we will become familiar with cmdlets and with using SQL Server Management Objects (SMO). Starting with the release of SQL Server Management Studio 17.0, PowerShell for SQL Server is installed separately from SQL Server Setup and the SQL Server Management Studio installer, and can be installed from the PowerShell Gallery. We demonstrate in this chapter how to install the SQLSERVER module and check its current version.

For each cmdlet, there is a built-in way to receive a description and see all of the parameters, along with descriptions and examples (if provided by the author). Let's try it on Invoke-Sqlcmd, a cmdlet that runs T-SQL queries, much like the sqlcmd utility.

First, run the cmdlet:

Update-Help

This updates the extensive and helpful local help files for PowerShell and installed modules.

Inside OUT

The Visual Studio Code shortcuts for running scripts are different from SQL Server Management Studio, and you need to be aware of the following:

  • In SQL Server Management Studio, pressing F5 runs the entire script if no text is highlighted, or just the highlighted text if any is selected. Pressing Ctrl+E has the same behavior by default.

  • In Visual Studio Code, pressing F5 saves then runs the entire script file, regardless of whether any code is highlighted. Pressing F8 runs only highlighted code.

To access the help information for any cmdlet, use the Get-Help cmdlet. Here are some examples:

#Basic Reference
Get-Help Invoke-SqlCmd

#See actual examples of code use
Get-Help Invoke-SqlCmd -Examples

#All cmdlets that match a wildcard search
Get-Help -Name "*Backup*database*"

Note that the # character begins a single-line comment in PowerShell code. Alternatively, you can use <# and #> to enclose and declare a multiline comment block.

Installing the PowerShell SQLSERVER module

You must be running at least Windows PowerShell 5.0 to download modules from the PowerShell Gallery. To determine the version of Windows PowerShell on your system, run the following code in the PowerShell window:

$PSVersionTable

The PSVersion value contains the current installed version of PowerShell.

Inside OUT

As of this writing, the latest version of PowerShell, PowerShell 5.1, ships in Windows Server 2016, the Windows 10 Creators Update, and Windows Management Framework 5.1. Up-to-date Windows 10 and Windows Server 2016 systems do not need any further installation.

To upgrade your version of Windows PowerShell, you should install the latest version of Windows Management Framework 5, which includes PowerShell. PowerShell 5.0 introduced the ability to download and install modules with cmdlets. https://msdn.microsoft.com/powershell/wmf/readme5.1/install-configure provides the current download by OS to upgrade your Windows PowerShell. Note that installing this package will require that you reboot your server or workstation.

To install the latest version of the SQLSERVER module, use the following code on an internet-connected device, running the PowerShell console or Visual Studio Code in administrator mode:

Install-Module -Name SQLSERVER -Force -AllowClobber

In the preceding script, we used a few handy parameters; let's review them:

  • -Name. Specify the unique name of the module we want.

  • -Force. Avoid having to answer Yes to confirm that you want to download.

  • -AllowClobber. Allows this module to overwrite cmdlet aliases already in place. Without AllowClobber, the installation will fail if it finds that the new module contains commands with the same name as existing commands.

To find the current installed versions of the SQLSERVER PowerShell module as well as other SQL modules including SQLPS, use the following:

Get-Module -ListAvailable -Name '*sql*' | Select-Object Name, Version, RootModule

Offline installation

To install the module on a server or workstation that is not internet-connected or cannot reach the PowerShell Gallery, go to a workstation that can reach the PowerShell Gallery and has at least PowerShell 5.0. Then, use the following command to download the module (be aware that it’s 82 MB in size):

Save-Module -Name SQLSERVER -LiteralPath "C:\temp"

Then, copy the entire C:\Temp\SqlServer folder to the machine that cannot reach the PowerShell Gallery. Copy the folder to a path that is in the list of PSModule paths. The list of potential module paths is stored in the PSModulePath environment variable, which you can view in Windows System Properties, or more easily with this PowerShell script:

$env:PSModulePath.replace(";","`n")

The default folder for the module downloaded from the gallery would likely be "C:\Program Files\WindowsPowerShell\Modules". Verify that this path is available or choose another PSModule folder, and then copy the downloaded SQLSERVER folder there. Then, adding the module is as easy as this:

Import-Module SQLSERVER

You will now see the SQLSERVER module in the Get-Module list of available modules.

Using PowerShell with SQL Server

PowerShell 5.0 can interact with SQL Server instances all the way back to SQL Server 2000 (with some limitations). No book is a good medium to demonstrate the full capability that PowerShell can bring to your regular DBA tasks, nor should a book try to detail all the possibilities. Nonetheless, here are some selected, representative, but simple examples.

Backup-SqlDatabase

Let’s learn about some more basics and syntax of PowerShell via the Backup-SQLDatabase cmdlet. With this PowerShell cmdlet, you have access to the same parameters as the T-SQL command BACKUP DATABASE.

Again, use PowerShell’s built-in help files to see full syntax and examples, many of which will be familiar to you if you have a good understanding of the BACKUP DATABASE options.

Get-Help Backup-SQLDatabase -Examples

Following is an example of how to back up all databases on a local SQL Server instance, providing the backup path, including a subfolder with the database’s name. The script also adds the current date and time to the name of the backup file:

#Backup all databases (except for tempdb)
#TODO: change value for -ServerInstance parameter
Get-SqlDatabase -ServerInstance 'localhost' | `
    Where-Object { $_.Name -ne 'tempdb' } | `
    ForEach-Object {
        Backup-SqlDatabase -DatabaseObject $_ `
        -BackupAction "Database" `
        -CompressionOption On `
        -BackupFile "F:\Backup\$($_.Name)\$($_.Name)_$(`
        Get-Date -Format "yyyyMMdd")_$(`
        Get-Date -Format "HHmmss_FFFF").bak" `
        -Script #The -Script parameter generates T-SQL but does not run it
    }

Here are a few learning notes about this script:

  • Adding the -Script parameter to this and many other cmdlets outputs only the T-SQL code, split by GO batch separators; it does not actually perform the operation.

  • The back tick, or grave accent (`) symbol (below the tilde on most standard keyboards), is a line extension operator. Adding the ` character to the end of a line gives you the ability to display long commands, such as in the previous example, on multiple lines.

  • The pipe character (|) is an important concept in PowerShell to grasp. It passes the output of one cmdlet to the next. In the previous script, the list of databases is passed as an array from Get-SQLDatabase to Where-Object, which filters the array and passes to ForEach-Object, which loops through each value in the array.

This script is available in the CH14_powershell_examples.ps1 file in the accompanying downloads for this book, which are available at https://aka.ms/SQLServ2017Admin/downloads.

Remove-Item

Let’s learn some more about common PowerShell syntax parameters. You can use the Remove-Item cmdlet to write your own retention policy to delete old files, including backup files, stored locally. Remember to coordinate the removal of old local backups with your off-premises strategy that keeps backups safely in a different location.

In this script, we use the Get-ChildItem cmdlet to recurse through a folder, skip the folders themselves, and select only files that are older than $RetentionDays and have a file extension in the list we provide:

$path = "F:\Backup"
$RetentionDays = -1
$BackupFileExtensions = ".bak", ".trn", ".dif"
Get-ChildItem -path $path -Recurse | `
    Where-Object { !$_.PSIsContainer `
                 -and $_.CreationTime -lt (get-date).AddDays($RetentionDays) `
                 -and ($_.Extension -In $BackupFileExtensions) `
                 } | Remove-Item -WhatIf

Here are a few learning notes about this script:

  • The Get-ChildItem cmdlet gathers a list of objects from the provided path, including files and folders. The -Recurse parameter of Get-ChildItem causes the cmdlet to include subfolders.

  • The $_ syntax is used to accept the data from the object prior to the previous pipe character (|). In this example, the objects discovered by Get-ChildItem are passed to the Where-Object, which filters the objects and passes that data to Remove-Item.

  • Adding the -WhatIf parameter to this and many other cmdlets does not actually perform the operation, but provides a verbose summary of the action, instead. For example, rather than deleting old backup files, this PowerShell script returns something similar to the following sample:

    What if: Performing the operation "Remove File" on target "F:\Backup\backup_test_201402010200.bak".

This script is available in the CH14_powershell_examples.ps1 file in the accompanying downloads for this book, which are available at https://aka.ms/SQLServ2017Admin/downloads.

Invoke-Sqlcmd

The Invoke-Sqlcmd cmdlet can run T-SQL commands, including against remote SQL Server instances and Azure SQL databases. Invoke-Sqlcmd can replace previous batch file scripts that used sqlcmd. Use Invoke-Sqlcmd when no cmdlet exists to return the data you're looking for. In this script, we connect to a local named instance and run a query to see current sessions:

Invoke-Sqlcmd -Database master -ServerInstance .\sql2k17 `
-Query "select * from sys.dm_exec_sessions" | `
Format-Table | Out-File -FilePath "C:\Temp\Sessions.txt" -Append

Here are a couple learning notes about this script:

  • As you might be familiar with from SQL Server Management Studio and other connection strings, the "." character is a shorthand substitute for "localhost".

  • The Invoke-Sqlcmd cmdlet uses Windows Authentication by default. Notice that we passed no authentication information at all. You can provide the -Username and -Password parameters to Invoke-Sqlcmd to connect via SQL Authentication to SQL Server instances and Azure SQL databases.

  • The | Format-Table cmdlet has a big impact on the readability of the script output. Without Format-Table, the script returns a long vertical list of column names and row values. With Format-Table, the output does not include all columns by default, but returns a wide line of column headers and row values, similar to how SQL Server Management Studio returns result set output in Text mode.

  • The | Out-File cmdlet sends the output to a text file instead of to the PowerShell console, creating the file if needed. The -Append parameter adds the text to the bottom of an existing file.

  • The Out-File can be handy for archival purposes, but for viewing live rowsets, especially SQL Server command results, instead try using the Out-GridView cmdlet, which provides a full-featured grid dialog box with re-sortable and filterable columns, and so on. Out-GridView is used instead in the following sample:

    Invoke-Sqlcmd -Database master -ServerInstance .\sql2k17 `
    -Query "select * from sys.dm_exec_sessions" | `
    Out-GridView

These scripts are available in the CH14_powershell_examples.ps1 file in the accompanying downloads for this book, which are available at https://aka.ms/SQLServ2017Admin/downloads.

Inside OUT

After running a PowerShell command in Visual Studio Code, my cursor moves to the results pane, instead of staying in the script pane. Help!

Unlike in SQL Server Management Studio, running a script in Visual Studio Code by default moves the cursor to the PowerShell terminal pane (which is not a Results window, but a live terminal window). This means that you need to move your cursor back up to the script pane after each run to continue editing your PowerShell code.

You can change this behavior in Visual Studio Code. Press Ctrl+, (comma) to access the Visual Studio Code User Settings, or open Settings by clicking the File menu, then Preferences, then Settings. On the right side, provide the following code to override the default setting:

"powershell.integratedConsole.focusConsoleOnExecute": false

No restart is necessary. Now the cursor will remain in the scripting pane after running a PowerShell command.

Using PowerShell with availability groups

You can script the creation and administration of availability groups and automate them with PowerShell instead of using SQL Server Management Studio commands or wizards. If you work in an environment in which creating, managing, or failing over availability groups is a repeated process, you should invest time in automating and standardizing these activities with PowerShell.

Following are some samples of code that can help you along the way, starting with the very beginning—a new group of servers has been created for availability groups, and you need to add the Failover Clustering feature to each server. This could be time consuming and click-heavy in a remote desktop session to each server. Instead, consider the following script, in which we deploy the Failover Clustering feature and tools on four servers, quickly:

Invoke-Command -script {Install-WindowsFeature -Name "Failover-Clustering" } `
    -ComputerName SQLDEV11, SQLDEV12, SQLDEV14, SQLDEV15
Invoke-Command -script {Install-WindowsFeature -Name "RSAT-Clustering-Mgmt" } `
    -ComputerName SQLDEV11, SQLDEV12, SQLDEV14, SQLDEV15
Invoke-Command -script {Install-WindowsFeature -Name "RSAT-Clustering-PowerShell" } `
    -ComputerName SQLDEV11, SQLDEV12, SQLDEV14, SQLDEV15

Image For more about turning on and configuring availability groups, see Chapter 12.

You'll also need the SQLSERVER module installed and imported on each server, which you can do locally or remotely:

#Local server
    Install-Module -Name SQLSERVER -Force -AllowClobber
    Import-Module -Name SQLSERVER

#Remote server
Invoke-Command -script {
    Install-Module -Name SQLSERVER -Force -AllowClobber
    Import-Module -Name SQLSERVER
    } -ComputerName "SQLSERVER-1"

Let's fast-forward to an existing availability group, with two replicas set to asynchronous-commit mode. A planned failover is coming up, and you want to automate the process as much as possible. Start with the sample script that follows, which accomplishes these goals:

  • Sets asynchronous replicas to synchronous and waits so that we can perform a planned failover with no data loss.

  • Performs availability group failover.

  • Sets replicas back to asynchronous.

You can adapt the following sample script for your own purposes and environment:

#Must run on the primary node
#TODO: configure initial variable values.

    Write-Output "Begin $(Get-Date)"
#Setup: TODO Configure these
    $PrimaryReplicaName = "SQLSERVER-0"
#Named instance or DEFAULT for the default instance
    $PrimaryReplicaInstanceName = "SQL2K17"
    $SecondaryReplicaName1 = "SQLSERVER-1"
#Named instance or DEFAULT for the default instance
    $SecondaryReplicaInstanceName1 = "SQL2K17"
    $AvailabilityGroupName = "WWI2017-AG"

#Inventory and test
    Set-Location "SQLSERVER:\SQL\$($PrimaryReplicaName)\$($PrimaryReplicaInstanceName)`
\AvailabilityGroups\$($AvailabilityGroupName)"
    Get-ChildItem "SQLSERVER:\Sql\$($PrimaryReplicaName)\`
$($PrimaryReplicaInstanceName)\`
AvailabilityGroups\$($AvailabilityGroupName)\AvailabilityReplicas" `
| Test-SqlAvailabilityReplica | Format-Table

       $AGPrimaryObjPath = "SQLSERVER:\Sql\$($PrimaryReplicaName)\`
$($PrimaryReplicaInstanceName)\`
AvailabilityGroups\$($AvailabilityGroupName)\AvailabilityReplicas\`
$($PrimaryReplicaName+$(IF($PrimaryReplicaInstanceName `
-ne "DEFAULT"){$("%5C")+$PrimaryReplicaInstanceName} ))"
       $AGPrimaryObj = Get-Item $AGPrimaryObjPath
       $AGSecondaryObjPath = "SQLSERVER:\Sql\$($PrimaryReplicaName)\`
$($PrimaryReplicaInstanceName)\`
AvailabilityGroups\$($AvailabilityGroupName)\AvailabilityReplicas\`
$($SecondaryReplicaName1+$(IF($SecondaryReplicaInstanceName1 -ne `
"DEFAULT"){$("%5C")+$SecondaryReplicaInstanceName1} ))"
       $AGSecondaryObj = Get-Item $AGSecondaryObjPath



#Set replicas to synchronous before planned failover

        Set-SqlAvailabilityReplica `
        -Path $AGPrimaryObjPath `
        -AvailabilityMode SynchronousCommit `
        -FailoverMode "Manual" `
        -ErrorAction Stop
        Set-SqlAvailabilityReplica `
        -Path $AGSecondaryObjPath `
        -AvailabilityMode SynchronousCommit `
        -FailoverMode "Manual" `
        -ErrorAction Stop

#Check for when replicas are synchronized.
Do {
    $AGSecondaryObj.Refresh()
    $CurrentSync = ($AGSecondaryObj | `
        Select RollupSynchronizationState | Format-Wide | Out-String).Trim()
    IF ($CurrentSync -ne "Synchronized") {
            Write-Output "Waiting for Synchronized state before failover, still $($CurrentSync)"
            Start-Sleep -s 2
            }
} Until ($CurrentSync -eq 'Synchronized')

#Perform failover
    Write-Output "Beginning Failover $(Get-Date)"
    Switch-SqlAvailabilityGroup `
        -Path "SQLSERVER:\Sql\$($SecondaryReplicaName1)\`
$($SecondaryReplicaInstanceName1)\`
AvailabilityGroups\$($AvailabilityGroupName)" `
        -ErrorAction Stop `
        #Only include the next line if it is a forced failover
        #-AllowDataLoss -Force
    Write-Output "Failover Complete $(Get-Date)"
   Start-Sleep -s 10 #Allow failover to resolve

#Return secondary replica to Asynchronous sync
#Note that the values here of Primary and Secondary1 are flipped,
# because the variables predate the failover.
Invoke-Command -script { `
    param($SecondaryReplicaName1, $SecondaryReplicaInstanceName1, `
$AvailabilityGroupName, $PrimaryReplicaName, $PrimaryReplicaInstanceName)
    
     Set-SqlAvailabilityReplica `
    -Path "SQLSERVER:\Sql\$(($SecondaryReplicaName1))\`
$(($SecondaryReplicaInstanceName1))\`
AvailabilityGroups\$(($AvailabilityGroupName))\AvailabilityReplicas\`
$(($SecondaryReplicaName1)+$(IF(($SecondaryReplicaInstanceName1) -ne `
"DEFAULT"){$("%5C")+(($SecondaryReplicaInstanceName1))} ))"  `
    -AvailabilityMode AsynchronousCommit `
    -ErrorAction Stop
        Set-SqlAvailabilityReplica `
    -Path "SQLSERVER:\Sql\$(($SecondaryReplicaName1))\`
$(($SecondaryReplicaInstanceName1))\`
AvailabilityGroups\$(($AvailabilityGroupName))\AvailabilityReplicas\`
$(($PrimaryReplicaName)+$(IF(($PrimaryReplicaInstanceName) `
-ne "DEFAULT"){$("%5C")+(($PrimaryReplicaInstanceName))} ))"  `
    -AvailabilityMode AsynchronousCommit `
    -ErrorAction Stop

    Get-ChildItem "SQLSERVER:\Sql\$($SecondaryReplicaName1)\`
$($SecondaryReplicaInstanceName1)\`
AvailabilityGroups\$($AvailabilityGroupName)\AvailabilityReplicas" | `
Test-SqlAvailabilityReplica | Format-Table
    } -ComputerName $SecondaryReplicaName1  `
-Args $SecondaryReplicaName1, $SecondaryReplicaInstanceName1, `
$AvailabilityGroupName, $PrimaryReplicaName, $PrimaryReplicaInstanceName

    Write-Output "End $(Get-Date)"

Here are a few learning notes about this script:

  • We need to do some character trickery to pass a named instance into the SQL Server provider path for the availability group, substituting the URL-encoded %5C for the backslash in the replica name, SQLSERVER-0\SQL2K17 (see the sketch after this list). This workaround is rarely needed, albeit frustrating.

  • We see another control structure, Do ... Until. In this case, we’re waiting until the availability group’s RollupSynchronizationState has changed from Synchronizing to Synchronized, indicating that the replica has completed the change from asynchronous to synchronous commit.

  • After the replica is synchronous, the failover can occur without data loss and without being Forced. In an emergency in which the primary server SQLSERVER-0 is offline, we could skip the steps that change the synchronization and proceed straight to the most important cmdlet in the script: Switch-SqlAvailabilityGroup. In a Forced failover, for which data loss is possible, we must also specify the -AllowDataLoss and -Force parameters.

  • You must run this entire script from the primary node, as it is currently written. A hint of how you could rewrite the script to be run from anywhere lies in Invoke-Command, where we connect to the original secondary replica (now the primary replica) and set the synchronization from asynchronous back to synchronous.

  • Why SQL2K17, not SQL2017? Old DBA habits die hard from the SQL2K5, SQL2K8, SQL2K8R2 days.
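
To make the %5C workaround concrete, here is a minimal, standalone sketch. The server name SQLSERVER-0, instance name SQL2K17, and availability group name AG1 are placeholders for your own environment, and this assumes the SQL Server PowerShell provider is already loaded:

#Minimal sketch of the %5C escape for a named instance replica name.
#SQLSERVER-0, SQL2K17, and AG1 are placeholder names.
$ReplicaName = "SQLSERVER-0"
$InstanceName = "SQL2K17"

#The replica object is named SQLSERVER-0\SQL2K17, but a literal backslash
#would be read as a path separator, so we encode it as %5C.
$EscapedReplica = $ReplicaName + "%5C" + $InstanceName

Get-Item "SQLSERVER:\Sql\$ReplicaName\$InstanceName\`
AvailabilityGroups\AG1\AvailabilityReplicas\$EscapedReplica"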

This script is available in the CH14_AAG_failover.ps1 file in the accompanying downloads for this book, which are available at https://aka.ms/SQLServ2017Admin/downloads.

Using PowerShell with Azure

Similar to the process for installing the SQLSERVER module from the PowerShell Gallery, there is a Microsoft-provided module for administering Azure Resource Manager infrastructure. (There are legacy cmdlets for Azure Classic deployment objects, but we’ll focus on Resource Manager here.)

Image For much more about administering Azure SQL Databases, see Chapter 5.

You’ll need to install the AzureRM module by using a process similar to the one we covered earlier:

Install-Module AzureRM -AllowClobber
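
If you want to confirm that the installation succeeded before continuing, a quick check like the following works; this is a convenience step, not a requirement:

#Confirm that the AzureRM module is installed, then load it
Get-Module -ListAvailable AzureRM | Select-Object Name, Version
Import-Module AzureRM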

With AzureRM commands, additional authentication is required to verify your access to the Azure tenant, using the same credentials with which you sign in to the Azure portal. The following PowerShell cmdlet opens a dialog box that asks you to sign in to Azure and provide a password (see Figure 14-6), and it can handle two-factor authentication if that is turned on (two-factor authentication is always recommended!):

Login-AzureRmAccount

The dialog box outside of the PowerShell console or Visual Studio Code won’t be modal, so look for it in the background if it does not appear right away.
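
If the credential you sign in with has access to more than one subscription, set the session context explicitly before running any VM commands. Here is a minimal sketch; the subscription name Pay-As-You-Go is a placeholder for one of your own:

#List the subscriptions available to the signed-in credential
Get-AzureRmSubscription

#Point the session at one subscription; the name is a placeholder
Select-AzureRmSubscription -SubscriptionName "Pay-As-You-Go"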

Image

Figure 14-6 The Microsoft Azure sign-in dialog box over Visual Studio Code, after Login-AzureRmAccount was run to associate an Azure credential with the current PowerShell session. The same window will later ask for two-factor authentication to your Azure account, if you turn this on.

Using PowerShell with Azure VMs

Many administrators and developers use Azure infrastructure as labs for learning and sandboxes for proofs of concept, often using Azure spending credits that come with MSDN subscriptions. You can create Azure virtual machines (VMs) from the gallery with SQL Server preinstalled and ready to go in minutes. Accidentally leaving the Azure VMs running, however, means that your monthly spending credits will be quickly exhausted. It’s even happened to a certain author of this book during the writing process.

A feature made available in 2017 allows VMs to be turned off automatically on a schedule, for example at a certain time of day, to prevent them from running overnight. Even in production environments, you can realize significant cost savings by automatically turning off Azure VMs outside of business hours and turning them back on again before business hours begin. But what if, as a developer (or book author), you want to spin up and shut down your VMs at will?

The script that follows accomplishes in one step a task that would take many clicks and several minutes in the Azure portal. Each Start-AzureRmVM and Stop-AzureRmVM command runs synchronously, returning only after the VM has finished starting up or shutting down; usually this takes two to three minutes per VM.
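
If you want to check a VM’s power state independently of the script’s output, one option is the -Status parameter of Get-AzureRmVM. Here is a minimal sketch, reusing the resource group and VM names from the sample script that follows:

#Check the current power state of one VM from the sample resource group
$status = Get-AzureRmVM -ResourceGroupName "w-ag-20170915" `
    -Name "sqlserver-0" -Status
$status.Statuses | Select-Object Code, DisplayStatus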

There are also cmdlets available for legacy Azure Classic VMs. These cmdlets are for the newer Azure Resource Manager VMs, first introduced in 2014.

The following script starts or shuts down the five VMs that are created by the SQL Server availability groups template in the Microsoft Azure Portal Gallery. It declares variables including the Azure Resource Group and a list of servers. The list of servers is in the ideal order for the group of servers to be shut down, but when started, we want the two Domain Controllers to come online first, so we add a step to reverse the order of the array. The script also declares a simple-to-understand variable that determines whether the script should be run to shut down or start up the list of VMs.

$ResourceGroupName = "w-ag-20170915"
#Order matters!
$vms = "sqlserver-1","sqlserver-0","cluster-fsw"'
,"ad-secondry-dc","ad-primary-dc"
$Startup_or_Shutdown = "startup" # "Shutdown" or "Startup"

#To startup, reverse the order so that DC's come online first.
IF($Startup_or_Shutdown -eq "Startup") {[array]::Reverse($vms)}

#Begin startup/shutdown loop
ForEach ($vm in $vms) {      
    If ($Startup_or_Shutdown -eq "Startup")
    {
    Write-Output "Starting VM:($vm.ToString) $(Get-Date -Format G)"
    Start-AzureRMVM  -ResourceGroupName $ResourceGroupName -Name $vm  
    Write-Output "Started VM:($vm.ToString) $(Get-Date -Format G)"
    }
    If ($Startup_or_Shutdown -eq "Shutdown")
    {
    Write-Output "Stopping VM:($vm.ToString) $(Get-Date -Format G)"
    Stop-AzureRMVM  -ResourceGroupName $ResourceGroupName -Name $vm -Force  
    Write-Output "Stopped VM:($vm.ToString) $(Get-Date -Format G)"
    }
}

Here’s the sample script output (simplified):

Starting VM: ad-primary-dc 9/17/2017 3:28:51 PM
                         True         OK OK
Started VM: ad-primary-dc 9/17/2017 3:31:53 PM
Starting VM: ad-secondry-dc 9/17/2017 3:31:53 PM
                         True         OK OK
Started VM: ad-secondry-dc 9/17/2017 3:34:24 PM
Starting VM: cluster-fsw 9/17/2017 3:34:24 PM
                         True         OK OK
Started VM: cluster-fsw 9/17/2017 3:36:25 PM
Starting VM: sqlserver-0 9/17/2017 3:36:25 PM
                         True         OK OK
Started VM: sqlserver-0 9/17/2017 3:38:26 PM
Starting VM: sqlserver-1 9/17/2017 3:38:26 PM
                         True         OK OK
Started VM: sqlserver-1 9/17/2017 3:39:56 PM

Here are a couple of learning notes from the preceding script:

  • PowerShell is not a strictly typed language; a variable takes on the type of whatever value you assign to it. When we passed in a string, the $ResourceGroupName variable stored a string. When we passed in a list of strings, the $vms variable stored an array, which we could then treat as an array and reverse its sort order.

  • The ForEach loop runs synchronously. Using PowerShell functions or workflows, you can configure the ForEach loop to run its tasks in parallel; see the sketch following these notes.

  • One common stumbling block for new PowerShell developers is comparison operators. You cannot use “=” to compare; instead, there is a set of operators such as “-eq” (equals), “-ne” (not equals), and “-gt” (greater than).

Image You can learn more about PowerShell comparison operators in the technical reference documentation online at https://docs.microsoft.com/powershell/module/microsoft.powershell.core/about/about_comparison_operators?view=powershell-5.1.
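
To illustrate the parallel option mentioned in these notes, here is a minimal sketch using a PowerShell workflow, in which foreach -parallel starts the VMs concurrently instead of one at a time. The workflow name Start-VmsInParallel is our own invention for this example, and it assumes the AzureRM module is installed and the session is already signed in. Note that running the startups in parallel gives up the careful ordering of the original script, so it is not appropriate when the domain controllers must come online first:

#Minimal sketch: foreach -parallel in a workflow starts VMs concurrently.
#Start-VmsInParallel is a hypothetical name used only for this example.
workflow Start-VmsInParallel {
    param([string[]]$VmNames, [string]$ResourceGroupName)

    foreach -parallel ($vm in $VmNames) {
        Write-Output "Starting VM: $vm"
        Start-AzureRmVM -ResourceGroupName $ResourceGroupName -Name $vm
        Write-Output "Started VM: $vm"
    }
}

#Usage, with the variables from the sample script
Start-VmsInParallel -VmNames $vms -ResourceGroupName $ResourceGroupName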

This script is available in the CH14_start_or_stop_RM_vms.ps1 file in the accompanying downloads for this book, which are available at https://aka.ms/SQLServ2017Admin/downloads.
