18 PowerShell and SQL Server Agent

As we discussed in chapter 1, the manual administration of SQL Server Agent jobs can be very time consuming, especially when you have many instances to administer. This chapter provides all of the tools you’ll need to administer your SQL Server Agent estate efficiently. You will learn how to gather all of the information about your SQL Server Agents, how to find a particular job in your estate easily, and how to retrieve and display the Agent job results and history.

We have waited to talk fully about the SQL Server Agent until this chapter (and the next two) for a couple of reasons. We wanted you to have some knowledge about dbatools commands and be comfortable with their structure at the command line before discussing scheduling them with SQL Server Agent. As we discussed in chapter 1, we considered the order of the chapters as if we were DBAs starting to look at a new estate.

SQL Server Agent is the heart of scheduled task management in SQL Server, providing invaluable built-in functionality for DBAs to manage essential tasks such as backups, integrity checks, data imports, and more. It’s basically the Windows Task Scheduler of SQL Server that enables DBAs to run code on demand or automatically via a schedule. SQL Server Agent uses the msdb database to store its information and is available in all editions except for Express.

Running PowerShell scripts within SQL Server Agent is extremely useful, but it does have a few caveats. Because of this, we are going to dedicate an entire chapter to running PowerShell scripts within SQL Server Agent in an effort to save you time and prevent needless headaches. If you’ve spent any time in SQL Server Management Studio (SSMS), you’re likely very familiar with the SQL Server Agent tree view, as seen in figure 18.1.

Figure 18.1 Good ol’ SQL Server Agent

This chapter will cover creating SQL Server Agent jobs that run PowerShell scripts, especially scripts that use dbatools, and the steps that you need to take to ensure that the SQL Server Agent jobs run without errors and report failures when you expect them to.

Agent jobs can run a variety of tasks through subsystems that help the Agent interact with other components, such as the operating system, PowerShell, Analysis Services, Integration Services, and Replication. The list of available subsystems, or “job step types,” can be viewed in the Job Step Properties tab, as shown in figures 18.2 and 18.3.

Figure 18.2 A list of SQL Server Agent subsystems on Windows, including PowerShell

Considering that both PowerShell and SQL Server run on Linux, you may be wondering whether you can run PowerShell-based SQL Server Agent jobs on SQL Server on Linux. Unfortunately, at the time of this writing, the answer is no.

Figure 18.3 SQL Server Agent on Linux has strong support for replication but little else.

Although SQL Server Agent on Linux certainly has its uses, SQL Server Agent on Windows is where it really shines, primarily because of its support for PowerShell and running external processes. Check out table 18.1, which displays some example tasks it can do for you.

Table 18.1 Sample SQL Server Agent job tasks

Database backups: Full, differential, and log backups

Database restores: Restore to test and development environments or test backups

Database maintenance: Index reorganization, integrity checks

Disaster recovery: Nightly exports of logins, instance configurations

Audits: Log file compression and offload

Syncs: Availability group login and job sync

Imports: CSV import/export, SSIS tasks

ETL: Data warehouse processing

Instance migration: Schedule an entire instance migration

Inventory: Keep a centralized database up to date

Health checks: Daily checks with dbachecks.io

Monitoring: Monitor performance issues with sqlwatch.io

18.1 Which to choose, CmdExec or PowerShell job steps?

You have seen that SQL Server supports the PowerShell subsystem, so you may be expecting us to show how you can create PowerShell job steps to run dbatools commands. Surprisingly, however, dbatools cannot be run as part of a SQL Server Agent PowerShell job step because the PowerShell step type places you in the sqlps.exe host (sqlps.io/dbatoolsagent) with the SQL Server PowerShell provider (sqlps.io/abproviders) as your current working directory. This results in a conflict because dbatools and the SQL Server provider are basically trying to do the same thing but with different libraries. The SQLPS host, which is used by the SQL Server Agent PowerShell job step, can’t load many external modules and may be running a different version of PowerShell than the version installed on the host.

For all of these reasons and following years of trial and error, we recommend that you use a job step of type Operating system (CmdExec) for any SQL Server Agent jobs where you are going to use PowerShell.

Note For more detail about Windows Task Scheduler vs. SQL Server Agent, and the PowerShell subsystem vs. the CmdExec subsystem, visit dbatools.io/agent.

One of the biggest reasons we recommend CmdExec is that it’s far more reliable. This is both because the PowerShell version will be predictable and because your code will not require workarounds, such as using Microsoft.PowerShell.Core\FileSystem::, to ensure UNC paths work as expected. The SQLPS host that runs the PowerShell subsystem just isn’t as complete or functional as it needs to be. See table 18.2 for a comparison.

Table 18.2 PowerShell vs. CmdExec job steps

PowerShell job step: runs sqlps.exe; the version varies by SQL Server version; doesn’t always (rarely?) work as expected

CmdExec job step: runs powershell.exe or pwsh.exe via cmd.exe; the version is always the system version; works as expected

Another upside to CmdExec not being bound by the PowerShell version is that you can install or upgrade the version of PowerShell you are using. This means that you can use PowerShell 7 or upgrade earlier operating systems’ PowerShell version to Windows PowerShell 5.1.

18.2 Creating Agent jobs to run PowerShell and dbatools

Setting up the Agent to support PowerShell steps is a bit more complicated than setting up pure T-SQL steps. The trade-off, however, is that you can do more within SQL Server Agent because you have far fewer limitations with PowerShell as compared to with T-SQL. To create a SQL Server Agent job to run dbatools (and any other PowerShell) steps, you need to perform the following:

  • Create a Windows-based login in SQL Server.

  • Create a SQL Server credential.

  • Create the Agent proxy.

  • Ensure dbatools is available to the account.

  • Create the PowerShell .ps1 file and save it.

  • Create the job and CmdExec job step.

You have learned many of these steps individually already. In chapter 9, you learned to create a login. Chapter 2 showed how to install dbatools for a specific user or for all users, as shown in figure 18.4. You can use that knowledge to create the requirements listed earlier.

Figure 18.4 Installing dbatools in a directory that is available in the default path and accessible to all users

Try it now 18.1

See the modules that are available to all users by exploring C:\Program Files\WindowsPowerShell\Modules.

In the next couple of chapters, you will learn how to create a credential, a proxy, a job, and a job step, but for now we will focus on important concepts to remember when working with PowerShell within SQL Server Agent.

To simplify the presentation of these concepts, we’ll rely on SSMS visuals, because this interface is familiar to most of us. Again, we’re focusing on concepts because using Agent with PowerShell has a few frustrating caveats that we’d like to highlight for everyone, even DBAs who have been using SQL Server Agent for years. The information within this chapter is basically a list of things we’ve learned along the way, as we’ve deployed PowerShell within SQL Server Agent throughout our own SQL Server estates.

18.2.1 Creating a SQL Server credential

SQL Server Agent supports running jobs as alternative logins by using SQL Server credentials and SQL Server Agent proxies. Supporting credentials in this fashion helps avoid Kerberos double-hop issues that we covered in chapter 13.

SQL Server credentials are different from SQL Server logins. When looking for an easy way to explain them, we found Microsoft’s straightforward definition (sqlps.io/sqlcred) to be the easiest to understand:

A credential is a record that contains the authentication information (credentials) required to connect to a resource outside SQL Server. This information is used internally by SQL Server. Credentials are used to connect to resources that the SQL Server service account does not have access to. They can be used to access file shares in other domains or on Azure, for example, or when a stored procedure needs to access a resource in a different domain.

Credentials allow you to run processes and jobs as other accounts, instead of relying on the permissions of the SQL Server Agent service account. This alternative user could be granted read/write permissions to a network share, for instance, or it could be given more restrictive read-only access to various databases. Credentials can be found in SSMS in the Object Explorer, under Security, as demonstrated by figure 18.5.

Figure 18.5 A sample SQL Server credential

When creating a credential, we recommend that you use an Active Directory domain service account that is dedicated to running jobs or is dedicated to a specific task, such as performing environment refreshes or data warehouse tasks. We recommend using a domain account because domain accounts simplify user management and can grant access to multiple network resources that are on the same domain. Using dedicated credentials also enables your organization to abide by the principle of least privilege, which runs code with only the access it needs.

Try it now 18.2

See which credentials currently exist on your SQL Server by opening SSMS and navigating in Object Explorer to Server > Security > Credentials.

In figure 18.6, we’re creating a credential named PowerShell Service Account and associating it with the domain account ad\powershell.

Figure 18.6 Creating a SQL Server credential with the Active Directory account ad\powershell

You want to avoid using your personal account or any user account for both security and account management reasons.
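If you prefer scripting over clicking through SSMS, dbatools can create the credential as well. The following is a sketch using New-DbaCredential; the instance name sql01 and the ad\powershell account are the assumptions from our running example:

```powershell
# Prompt for the service account password so it never appears in the script
$password = Read-Host -Prompt "Password for ad\powershell" -AsSecureString

# Create the credential on sql01 and associate it with the domain account
New-DbaCredential -SqlInstance sql01 -Name "PowerShell Service Account" `
    -Identity "ad\powershell" -SecurePassword $password
```

Prompting for the password interactively keeps the secret out of source control, which matters once these setup scripts are stored alongside your other automation.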

18.2.2 Creating a SQL Server Agent proxy

By default, SQL Server Agent jobs run as the SQL Server Agent service account. You’ve probably seen the proxies leaf in the SQL Server Agent tree in SSMS, as shown in figure 18.7.

Figure 18.7 A SQL Server Agent proxy

In previous chapters, you have seen that by default a job step runs under the security context of the SQL Server Agent service account. Each job step can also be modified to run under the context of a SQL Server Agent proxy account (sqlps.io/agentproxy), as long as the proxy has been created and it has been associated with the job step’s subsystem.

Figure 18.8 The drop-down list for Run As contains both the default SQL Server Agent service account credential and a user-created credential.

To enable the SQL Server Agent job to be successful, you will also need to ensure that the account that is running the job step (either the SQL Server service account or the one associated with the proxy) has access to all resources required for the task. This includes the PowerShell script file described next, network shares, other SQL instances, and the PowerShell modules on the host. To put it simply, whatever your script needs to accomplish, this account needs the required permissions to do it.

If you are using a proxy, ensure it has been granted rights to the appropriate subsystem. In figure 18.9, we’ve created a proxy account with the PowerShell service account credential and associated it with only the CmdExec subsystem, because that’s our preferred method for the reasons listed in the CmdExec row of table 18.2.

Figure 18.9 This proxy account is associated with the PowerShell credential and enabled only for CmdExec.

With the proxy set up and all permissions granted, you can then create your PowerShell script.
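The proxy from figure 18.9 can also be created with dbatools. This sketch uses New-DbaAgentProxy, again assuming the sql01 instance and the credential name from our examples:

```powershell
# Create a proxy that uses the PowerShell Service Account credential
# and enable it only for the CmdExec subsystem, per our recommendation
New-DbaAgentProxy -SqlInstance sql01 -Name "PowerShell Proxy Account" `
    -ProxyCredential "PowerShell Service Account" -SubSystem CmdExec
```

Granting only the CmdExec subsystem keeps the proxy aligned with the principle of least privilege discussed earlier.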

18.2.3 The PowerShell file

Our preferred method of running PowerShell in a SQL Server Agent job step requires creating a PowerShell script file and saving it to the filesystem. Storing and running scripts from the filesystem simplifies script management, because files are far easier to edit when they are stored on disk. The syntax for accessing scripts is also much easier than escaping PowerShell code directly within the CmdExec job step.

The Windows account for which you have created a credential must have permissions to access the directory and execute the file. You can choose to store the scripts in a central share that all of your instances have access to, or you can store your PowerShell scripts on the filesystem of the SQL instance. The decision is yours, and you should make it only after a conversation with all the required teams in your organization, such as DBAs, security, and Windows admins.

Important If you are using Failover Cluster Instances, ensure you store the file in a location that both nodes can access.

Whichever choice you make, we recommend that you store your scripts in source control and deliver them to the location automatically. We have used Azure DevOps, TFS, Jenkins, and Octopus Deploy to do this.

18.3 Creating the SQL Server Agent job with a CmdExec job step

As you learned earlier, SQL Server Agent jobs consist of steps that execute actions. So a nightly dev restore job may have three steps: backup, restore, and verify. When creating each of these job steps, you will need to select the SQL Server Agent proxy that was created earlier. In figure 18.10, the step is running as the PowerShell proxy account seen in figure 18.7. Most often, all of the job steps will use the same proxy, but it is not required. You may, for example, have one account that can read from a source system but need another account to write to a destination system. You would need to create a proxy account for each account and then define each job step to use the correct proxy.

Figure 18.10 A SQL Server Agent job step using the Run as PowerShell proxy account and the PowerShell command for the database restore script

This job step is running the command powershell.exe -File S:\scheduled\dbrestore.ps1. This spawns a CmdExec process (cmd.exe) that then spawns a powershell.exe process, which will run the script. If you have PowerShell 7 installed on the host, you can replace powershell.exe with pwsh.exe.

Try it now 18.3

Mimic CmdExec by starting up cmd.exe and running powershell.exe with the -File or -Command parameter.

The previous job step command was powershell.exe -File S:\scheduled\dbbackup.ps1, and the next step would run powershell.exe -File S:\scheduled\dbverify.ps1. Depending on your team’s preferences, you can also create just one step that runs all three processes in one script.

This process consists of multiple parts: the service account, the credential, the proxy account, the PowerShell script file, and maybe also source control and CI/CD. Because of this, we strongly encourage you to add relevant information to the SQL Server Agent job description, as shown in figure 18.11, because this will make life easier for future you. The time spent troubleshooting issues will be vastly reduced for the expenditure of a couple of minutes to add the description.
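The job and its CmdExec step can also be created with dbatools rather than SSMS. The following is a sketch using New-DbaAgentJob and New-DbaAgentJobStep; the instance, job name, script path, description text, and proxy name are assumptions based on our running example:

```powershell
# Create the job, with a description that helps future troubleshooters
$splatJob = @{
    SqlInstance = "sql01"
    Job         = "Nightly Dev Restore"
    Description = "Runs S:\scheduled\dbrestore.ps1 as the PowerShell Proxy Account. Script is deployed from source control."
}
New-DbaAgentJob @splatJob

# Add a CmdExec step that runs the script under the proxy
$splatStep = @{
    SqlInstance = "sql01"
    Job         = "Nightly Dev Restore"
    StepName    = "Restore"
    Subsystem   = "CmdExec"
    Command     = "powershell.exe -File S:\scheduled\dbrestore.ps1"
    ProxyName   = "PowerShell Proxy Account"
}
New-DbaAgentJobStep @splatStep
```

Putting the description in the creation script means it ends up in source control too, where it documents the job for the whole team.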

Figure 18.11 The description for the SQL Server Agent job should include information to help the team quickly support it.

With the Agent step(s) created, you’re ready to use PowerShell in SQL Server Agent.

18.4 Tips

The following are some approaches we use within our own PowerShell scripts that run within SQL Server Agent.

18.4.1 Using default parameter values

$PSDefaultParameterValues is an automatic variable that was introduced in PowerShell 3.0. It is a hashtable that can set defaults for any command that you run. In its simplest form, setting a default parameter value looks like the following listing.

Listing 18.1 Setting sql01 as the default value

PS> $PSDefaultParameterValues["Get-DbaDatabase:SqlInstance"] = "sql01"

If the format is a bit intimidating, just copy and paste, and replace as needed. That’s what we do. Basically, it’s $PSDefaultParameterValues["CommandName:ParameterName"] = "default value".

According to Microsoft (dbatools.io/abparamdefaults), $PSDefaultParameterValues enables you to specify custom default values for any command, and these custom default values will be used unless you specify another value. This is particularly helpful when you must specify the same value nearly every single time. Table 18.3 contains some default parameter values to consider when scheduling scripts.

Table 18.3 Default parameter values to consider

-Verbose ($true): Creates more output to help debug and see results

-Confirm ($false): Prevents prompting that would lead to an infinite wait

-EnableException ($true): Forces the Agent to fail on dbatools commands

-ErrorAction (Stop): Forces the Agent to fail on other commands

Even when you’re not scheduling tasks, $PSDefaultParameterValues continues to be useful. We have used it to set persistent -SqlCredential parameter values when working with Docker. We have also used it to set the default proxy and -ProxyUseDefaultCredentials for Invoke-WebRequest. Default parameter values are also used extensively within dbachecks to ensure Pester tests fail when expected.

18.4.2 Ensuring that the Agent job fails when the PowerShell fails

When you run a SQL Server Agent job with a CmdExec job step, you want the Agent job to fail when the PowerShell returns an error. By default, this will not happen because the job step does not return an exit code that the SQL Server Agent will translate as a failure. For example, figure 18.12 shows a successful Agent job run at first glance with the green ticks, but closer examination shows that the PowerShell script failed.

Figure 18.12 This Agent job was not as successful as it appears.

We have used this image to show two issues. First, as you can see by reading the quoted text, the Agent job succeeded. As far as any DBA looking at the result of the job is concerned, there is no issue. It is for this reason that you need to perform the extra coding so that the Agent job will fail when expected. Secondly, the error message that is in the log for the job states the following:

The term Copy-DbaLogin is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.

This is a commonly reported error, and the reason for it is that the dbatools module could not be imported. If you receive this error, revisit chapter 2, and ensure that dbatools is available to the user.

Once dbatools has been successfully imported, you will need to add the parameter -EnableException. By default, dbatools manages all of the errors for you so that you are not faced with a sea of red when errors occur. The -EnableException parameter returns those errors, causing the jobs to fail as expected.

As you read earlier, you can easily add -EnableException to every dbatools command in your script. You just set the value once at the top of the script. Because dbatools uses a consistent PowerShell prefix in its naming convention, we can limit -EnableException to only dbatools commands by limiting it to any command that contains -Dba in the name, as shown next.

Listing 18.2 Setting -EnableException for all dbatools commands

PS> $PSDefaultParameterValues['*-Dba*:EnableException'] = $true

To ensure that the correct exit code is passed from all PowerShell commands, you must place them inside a try/catch block. This is a good PowerShell scripting practice anyway, and by controlling the code paths, you can make sure the Agent jobs fail as you expect. You should also ensure that the Agent job fails if the results of the command are not as expected and that you write code to provide the correct exit code.

Listing 18.3 A try/catch block

powershell.exe -Command "try { Get-DbaDatabase -SqlInstance doesntexist -ErrorAction Stop } catch { Write-Error 'Failure getting databases from server doesntexist' }"

Imagine that you had a default instance (like a model database) on which you created all of the logins (and other objects) that you wanted on all of your instances. You also want to create an Agent job on every instance to automate this procedure. You will only need to add a new login to your model instance, and the Agent jobs will ensure that all of the instances have the new login added. You know from chapter 9 that you can use Copy-DbaLogin to do this, but you want to write a SQL Server Agent job that is robust and either succeeds, or, if it fails, makes it clear that the step failed. The following steps will help you accomplish this task:

  • Setting PSDefaultParameterValues

  • Placing the code in the try block

  • Using a catch block with information and the correct exit code for failure

  • Checking the results of the code and responding with success or failure

We’ve already covered $PSDefaultParameterValues, so let’s take a look at the next code listing, which shows more detailed error handling with customized error message output using a combination with try/catch.

Listing 18.4 Detailed error handling

# EnableException for all dbatools commands and ErrorAction Stop for all
# commands so that the catch block is hit
$PSDefaultParameterValues["*-Dba*:EnableException"] = $true
$PSDefaultParameterValues["*:ErrorAction"] = "Stop"

# Copy logins from default instance to current instance
try {
  $splatCopyLogin = @{
    Source      = "SQL2019N5"
    Destination = "SQL2019N7"
    Login       = "dbachecks", "SartoriSauce", "ad\SqlAdmins"
  }
  $results = Copy-DbaLogin @splatCopyLogin
}
catch {
  $errormsg = $_.Exception.GetBaseException()
  Write-Output "There was an error - $errormsg"
  $results
  [System.Environment]::Exit(1)
}

# Check the results
if ($results.Status -notin "Successful", "Skipped") {
  Write-Output "There was a failure - $results"
  [System.Environment]::Exit(1)
}
else {
  Write-Output "$results"
  [System.Environment]::Exit(0)
}

In listing 18.4, the line of code that copies the logins will be familiar and is inside a try block. You are not limited to a single line of code. You can place any number of lines of PowerShell code within the try block if you are performing more complex tasks. This will most often be the code that you are used to writing on the command line or in small scripts already.

If an error occurs when running that code, the catch block is invoked and writes to the output stream. It returns the base exception of the error, which gives a cleaner output, returns the results for more information, and exits, using the correct exit code to fail the Agent job. This causes the job to fail properly, as demonstrated in figure 18.13.

Figure 18.13 This job now fails as expected.

To be certain that the script has successfully achieved what was expected, the results of the command are checked to make sure that the status includes either Successful or Skipped. If there is no match, the script exits with the correct exit code to fail the Agent job and writes the results. Finally, if the script did not fail and the results are as expected, the results are written to the output stream, a successful exit code is provided, and the Agent job succeeds. Once the code is written and stored in source control, save the file in a suitable directory.

Writing your code like this will take you a little more time in the beginning, but the benefit of writing more robust code and having your Agent jobs fail when you expect them to fail will save a lot of time for the future you.

18.4.3 Logging

By default, the logging in SQL Server Agent can be brief, because it truncates the output generated by each job step. You have ways around this. If you want to use the built-in SQL Server Agent functionality, you can change the job step property to include the step output in the history. When using PowerShell, you can also use PowerShell transcripts to save the history to files outside of the SQL Server Agent history.
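You can also read the Agent job history from PowerShell with dbatools. The following is a sketch using Get-DbaAgentJobHistory; the instance and job name are assumptions, and the exact property names may vary slightly between dbatools versions:

```powershell
# Retrieve recent history for one job, including per-step messages
Get-DbaAgentJobHistory -SqlInstance sql01 -Job "Nightly Dev Restore" |
    Select-Object StepName, RunDate, Status, Message
```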

Built-in property

If you want richly detailed messages, you can use the built-in SQL Server Agent functionality that is available in the Advanced properties of each job step. This check box is selected in figure 18.14.

Figure 18.14 Ensure that the Include Step Output In History check box is selected for each step.

If enough output is created by the executed script, setting this property will result in multiple lines in the job’s history, as can be seen in figure 18.15.

Figure 18.15 Detailed logging, as seen in SQL Server Management Studio 17

One potential downside of keeping detailed records is that it can bloat your msdb database, so you must be sure to maintain your job history. This can be done in a number of ways. If you’d like to use SSMS, right-click on the SQL Server Agent icon, and click Properties. Then click History. Figure 18.16 should then appear.

Figure 18.16 Using the Agent properties to delete old records

Alternatively, you can limit the history rows and job history rows using Set-DbaAgentServer in dbatools, as shown in the next listing.

Listing 18.5 Using dbatools to manage job history

PS> $splatSetAgent = @{
  SqlInstance = "sql1"
  MaximumHistoryRows = 10000
  MaximumJobHistoryRows = 100
}
PS> Set-DbaAgentServer @splatSetAgent

-MaximumHistoryRows applies to the entire Agent engine, whereas -MaximumJobHistoryRows applies to individual jobs. Determining the best numbers to specify depends on your current environment. The defaults work well for most of our needs.

Start-Transcript

Start-Transcript is a native PowerShell command that transcribes the console window output to a text file, as shown here.

Listing 18.6 Writing a transcript

PS> $date = Get-Date -Format FileDateTime
PS> Start-Transcript -Path "\\loggingserver\sql01\filelist-$date.txt"
PS> Get-ChildItem -Path C:\
PS> Stop-Transcript

Running the code in listing 18.6 results in the output shown in figure 18.17.

Figure 18.17 Sample output of Start-Transcript

We used a unique filename because, by default, Start-Transcript overwrites the destination file. If you’d like all of your output to go into one file and not overwrite previous results, use the -Append parameter.

Try it now 18.4

Use Start-Transcript to write to a file named daily-filelist.txt without overwriting the currently existing file.

18.4.4 Execution policies

Sometimes, a PowerShell-based job step can run literally forever. After some digging, we discovered that the system settings or antivirus utility had an issue with the signed module we were using, namely, dbatools. To fix this, we simply set the execution policy for the command to Bypass in the Command box, as seen in figure 18.18.

Figure 18.18 The execution policy has been set to Bypass within a job step.

Bypassing a safety mechanism may seem concerning, but as we discovered in chapter 2, it is an effective setting when used in the appropriate circumstances, such as this one.
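In a CmdExec job step, this means adding -ExecutionPolicy Bypass before -File in the Command box. Using the assumed script path from our earlier examples, the command looks like this:

```powershell
powershell.exe -ExecutionPolicy Bypass -File S:\scheduled\dbrestore.ps1
```

Note that -ExecutionPolicy applies only to that one process, so it does not change the machine-wide policy.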

18.5 Hands-on lab

Try the following tasks:

  • Create a basic SQL Server Agent task with a CmdExec subsystem that runs a PowerShell script that performs Get-ChildItem C:\.

  • Add extra logging by setting Include Step Output in History on the job step.
