© The Author(s), under exclusive license to APress Media, LLC, part of Springer Nature 2023
P. A. Carter, Pro SQL Server 2022 Administration, https://doi.org/10.1007/978-1-4842-8864-1_16

16. Implementing Log Shipping

Peter A. Carter
Southampton, UK

As discussed in Chapter 14, log shipping is a technology you can use to implement disaster recovery and the scale out of read-only reporting. It works by taking the transaction log backups of a database, copying them to one or more secondary servers, and then restoring them, in order to keep the secondary server(s) synchronized. This chapter demonstrates how to implement log shipping for disaster recovery (DR). You also discover how to monitor and fail over log shipping.

Note

For the purpose of the demonstrations in this chapter, we use a domain consisting of a domain controller and four stand-alone servers, each with an instance of SQL Server installed. The server/instance names are PRIMARYSERVER\PROSQLADMIN, DRSERVER\PROSQLDR, REPORTSERVER\PROSQLREPORTS, and MONITORSERVER\PROSQLMONITOR, respectively.

Implementing Log Shipping for DR

Before we begin to implement log shipping for disaster recovery, we first create a database that we will use for the demonstrations in this chapter. The script in Listing 16-1 creates a database called Chapter16 with its recovery model set to FULL. We create one table within the database and populate it with data.
--Create the database
CREATE DATABASE Chapter16;
GO
ALTER DATABASE Chapter16 SET RECOVERY FULL;
GO
USE Chapter16
GO
--Create and populate numbers table
DECLARE @Numbers TABLE
(
        Number        INT
)
;WITH CTE(Number)
AS
(
        SELECT 1 Number
        UNION ALL
        SELECT Number + 1
        FROM CTE
        WHERE Number < 100
)
INSERT INTO @Numbers
SELECT Number FROM CTE;
--Create and populate name pieces
DECLARE @Names TABLE
(
        FirstName        VARCHAR(30),
        LastName        VARCHAR(30)
);
INSERT INTO @Names
VALUES('Peter', 'Carter'),
                 ('Michael', 'Smith'),
                 ('Danielle', 'Mead'),
                 ('Reuben', 'Roberts'),
                 ('Iris', 'Jones'),
                 ('Sylvia', 'Davies'),
                 ('Finola', 'Wright'),
                 ('Edward', 'James'),
                 ('Marie', 'Andrews'),
                 ('Jennifer', 'Abraham') ;
--Create and populate Customers table
CREATE TABLE dbo.Customers
(
        CustomerID              INT            NOT NULL    IDENTITY    PRIMARY KEY,
        FirstName               VARCHAR(30)    NOT NULL,
        LastName                VARCHAR(30)    NOT NULL,
        BillingAddressID        INT            NOT NULL,
        DeliveryAddressID       INT            NOT NULL,
        CreditLimit             MONEY          NOT NULL,
        Balance                 MONEY          NOT NULL
);
SELECT * INTO #Customers
FROM
         (SELECT
                 (SELECT TOP 1 FirstName FROM @Names ORDER BY NEWID()) FirstName,
                 (SELECT TOP 1 LastName FROM @Names ORDER BY NEWID()) LastName,
                 (SELECT TOP 1 Number FROM @Numbers ORDER BY NEWID()) BillingAddressID,
                 (SELECT TOP 1 Number FROM @Numbers ORDER BY NEWID()) DeliveryAddressID,
                 (SELECT TOP 1 CAST(RAND() * Number AS INT) * 10000
                      FROM @Numbers
                      ORDER BY NEWID()) CreditLimit,
                 (SELECT TOP 1 CAST(RAND() * Number AS INT) * 9000
                      FROM @Numbers
                      ORDER BY NEWID()) Balance
        FROM @Numbers a
        CROSS JOIN @Numbers b
) a;
INSERT INTO dbo.Customers
SELECT * FROM #Customers;
GO
Listing 16-1

Creating the Database to Be Log Shipped

For the purpose of this demonstration, we would like to configure disaster recovery for the Chapter16 database so that we have an RPO (recovery point objective) of ten minutes. We will also implement a ten-minute load delay on the DR server. This means that if an application team notifies us immediately of an incident that has led to data loss—for example, a user accidentally deletes rows from a table—then we are able to rectify the issue by using the data on the DR server before the log that contains the erroneous transaction is restored.

GUI Configuration

We can configure log shipping for our Chapter16 database through SQL Server Management Studio (SSMS). To do this, we select Properties from the context menu of the database and navigate to the Transaction Log Shipping page, which is displayed in Figure 16-1. The first task on this page is to check the Enable this as the primary database in a log shipping configuration check box.
Figure 16-1

The Transaction Log Shipping page

We can now use the Backup Settings button to display the Transaction Log Backup Settings screen. On this screen, we enter the UNC (Universal Naming Convention) path of the share that will be used for storing the log backups that log shipping takes. Because this share is actually configured on our primary server, we also enter the local path in the field below. The account that runs the backup job needs to be granted read and change permissions on this share. By default, this is the SQL Server service account, but for more granular security, it is possible to configure log shipping jobs to run under a proxy account, which reduces the security footprint of the implementation. Proxy accounts are discussed in Chapter 23.

We then configure how long we want our backup files to be retained before they are deleted. The value that you select for this depends on your enterprise’s requirements, but if your backup files are offloaded to tape, then you should make the files available long enough to allow the enterprise backup job to run, and you should potentially build in enough time for it to fail and then succeed on the following cycle. You should also consider your requirements for ad hoc restores. For example, if a project notices a data issue and requests a restore, you want to be able to retrieve the relevant backups from local disk, if possible. Therefore, consider how long you should give projects to notice an issue and request a restore before SQL Server removes the local copy of the backups. Backup strategies are discussed further in Chapter 13.

You should also specify how soon you want an alert to be generated if no log backup occurs. To be notified of any backup failure, you can configure the value to be a minute longer than your backup schedule. In some environments, however, it may be acceptable to miss a few backups. In such an instance, you may set the value to a larger interval so that you are not flooded with failed backup alerts during maintenance windows and other such situations.

The Set Backup Compression drop-down determines if backup compression should be used, to reduce network traffic. The default is to take the configuration from the instance, but you can override this by specifically choosing to use it, or not use it, for the backups taken by the log shipping job. The Transaction Log Backup Settings screen is illustrated in Figure 16-2.

Tip

Whether you should use compression depends on the resource constraints of the server. If you are experiencing network bandwidth constraints, you will likely want to implement compression. If your server is CPU bound, however, then you may not be able to spare the extra cycles that compression uses.
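If you choose to inherit the instance-level setting, you can check, and if necessary change, the default with sp_configure. The following is a minimal sketch; it assumes you have the permissions required to change server configuration and run RECONFIGURE.
--Check the instance-wide default (0 = compression off, 1 = compression on)
EXEC sys.sp_configure 'backup compression default';
GO
--Optionally enable backup compression by default for the instance
EXEC sys.sp_configure 'backup compression default', 1;
RECONFIGURE;
GO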

Figure 16-2

The Transaction Log Backup Settings screen

Clicking the Schedule button invokes the New Job Schedule screen. This screen, which is illustrated in Figure 16-3, is the standard SQL Server Agent screen used for creating job schedules, except that it has been prepopulated with the default name of the log shipping backup job. Because we are trying to achieve an RPO of ten minutes, we configure the backup job to run every five minutes. This is because we also need to allow time for the copy job to run; when planning for DR, we cannot assume that the primary server will be available for retrieving our log backups.
Figure 16-3

The New Job Schedule screen

After returning to the Transaction Log Shipping page, we can use the Add button to configure the secondary server(s) for our Log Shipping topology. Using this button causes the Secondary Database Settings page to display. This page consists of three tabs. The first of these is the Initialize Secondary Database tab, which is displayed in Figure 16-4.

On this tab, we configure how we want to initialize our secondary database. We can preinitialize the database by taking a full backup of it and then manually restoring it to the secondary server using the NORECOVERY option. In this case, we would select the No, the secondary database is initialized option.
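For reference, a manual initialization of this kind is nothing more than a full backup followed by a restore with NORECOVERY. The sketch below assumes the backup is written to the same share we use for log shipping, that the backup file name (Chapter16_init.bak) is illustrative, and that the file locations on the secondary match the primary; if they do not, add WITH MOVE clauses to the restore.
--On the primary server: take a full backup of the database
BACKUP DATABASE Chapter16
TO DISK = N'\\primaryserver\logshipping\primary\Chapter16_init.bak';
GO
--On the secondary server: restore it, leaving the database ready to accept log restores
RESTORE DATABASE Chapter16
FROM DISK = N'\\primaryserver\logshipping\primary\Chapter16_init.bak'
WITH NORECOVERY;
GO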

If we already have a full backup available, then we can place it in a file share that the SQL Server service account has read and modify permissions on, and then use the Yes, restore an existing backup of the primary database into the secondary database option and specify the location of the backup file.

In our case, however, we do not have an existing full backup of the Chapter16 database, so we select the Yes, generate a full backup of the primary database and restore it into the secondary database option. This causes the Restore Options window to display; here we enter the locations in which we want the data and transaction log files to be created on the secondary server, as illustrated in Figure 16-4.
Figure 16-4

The Initialize Secondary Database tab

On the Copy Files tab, illustrated in Figure 16-5, we configure the job that is responsible for copying the transaction log files from the primary server to the secondary server(s). First, we specify the share on the secondary server to which we will copy the transaction logs. The account that runs the copy job must be configured with read and modify permissions on the share. Just like the backup job, this job defaults to running under the context of the SQL Server service account, but you can also configure it to run under a proxy account.

We also use this tab to configure how long the backup files should be retained on the secondary server before they are deleted. I usually recommend keeping this value in line with the value that you specify for retaining the backups on the primary server for consistency.
Figure 16-5

The Copy Files tab

The Job name field is automatically populated with the default name for a log shipping copy job, and using the Schedule button, you can invoke the New Job Schedule screen, where you can configure the schedule for the copy job. As illustrated in Figure 16-6, we have configured this job to run every five minutes, which is in line with our RPO requirement of ten minutes: it takes up to five minutes for the log to be backed up, and then up to another five minutes for it to be moved to the secondary server. Once the file has been moved to the secondary server, we can be confident that, except in the most extreme circumstances, we will be able to retrieve the backup from either the primary or the secondary server, thus achieving our ten-minute RPO.
Figure 16-6

The New Job Schedule screen

On the Restore Transaction Log tab, we configure the job that is responsible for restoring the backups on the secondary server. The most important option on this screen is the database state that we choose for the restores. Selecting the No recovery mode option is the appropriate choice for a DR server. If you choose Standby mode instead, uncommitted transactions are saved to a Transaction Undo File, which means the database can be brought online in read-only mode (as discussed in Chapter 14). However, this increases the recovery time, because those transactions then need to be reapplied before the redo phase of the next restore.
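To make the difference concrete, the following sketch shows roughly what the restore job issues in each mode; the backup file name and the Transaction Undo File path are illustrative only.
--No recovery mode: the database stays in a restoring state, ready for the next log (DR scenario)
RESTORE LOG Chapter16
FROM DISK = N'\\drserver\logshipping\dr\Chapter16_20220517.trn'
WITH NORECOVERY;
GO
--Standby mode: uncommitted transactions are held in a Transaction Undo File
--and the database is readable between restores
RESTORE LOG Chapter16
FROM DISK = N'\\drserver\logshipping\dr\Chapter16_20220517.trn'
WITH STANDBY = N'c:\logshipping\dr\Chapter16_undo.tuf';
GO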

On this tab, we also use the Delay restoring backups for at least option to apply the load delay, which gives users a chance to report data issues. We can also specify how long the delay should be before we are alerted that no restore operation has occurred. The Restore Transaction Log tab is illustrated in Figure 16-7.
Figure 16-7

The Restore Transaction Log tab

The Schedule button invokes the New Job Schedule screen, displayed in Figure 16-8. On this screen, we can configure the job schedule for the restore of our transaction logs. Although doing so is not mandatory, for consistency, I usually recommend configuring this value so it is the same as the backup and copy jobs.
Figure 16-8

New Job Schedule screen

Once back on the Transaction Log Shipping page, we need to decide if we want to implement a monitor server. This option allows us to configure an instance that acts as a centralized point for monitoring our Log Shipping topology. This is an important decision to make at this point, because after configuration is complete, there is no official way to add a monitor server to the topology without tearing down and reconfiguring log shipping.

Tip

It is technically possible to force in a monitor server at a later time, but the process involves manually updating the log shipping metadata tables in MSDB. Therefore, it is neither recommended nor supported.
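Reading this metadata, on the other hand, is perfectly safe. The sketch below is a read-only look at the two main monitor tables in MSDB on the monitor server; the column list is a subset chosen for illustration.
--Run on the monitor server: read-only view of the log shipping metadata
SELECT primary_server, primary_database, backup_threshold, last_backup_file, last_backup_date
FROM msdb.dbo.log_shipping_monitor_primary;
SELECT secondary_server, secondary_database, restore_threshold, last_copied_file, last_restored_file, last_restored_date
FROM msdb.dbo.log_shipping_monitor_secondary;
GO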

To add a monitor server, we check the option to Use a Monitor Server Instance and enter the server/instance name. Clicking the Settings button causes the Log Shipping Monitor Settings screen to display. We use this screen, shown in Figure 16-9, to configure how connections are made to the monitor server and the history retention settings.
Figure 16-9

The Log Shipping Monitor Settings screen

Now that our Log Shipping topology is fully configured, we can choose to script the configuration, which can be helpful for the purposes of documentation and change control. We can then complete the configuration. The progress of the configuration displays in the Save Log Shipping Configuration window (see Figure 16-10). Once configuration is complete, we should check this window for any errors that may have occurred during configuration and resolve them as needed. The most common cause of issues with log shipping configuration tends to be permissions related, so we need to ensure that the SQL Server service account (or proxy account) has the correct permissions on the file shares and instances before we continue.
Figure 16-10

The Save Log Shipping Configuration page

T-SQL Configuration

To configure log shipping through T-SQL, we need to run a number of system stored procedures. The first of these procedures is sp_add_log_shipping_primary_database, which we use to configure the primary database, including its backup job and monitoring. The parameters used by this procedure are described in Table 16-1.
Table 16-1

sp_add_log_shipping_primary_database Parameters

Parameter

Description

@database

The name of the database for which you are configuring log shipping.

@backup_directory

The local path to the backup folder.

@backup_share

The network path to the backup folder.

@backup_job_name

The name to use for the job that backs up the log.

@backup_retention_period

The duration that log backups should be kept for, specified in minutes.

@monitor_server

The server/instance name of the monitor server.

@monitor_server_security_mode

The authentication mode to use to connect to the monitor server. 0 is SQL authentication and 1 is Windows authentication.

@monitor_server_login

The account used to connect to the monitor server (only use if SQL authentication is specified).

@monitor_server_password

The password of the account used to connect to the monitor server (only use if SQL authentication is specified).

@backup_threshold

The amount of time that can elapse, without a log backup being taken, before an alert is triggered.

@threshold_alert

The alert to be raised if the backup threshold is exceeded.

@threshold_alert_enabled

Specifies if an alert should be fired. 0 disables the alert; 1 enables it.

@history_retention_period

The duration for which the log backup job history will be retained, specified in minutes.

@backup_job_id

An OUTPUT parameter that specifies the GUID of the backup job that is created by the procedure.

@primary_id

An OUTPUT parameter that specifies the ID of the primary database.

@backup_compression

Specifies if backup compression should be used. 0 means disabled, 1 means enabled, and 2 means use the instance’s default configuration.

Listing 16-2 demonstrates how we can use the sp_add_log_shipping_primary_database procedure to configure Chapter16 for log shipping. This script uses the @backup_job_id output parameter to pass the job’s GUID into the sp_update_job stored procedure. It also uses the sp_add_schedule and sp_attach_schedule system stored procedures to create the job schedule and attach it to the job. Because configuring log shipping involves connecting to multiple instances, we have added a connection to the primary instance. This means that we should run the script in SQLCMD mode.

Note

sp_update_job, sp_add_schedule, and sp_attach_schedule are system stored procedures used to manipulate SQL Server Agent objects. A full discussion of SQL Server Agent can be found in Chapter 23.

--Note that this script should be run in sqlcmd mode
:connect primaryserver\prosqladmin
DECLARE @LS_BackupJobId         UNIQUEIDENTIFIER
DECLARE @LS_BackUpScheduleID        INT
--Configure Chapter16 database as the Primary for Log Shipping
EXEC master.dbo.sp_add_log_shipping_primary_database
                @database = N'Chapter16'
                ,@backup_directory = N'c:\logshipping\primary'
                ,@backup_share = N'\\primaryserver\logshipping\primary'
                ,@backup_job_name = N'LSBackup_Chapter16'
                ,@backup_retention_period = 2880
                ,@backup_compression = 2
                ,@monitor_server = N'monitorserver.prosqladmin.com\prosqlmonitor'
                ,@monitor_server_security_mode = 1
                ,@backup_threshold = 60
                ,@threshold_alert_enabled = 1
                ,@history_retention_period = 5760
                ,@backup_job_id = @LS_BackupJobId OUTPUT ;
--Create a job schedule for the backup job
EXEC msdb.dbo.sp_add_schedule
                 @schedule_name =N'LSBackupSchedule_primaryserver\prosqladmin1'
                ,@enabled = 1
                ,@freq_type = 4
                ,@freq_interval = 1
                ,@freq_subday_type = 4
                ,@freq_subday_interval = 5
                ,@freq_recurrence_factor = 0
                ,@active_start_date = 20220517
                ,@active_end_date = 99991231
                ,@active_start_time = 0
                ,@active_end_time = 235900
                ,@schedule_id = @LS_BackUpScheduleID OUTPUT ;
--Attach the job schedule to the job
EXEC msdb.dbo.sp_attach_schedule
                 @job_id = @LS_BackupJobId
                ,@schedule_id = @LS_BackUpScheduleID  ;
--Enable the backup job
EXEC msdb.dbo.sp_update_job
                 @job_id = @LS_BackupJobId
                ,@enabled = 1 ;
Listing 16-2

Sp_add_log_shipping_primary_database

We use the sp_add_log_shipping_primary_secondary system stored procedure to update the metadata on the primary server in order to add a record for each secondary server in the Log Shipping topology. The parameters that it accepts are described in Table 16-2.
Table 16-2

sp_add_log_shipping_primary_secondary Parameters

Parameter

Description

@primary_database

The name of the primary database

@secondary_server

The server/instance of the secondary server

@secondary_database

The name of the database on the secondary server

Listing 16-3 demonstrates how we can use the sp_add_log_shipping_primary_secondary procedure to add a record of our DRSERVER\PROSQLDR instance to our primary server. Again, we connect explicitly to the primary server, meaning that the script should be run in SQLCMD mode.
:connect primaryserver\prosqladmin
EXEC master.dbo.sp_add_log_shipping_primary_secondary
                 @primary_database = N'Chapter16'
                ,@secondary_server = N'drserver\prosqldr'
                ,@secondary_database = N'Chapter16'
Listing 16-3

Sp_add_log_shipping_primary_secondary

We now need to configure our DR server. The first task in this process is to run the sp_add_log_shipping_secondary_primary system stored procedure. This procedure creates the SQL Server Agent jobs that copy the transaction logs to the secondary server and restore them. It also configures monitoring. The parameters accepted by this stored procedure are detailed in Table 16-3.
Table 16-3

sp_add_log_shipping_secondary_primary Parameters

Parameter

Description

@primary_server

The server/instance name of the primary server.

@primary_database

The name of the primary database.

@backup_source_directory

The folder that the log backups are copied from.

@backup_destination_directory

The folder that the log backups are copied to.

@copy_job_name

The name that is given to the SQL Server Agent job used to copy the transaction logs.

@restore_job_name

The name that is given to the SQL Server Agent job used to restore the transaction logs.

@file_retention_period

The duration for which copied backup files are retained on the secondary server before being deleted, specified in minutes.

@monitor_server

The server/instance name of the monitor server.

@monitor_server_security_mode

The authentication mode to be used to connect to the monitor server. 0 is SQL authentication and 1 is Windows authentication.

@monitor_server_login

The account used to connect to the monitor server (only use if SQL authentication is specified).

@monitor_server_password

The password of the account used to connect to the monitor server (only use if SQL authentication is specified).

@copy_job_id

OUTPUT parameter that specifies the GUID of the job that has been created to copy the transaction logs.

@restore_job_id

OUTPUT parameter that specifies the GUID of the job that has been created to restore the transaction logs.

@secondary_id

An OUTPUT parameter that specifies the ID of the secondary server in the log shipping configuration.

Listing 16-4 demonstrates how we can use the sp_add_log_shipping_secondary_primary stored procedure to configure our DRSERVER\PROSQLDR instance as a secondary server in our Log Shipping topology. The script connects explicitly to the DR instance, so we should run it in SQLCMD mode. Just as when we set up the primary server, we use output parameters to pass the job GUIDs to the SQL Server Agent stored procedures that create the job schedules and enable the jobs.
--Note This script should be run in sqlcmd mode
:connect drserver\prosqldr
DECLARE @LS_Secondary__CopyJobId        AS uniqueidentifier
DECLARE @LS_Secondary__RestoreJobId        AS uniqueidentifier
DECLARE @LS_SecondaryCopyJobScheduleID        AS int
DECLARE @LS_SecondaryRestoreJobScheduleID        AS int
--Configure the secondary server
EXEC master.dbo.sp_add_log_shipping_secondary_primary
                 @primary_server = N'primaryserver\prosqladmin'
                ,@primary_database = N'Chapter16'
                ,@backup_source_directory = N'\\primaryserver\logshipping\primary'
                ,@backup_destination_directory = N'\\drserver\logshipping\dr'
                ,@copy_job_name = N'LSCopy_primaryserver\prosqladmin_Chapter16'
                ,@restore_job_name = N'LSRestore_primaryserver\prosqladmin_Chapter16'
                ,@file_retention_period = 2880
                ,@monitor_server = N'monitorserver.prosqladmin.com\prosqlmonitor'
                ,@monitor_server_security_mode = 1
                ,@copy_job_id = @LS_Secondary__CopyJobId OUTPUT
                ,@restore_job_id = @LS_Secondary__RestoreJobId OUTPUT ;
--Create the schedule for the copy job
EXEC msdb.dbo.sp_add_schedule
                 @schedule_name =N'DefaultCopyJobSchedule'
                ,@enabled = 1
                ,@freq_type = 4
                ,@freq_interval = 1
                ,@freq_subday_type = 4
                ,@freq_subday_interval = 15
                ,@freq_recurrence_factor = 0
                ,@active_start_date = 20220517
                ,@active_end_date = 99991231
                ,@active_start_time = 0
                ,@active_end_time = 235900
                ,@schedule_id = @LS_SecondaryCopyJobScheduleID OUTPUT ;
--Attach the schedule to the copy job
EXEC msdb.dbo.sp_attach_schedule
                 @job_id = @LS_Secondary__CopyJobId
                ,@schedule_id = @LS_SecondaryCopyJobScheduleID  ;
--Create the job schedule for the restore job
EXEC msdb.dbo.sp_add_schedule
                 @schedule_name =N'DefaultRestoreJobSchedule'
                ,@enabled = 1
                ,@freq_type = 4
                ,@freq_interval = 1
                ,@freq_subday_type = 4
                ,@freq_subday_interval = 15
                ,@freq_recurrence_factor = 0
                ,@active_start_date = 20220517
                ,@active_end_date = 99991231
                ,@active_start_time = 0
                ,@active_end_time = 235900
                ,@schedule_id = @LS_SecondaryRestoreJobScheduleID OUTPUT ;
--Attach the schedule to the restore job
EXEC msdb.dbo.sp_attach_schedule
                 @job_id = @LS_Secondary__RestoreJobId
                ,@schedule_id = @LS_SecondaryRestoreJobScheduleID  ;
--Enable the jobs
EXEC msdb.dbo.sp_update_job
                 @job_id = @LS_Secondary__CopyJobId
                ,@enabled = 1 ;
EXEC msdb.dbo.sp_update_job
                 @job_id = @LS_Secondary__RestoreJobId
                ,@enabled = 1 ;
Listing 16-4

Sp_add_log_shipping_secondary_primary

Our next step is to configure the secondary database. We can perform this task by using the sp_add_log_shipping_secondary_database stored procedure. The parameters accepted by this procedure are detailed in Table 16-4.
Table 16-4

sp_add_log_shipping_secondary_database Parameters

Parameter

Description

@secondary_database

The name of the secondary database.

@primary_server

The server/instance of the primary server.

@primary_database

The name of the primary database.

@restore_delay

Specifies the load delay, in minutes.

@restore_all

When set to 1, the restore job restores all available log backups. When set to 0, the restore job only applies a single log backup.

@restore_mode

Specifies the restore mode for the restore job to use. 1 means STANDBY and 0 means NORECOVERY.

@disconnect_users

Determines if users should be disconnected from the database while transaction log backups are being applied. 1 means that they are and 0 means that they are not. Only applies when restoring logs in STANDBY mode.

@block_size

Specifies the block size for the backup device, in bytes.

@buffer_count

Specifies the total number of memory buffers that can be used by a restore operation.

@max_transfer_size

Specifies the maximum size of the request that can be sent to the backup device, in bytes.

@restore_threshold

The amount of time that can elapse, without a restore being applied, before an alert is generated; specified in minutes.

@threshold_alert

The alert to be raised if the restore threshold is exceeded.

@threshold_alert_enabled

Specifies if the alert is enabled. 1 means that it is enabled and 0 means that it is disabled.

@history_retention_period

The retention period of the restore history, specified in minutes.

@ignoreremotemonitor

An undocumented parameter that partially controls how the internal log shipping database journal is updated.

Listing 16-5 demonstrates how we can use the sp_add_log_shipping_secondary_database procedure to configure our secondary database for log shipping. Since we are explicitly connecting to the DRSERVER\PROSQLDR instance, the script should run in SQLCMD mode.
:connect drserver\prosqldr
EXEC master.dbo.sp_add_log_shipping_secondary_database
                 @secondary_database = N'Chapter16'
                ,@primary_server = N'primaryserver\prosqladmin'
                ,@primary_database = N'Chapter16'
                ,@restore_delay = 10
                ,@restore_mode = 0
                ,@disconnect_users       = 0
                ,@restore_threshold = 30
                ,@threshold_alert_enabled = 1
                ,@history_retention_period = 5760
                ,@ignoreremotemonitor = 1
Listing 16-5

Sp_add_log_shipping_secondary_database

The final task is to synchronize the monitor server and the DR server. We do this by using the (surprisingly) undocumented stored procedure sp_processlogshippingmonitorsecondary. The parameters accepted by this procedure are detailed in Table 16-5.
Table 16-5

sp_processlogshippingmonitorsecondary

Parameter

Description

@mode

The recovery mode to use for the database. 0 indicates NORECOVERY and 1 indicates STANDBY.

@secondary_server

The server/instance of the secondary server.

@secondary_database

The name of the secondary database.

@secondary_id

The ID of the secondary server.

@primary_server

The server/instance of the primary server.

@monitor_server

The server/instance of the monitor server.

@monitor_server_security_mode

The authentication mode used to connect to the monitor server.

@primary_database

The name of the primary database.

@restore_threshold

The amount of time that can elapse without a restore being applied before an alert is triggered; specified in minutes.

@threshold_alert

The alert that fires if the restore threshold is exceeded.

@threshold_alert_enabled

Specifies if the alert is enabled or disabled.

@last_copied_file

The file name of the last log backup to be copied to the secondary server.

@last_copied_date

The date and time of the last time a log was copied to the secondary server.

@last_copied_date_utc

The date and time of the last time a log was copied to the secondary server, converted to UTC (Coordinated Universal Time).

@last_restored_file

The file name of the last transaction log backup to be restored on the secondary server.

@last_restored_date

The date and time of the last time a log was restored on the secondary server.

@last_restored_date_utc

The date and time of the last time a log was restored on the secondary server, converted to UTC.

@last_restored_latency

The elapsed time between the last log backup on the primary and its corresponding restore operation completing on the secondary.

@history_retention_period

The duration that the history is retained, specified in minutes.

The script in Listing 16-6 demonstrates how to use the sp_processlogshippingmonitorsecondary stored procedure to synchronize the information between our DR server and our monitor server. We should run the script against the monitor server, and since we are connecting explicitly to the MONITORSERVER\PROSQLMONITOR instance, we should run the script in SQLCMD mode.
:connect monitorserver\prosqlmonitor
EXEC msdb.dbo.sp_processlogshippingmonitorsecondary
                 @mode = 1
                ,@secondary_server = N'drserver\prosqldr'
                ,@secondary_database = N'Chapter16'
                ,@secondary_id = N''
                ,@primary_server = N'primaryserver\prosqladmin'
                ,@primary_database = N'Chapter16'
                ,@restore_threshold = 30
                ,@threshold_alert = 14420
                ,@threshold_alert_enabled = 1
                ,@history_retention_period        = 5760
                ,@monitor_server = N'monitorserver.prosqladmin.com\prosqlmonitor'
                ,@monitor_server_security_mode = 1
Listing 16-6

Sp_processlogshippingmonitorsecondary

Log Shipping Maintenance

After you configure log shipping, you still have ongoing maintenance tasks to perform, such as failing over to the secondary server, if you need to, and switching the primary and secondary roles. These topics are discussed in the following sections. We also discuss how to use the monitor server to monitor the log shipping environment.

Failing Over Log Shipping

If your primary server has an issue, or your primary site fails, you need to fail over to your secondary server. To do this, first back up the tail end of the log. We discuss this process fully in Chapter 13, but the process essentially involves backing up the transaction log without truncating it and with NORECOVERY. This stops users from being able to connect to the database, therefore avoiding any further data loss. Obviously, this is only possible if the primary database is accessible. You can perform this action for the Chapter16 database by using the script in Listing 16-7.
BACKUP LOG Chapter16
TO  DISK = N'c:\logshipping\primary\Chapter16_tail.trn'
WITH  NO_TRUNCATE , NAME = N'Chapter16-Tail Log Backup', NORECOVERY
GO
Listing 16-7

Backing Up the Tail End of the Log

The next step is to manually copy the tail end of the log and any other logs that have not yet been copied to the secondary server. Once this is complete, you need to manually restore the outstanding transaction log backups to the secondary server, in sequence. You need to apply the backups with NORECOVERY until the final backup is reached. This final backup is applied with RECOVERY. This causes any uncommitted transactions to be rolled back and the database to be brought online. Listing 16-8 demonstrates applying the final two transaction logs to the secondary database.
--Restore the first transaction log
RESTORE LOG Chapter16
FROM  DISK = N'C:\LogShipping\DR\Chapter16.trn'
WITH  FILE = 1,  NORECOVERY,  STATS = 10 ;
GO
--Restore the tail end of the log
RESTORE LOG Chapter16
FROM  DISK = N'C:\LogShipping\DR\Chapter16_tail.trn'
WITH  FILE = 1,  RECOVERY, STATS = 10 ;
GO
Listing 16-8

Applying Transaction Logs

Switching Roles

After you have failed over log shipping to the secondary server, you may want to swap the server roles so that the secondary that you failed over to becomes the new primary server and the original primary server becomes the secondary. In order to achieve this, first you need to disable the backup job on the primary server and the copy and restore jobs on the secondary server. If you are not planning to fail back the service, then it’s a good idea to do this as soon as possible. We can perform this task for our Log Shipping topology by using the script in Listing 16-9. Because we are connecting to multiple servers, we need to run this script in SQLCMD mode.
:connect primaryserver\prosqladmin
USE [msdb]
GO
--Disable backup job
EXEC msdb.dbo.sp_update_job @job_name = 'LSBackup_Chapter16',
                 @enabled=0 ;
GO
:connect drserver\prosqldr
USE [msdb]
GO
--Disable copy job
EXEC msdb.dbo.sp_update_job @job_name='LSCopy_primaryserver\prosqladmin_Chapter16',
                 @enabled=0 ;
GO
--Disable restore job
EXEC msdb.dbo.sp_update_job @job_name='LSRestore_primaryserver\prosqladmin_Chapter16',
                 @enabled=0 ;
GO
Listing 16-9

Disabling Log Shipping Jobs

The next step is to reconfigure log shipping on the new primary server. When you do this, configure the following:

  • Ensure that you use the same backup share that you used for the original primary server.

  • Ensure that when you add the secondary database, you specify the database that was originally the primary database.

  • Specify the No, the secondary database is initialized synchronization option.

The script in Listing 16-10 performs this action for our new secondary server. Since we are connecting to multiple servers, we should run the script in SQLCMD mode.
:connect drserver\prosqldr
DECLARE @LS_BackupJobId        AS uniqueidentifier
DECLARE @SP_Add_RetCode        As int
DECLARE @LS_BackUpScheduleID   AS int
EXEC @SP_Add_RetCode = master.dbo.sp_add_log_shipping_primary_database
                @database = N'Chapter16'
                ,@backup_directory = N'\\primaryserver\logshipping\primary'
                ,@backup_share = N'\\primaryserver\logshipping\primary'
                ,@backup_job_name = N'LSBackup_Chapter16'
                ,@backup_retention_period = 2880
                ,@backup_compression = 2
                ,@backup_threshold = 60
                ,@threshold_alert_enabled = 1
                ,@history_retention_period = 5760
                ,@backup_job_id = @LS_BackupJobId OUTPUT
                ,@overwrite = 1
EXEC msdb.dbo.sp_add_schedule
                @schedule_name =N'LSBackupSchedule_DRSERVER\PROSQLDR1'
                ,@enabled = 1
                ,@freq_type = 4
                ,@freq_interval = 1
                ,@freq_subday_type = 4
                ,@freq_subday_interval = 5
                ,@freq_recurrence_factor = 0
                ,@active_start_date = 20220517
                ,@active_end_date = 99991231
                ,@active_start_time = 0
                ,@active_end_time = 235900
                ,@schedule_id = @LS_BackUpScheduleID OUTPUT
EXEC msdb.dbo.sp_attach_schedule
                @job_id = @LS_BackupJobId
                ,@schedule_id = @LS_BackUpScheduleID
EXEC msdb.dbo.sp_update_job
                @job_id = @LS_BackupJobId
                ,@enabled = 1
EXEC master.dbo.sp_add_log_shipping_primary_secondary
                @primary_database = N'Chapter16'
                ,@secondary_server = N'primaryserver\prosqladmin'
                ,@secondary_database = N'Chapter16'
                ,@overwrite = 1
:connect primaryserver\prosqladmin
DECLARE @LS_Secondary__CopyJobId        AS uniqueidentifier
DECLARE @LS_Secondary__RestoreJobId        AS uniqueidentifier
DECLARE @LS_Add_RetCode        As int
DECLARE @LS_SecondaryCopyJobScheduleID        AS int
DECLARE @LS_SecondaryRestoreJobScheduleID        AS int
EXEC @LS_Add_RetCode = master.dbo.sp_add_log_shipping_secondary_primary
                @primary_server = N'DRSERVER\PROSQLDR'
                ,@primary_database = N'Chapter16'
                ,@backup_source_directory = N'\\primaryserver\logshipping\primary'
                ,@backup_destination_directory = N'\\primaryserver\logshipping\primary'
                ,@copy_job_name = N'LSCopy_DRSERVER\PROSQLDR_Chapter16'
                ,@restore_job_name = N'LSRestore_DRSERVER\PROSQLDR_Chapter16'
                ,@file_retention_period = 2880
                ,@overwrite = 1
                ,@copy_job_id = @LS_Secondary__CopyJobId OUTPUT
                ,@restore_job_id = @LS_Secondary__RestoreJobId OUTPUT
EXEC msdb.dbo.sp_add_schedule
                @schedule_name =N'DefaultCopyJobSchedule'
                ,@enabled = 1
                ,@freq_type = 4
                ,@freq_interval = 1
                ,@freq_subday_type = 4
                ,@freq_subday_interval = 5
                ,@freq_recurrence_factor = 0
                ,@active_start_date = 20220517
                ,@active_end_date = 99991231
                ,@active_start_time = 0
                ,@active_end_time = 235900
                ,@schedule_id = @LS_SecondaryCopyJobScheduleID OUTPUT
EXEC msdb.dbo.sp_attach_schedule
                @job_id = @LS_Secondary__CopyJobId
                ,@schedule_id = @LS_SecondaryCopyJobScheduleID
EXEC msdb.dbo.sp_add_schedule
                @schedule_name =N'DefaultRestoreJobSchedule'
                ,@enabled = 1
                ,@freq_type = 4
                ,@freq_interval = 1
                ,@freq_subday_type = 4
                ,@freq_subday_interval = 5
                ,@freq_recurrence_factor = 0
                ,@active_start_date = 20220517
                ,@active_end_date = 99991231
                ,@active_start_time = 0
                ,@active_end_time = 235900
                ,@schedule_id = @LS_SecondaryRestoreJobScheduleID OUTPUT
EXEC msdb.dbo.sp_attach_schedule
                @job_id = @LS_Secondary__RestoreJobId
                ,@schedule_id = @LS_SecondaryRestoreJobScheduleID
EXEC master.dbo.sp_add_log_shipping_secondary_database
                @secondary_database = N'Chapter16'
                ,@primary_server = N'DRSERVER\PROSQLDR'
                ,@primary_database = N'Chapter16'
                ,@restore_delay = 10
                ,@restore_mode = 0
                ,@disconnect_users       = 0
                ,@restore_threshold = 30
                ,@threshold_alert_enabled = 1
                ,@history_retention_period = 5760
                ,@overwrite = 1
EXEC msdb.dbo.sp_update_job
                @job_id = @LS_Secondary__CopyJobId
                ,@enabled = 1
EXEC msdb.dbo.sp_update_job
                @job_id = @LS_Secondary__RestoreJobId
                ,@enabled = 1
Listing 16-10

Reconfiguring Log Shipping

The final step is to reconfigure monitoring so it correctly monitors our new configuration. We can achieve this for our log shipping environment by using the script in Listing 16-11. This script connects to both the primary and secondary servers, so we should run it in SQLCMD mode.
:connect drserver\prosqldr
USE msdb
GO
EXEC master.dbo.sp_change_log_shipping_secondary_database
        @secondary_database = N'Chapter16',
        @threshold_alert_enabled = 0 ;
GO
:connect primaryserver\prosqladmin
USE msdb
GO
EXEC master.dbo.sp_change_log_shipping_primary_database
        @database = N'Chapter16',
        @threshold_alert_enabled = 0 ;
GO
Listing 16-11

Reconfiguring Monitoring

Because we have now created the backup, copy, and restore jobs on both servers, switching the roles after subsequent failovers is much more straightforward. From now on, after we have failed over, we can switch roles by simply disabling the backup job on the original primary server and the copy and restore jobs on the secondary server, and then enabling the backup job on the new primary server and the copy and restore jobs on the new secondary server.
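As a sketch, and assuming the job names created by the earlier listings in this chapter, switching back to the original roles after failing back to PRIMARYSERVER\PROSQLADMIN would look something like this (run in SQLCMD mode); adjust the job names if yours differ.
:connect primaryserver\prosqladmin
--The original primary becomes the primary again: enable backup, disable copy and restore
EXEC msdb.dbo.sp_update_job @job_name = 'LSBackup_Chapter16', @enabled = 1 ;
EXEC msdb.dbo.sp_update_job @job_name = 'LSCopy_DRSERVER\PROSQLDR_Chapter16', @enabled = 0 ;
EXEC msdb.dbo.sp_update_job @job_name = 'LSRestore_DRSERVER\PROSQLDR_Chapter16', @enabled = 0 ;
GO
:connect drserver\prosqldr
--The DR server becomes the secondary again: disable backup, enable copy and restore
EXEC msdb.dbo.sp_update_job @job_name = 'LSBackup_Chapter16', @enabled = 0 ;
EXEC msdb.dbo.sp_update_job @job_name = 'LSCopy_primaryserver\prosqladmin_Chapter16', @enabled = 1 ;
EXEC msdb.dbo.sp_update_job @job_name = 'LSRestore_primaryserver\prosqladmin_Chapter16', @enabled = 1 ;
GO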

Monitoring

The most important aspect of monitoring your Log Shipping topology is ensuring that the backups are occurring on the primary and being restored on the secondary. For this reason, when we configured log shipping earlier in this chapter, we specified acceptable thresholds for backups and restores, and SQL Server Agent alerts were created on the monitor server. Before these alerts are useful, however, we need to configure them with an operator to notify.

On the monitor server, we have configured two alerts. The first is called Log Shipping Primary Server Alert, and when you view the General tab of this alert’s properties, you see that it is configured to respond to Error 14420, as shown in Figure 16-11. Error 14420 indicates that a backup has not been taken of the primary database within the defined threshold.
Figure 16-11

The General tab

On the Response tab, displayed in Figure 16-12, we need to configure an operator to receive the alerts. You can either use the New Operator button to configure a new operator or, as in our case, simply select the appropriate notification channel for the appropriate operator(s) in the list. You can also choose to run a SQL Server Agent job that attempts to remediate the condition.
Figure 16-12

The Response tab

You should configure the Log Shipping Secondary Server Alert in the same way as the Log Shipping Primary Server Alert, except that it monitors for Error 14421 instead of 14420. Error 14421 indicates that a transaction log has not been restored to the secondary server within the threshold period.
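If you prefer to script the notification rather than use the Response tab, a minimal sketch follows. The operator name and e-mail address are examples, and the alert names should be verified against those displayed on your monitor server before you run it.
USE msdb
GO
--Create an operator to receive log shipping notifications (name and address are illustrative)
EXEC dbo.sp_add_operator
         @name = N'DBATeam'
        ,@email_address = N'dba.team@prosqladmin.com' ;
--Notify the operator by e-mail (notification method 1) when either alert fires
EXEC dbo.sp_add_notification
         @alert_name = N'Log Shipping Primary Server Alert'
        ,@operator_name = N'DBATeam'
        ,@notification_method = 1 ;
EXEC dbo.sp_add_notification
         @alert_name = N'Log Shipping Secondary Server Alert'
        ,@operator_name = N'DBATeam'
        ,@notification_method = 1 ;
GO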

The log shipping report can be run from SQL Server Management Studio, and when you run it on the monitor server, it displays the status of the primary server and each secondary server. When run on the primary server, it shows the status of each database based on the backup jobs and includes a line for each secondary. When run on the DR server, it shows the status of each database based on the restore jobs. You can access the report by invoking the context menu of the instance and drilling through Reports ➤ Standard Reports, before selecting the Transaction Log Shipping Status report.

Figure 16-13 illustrates the report when run against the primary server. In our case, log shipping is in good health, but if there were threshold breaches, then the rows would be highlighted in red. We could have obtained the same information by using the sp_help_log_shipping_monitor stored procedure.
Figure 16-13

The Log Shipping report
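sp_help_log_shipping_monitor can be run on the monitor server, or on the primary or secondary to see that server's view of the topology. A minimal example is shown below.
--Show backup, copy, and restore status for the log shipping configuration known to this server
EXEC sp_help_log_shipping_monitor ;
GO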

Summary

Log shipping is a technology that you can use to implement DR for databases. It synchronizes data by backing up the transaction log of the primary database, copying it to a secondary server, and then restoring it. If the logs are restored with STANDBY, then uncommitted transactions are stored in a Transaction Undo File and are reapplied before subsequent restores. This means that you can bring the database online in read-only mode for reporting. If the logs are restored with NORECOVERY, however, then the servers are ready for a DR invocation, but the databases remain in a restoring state and cannot be accessed.

Failing over a database to a secondary server involves backing up the tail end of the transaction log and then applying any outstanding log backups to the secondary database, before finally bringing the database online by issuing the final restore with RECOVERY. If you wish to switch the server roles, then you need to disable the current log shipping jobs, reconfigure log shipping so that the secondary server is now the primary, and then reconfigure monitoring. After subsequent failovers, however, switching the roles becomes easier, because you are able to simply disable and enable the appropriate SQL Server Agent jobs used by log shipping.

To monitor the health of your Log Shipping topology, you should configure the log shipping alerts and add an operator who will be notified if the alert fires. The alert for the primary server is monitoring for Error 14420, which means that the backup threshold has been exceeded. The alert for the secondary server(s) monitors for Error 14421, which indicates that the restore threshold has been exceeded.

A log shipping report is available; it returns data about the primary databases, the secondary databases, or all servers in the topology, depending on whether it is invoked from the primary server, the secondary server, or the monitor server, respectively. The same information can be obtained from the sp_help_log_shipping_monitor stored procedure.
