CHAPTER 27


Backup

By Jason Brimhall

In this chapter, you'll find recipes covering several methods of backing up a database using T-SQL. This chapter is in no way meant to be a comprehensive source on database backups; rather, it provides greater insight into the problems and limitations you may encounter. It outlines the different types of backups available through T-SQL, as well as how to query the msdb database for information about backup history.

27-1. Backing Up a Database

Problem

You want to do a full backup of the AdventureWorks2014 database to your C:\Apress folder using T-SQL.

Solution

Execute a BACKUP DATABASE statement. Specify TO DISK and provide the path and desired file name. There are several options that can be used with the BACKUP DATABASE statement that will be covered in the following recipes. Note that this example will perform a backup of the AdventureWorks2014 database, which does not currently have any differential backups. If differential backups were present, the differential recovery chain would be disrupted. More on differential backups will be discussed later in this chapter.

The following example demonstrates using the BACKUP DATABASE statement and specifying the file location where the backup will be stored:

BACKUP DATABASE AdventureWorks2014
TO DISK = 'C:\Apress\AdventureWorks2014.bak';
GO
Processed 24528 pages for database 'AdventureWorks2014', file 'AdventureWorks2014_Data' on file 1.
Processed 2 pages for database 'AdventureWorks2014', file 'AdventureWorks2014_Log' on file 1.
BACKUP DATABASE successfully processed 24530 pages in 0.451 seconds (424.922 MB/sec).

Tip  Make sure that the Apress folder exists on the C: drive, or change the path in the previous query.

This command will make a full backup of all data and log files in the AdventureWorks2014 database, including the FILESTREAM file.

How It Works

The BACKUP DATABASE command, absent all options, will make a full backup of all data and log files in the specified database to the path or device declared in the statement. The process involves copying the data files as well as the active portion of any log files.

This process is rather straightforward, but the resulting backup can be a bit confusing. Consider a database that has a single 50GB data file and a 10MB log file. One would think the resulting backup should be just more than 50GB, but this is not always the case.

During the backup process, SQL Server does not back up empty data pages, meaning that the backup will contain only partially or fully used data pages found within the data files. Comparing the backups may lead some to believe that proprietary compression is involved in the backup process, but really only “used” space is copied to the backup media.

Do not assume that the size of the backup file will translate bit for bit to the size of the database when restored. People frequently ask: “If I restore the database from a backup that is 50GB, will the database only be 50GB?” The database may or may not be just 50GB after the restore is complete, depending on the used space within the database pages that are in the backup. The data files’ size information is contained in the backup media, and that file size is what is used to reserve the space to which the data pages are written. In this example, this would mean that another 50GB data file would be created in the restored path, and the data pages would then be written to the data file.
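To see how much of a data file's reserved space is actually in use (and thus roughly how much of it will appear in a full backup), you can compare each file's size to the value reported by FILEPROPERTY. This is a minimal illustrative query; the exact numbers will vary by database:

```sql
-- Compare reserved size to used space for the data files.
-- Both size and SpaceUsed are reported in 8KB pages.
USE AdventureWorks2014;
GO
SELECT name,
       size AS reserved_pages,
       FILEPROPERTY(name, 'SpaceUsed') AS used_pages,
       (size - FILEPROPERTY(name, 'SpaceUsed')) * 8 / 1024.0 AS free_mb
FROM sys.database_files
WHERE type_desc = 'ROWS';
```

A file with substantial free_mb will restore to its full reserved size even though the backup itself contains only the used pages.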

27-2. Compressing a Backup

Problem

As a database grows, disk space can be a fleeting commodity, and the space needed to store a full backup can be prohibitive. Thus, you want to compress your backup files to reduce disk space requirements.

Solution

Backing up the database using the WITH COMPRESSION clause will compress the associated backup. The following example demonstrates creating a full backup of the AdventureWorks2014 database using compression:

USE master;
GO
BACKUP DATABASE AdventureWorks2014
TO DISK = 'C:\Apress\AdventureWorks2014compress.bak'
WITH COMPRESSION;
GO
Processed 24528 pages for database 'AdventureWorks2014', file 'AdventureWorks2014_Data' on file 1.
Processed 2 pages for database 'AdventureWorks2014', file 'AdventureWorks2014_Log' on file 1.
BACKUP DATABASE successfully processed 24530 pages in 0.507 seconds (377.974 MB/sec).

How It Works

The solution example created a full backup in the C:\Apress folder with the file name AdventureWorks2014compress.bak, using compression to reduce the size of the backup file. Compressing your backups provides a few benefits, such as reducing the disk space required to store them and reducing disk I/O: because there is less data to write to disk, the backup performs noticeably less I/O.

The specific compression ratio is dependent on the data that is compressed, which in turn is based on the following factors:

  • The type of data
  • The consistency of the data among rows on a page
  • Whether the data is encrypted
  • Whether the database is compressed

This makes it difficult to ascertain the specific compression ratio and the resulting backup size without first comparing a compressed file to an uncompressed backup file. Since the exact compressed size is mostly unknown until completion, the database engine uses a preallocation algorithm to reserve space for the backup file. The algorithm reserves a predefined percentage of the size of the database for the backup. During the backup process, if more space is required, the database engine will grow the file as needed. Upon completing the backup, the database engine will shrink the file size of the backup as needed.

The following script illustrates the difference, as a ratio, between a compressed and an uncompressed backup file of the AdventureWorks2014 database. This query is dependent on having performed the two backups earlier in this chapter:

USE msdb;
GO
SELECT TOP 2
        bs.database_name
    ,CONVERT(DECIMAL(18,2),backup_size/1024/1024.0) AS backup_mb
    ,CONVERT(DECIMAL(18,2),compressed_backup_size/1024/1024.0) AS compressed_backup_mb
    ,CONVERT(DECIMAL(18,2),backup_size/compressed_backup_size)  AS ratio
        ,(1 - CONVERT(DECIMAL(18,2),compressed_backup_size/backup_size))*100 as CompressPercent
FROM msdb.dbo.backupset bs
WHERE bs.database_name = 'Adventureworks2014'
        AND bs.type = 'D'
ORDER BY backup_start_date DESC;
GO

Results may vary.

database_name        backup_mb  compressed_backup_mb  ratio  CompressPercent
-------------------  ---------  --------------------  -----  ---------------
AdventureWorks2014   192.08     45.02                 4.27   77.00
AdventureWorks2014   192.08     192.08                1.00   0.00

This shows that the compressed backup had a compression ratio of just over 4 to 1, while the uncompressed was obviously 1 to 1. This in turn means the compressed backup achieved a 77% reduction in backup size over the uncompressed backup. This substantially reduced file size can also be evaluated by comparing the physical file size from within Windows Explorer.

So as to not paint a one-sided picture, it is important to note that there are some considerations regarding backup compression, including the inability of previous versions (prior to SQL Server 2008) to read a compressed backup. In addition, a compressed SQL backup cannot coexist on tape with an NTBackup.

After this discussion of the performance benefits in both backup size and disk I/O, you may be convinced to begin changing all of your maintenance plans and scripts to utilize WITH COMPRESSION, but you will need to consider the cost. The reduced file size and disk I/O are replaced with increased CPU usage during the backup process, which can prove to be significant. The increase in CPU can be mitigated by using Resource Governor, an Enterprise Edition feature, but it does need to be weighed against server resources and priorities.

Compressed backups also cannot coexist with uncompressed backups on the same media set, which means that compressed full, differential, and transaction log backups must be stored in separate media sets from uncompressed backups.
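If you decide compression is worth the CPU trade-off, you can enable it instance-wide rather than adding WITH COMPRESSION to every script. The following sketch uses the server-level backup compression default setting (available since SQL Server 2008):

```sql
-- Check the current server-level default (0 = off, 1 = on)
EXEC sp_configure 'backup compression default';
GO
-- Enable compression by default for all backups on this instance
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;
GO
```

With this set to 1, any BACKUP statement that does not explicitly specify NO_COMPRESSION will produce a compressed backup.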

27-3. Ensuring That a Backup Can Be Restored

Problem

Backing up a database is straightforward, but you want to make sure that the backup is not corrupt and can be successfully restored.

Solution

There are several ways to ensure that a backup can be successfully restored, the first of which is to utilize the WITH CHECKSUM option in the backup command:

USE master;
GO
BACKUP DATABASE AdventureWorks2014
TO DISK = 'C:\Apress\AdventureWorks2014check.bak'
WITH CHECKSUM;

This can be partnered with the command RESTORE VERIFYONLY to ensure not only that a backup is not corrupted, but also that it can be restored:

USE master;
GO
RESTORE VERIFYONLY
FROM DISK = 'C:\Apress\AdventureWorks2014check.bak'
WITH CHECKSUM;

Results from the two preceding statements are as follows (results may vary).

Processed 24536 pages for database 'AdventureWorks2014', file 'AdventureWorks2014_Data' on file 1.
Processed 3 pages for database 'AdventureWorks2014', file 'AdventureWorks2014_Log' on file 1.
BACKUP DATABASE successfully processed 24539 pages in 0.464 seconds (413.165 MB/sec).

The backup set on file 1 is valid.

How It Works

Using the WITH CHECKSUM option in a backup will verify each page checksum, if it is present on the page. Regardless of whether page checksums are available, the backup will generate a separate checksum for the backup streams. The backup checksums are stored in the backup media and not in the database pages, which means they can be used in restore operations to validate that the backup is not corrupt.

Using the WITH CHECKSUM option in a backup command allows you to control the behavior of what will happen if an error occurs (such as an invalid checksum or a torn page). The default behavior of CHECKSUM is that if a checksum cannot be verified, the backup will be forced to stop, while reporting an error. This can be changed by adding CONTINUE_AFTER_ERROR:

USE master;
GO
BACKUP DATABASE AdventureWorks2014
TO DISK = 'C:\Apress\AdventureWorks2014checkcon.bak'
WITH CHECKSUM, CONTINUE_AFTER_ERROR;

Using WITH CHECKSUM to ensure a backup’s integrity is a good start, but it does not validate the ability to restore the backup. To better ensure the ability to restore a backup, RESTORE VERIFYONLY should also be used in tandem with the CHECKSUM option. When using RESTORE VERIFYONLY, the following checks are performed to verify the backup:

  • Checking that it is a complete backup set with readable volumes
  • Validating certain header fields such as page_id
  • Checking the checksum, if one is present in the media
  • Checking for adequate free space in the restore path

Using WITH CHECKSUM in conjunction with RESTORE VERIFYONLY helps validate a backup’s integrity, but it is not fool-proof. Database backups are stored in a format called Microsoft Tape Format (MTF). This format includes MTF blocks, which contain the backup metadata, while the backed-up data is stored outside of the MTF blocks. RESTORE VERIFYONLY performs simple checks on the MTF blocks and not on the actual data blocks, which means the blocks could be consistent while the backup files could be corrupted. In such a case, RESTORE VERIFYONLY would show that the backup set could be restored, but issuing a restore would result in failure.

The moral of this solution is that you can try to ensure backup integrity, but the only sure way to test backup integrity is to restore backups and run DBCC CHECKDB on the restored database.
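The following sketch shows that pattern: restore the backup to a throwaway copy and run DBCC CHECKDB against it. The logical file names and target paths here are assumptions; confirm yours with RESTORE FILELISTONLY before running it.

```sql
-- List the files contained in the backup to confirm logical names
RESTORE FILELISTONLY
FROM DISK = 'C:\Apress\AdventureWorks2014check.bak';

-- Restore to a throwaway copy under new physical file names
-- (the logical names below are assumptions; use the FILELISTONLY output)
RESTORE DATABASE AdventureWorks2014_Verify
FROM DISK = 'C:\Apress\AdventureWorks2014check.bak'
WITH CHECKSUM,
     MOVE 'AdventureWorks2014_Data' TO 'C:\Apress\AW2014_Verify.mdf',
     MOVE 'AdventureWorks2014_Log'  TO 'C:\Apress\AW2014_Verify.ldf';
GO
-- The real integrity test
DBCC CHECKDB ('AdventureWorks2014_Verify') WITH NO_INFOMSGS;
GO
DROP DATABASE AdventureWorks2014_Verify;
GO
```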

27-4. Transaction Log Backup

Problem

You have been tasked with creating a backup solution that includes a point-in-time recovery ability while ensuring minimal data loss in the event of a required recovery.

Solution

The solution simply requires planning so that the transaction logs for databases not using the SIMPLE recovery model are backed up routinely. The frequency of these transaction log backups should be determined in coordination with the business to establish the maximum acceptable data loss. In some cases this may be a full day; in others, no more than 15 minutes. If the database is not in the SIMPLE recovery model, consider it mandatory to include the transaction logs in your backup scheme.

After performing a full backup, and after determining the desired schedule for the transaction log backups, one can then use a command such as the following to back up the transaction logs:

BACKUP LOG AdventureWorks2014
TO DISK = 'C:\Apress\AdventureWorks2014.trn';
GO

If backing up the logs, one may desire a script that is a little more complex to allow for different log backup files, rather than placing all log backups into the same backup file. To script a log backup and provide this ability, one could employ a script such as the following:

DECLARE @DiskPath VARCHAR(256)
        , @DBName sysname = 'AdventureWorks2014';

SET @DiskPath = 'C:\Apress\' + @DBName + '_'
        + REPLACE(REPLACE(REPLACE(CONVERT(CHAR(19), GETDATE(), 126), ' ', '_'), '-', ''), ':', '')
        + '.trn';

BACKUP LOG @DBName
        TO DISK = @DiskPath
        WITH INIT,CHECKSUM,COMPRESSION;
GO

How It Works

The backup of a transaction log follows the same basic syntax as with a database backup. The key difference with the transaction log backup is the specification of the LOG keyword in the backup statement.

The second example provided a little more complexity by adding a date and time component to the file name. This additional complexity will allow for the backup of the transaction log at a regular interval while sending each backup to a different file.
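When it comes time to recover, those individually named log files are applied in sequence, oldest first, after restoring the full backup WITH NORECOVERY. The file names below are hypothetical examples of what the timestamped script would produce:

```sql
-- Restore the full backup without recovering, so logs can be applied
RESTORE DATABASE AdventureWorks2014
FROM DISK = 'C:\Apress\AdventureWorks2014.bak'
WITH NORECOVERY, REPLACE;

-- Apply each log backup in order (file names are hypothetical)
RESTORE LOG AdventureWorks2014
FROM DISK = 'C:\Apress\AdventureWorks2014_20250101T120000.trn'
WITH NORECOVERY;

RESTORE LOG AdventureWorks2014
FROM DISK = 'C:\Apress\AdventureWorks2014_20250101T121500.trn'
WITH NORECOVERY;

-- Bring the database online once all logs are applied
RESTORE DATABASE AdventureWorks2014 WITH RECOVERY;
```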

27-5. Understanding Why the Transaction Log Continues to Grow

Problem

A database transaction log has grown larger than the data files and continues to grow larger by the day. You want to know why.

Solution

The solution to this may be as simple as performing a transaction log backup for the database in question. If the database is not in the simple recovery model, then transaction log backups should be performed on a regular interval. To determine if this is the case, the backupset table in msdb should be queried and compared to the sys.databases catalog view:

USE msdb;
GO

DECLARE @DBName VARCHAR(128) = 'AdventureWorks2014'

SELECT d.name,ca.backup_finish_date AS LastFullBackup
                ,bs.backup_finish_date AS LastLogBackup
        FROM sys.databases d
                LEFT OUTER JOIN msdb.dbo.backupset bs
                        ON d.name = bs.database_name
                        AND bs.type = 'L'
                OUTER APPLY (SELECT TOP 1 database_name,backup_finish_date
                                                FROM msdb.dbo.backupset
                                                WHERE database_name = d.name
                                                        AND type = 'D'
                                                ORDER BY backup_finish_date DESC) ca
        WHERE d.recovery_model_desc <> 'simple'
                AND bs.database_name IS NULL
                AND d.name = @DBName;
GO

If the database in question is in either the FULL or BULK_LOGGED recovery model, and the LastLogBackup column reports a NULL value (or the time is older than the interval for the log backups), then a log backup should be performed:

BACKUP LOG AdventureWorks2014
TO DISK = 'C:\Apress\AdventureWorks2014.trn';

How It Works

So that you get a better feel for this concept, I will cover each of the recovery models, beginning with the SIMPLE model. For performance reasons, when a transaction occurs, SQL Server will first check whether the affected data pages are in memory. If the necessary pages are not, they will be read into memory. Subsequent requests for affected pages are served from memory, because these reflect the most up-to-date information. All transactions are then written to the transaction log. Occasionally, based upon the recovery interval, SQL Server will run a checkpoint, in which all "dirty" pages and log-file information are written to the data file.

Transaction log backups can only be performed on databases using the FULL or BULK_LOGGED recovery model. A database that is set to the SIMPLE recovery model can never be restored to a point in time, because the transaction log is truncated upon a checkpoint, and log backups are not possible in this recovery model. Aside from allowing a restore from the point that the transaction log backup completed, transaction log backups also allow for point-in-time recovery (if no bulk-logged changes exist in the log backup under the BULK_LOGGED model) and transaction mark recovery. Point-in-time recovery is useful for restoring a database to a point prior to a database modification or failure. Transaction marking allows you to recover to the first instance of a marked transaction (using BEGIN TRAN...WITH MARK) and includes the updates made within this transaction.
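As a hedged sketch of transaction marking, the following marks a deployment transaction and shows (commented out) how a later log restore would stop at that mark. The transaction name DeployMark and the mark description are illustrative:

```sql
-- Mark the transaction in the log before making deployment changes
BEGIN TRANSACTION DeployMark WITH MARK 'Before deployment changes';
    -- ... deployment changes here ...
COMMIT TRANSACTION DeployMark;
GO

-- Later, after restoring the full backup WITH NORECOVERY, stop the
-- log restore at the mark (STOPATMARK takes the transaction name):
--RESTORE LOG AdventureWorks2014
--FROM DISK = 'C:\Apress\AdventureWorks2014.trn'
--WITH STOPATMARK = 'DeployMark';
```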

The size of the transaction log backup file depends on the level of database activity and whether or not you are using a FULL or BULK_LOGGED recovery model. Again, the SIMPLE recovery model does not allow transaction log backups.

Databases that use the SIMPLE recovery model will truncate the inactive portion of the log upon checkpoint. This prevents the ability to back up the transaction log, while potentially reducing the risk of runaway log-file size.

The following script creates a database called Logging, which uses the SIMPLE recovery model and creates a single table, FillErUp:

--Create the Logging database
USE master;
GO
CREATE DATABASE Logging;
GO

ALTER DATABASE Logging
SET RECOVERY SIMPLE;
GO

USE Logging;

CREATE TABLE FillErUp(
RowInfo CHAR(150)
);
GO

Monitoring the log-file size and the reason why a log file is waiting to reclaim space can be done through the sys.database_files and sys.databases catalog views. The following queries show the initial log- and data-file sizes as well as the log_reuse_wait_desc column; keep in mind that the size column represents the size in 8KB data pages:

USE master;
GO
DECLARE @DBName VARCHAR(128) = 'Logging';

SELECT d.name as DBName, mf.size, d.recovery_model_desc, d.log_reuse_wait_desc
        FROM sys.master_files mf
                INNER JOIN sys.databases d
                        ON d.database_id = mf.database_id
        WHERE d.name = @DBName
                AND mf.type_desc = 'LOG';

Execute this query, and the results should be similar to the following:

DBName         size    recovery_model_desc    log_reuse_wait_desc
Logging        70      SIMPLE                 NOTHING

The log_reuse_wait_desc field is of particular importance in this solution because it provides the reason why the log file is not being truncated. Since the database has just been created and no transactions have been posted to it, there is nothing preventing the log from being truncated.

The following query uses a loop to post 10,000 rows to the FillErUp table and again queries the log-file size and the log_reuse_wait_desc:

USE Logging;
GO
SET NOCOUNT ON;
DECLARE @count INT = 10000
WHILE @count > 0
BEGIN
   INSERT INTO FillErUp (RowInfo)
   SELECT 'This is row # ' + CONVERT(CHAR(4), @count)
   SET @count -= 1
END;
GO

CHECKPOINT;
GO

/* check catalog views */
USE master;
GO
DECLARE @DBName VARCHAR(128) = 'Logging';

SELECT d.name as DBName, mf.size, d.recovery_model_desc, d.log_reuse_wait_desc
        FROM sys.master_files mf
                INNER JOIN sys.databases d
                        ON d.database_id = mf.database_id
        WHERE d.name = @DBName
                AND mf.type_desc = 'LOG';

The results of the query show that the log-file size is relatively unchanged and that the log_reuse_wait_desc value is still NOTHING.

DBName         size    recovery_model_desc    log_reuse_wait_desc
Logging        104     SIMPLE                 NOTHING

Since the database recovery model is set to simple, the transaction log will be truncated after a checkpoint, allowing the log file to remain at relatively the same size. A database being set to the SIMPLE recovery model does not guarantee that the log file will not grow. A checkpoint will truncate the inactive portion of the log, but the active portion cannot be truncated.

The following example uses BEGIN TRANSACTION (and intentionally leaves it open in order to run some additional queries prior to committing the transaction) to display the effects of a long-running transaction on log-file growth and also uses DBCC SQLPERF to display some statistics on the log file after the transaction runs:

USE Logging;
GO

BEGIN TRANSACTION

DECLARE @count INT = 10000
WHILE @count > 0
BEGIN
   INSERT INTO FillErUp
   SELECT 'This is row # ' + CONVERT(CHAR(4), @count)
   SET @count -= 1
END;
GO

USE master;
GO
DECLARE @DBName VARCHAR(128) = 'Logging';

SELECT d.name as DBName, mf.size, d.recovery_model_desc, d.log_reuse_wait_desc
        FROM sys.master_files mf
                INNER JOIN sys.databases d
                        ON d.database_id = mf.database_id
        WHERE d.name = @DBName
                AND mf.type_desc = 'LOG';

DBCC SQLPERF(LOGSPACE);
GO

The results of the query show that the log file has grown, and it cannot be truncated because of an open transaction. A portion of the DBCC SQLPERF results are included to show the size and percentage of used log space.

DBName         size   recovery_model_desc     log_reuse_wait_desc
Logging        728    SIMPLE                  ACTIVE_TRANSACTION

Database Name  Log Size (MB)        Log Space Used (%)
----------------------   ------------------       ----------------------------
Logging                  5.679688                 97.52407

The results show that the log-file size has grown and that the log space is more than 97 percent used. Issuing a COMMIT TRANSACTION and a manual CHECKPOINT will close the transaction, but you will also notice that the log file remains the same size:

COMMIT TRANSACTION;
GO
CHECKPOINT;
GO

USE master;
GO
DECLARE @DBName VARCHAR(128) = 'Logging';

SELECT d.name as DBName, mf.size, d.recovery_model_desc, d.log_reuse_wait_desc
        FROM sys.master_files mf
                INNER JOIN sys.databases d
                        ON d.database_id = mf.database_id
        WHERE d.name = @DBName
                AND mf.type_desc = 'LOG';

DBCC SQLPERF(LOGSPACE);
GO
DBName         size    recovery_model_desc    log_reuse_wait_desc
Logging        728     SIMPLE                 NOTHING

Database Name            Log Size (MB)            Log Space Used (%)
----------------------   ------------------       ----------------------------
Logging                  5.679688                 16.96183

It is obvious from the query results that the transaction has been closed, but the log file is the same size as when the transaction remained open. The difference between the results is the percentage of used log space: before the commit and checkpoint, more than 97 percent of the log was used, while afterward the used space falls to just under 17 percent.

Tip  There are ways to regain disk space from bloated log files, but the focus of this solution is on highlighting maintenance. Any steps taken to shrink the log file should be performed only after careful consideration.
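For completeness, the shrink itself is a one-line command, shown here as an illustration only and subject to the caution above. Logging_log is the default logical name for this database's log file (confirm yours in sys.database_files), and the 1MB target is an arbitrary example:

```sql
USE Logging;
GO
-- Shrink the log file toward a 1MB target (illustrative only)
DBCC SHRINKFILE (Logging_log, 1);
GO
```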

Unlike the SIMPLE recovery model, both the FULL and BULK_LOGGED recovery models require transaction log backups to be taken before a transaction log can be truncated.

To demonstrate, the following example will drop and recreate the Logging database, then create and populate the FillErUp table via a set-based transaction. The log-file size and log reuse wait description are then queried immediately after the creation of the database:

USE master;
GO

--Create the Logging database
IF EXISTS (SELECT * FROM sys.databases WHERE name = 'Logging')
BEGIN
DROP DATABASE Logging;
END
GO

CREATE DATABASE Logging
ON PRIMARY
( NAME = N'Logging', FILENAME = N'C:\Apress\Logging.mdf' , SIZE = 4096KB  )
 LOG ON
( NAME = N'Logging_log', FILENAME = N'C:\Apress\Logging_log.ldf' , SIZE = 512KB );
GO

ALTER DATABASE Logging
SET RECOVERY FULL;
GO

ALTER DATABASE Logging
MODIFY FILE
    (NAME = Logging_log,
    SIZE = 520KB);
GO

--Size is 101
USE master;
GO
DECLARE @DBName VARCHAR(128) = 'Logging';

SELECT d.name as DBName, mf.size, d.recovery_model_desc, d.log_reuse_wait_desc
        FROM sys.master_files mf
                INNER JOIN sys.databases d
                        ON d.database_id = mf.database_id
        WHERE d.name = @DBName
                AND mf.type_desc = 'LOG';
GO

USE Logging;
GO
SELECT TOP 10000
        RowInfo = 'This is row # ' + CONVERT(CHAR(5)
                    , ROW_NUMBER() OVER (PARTITION BY t1.autoval ORDER BY (SELECT NULL)))
   INTO dbo.FillErUp
   FROM master.dbo.syscolumns t1,
        master.dbo.syscolumns t2;
GO

CHECKPOINT;
GO

USE master;
GO
DECLARE @DBName VARCHAR(128) = 'Logging';

SELECT d.name as DBName, mf.size, d.recovery_model_desc, d.log_reuse_wait_desc
        FROM sys.master_files mf
                INNER JOIN sys.databases d
                        ON d.database_id = mf.database_id
        WHERE d.name = @DBName
                AND mf.type_desc = 'LOG';
GO
DBName        size    recovery_model_desc     log_reuse_wait_desc
Logging       101     FULL                    NOTHING

DBName        size    recovery_model_desc     log_reuse_wait_desc
Logging       101     FULL                    NOTHING

The results may be different from what you expected, but for good reason. The database was created and then altered to use the FULL recovery model. After a number of transactions, the log file remained the same size. Based on the definition of the FULL recovery model, the log should have grown. However, the log file will continue to be truncated upon a checkpoint until a full backup is taken. This is by design, because a transaction log backup requires that a full backup be taken first. After a full backup has been performed, and until a transaction log backup is taken, the log file will grow.

After taking a full backup of the database and again running the INSERT query, the results are much different:

USE master;
GO

BACKUP DATABASE Logging
TO DISK = 'C:\Apress\Logging.bak';
GO

USE Logging;
GO
INSERT INTO dbo.FillErUp
SELECT TOP 10000
        RowInfo = 'This is row # ' + CONVERT(CHAR(5)
                    , ROW_NUMBER() OVER (PARTITION BY t1.autoval ORDER BY (SELECT NULL)))
   FROM master.dbo.syscolumns t1,
        master.dbo.syscolumns t2;
GO

CHECKPOINT;
GO

USE master;
GO
DECLARE @DBName VARCHAR(128) = 'Logging';

SELECT d.name as DBName, mf.size, d.recovery_model_desc, d.log_reuse_wait_desc
        FROM sys.master_files mf
                INNER JOIN sys.databases d
                        ON d.database_id = mf.database_id
        WHERE d.name = @DBName
                AND mf.type_desc = 'LOG';
GO
DBName        size    recovery_model_desc     log_reuse_wait_desc
Logging       536     FULL                    LOG_BACKUP

The end result is that the log file continued to grow, and the log reuse wait description reflects that a transaction log backup is required. The size of the transaction log can be maintained by scheduled transaction log backups. The transaction log could be backed up (and subsequently the size managed) by using a script such as the following:

   BACKUP LOG Logging
   TO DISK = 'C:\Apress\Logging.trn';

A database using the BULK_LOGGED recovery model still uses the log file to record transactions, but bulk-logged transactions are minimally logged, causing less growth in the log. Unlike the FULL recovery model, BULK_LOGGED recovery does not provide the ability to restore the database to a point in time if bulk-logged changes exist within the log backup. In this case, the entire log must be recovered. However, if there are no bulk-logged changes in the backup, then recovery to a point in time is possible.

Only specific operations are minimally logged under the BULK_LOGGED recovery model, such as BULK INSERT, SELECT INTO, bcp, and INSERT INTO...SELECT, to name a few. A common misconception is that the BULK_LOGGED recovery model will not cause the log file to grow, which is untrue.

The following code demonstrates that bulk operations can cause the transaction log file to grow with a database using the BULK_LOGGED recovery model:

USE master;
GO

IF EXISTS (SELECT * FROM sys.databases WHERE name = 'Logging')
BEGIN
DROP DATABASE Logging;
END
GO

CREATE DATABASE Logging;
GO

ALTER DATABASE Logging
SET RECOVERY BULK_LOGGED;
GO

BACKUP DATABASE Logging
TO DISK = 'C:\Apress\Logging_bulk.bak';
GO

USE master;
GO
DECLARE @DBName VARCHAR(128) = 'Logging';

SELECT d.name as DBName, mf.size, d.recovery_model_desc, d.log_reuse_wait_desc
        FROM sys.master_files mf
                INNER JOIN sys.databases d
                        ON d.database_id = mf.database_id
        WHERE d.name = @DBName
                AND mf.type_desc = 'LOG';
GO
DBName        size    recovery_model_desc     log_reuse_wait_desc
Logging       70      BULK_LOGGED             NOTHING

USE Logging;
GO

SELECT *
        INTO PurchaseOrderDetail
        FROM AdventureWorks2014.Purchasing.PurchaseOrderDetail;

USE master;
GO
DECLARE @DBName VARCHAR(128) = 'Logging';

SELECT d.name as DBName, mf.size, d.recovery_model_desc, d.log_reuse_wait_desc
        FROM sys.master_files mf
                INNER JOIN sys.databases d
                        ON d.database_id = mf.database_id
        WHERE d.name = @DBName
                AND mf.type_desc = 'LOG';
GO
DBName        size    recovery_model_desc     log_reuse_wait_desc
Logging       136     BULK_LOGGED             LOG_BACKUP

In this example, I used a SELECT...INTO statement, which is minimally logged under the BULK_LOGGED recovery model. The results show that a database in a BULK_LOGGED recovery model will cause the log file to grow, even when using bulk operations.

27-6. Performing a Differential Backup

Problem

You have discovered that full backups are taking too long to perform and require too much storage. You would like to decrease the backup duration and reduce the storage requirement.

Solution

A differential backup contains all the changes made since the last full backup. A backup/restore strategy can use as many differential backups as desired, and since a differential contains all the changes from the last full backup, it can speed up the restoration process. After performing a full backup, a differential backup can be performed using a script such as the following:

USE master;
GO

BACKUP DATABASE AdventureWorks2014
TO DISK = 'C:\Apress\AdventureWorks2014_diff.bak'
WITH DIFFERENTIAL;
GO

How It Works

The differential backup statement is almost identical to a full backup statement, with the exception of the WITH DIFFERENTIAL clause. A differential backup contains all changes in a database since the last full backup, which means that a full backup must already exist before a differential can be taken.

Although a differential backup can reduce both backup time and storage requirements, it is important to understand the mechanics of the backup. Consider a backup strategy that uses a full backup at 12 p.m. and a differential every two hours thereafter. The further in time a differential backup is from the full backup, the larger (depending on activity) that backup will be. The 2 p.m. differential will contain all the changes from 12 p.m. to 2 p.m., the 4 p.m. differential will contain all the changes from 12 p.m. to 4 p.m., and so on.

While the differential backup will require less space and run faster initially, it can become slower over time as well as require more space than a full backup. A backup plan that involves differential backups needs to be carefully evaluated to find the balance between size, speed, and frequency of the differential backups and the full backups.
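One way to evaluate that balance is to watch how your differential backups grow between full backups. This hedged sketch queries msdb, where type 'D' marks a full backup and 'I' a differential:

```sql
USE msdb;
GO
SELECT database_name,
       type,   -- 'D' = full, 'I' = differential
       backup_start_date,
       CONVERT(DECIMAL(18,2), backup_size / 1024 / 1024.0) AS backup_mb
FROM msdb.dbo.backupset
WHERE database_name = 'AdventureWorks2014'
      AND type IN ('D', 'I')
ORDER BY backup_start_date DESC;
```

When the most recent differentials approach the size of the full backup, it is usually time to schedule the next full backup.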

27-7. Backing Up a Single Row or Table

Problem

In preparation for a deployment, you have determined that it would be helpful to create a backup of a table that will undergo data or schema changes during the deployment.

Solution

Backup granularity within SQL Server starts with the database and then moves to the filegroup and finally to the file. Unless a table resides on its own filegroup, the backup statement does not natively support this functionality, but you can use several workarounds to meet this need.

To back up a table, or even specific records from a table, one of the easiest solutions is to utilize the SELECT...INTO statement. In this example we will perform a backup of the Person.Person table from the AdventureWorks2014 database and store it in the AdventureWorks2014_Bak database. See the following:

USE master;
GO

IF EXISTS (SELECT * FROM sys.databases WHERE name = 'AdventureWorks2014_Bak')
BEGIN
DROP DATABASE AdventureWorks2014_Bak;
END
CREATE DATABASE AdventureWorks2014_Bak;
GO

USE AdventureWorks2014_Bak;
GO
CREATE SCHEMA Person;
GO

SELECT BusinessEntityID,
         FirstName,
         MiddleName,
         LastName
INTO Person.Person
FROM AdventureWorks2014.Person.Person;
GO

SELECT TOP 6 *
FROM Person.Person
ORDER BY BusinessEntityID;
GO

Executing the previous query will create the AdventureWorks2014_Bak database and the Person schema, and will populate the Person table with the result of the SELECT statement from the Person.Person table in the AdventureWorks2014 database.

BusinessEntityID      FirstName      MiddleName   LastName
----------------      ---------      ----------   ---------------
1                     Ken            J            Sánchez
2                     Terri          Lee          Duffy
3                     Roberto        NULL         Tamburello
4                     Rob            NULL         Walters
5                     Gail           A            Erickson
6                     Jossef         H            Goldberg

Once the database is created and the table is populated, the data can then be validated with the final SELECT statement in the previous query. Once complete, you are ready to continue with the deployment, having essentially completed a table backup.

How It Works

The process behind this method is fairly self-evident. The SELECT...INTO statement creates a granular backup of the table to be affected without using the BACKUP syntax. Should something happen, this backup provides the means for a quick recovery of the data or schema that was modified, without performing a database restore. Several issues may complicate recovery using this method, such as an IDENTITY column, replication, or triggers on the affected table.

This method also reduces the storage requirements you might encounter with other methods, such as a full backup (and the restore of the full backup so as to recover a single table as illustrated in this example).
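
As a hedged illustration of the IDENTITY complication mentioned above: reinserting the saved rows into a table with an IDENTITY column requires SET IDENTITY_INSERT so the original key values are preserved. The table and database names here are hypothetical:

```sql
-- Hypothetical recovery of rows saved with SELECT...INTO back into
-- a table whose key column is an IDENTITY; explicit key values can
-- only be inserted while IDENTITY_INSERT is ON for that table.
SET IDENTITY_INSERT Sales.Orders ON;

INSERT INTO Sales.Orders (OrderID, CustomerID, OrderDate)
SELECT OrderID, CustomerID, OrderDate
FROM OrdersBackup.Sales.Orders; -- the SELECT...INTO copy

SET IDENTITY_INSERT Sales.Orders OFF;
```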

27-8. Creating a Database Snapshot

Problem

In preparation for a deployment, you need to create a backup of the database, but the full backup would take longer than the maintenance window you have been given. You need a means by which to quickly back up the database and still have adequate time for the maintenance window.

Solution

You can utilize a database snapshot as a means of backing up “state” data. A database snapshot is created on a user database from within an instance of SQL Server and works on a copy-on-write basis. When creating a database snapshot, disk space is reserved for the snapshot that is equal to the reserved space of the data files of the user database. The disk space reserved for the snapshot remains completely empty until a data page from the user database is modified or deleted. Once such a change occurs, the original data page is written to the database snapshot, preserving the data as it appeared at the point in time the snapshot was taken.

The following code will utilize the AdventureWorks2014 database to create a database snapshot called AdventureWorks2014_SS:

USE master;
GO

CREATE DATABASE AdventureWorks2014_SS ON
( NAME = AdventureWorks2014_Data, FILENAME =
'C:\Apress\AdventureWorks2014_SS.ss' )
AS SNAPSHOT OF AdventureWorks2014;
GO

USE master;
GO

SELECT DB_NAME(database_id) AS DBName,
     name AS FileName,
         type_desc,
         size
FROM sys.master_files
WHERE DB_NAME(database_id) LIKE 'AdventureWorks2014%'
        AND type_desc = 'ROWS';
GO

The results of the previous query show that the AdventureWorks2014 and AdventureWorks2014_SS databases exist and that the file sizes are identical.

DBName                FileName                        type_desc      size
AdventureWorks2014    AdventureWorks2014_Data         ROWS           26272
AdventureWorks2014_SS AdventureWorks2014_Data         ROWS           26272

These results demonstrate that the snapshot has a data file that is the exact same size as the data files from the source database (AdventureWorks2014, in this case). Browsing the directory of the snapshot would reveal that a single file with a .ss extension would exist as the database snapshot.

Upon creating a database snapshot, the single snapshot file is completely empty and serves only as a placeholder. Data pages are added to the snapshot file as changes occur within the original database. This means that when the database snapshot is queried directly, any requested 8KB data pages that have not been modified since the snapshot was taken are returned from the original database.

How It Works

While not a traditional backup, a snapshot can serve the same purpose. A snapshot does not use the BACKUP syntax, but rather the CREATE DATABASE...AS SNAPSHOT syntax. This provides a very quick means of making a backup of the database by creating a sparse file on the operating system. You can validate the existence of a snapshot and the source of the snapshot via the following query:

USE master;
GO

SELECT d.name AS DBName,
         DB_NAME(d.source_database_id) AS SourceDB,
         d.create_date,
         d.is_read_only,
         mf.is_sparse
FROM sys.master_files mf
        INNER JOIN sys.databases d
                ON d.database_id = mf.database_id
WHERE d.name LIKE 'AdventureWorks2014%'
        AND mf.type_desc = 'ROWS';
GO

This query demonstrates how to find the snapshots that may exist for a particular database within an instance. Snapshots have a few indicators to help identify them: the is_sparse, is_read_only, and source_database_id fields. The is_sparse field can be found in the sys.master_files or sys.database_files catalog views, while the other fields are found in the sys.databases catalog view. If is_sparse and is_read_only are both 1 and source_database_id is non-NULL, the database in question is a snapshot of another database.
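
The snapshot also provides the recovery path for the deployment scenario in the problem statement: if the deployment fails, the source database can be reverted to the state preserved in the snapshot. The following is a sketch; note that reverting discards every change made since the snapshot was taken, and any other snapshots of the database must be dropped first:

```sql
-- Revert the source database to the point in time of the snapshot.
-- All other snapshots of AdventureWorks2014 must be dropped first,
-- and every change made since the snapshot will be lost.
USE master;
GO

RESTORE DATABASE AdventureWorks2014
FROM DATABASE_SNAPSHOT = 'AdventureWorks2014_SS';
GO
```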

27-9. Backing Up Data Files or Filegroups

Problem

Your database size is so large that it is prohibitive to complete a full database backup on a daily basis.

Solution #1: Perform a File Backup

It can become burdensome to maintain an effective and efficient backup procedure for very large databases (VLDBs) because of the time a full backup takes and the amount of space it requires. Rather than a full backup, backups can be made of the data files individually. This can be demonstrated by creating a database with multiple files and filegroups:

CREATE DATABASE BackupFiles
 ON  PRIMARY
( NAME = N'BackupFiles', FILENAME = N'C:\Apress\BackupFiles.mdf' , SIZE = 4096KB , FILEGROWTH = 1024KB ),
 FILEGROUP [Current]
( NAME = N'CurrentData', FILENAME = N'C:\Apress\CurrentData.ndf' , SIZE = 4096KB , FILEGROWTH = 1024KB ),
 FILEGROUP [Historic]
( NAME = N'HistoricData', FILENAME = N'C:\Apress\HistoricData.ndf' , SIZE = 4096KB , FILEGROWTH = 1024KB )
 LOG ON
( NAME = N'BackupFiles_log', FILENAME = N'C:\Apress\BackupFiles_log.ldf' , SIZE = 1024KB , FILEGROWTH = 512KB);
GO
ALTER DATABASE [BackupFiles] SET RECOVERY FULL;
GO

To back up a single file from a database, simply use the BACKUP DATABASE command and specify the files to back up:

USE master;
GO

BACKUP DATABASE BackupFiles
FILE = 'HistoricData'
TO DISK = 'C:\Apress\Historic.bak';
GO

Solution #2: Perform a Filegroup Backup

Sometimes a filegroup contains multiple files that need to be backed up in a single backup set. This is accomplished easily enough using the BACKUP DATABASE command and specifying the filegroup to be backed up:

USE master;
GO

BACKUP DATABASE BackupFiles
FILEGROUP = 'Historic'
TO DISK = 'C:\Apress\HistoricFG.bak';
GO

How It Works

Backing up a file or filegroup works the same as a full database backup while providing a more focused, granular approach to the specific filegroup or file in the database. Either method can be employed to reduce the amount of time and space required for a full database backup.

It is critical to fully plan a database design if this backup/recovery method is going to be used so as to ensure that the entire database can be restored to a point in time and remain consistent. Consider the impact of placing one table on a file that references another table on a separate file. If either of the files need to be restored, and referential integrity would be violated because of the point in time of the restoration, this can cause a great deal of work when restoring the database to a consistent and valid state.

It is also important to remember that the primary file/filegroup must be backed up, because it contains all of the system tables and database objects.
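
Because restoring an individual file or filegroup must also bring that file back in sync with the rest of the database, the transaction log backups taken since the file backup are required during recovery. A minimal sketch, given that BackupFiles was placed in the full recovery model above:

```sql
-- A transaction log backup taken after the file/filegroup backups;
-- it is applied during a file restore to bring the restored file(s)
-- back in sync with the rest of the database.
USE master;
GO

BACKUP LOG BackupFiles
TO DISK = 'C:\Apress\BackupFiles_log.bak';
GO
```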

27-10. Mirroring Backup Files

Problem

You want to ensure that multiple backups are written to different disks/tapes without affecting the backup media set or having to manually copy the backup files.

Solution

SQL Server 2005 Enterprise Edition introduced the MIRROR TO clause, which will write the backup to multiple devices:

BACKUP DATABASE AdventureWorks2014
TO DISK = 'C:\Apress\AdventureWorks2014.bak'
MIRROR TO DISK = 'C:\Apress\MirroredBackup\AdventureWorks2014.bak'
WITH
   FORMAT,
   MEDIANAME = 'AdventureWorksSet1';
GO

Image Tip  Make sure that the C:\Apress and C:\Apress\MirroredBackup folders exist, or change the paths in the previous query.

How It Works

During the backup, using MIRROR TO will write the backup to multiple devices, which ensures that the backup file resides on separate tapes or disks in case one should become corrupt or unusable. There are several limitations when using the MIRROR TO clause, the first being that it requires the Developer or Enterprise edition. The mirrored devices must be the same type, meaning you cannot write one file to disk and the other to tape. The mirrored devices must also be similar and have the same properties; insufficiently similar devices will generate error message 3212.

27-11. Backing Up a Database Without Affecting the Normal Sequence of Backups

Problem

An up-to-date backup needs to be created that will not affect the normal sequence of backups.

Solution

Using the BACKUP command and specifying WITH COPY_ONLY will create the desired backup without affecting the backup or restore sequence (particularly if differential backups are involved). See the following:

USE master;
GO

BACKUP DATABASE AdventureWorks2014
TO DISK = 'C:\Apress\AdventureWorks2014Copydiff.bak'
WITH COPY_ONLY;
GO

How It Works

The only difference in the backup process when using WITH COPY_ONLY is that the backup has no effect on the normal backup or restore sequence for the database. In particular, a copy-only full backup does not reset the differential base, so subsequent differential backups behave as though the copy-only backup had never been taken.

Image Caution  If COPY_ONLY is used with a transaction log backup, the transaction log will not be truncated once the backup is complete.
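
A copy-only log backup looks like the following sketch (it assumes the database is in the full or bulk-logged recovery model, since log backups are not possible in the simple recovery model):

```sql
-- A copy-only transaction log backup: the log is captured but not
-- truncated, and the regular log backup chain is unaffected.
USE master;
GO

BACKUP LOG AdventureWorks2014
TO DISK = 'C:\Apress\AdventureWorks2014_CopyLog.trn'
WITH COPY_ONLY;
GO
```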

27-12. Querying Backup Data

Problem

You have to create a programmatic way to return backup information.

Solution

The msdb database maintains all of the backup history in the system tables. The system tables backupfile, backupfilegroup, backupmediafamily, backupmediaset, and backupset contain the full history of database backups as well as the media types and locations. These tables can be queried to return information on any database backup that has occurred.

The following query will return the database name, the date and time the backup began, the type of backup that was taken, whether it used COPY_ONLY, the path and file name or device name, and the backup size, ordering by the start date in descending order. See the following:

USE msdb;
GO
SELECT bs.database_name,
           bs.backup_start_date,
           CASE bs.type
             WHEN 'D' THEN 'Database'
             WHEN 'I' THEN 'Differential database'
             WHEN 'L' THEN 'Log'
             WHEN 'F' THEN 'File or filegroup'
             WHEN 'G' THEN 'Differential file'
             WHEN 'P' THEN 'Partial'
             WHEN 'Q' THEN 'Differential partial'
             ELSE 'Unknown'
           END AS BackupType,
           bmf.physical_device_name,
           bs.backup_size/1024/1024 as BackSizeMB
FROM dbo.backupset bs
INNER JOIN dbo.backupmediafamily bmf
ON bs.media_set_id = bmf.media_set_id
ORDER BY bs.database_name,bs.backup_start_date DESC;
GO

Your results will vary.
database_name         backup_start_date          BackupType
AdventureWorks2014    2015-03-18 17:12:33.000    Differential database
AdventureWorks2014    2015-03-18 17:06:53.000    Differential database
AdventureWorks2014    2015-03-18 16:45:22.000    Database

physical_device_name                        BackSizeMB
C:\Apress\AdventureWorks2014Copydiff.bak    1.07617187500
C:\Apress\AdventureWorks2014_diff.bak       1.07617187500
C:\Apress\AdventureWorks2014.bak            192.07617187500

The results show the most recent backups in descending order by the date the backup was taken.

How It Works

Whenever a database backup is performed, it is recorded in the msdb database. The system tables that record this information are made available to query. This information can be used for a number of different purposes, including automating the restoration of a database.

27-13. Encrypting a Backup

Problem

It has been determined that the backups of a database contain some confidential information, and measures must be taken to protect that data at rest within the backup.

Solution

SQL Server 2014 has the ability to perform a backup while encrypting the data for that backup. To use encryption in a backup, an encryption algorithm and an encryptor (such as a certificate) must be specified as options in the BACKUP command. In addition to these requirements, a database master key must exist in the database that is to be backed up while utilizing the encryption option.

The following example demonstrates how to create a database master key (DMK) and a certificate, as well as how to utilize both of those to create an encrypted database backup:

-- Create a DMK.
-- The DMK is encrypted using the password "SQL2014Rocks"
USE master;
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'SQL2014Rocks';
GO

--create our encryptor
USE master;
GO
CREATE CERTIFICATE AW2014BackupCert
   WITH SUBJECT = 'AdventureWorks2014 Backup Encryption Certificate';
GO

--backup the database
USE master;
GO

BACKUP DATABASE AdventureWorks2014
TO DISK = N'C:\Apress\AdventureWorks2014_enc.bak'
WITH
  ENCRYPTION
   (
   ALGORITHM = AES_256,
   SERVER CERTIFICATE = AW2014BackupCert
   ),
  STATS = 5
GO

Image Note  The creation of the certificate in this example will produce the following warning. Certificate backups will be covered later in this chapter.

Image Warning  The certificate used for encrypting the database encryption key has not been backed up. You should immediately back up the certificate and the private key associated with the certificate. If the certificate ever becomes unavailable or if you must restore or attach the database on another server, you must have backups of both the certificate and the private key or you will not be able to open the database.

How It Works

In this example, a database backup was created using encryption via the ENCRYPTION option. Performing this encrypted backup required that a DMK and a certificate be created on the instance.

The ENCRYPTION option was issued with the specification to use the AES_256 algorithm and the AW2014BackupCert that was created earlier in the script. If I wanted to validate whether a backup was encrypted, I could use the following query:

USE msdb;
GO
SELECT bs.database_name,
           bs.backup_start_date,
           CASE bs.type
             WHEN 'D' THEN 'Full Database'
             WHEN 'I' THEN 'Differential Database'
             WHEN 'L' THEN 'Log'
             WHEN 'F' THEN 'File or Filegroup'
             WHEN 'G' THEN 'Differential File'
             WHEN 'P' THEN 'Partial'
             WHEN 'Q' THEN 'Differential Partial'
             ELSE 'Unknown'
           END AS BackupType,
           bmf.physical_device_name,
           bs.backup_size/1024/1024 as BackSizeMB,
           bs.encryptor_type, bs.key_algorithm
FROM dbo.backupset bs
INNER JOIN dbo.backupmediafamily bmf
ON bs.media_set_id = bmf.media_set_id
WHERE bs.key_algorithm IS NOT NULL
ORDER BY bs.database_name,bs.backup_start_date DESC;
GO

27-14. Compressing an Encrypted Backup

Problem

The encrypted backups are consuming a lot of storage and you are running short on storage. You would like to implement a solution to maintain the current backup retention as well as reduce the storage-space requirements for the backups.

Solution

While I already discussed the ability to compress a backup in SQL Server in this chapter, I left the discussion for combining an encrypted backup with compression for this recipe. In prior versions of SQL Server, backups were encrypted either by the use of a third-party app or by having the database encrypted with TDE. In the latter of these two options, a compressed backup of a TDE-enabled database would provide no space savings. In SQL Server 2014, that has changed with the introduction of the native backup encryption option, which can be easily combined with the compression option.

The following example demonstrates the ability to encrypt and compress the same backup while achieving the desired space savings:

USE master;
GO

BACKUP DATABASE AdventureWorks2014
TO DISK = N'C:\Apress\AdventureWorks2014_compenc.bak'
WITH
  COMPRESSION,
  ENCRYPTION
   (
   ALGORITHM = AES_256,
   SERVER CERTIFICATE = AW2014BackupCert
   ),
  STATS = 5
GO

How It Works

In this example, I combined the options for encryption and compression into a single backup. I took advantage of the already existing DMK and certificate created in the prior recipe and specified the use of the AW2014BackupCert in the backup statement, as was done in the previous recipe.

The use of the COMPRESSION option reduced the overall size of the backup on disk to just over 45MB. This combination of compression and encryption can be essential to an environment tight on storage space and under the requirement of protecting the data in the backup.

To confirm the encryption and compression, the following query would be useful:

USE msdb;
GO
SELECT bs.database_name,
           bs.backup_start_date,
           CASE bs.type
             WHEN 'D' THEN 'Full Database'
             WHEN 'I' THEN 'Differential Database'
             WHEN 'L' THEN 'Log'
             WHEN 'F' THEN 'File or Filegroup'
             WHEN 'G' THEN 'Differential File'
             WHEN 'P' THEN 'Partial'
             WHEN 'Q' THEN 'Differential Partial'
             ELSE 'Unknown'
           END AS BackupType,
           bmf.physical_device_name,
           bs.backup_size/1024/1024 as BackSizeMB,
           bs.compressed_backup_size/1024/1024 as CompBackSizeMB,
           bs.encryptor_type, bs.key_algorithm
FROM dbo.backupset bs
INNER JOIN dbo.backupmediafamily bmf
ON bs.media_set_id = bmf.media_set_id
WHERE bs.key_algorithm IS NOT NULL
ORDER BY bs.database_name,bs.backup_start_date DESC;
GO

27-15. Backing Up Certificates

Problem

You have implemented certificates on your server for the purpose of creating encrypted backups. You want to ensure these certificates are available should the server crash.

Solution

SQL Server provides the ability to create a backup of the certificates that you implement within an instance. In the prior two recipes, I utilized a certificate to perform encrypted backups. When performing the backups in those recipes, a warning message was generated due to the lack of a backup of the certificate. That warning is easily overlooked, but should be heeded.

To perform a backup of a certificate, use the following syntax, which is very similar to what would be done for a database backup. Here is the general syntax:

BACKUP CERTIFICATE certname TO FILE = 'path_to_file'
    [ WITH PRIVATE KEY
      (
        FILE = 'path_to_private_key_file' ,
        ENCRYPTION BY PASSWORD = 'encryption_password'
        [ , DECRYPTION BY PASSWORD = 'decryption_password' ]
      )
    ]

The following script demonstrates how to use BACKUP CERTIFICATE to perform a backup of the certificate that has been used in the preceding recipes on backup encryption in this chapter:

USE master;
GO

BACKUP CERTIFICATE AW2014BackupCert
TO FILE = 'C:\Apress\AW2014BackupCert.cer'
WITH PRIVATE KEY ( FILE = 'C:\Apress\AW2014BackupCertKey.bak' ,
ENCRYPTION BY PASSWORD = 'SQL2014Rocks');
GO

How It Works

In this example I demonstrated the use of BACKUP CERTIFICATE to create a backup of the certificate used for the encrypted backups. This recipe applies not only to certificates created for encrypted backups, but to all certificates that may have been created within the instance. Recovery plans should include the recovery of certificates, and therefore those certificates need to have backups created.

In this example, I showed how to back up the certificate and the private key for that certificate to flat files on disk. Once the backup has been created, it can be confirmed that the certificate and its private key have been backed up by querying the sys.certificates catalog view:

USE master;
GO

SELECT name, pvt_key_encryption_type_desc,pvt_key_last_backup_date
        FROM sys.certificates
        WHERE name = 'AW2014BackupCert';
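
Should the instance be lost, the certificate can be re-created on another instance from the two files produced above. The following sketch assumes the files have been copied to the new server and that a database master key already exists in master there:

```sql
-- Re-create the backup certificate on another instance from the
-- files produced by BACKUP CERTIFICATE. Assumes a database master
-- key already exists in master on that instance.
USE master;
GO

CREATE CERTIFICATE AW2014BackupCert
FROM FILE = 'C:\Apress\AW2014BackupCert.cer'
WITH PRIVATE KEY ( FILE = 'C:\Apress\AW2014BackupCertKey.bak',
DECRYPTION BY PASSWORD = 'SQL2014Rocks');
GO
```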

27-16. Backing Up to Azure

Problem

You wish to utilize Azure to store your database backups.

Solution

SQL Server 2014 introduces the ability to create a backup to Azure blob storage via the use of a credential and the URL option. This feature requires that an Azure storage object already exist and that a credential be created in SQL Server in order to access that storage object from Azure.

This next example demonstrates how to perform this backup using the WITH CREDENTIAL and TO URL options:

USE master;
GO

CREATE CREDENTIAL SQL2014
WITH IDENTITY = 'Recipes2014',
SECRET = 'SQL2014Rocks';
GO

BACKUP DATABASE AdventureWorks2014
TO URL = N'https://Recipes2014.blob.core.windows.net/backuptest/AW2014_blob.bak'
WITH
  CREDENTIAL = 'SQL2014'
  ,COMPRESSION
  ,STATS = 10
GO

How It Works

Performing a backup to Azure is very similar to performing a backup to local storage. The main difference is that the TO URL option must be specified in order to perform the backup to Azure blob storage.

This example showed the creation of a credential first and then the use of that credential to perform the backup TO URL. Additionally, the COMPRESSION option was used, but was entirely optional. Also, I included another optional parameter called STATS, which prints an informational message concerning the percentage complete (in whichever increment has been specified) for that operation.

To create the credential, I specified a specific IDENTITY and a specific SECRET. These required values will be different for your specific Azure Storage object. The identity must be the name of the storage account used when creating the Azure Storage object. The secret will be either the primary or secondary access key associated with that Azure Storage object that you created.

27-17. Backing Up to Multiple Files

Problem

You wish to utilize multiple backup paths because of inadequate storage space on any single path.

Solution

SQL Server gives you the ability to perform a backup to multiple files and paths. The implementation of a backup across multiple files or paths is called a striped backup. When performing a striped backup, the blocks of the backup are written to the files in the media set in a fixed order. Each file in the backup set contains a different set of blocks and therefore a different piece of the data. This is different from a mirrored backup in that a mirrored backup is a full copy of the backup in each backup file.

In this example, I demonstrate how to perform a striped backup:

USE master;
GO

BACKUP DATABASE AdventureWorks2014
TO DISK = 'C:\Apress\AdventureWorks2014_01.bak'
, DISK = 'C:\Apress\AdventureWorks2014_02.bak'
WITH COMPRESSION
        ,STATS = 5;

How It Works

A striped backup utilizes the same basic syntax as a backup to a single file or device. The difference is that the TO DISK option is specified multiple times, once for each device to be added to the striped set.

The use of striped backups is limited to local media or a network path that can be designated within the TO DISK syntax. A striped backup cannot be performed when using the TO URL option, as of SQL Server 2014.
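
One consequence worth noting: every file in the stripe set is required in order to restore the backup. A restore from the striped backup above can be sketched as follows (WITH REPLACE is needed only when overwriting the existing database):

```sql
-- Restoring a striped backup requires every file in the media set;
-- each DISK clause names one stripe.
USE master;
GO

RESTORE DATABASE AdventureWorks2014
FROM DISK = 'C:\Apress\AdventureWorks2014_01.bak',
     DISK = 'C:\Apress\AdventureWorks2014_02.bak'
WITH REPLACE, STATS = 5;
GO
```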
