10.7. Best practice considerations: backup and recovery

Developing a reliable backup strategy is arguably the most fundamental and important of all DBA tasks. Fortunately, there are a number of well-established best practices to assist in this process.

  • Design a backup strategy for the speed and ease of restoration, not the convenience of the backup. The design should be centered on the service-level agreements for restoration time and acceptable data loss.

  • Thoroughly document the backup and restore process and include actual code for various restore scenarios. Anyone with moderate DBA skills should be able to follow the documentation to ensure the correct restore process is executed in the shortest possible time.

  • When developing disaster recovery plans, consider smaller events as potential disasters in addition to complete site failure. "Small" disasters such as the accidental deletion of a production table can have just as much impact as big ones.

  • Simulate and practice recovering from disasters on a regular basis to ensure that documentation is up to date and that all appropriate support staff are comfortable with, and trained in, the recovery process. Consider implementing random "fire drills" to more accurately simulate disaster.

  • To minimize the performance impact, schedule full backups during periods of low usage.

  • Ensure system databases (with the exception of tempdb) are backed up on a regular basis, and immediately after the installation of any service packs, hotfixes, or cumulative updates. System databases store important instance-level data such as login information, maintenance plans, SQL Agent job definitions, and execution history. Restoring a master database backup that was taken when an earlier service pack version was installed is not an experience I recommend!

  • Use COPY_ONLY backups to avoid breaking backup chains when additional, out-of-band backups are required (example below).

  • Backing up the tail of a transaction log with the NO_TRUNCATE option should be limited to situations in which the database is damaged and inaccessible; otherwise, use the COPY_ONLY option in its place (example below).

  • After first creating a database or changing the recovery model, take a full backup to initialize the log chain.

  • To provide point-in-time restore capability and manage transaction log growth, production databases should use the full recovery model with regular transaction log backups.

  • Development and test databases that don't require point-in-time restoration should use the simple recovery model to limit administration overhead and disk space usage.

  • Use the Bulk_Logged model on a temporary basis only, during bulk-load operations. Take transaction log backups immediately before and after using the Bulk_Logged model for maximum point-in-time protection (example below).

  • Consider the disk-then-tape backup technique, whereby backups are written to disk, archived to tape, and then removed from disk after a number of days. Keeping two copies of recent backups provides resilience against media failure; the local disk copies enable fast restoration when needed, and the offsite tape copies serve long-term archival purposes.

  • Assuming the CPU overhead is measured and within the available headroom, consider backup checksums (along with page checksums) as a means of enabling constant, ongoing I/O verification (example below).

  • Consider the MIRROR TO DISK option when performing disk backups to create an off-server disk backup for tape archival (example below). With this approach, you avoid the need for tape backup software and drivers on the SQL Server, and you create an additional disk backup with an independent retention period.

  • If using the MIRROR TO DISK option to back up to a secondary backup file over the network, consider a private LAN connection to the backup server to maximize network performance and minimize the effect on the public LAN.

  • Streaming a backup to multiple backup files can produce a significant performance increase over single-file backups, particularly for very large databases (example below).

  • For small databases, full nightly backups with regular transaction log backups throughout the day are ideal. For larger databases, consider a weekly full, daily differential, and hourly transaction log model (example below). For very large databases running on the Enterprise edition of SQL Server, consider a filegroup backup/restore design centered on online piecemeal restores.

  • Keep in mind the diminishing returns of differential backups: the longer the time since the last full backup, the larger each differential becomes. Assess the frequency of full backups against the database's rate of change.

  • Restore backups on purpose-built backup verification servers or as part of an infrastructure solution, such as a reporting server with automated restores. Log shipping (covered in the next chapter) is an excellent way of verifying transaction log backup validity as well as providing a mechanism to enable reporting databases to be refreshed with current data.

  • An alternative means of verification is the RESTORE VERIFYONLY operation, which reads the contents of the backup file to ensure its validity without actually restoring it (example below). In the absence of an automated restore process, this is a good method for verifying that backups are valid.

  • Consider the use of backup devices (not covered in this book) for more flexibility when scripting backup jobs (example below). Rather than hard-coding directory paths and filenames into scripted jobs, backup devices make backup scripts portable; each environment's devices can be configured with the appropriate drive letters, directory paths, and filenames.

  • If using database snapshots for reporting purposes, ensure they're consistently named to assist with programmatically redirecting access to new snapshots, and make sure old snapshots are removed to reduce disk space requirements and the performance overhead of copy-on-write.

  • If using the Enterprise edition of SQL Server, consider backup compression as a means of reducing backup disk cost, keeping more backups on disk for longer periods, or both (example below).

  • Compressing backups of databases that use Transparent Data Encryption isn't recommended; the compression ratio is likely to be low, yet the CPU overhead is still incurred.
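
The T-SQL sketches below illustrate several of the practices above. They're minimal examples, assuming a hypothetical database named Sales and local paths such as G:\SQLBackup; substitute the database names, paths, and options appropriate to your environment.

A COPY_ONLY full backup is taken out of band without resetting the differential base, so the existing backup chain remains intact:

    -- Out-of-band full backup of the hypothetical Sales database;
    -- COPY_ONLY preserves the differential base and the backup chain
    BACKUP DATABASE [Sales]
        TO DISK = 'G:\SQLBackup\Sales_CopyOnly.bak'
        WITH COPY_ONLY, INIT;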
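
A tail-log backup with NO_TRUNCATE is reserved for a damaged, inaccessible database; on a healthy database, a COPY_ONLY log backup achieves a similar out-of-band result without disturbing the log backup chain:

    -- Tail-log backup of a damaged database; NO_TRUNCATE reads the log
    -- even when the database's data files are inaccessible
    BACKUP LOG [Sales]
        TO DISK = 'G:\SQLBackup\Sales_Tail.trn'
        WITH NO_TRUNCATE;

    -- On a healthy database, take an out-of-band log backup with COPY_ONLY
    BACKUP LOG [Sales]
        TO DISK = 'G:\SQLBackup\Sales_Log_CopyOnly.trn'
        WITH COPY_ONLY;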
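
A temporary switch to the Bulk_Logged model is bracketed with log backups to keep the point-in-time exposure window as small as possible (this sketch assumes the database normally runs in the full recovery model):

    -- Log backup immediately before the switch bounds the exposure window
    BACKUP LOG [Sales] TO DISK = 'G:\SQLBackup\Sales_PreBulk.trn';

    ALTER DATABASE [Sales] SET RECOVERY BULK_LOGGED;

    -- ... perform the bulk-load operation here ...

    ALTER DATABASE [Sales] SET RECOVERY FULL;

    -- Log backup immediately after restores full point-in-time coverage
    BACKUP LOG [Sales] TO DISK = 'G:\SQLBackup\Sales_PostBulk.trn';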
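
Backup checksums are enabled per backup with the CHECKSUM option, which validates existing page checksums as pages are read and writes a checksum over the backup stream:

    -- Full backup with ongoing I/O verification via CHECKSUM
    BACKUP DATABASE [Sales]
        TO DISK = 'G:\SQLBackup\Sales_Full.bak'
        WITH CHECKSUM, INIT;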
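
The MIRROR TO DISK option writes a second, identical backup in the same operation, here to a UNC path on a hypothetical backup server. FORMAT is required when the mirrored media set is first created:

    -- Mirrored backup: one local copy, one off-server copy
    BACKUP DATABASE [Sales]
        TO DISK = 'G:\SQLBackup\Sales_Full.bak'
        MIRROR TO DISK = '\\BackupServer\SQLBackup\Sales_Full.bak'
        WITH FORMAT;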
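
Striping a backup across multiple files, ideally on separate physical drives, lets SQL Server write to them in parallel; note that all of the files are required for any subsequent restore:

    -- One backup streamed to three files on three drives
    BACKUP DATABASE [Sales]
        TO DISK = 'G:\SQLBackup\Sales_1.bak',
           DISK = 'H:\SQLBackup\Sales_2.bak',
           DISK = 'I:\SQLBackup\Sales_3.bak';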
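
The weekly full, daily differential, and hourly log model reduces to three scheduled commands; in practice the filenames would carry a date/time component rather than being overwritten with INIT:

    -- Weekly full backup
    BACKUP DATABASE [Sales]
        TO DISK = 'G:\SQLBackup\Sales_Full.bak' WITH INIT;

    -- Daily differential backup: all changes since the last full backup
    BACKUP DATABASE [Sales]
        TO DISK = 'G:\SQLBackup\Sales_Diff.bak' WITH DIFFERENTIAL, INIT;

    -- Hourly transaction log backup
    BACKUP LOG [Sales]
        TO DISK = 'G:\SQLBackup\Sales_Log.trn' WITH INIT;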
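
RESTORE VERIFYONLY reads the backup without restoring it; adding WITH CHECKSUM revalidates the checksums if the backup was taken with the CHECKSUM option:

    -- Verify the backup file is complete and readable
    RESTORE VERIFYONLY
        FROM DISK = 'G:\SQLBackup\Sales_Full.bak'
        WITH CHECKSUM;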
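
A backup device maps a logical name to a physical path, so the same backup script runs unchanged in environments with different drive letters and paths (the device name SalesBackupDevice is hypothetical):

    -- Create the device once per environment, pointing at that
    -- environment's physical path
    EXEC sp_addumpdevice 'disk', 'SalesBackupDevice', 'G:\SQLBackup\Sales.bak';

    -- Backup scripts then reference the device, not a hard-coded path
    BACKUP DATABASE [Sales] TO SalesBackupDevice;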
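
Backup compression is enabled per backup with the COMPRESSION option (an Enterprise edition feature in SQL Server 2008):

    -- Compressed full backup to reduce backup disk consumption
    BACKUP DATABASE [Sales]
        TO DISK = 'G:\SQLBackup\Sales_Full.bak'
        WITH COMPRESSION, INIT;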

Additional information on the best practices covered in this chapter can be found online at http://www.sqlCrunch.com/backup.

As we've covered in this chapter, transaction logs form a fundamental part of a backup and recovery plan. In the next chapter, we'll take a look at log shipping, an excellent mechanism for ongoing verification of the validity of the transaction log backups.
