9.5. Best practice considerations: data management

Despite SQL Server 2008's inclusion of advanced data management features such as FileStream and data compression, the importance of basic best practices such as presizing files is not to be overlooked.

  • To avoid transaction throughput bottlenecks, ensure the transaction log is stored on a dedicated RAID-protected (ideally RAID 1 or 10) disk volume. Combining data and transaction logs on the same disk won't achieve the same levels of throughput, because the disk heads will be pulled between the conflicting requirements of random data access/updates and sequential transaction log writes. For database applications with high transaction rates, separating data and transaction logs in this manner is crucial.

  • Defragging a transaction log disk may improve performance if the transaction log has had significant growth/shrink activity and/or is shared with other files undergoing frequent modification. Ensure SQL Server is shut down before defragging, and address frequent growth/shrink operations through adequate log file sizing and by avoiding manual shrink operations.

  • For maximum performance, ensure backups and the tempdb database are stored on physically separate volumes from data, transaction log, and Windows system files.

  • To avoid data loss in the event of disk failure, never store backups on the same disk(s) as the database files, and ensure all volumes are RAID protected.

  • Designing SAN-based virtualized storage is quite different from designing for direct-attached storage; that being said, the principles of high performance and fault tolerance remain. For transaction log files in particular, consider creating LUNs on dedicated disks that aren't shared by any other server or application.

  • Consider using mount points as a means to maintain physical disk volume separation within the confines of the 26-letter drive limitation.

  • Unless supported by solid load-test evidence and a very good understanding of data access patterns, creating additional filegroups should be considered for administrative benefits such as piecemeal backup restores rather than as an attempt to maximize performance through control over object placement. In almost all cases, it's a far better option to create simpler filegroup structures spread across more physical disk spindles.

  • To support the fastest piecemeal restore process, avoid using the primary filegroup for any user objects. Create a secondary filegroup and mark it as the default, as shown in the example scripts following this list.

  • Given its unique usage, the tempdb database does benefit from multiple data files, typically 0.25-0.5 times the number of CPU cores available to the SQL Server instance. For example, an instance with access to four quad-core CPUs (16 cores) would benefit from a tempdb database with between four and eight data files (a sample configuration appears in the scripts following this list). Hyperthreaded CPUs should be counted as a single core. There are some rare examples of user databases with allocation bitmap contention that may derive a performance benefit from multiple data files, but in almost all cases, multiple data files are used for filegroup and administrative benefits.

  • Compared to one large data file, multiple smaller data files provide administrative benefits and should be considered from that perspective rather than as a performance optimization technique. The benefits of smaller files include the ability to restore backups to another server with smaller disk drives and to detach a database and rearrange files to balance drive space usage.

  • There's no benefit in multiple transaction log files. Better transaction log performance comes through fast and dedicated RAID-protected disks with an adequate write cache.

  • For maximum performance and minimum disk fragmentation, database files should be presized appropriately based on historical usage and growth expectations (a sample script follows this list). In particular, the tempdb database should be manually sized to prevent it from returning to its initial small size when SQL Server restarts. Autogrowth should be left enabled for emergency situations only and shouldn't be used as a substitute for good capacity planning and proactive database maintenance.

  • Similarly, leave the AutoShrink and AutoClose options disabled (as they are by default) to avoid poor performance and disk fragmentation; a sample script follows this list.

  • In order for SQL Server's proportional fill algorithm to operate with maximum efficiency, multiple files added to a filegroup should be set to the same size, with the same autogrowth values (a sample script follows this list). Further, manual growth increments should be the same across all files.

  • The Perform Volume Maintenance Tasks right should be granted to the SQL Server service account in order to take full advantage of the Instant Initialization feature, which reduces the time taken in initializing files during database creation, restores, and autogrowth events.

  • Despite the obvious strengths of FileStream, consider whether database mirroring is required as a high-availability option before implementing FileStream in place of other BLOB storage techniques; SQL Server 2008 is unable to mirror a database containing FileStream data.

  • The best FileStream performance is gained with large BLOBs (>1MB) which are either read only or updated in whole—that is, no partial updates—where the data is accessed via streaming APIs (as opposed to T-SQL access). A database containing BLOBs with an average size of less than 1MB may perform better with the objects stored within the database.

  • For the best performance and minimal fragmentation, ensure disk volumes storing FileStream data are formatted with a 64K allocation unit size (the default NTFS allocation unit size is 4K). Further, the volume should ideally be dedicated to the FileStream data with no other SQL Server database files or paging files.

  • Data compression yields the biggest gains at the lowest costs when used to compress infrequently used tables and indexes representing a significant portion of the total database size.

  • Together with dynamic management views such as sys.dm_db_index_usage_stats, the provided compression estimate tools can be used to target compression candidates (an example query follows this list). For example, a table containing many large indexes that are used infrequently represents a good opportunity to reduce database size while minimizing the performance impact.

  • For heavily used OLTP systems in particular, row compression usually has a smaller performance impact than page compression, but page compression can deliver big performance gains for certain operations such as large table scans (example commands follow this list).

  • As with all other configuration changes, measuring the performance impact of data compression in a load-testing environment is crucial before production implementation. This obviously requires an up-to-date copy of the production database, production-configured hardware, and load-testing tools and strategies.
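
The following T-SQL sketches illustrate several of the practices above. They are minimal examples only: every database, filegroup, file, table, and index name, along with the paths and sizes shown, is an illustrative assumption to be replaced with values from your own capacity planning. First, creating a secondary filegroup and marking it as the default so that user objects stay out of the primary filegroup:

    -- Hypothetical database, filegroup, and file names; adjust paths and sizes to suit
    ALTER DATABASE SalesDB ADD FILEGROUP UserData;

    ALTER DATABASE SalesDB
    ADD FILE (
        NAME = SalesDB_UserData1,
        FILENAME = 'E:\SQLData\SalesDB_UserData1.ndf',
        SIZE = 10GB,
        FILEGROWTH = 1GB
    ) TO FILEGROUP UserData;

    -- New user objects are now created in UserData unless another filegroup is specified
    ALTER DATABASE SalesDB MODIFY FILEGROUP UserData DEFAULT;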
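
A sketch of configuring tempdb with multiple equally sized data files, assuming a 16-core instance (four to eight files per the guideline above) and a hypothetical T: volume dedicated to tempdb:

    -- Resize the existing primary data file and set its growth increment
    ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 4GB);
    ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILEGROWTH = 512MB);

    -- Add further files with identical SIZE and FILEGROWTH values (repeat as required)
    ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev2, FILENAME = 'T:\SQLData\tempdev2.ndf', SIZE = 4GB, FILEGROWTH = 512MB);

    ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev3, FILENAME = 'T:\SQLData\tempdev3.ndf', SIZE = 4GB, FILEGROWTH = 512MB);

    ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev4, FILENAME = 'T:\SQLData\tempdev4.ndf', SIZE = 4GB, FILEGROWTH = 512MB);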
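
A sketch of presizing data and log files with fixed manual growth increments, using hypothetical logical file names and sizes assumed from capacity planning:

    -- Presize files so autogrowth remains an emergency fallback rather than routine behavior
    ALTER DATABASE SalesDB MODIFY FILE (NAME = SalesDB_Data, SIZE = 50GB);
    ALTER DATABASE SalesDB MODIFY FILE (NAME = SalesDB_Data, FILEGROWTH = 5GB);

    ALTER DATABASE SalesDB MODIFY FILE (NAME = SalesDB_Log, SIZE = 10GB);
    ALTER DATABASE SalesDB MODIFY FILE (NAME = SalesDB_Log, FILEGROWTH = 1GB);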
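
Disabling the AutoShrink and AutoClose options via T-SQL (they can also be set through the database properties dialog in Management Studio); the database name is a placeholder:

    ALTER DATABASE SalesDB SET AUTO_SHRINK OFF;
    ALTER DATABASE SalesDB SET AUTO_CLOSE OFF;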
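
A sketch of adding files to a filegroup with identical size and growth settings so the proportional fill algorithm can distribute writes evenly; drive letters and sizes are assumptions:

    ALTER DATABASE SalesDB
    ADD FILE
        (NAME = SalesDB_UserData2, FILENAME = 'F:\SQLData\SalesDB_UserData2.ndf', SIZE = 10GB, FILEGROWTH = 1GB),
        (NAME = SalesDB_UserData3, FILENAME = 'G:\SQLData\SalesDB_UserData3.ndf', SIZE = 10GB, FILEGROWTH = 1GB)
    TO FILEGROUP UserData;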
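
A sketch of targeting compression candidates by combining index usage statistics with the compression estimate procedure; the table name passed to the procedure is hypothetical:

    -- Indexes with few reads (near the top of this list) and few updates are potential candidates;
    -- note the DMV only reports indexes used since the last instance restart
    SELECT OBJECT_NAME(s.object_id) AS table_name,
           i.name AS index_name,
           s.user_seeks + s.user_scans + s.user_lookups AS total_reads,
           s.user_updates
    FROM sys.dm_db_index_usage_stats AS s
    JOIN sys.indexes AS i
        ON i.object_id = s.object_id AND i.index_id = s.index_id
    WHERE s.database_id = DB_ID()
    ORDER BY total_reads;

    -- Estimate the space saved by PAGE compression for a candidate table
    EXEC sp_estimate_data_compression_savings
        @schema_name = 'dbo',
        @object_name = 'SalesHistory',
        @index_id = NULL,
        @partition_number = NULL,
        @data_compression = 'PAGE';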
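
Finally, a sketch of applying the two compression types to hypothetical tables: row compression for a heavily updated OLTP table, page compression for a large, mostly read table:

    -- Row compression: lower CPU overhead, suited to frequently modified tables
    ALTER TABLE dbo.Orders REBUILD WITH (DATA_COMPRESSION = ROW);

    -- Page compression: higher compression ratios, can speed up large scans of mostly static data
    ALTER TABLE dbo.SalesHistory REBUILD WITH (DATA_COMPRESSION = PAGE);

    -- Individual indexes can be compressed separately (index name is hypothetical)
    ALTER INDEX IX_SalesHistory_OrderDate ON dbo.SalesHistory
    REBUILD WITH (DATA_COMPRESSION = PAGE);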

Additional information on the best practices covered in this chapter can be found online at http://www.sqlCrunch.com/data.

One of the things we covered in this chapter was data compression. SQL Server 2008 also introduces backup compression, and we'll discuss that in the next chapter. We'll also expand on this chapter's introduction to piecemeal restores.
