Avoiding and Curing Unplanned Table Fragmentation

There are several practices that you can follow to avoid and cure unplanned table fragmentation. It is important to understand the difference between planned and unplanned fragmentation. Planned fragmentation occurs when the DBA makes a purposeful decision to locate various chunks of the table on different disk devices to improve disk access performance. This type of fragmentation is a feature of the IDS engine, added to enhance the DBA's ability to intelligently locate disk chunks and indexes for top performance.

Unplanned fragmentation can happen with either OnLine or IDS engines. It occurs when the engine is forced to allocate extents for a table that are not contiguous with the table's other data. This forces the disk heads to thrash about and slows disk I/O significantly. To avoid unplanned fragmentation, estimate the anticipated size of your tables and create them with appropriate first-extent (EXTENT SIZE) and NEXT SIZE parameters. The engine will then create larger, contiguous extents for the data and indexes to occupy. If you have created a table with a large enough first extent, you can be sure that the data will be compact and relatively easy for the engine to reach without jumping all over the disk.
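As a rough sketch (the table, the dbspace name, and the sizes here are all hypothetical, and the extent sizes are given in kilobytes), a table expected to hold roughly 20 MB of data might be created with a first extent large enough for the whole table and a generous NEXT SIZE in case the estimate runs low:

    -- Hypothetical sizing; adjust the numbers to your own row counts and row sizes.
    CREATE TABLE orders
    (
        order_num     SERIAL,
        customer_num  INTEGER,
        order_date    DATE,
        ship_charge   MONEY(8,2)
    )
    IN ordspace              -- hypothetical dbspace
    EXTENT SIZE 20000        -- first extent, in KB: sized to hold the whole table
    NEXT SIZE 4000;          -- subsequent extents, in KB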

It is also possible to reduce table fragmentation by intelligent use of the bulk-loading utilities. As a general rule, it is preferable to load into tables that have no indexes and to create the indexes after the load. This is faster, especially in versions of OnLine newer than 5.0, which have improved indexing algorithms, and it keeps your data pages and index pages contiguous: all of the data pages are together and all of the index pages are together. When you do your bulk loads with the indexes already built, your extents will contain a few data pages followed by a few index pages, followed by more data pages, and so on. When you load with no indexes in place and create the indexes afterward, the data pages are grouped together, followed by the index pages. If you create your indexes one at a time after the data is loaded, the index pages are placed even more effectively, with the pages for one index together, followed by the pages for the next index, and so on. Having these pages located contiguously makes it easier for the engine to access them without thrashing all over the disk. It also allows the use of big-buffer reads, which use more efficient reading mechanisms when the data is contiguous.
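A sketch of this load-then-index approach, assuming a hypothetical pipe-delimited flat file orders.unl and the orders table from the earlier example (the LOAD statement shown is the DB-Access form; the same idea applies to dbload or other bulk loaders):

    -- Load the raw data first, while the table has no indexes.
    LOAD FROM "orders.unl" DELIMITER "|"
        INSERT INTO orders;

    -- Build the indexes one at a time after the load so that the pages
    -- for each index end up grouped together on disk.
    CREATE UNIQUE INDEX orders_pk ON orders (order_num);
    CREATE INDEX orders_cust ON orders (customer_num);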

You can cure fragmented tables in the same ways that you drop unused extents from tables: by re-creating the tables or by altering an index to cluster. In fact, any DDL command that alters the table, such as adding a column or changing the table's structure, will cause a rebuild of the table, but altering an index to cluster is the most innocuous approach.
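A minimal sketch of the cluster approach, reusing the hypothetical orders_cust index from the earlier examples: clustering rebuilds the table in the order of that index, compacting its extents, and the attribute can be released afterward so that a different index can be clustered later (only one index per table can be clustered at a time).

    -- Rebuild the table in customer_num order, compacting its extents.
    ALTER INDEX orders_cust TO CLUSTER;

    -- Optionally release the cluster attribute once the rebuild is done.
    ALTER INDEX orders_cust TO NOT CLUSTER;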

The benefits of defragmenting tables begin to go away as the table is used and modified. If you begin to notice a slow degradation of access time and performance, look at table fragmentation. It often occurs so slowly that you don't notice it. It should be noted that there is one surefire way to avoid unplanned fragmentation of tables. If you place each of your active tables in its own dbspace and continuously monitor the space for adequate free space, you will not experience any unwanted fragmentation. This strategy can also help in several other areas, such as in archiving just one table.
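One way to keep an eye on free space, assuming an IDS engine that provides the sysmaster database (the older OnLine 5.x engine does not; there the tbstat -d utility gives similar information, as onstat -d does for IDS), is a query such as the following, which reports total and free pages per dbspace:

    -- Free pages per dbspace, from the sysmaster database.
    SELECT d.name          dbspace,
           SUM(c.chksize)  total_pages,
           SUM(c.nfree)    free_pages
    FROM   sysmaster:sysdbspaces d, sysmaster:syschunks c
    WHERE  d.dbsnum = c.dbsnum
    GROUP BY d.name
    ORDER BY d.name;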
