CHAPTER 26


Files, Filegroups, and Integrity

by Wayne Sheffield

Every database has a minimum of two files associated with it: the data file and the log file. However, sometimes you may want to add more files (of either type) to the database, increase their size, move them to a different drive, or perform other file-level activities. And once you have all the files on your databases placed and sized appropriately, you will need to perform regular maintenance activities on them to ensure that their integrity does not become compromised. This chapter will show you how to perform these activities.

This chapter simulates having three disk drives by using three subdirectories in the C:\Apress directory. The recipes in this chapter use the following database and are designed to be followed in order. The following code will create these directories and the database:

EXECUTE sys.xp_create_subdir 'C:\Apress\Drive1';
EXECUTE sys.xp_create_subdir 'C:\Apress\Drive2';
EXECUTE sys.xp_create_subdir 'C:\Apress\Drive3';

USE master;
GO
IF DB_ID('BookStoreArchive') IS NOT NULL DROP DATABASE BookStoreArchive;
GO

CREATE DATABASE BookStoreArchive
ON PRIMARY
(NAME = 'BookStoreArchive',
 FILENAME = 'C:\Apress\Drive1\BookStoreArchive.MDF',
 SIZE = 4MB,
 MAXSIZE = UNLIMITED,
 FILEGROWTH = 10MB)
LOG ON
(NAME = 'BookStoreArchive_log',
 FILENAME = 'C:\Apress\Drive3\BookStoreArchive_log.LDF',
 SIZE = 512KB,
 MAXSIZE = UNLIMITED,
 FILEGROWTH = 512KB);

26-1. Adding a Data File or a Log File

Problem

You need to add a data file and transaction log file to your database.

Solution

Utilize the ALTER DATABASE statement to add new files to a database as follows:

ALTER DATABASE BookStoreArchive
ADD FILE
(  NAME = 'BookStoreArchive2',
FILENAME = 'C:\Apress\Drive2\BookStoreArchive2.NDF' ,
SIZE = 1MB ,
MAXSIZE = 10MB,
FILEGROWTH = 1MB )
TO FILEGROUP [PRIMARY];

ALTER DATABASE BookStoreArchive
ADD LOG FILE
(  NAME = 'BookStoreArchive2Log',
FILENAME = 'C:\Apress\Drive3\BookStoreArchive2_log.LDF' ,
SIZE = 1MB ,
MAXSIZE = 5MB,
FILEGROWTH = 1MB)
GO

How It Works

Once a database is created, assuming you have available disk space, you can add data files or transaction log files to it as needed. This allows you to expand to new drives if the current physical drive/array is close to filling up or if you are looking to improve performance by spreading I/O across multiple drives. It usually makes sense to add additional data files and log files to a database only if you plan on putting these files on a separate drive/array. Putting multiple files on the same drive/array doesn’t improve performance and may benefit you only if you plan on performing separate file or filegroup backups for a very large database.

Adding files doesn’t require you to bring the database offline. The syntax for ALTER DATABASE when adding a data file or transaction log file is as follows:

ALTER DATABASE database_name
{ ADD FILE <filespec> [ ,...n ]
  [ TO FILEGROUP { filegroup_name | DEFAULT } ]
| ADD LOG FILE <filespec> [ ,...n ] }

Table 26-1 describes the syntax arguments.

Table 26-1. ALTER DATABASE...ADD FILE Arguments

Argument

Description

database_name

Defines the name of the existing database.

<filespec> [ ,...n ]

Designates one or more explicitly defined data files to add to the database.

filegroup_name | DEFAULT

Designates the logical name of the filegroup. If followed by the DEFAULT keyword, this filegroup will be the default filegroup of the database (meaning all objects will by default be created there).

[ LOG ON { <filespec> [ ,...n ] } ]

Designates one or more explicitly defined transaction log files for the database.

In this recipe, new data and transaction log files were added to the BookStoreArchive database. To add the data file, the ALTER DATABASE statement was used with the ADD FILE argument, followed by the file specification:

ALTER DATABASE BookStoreArchive ADD FILE

The filegroup where the new file was added was specified using the TO FILEGROUP clause, followed by the filegroup name in brackets:

TO FILEGROUP [PRIMARY]

In the second query in the recipe, a new transaction log file was added using the ALTER DATABASE statement and the ADD LOG FILE argument:

ALTER DATABASE BookStoreArchive ADD LOG FILE

26-2. Retrieving Information about the Files in a Database

Problem

You want to view information about the files that make up a database.

Solution

Query the database’s sys.database_files view or the instance’s sys.master_files view:

SELECT  file_id, type_desc, name, physical_name, state_desc, size, max_size, growth, is_percent_growth
FROM    BookStoreArchive.sys.database_files;

SELECT  file_id, type_desc, name, physical_name, state_desc, size, max_size, growth, is_percent_growth
FROM    sys.master_files
WHERE   database_id = DB_ID('BookStoreArchive');

Both of these queries produce the following results (results are split into two sections for formatting):

file_id type_desc name                  physical_name
------- --------- --------------------- -------------------------------------------
1       ROWS      BookStoreArchive      C:\Apress\Drive1\BookStoreArchive.MDF
2       LOG       BookStoreArchive_log  C:\Apress\Drive3\BookStoreArchive_log.LDF
3       ROWS      BookStoreArchive2     C:\Apress\Drive2\BookStoreArchive2.NDF
4       LOG       BookStoreArchive2Log  C:\Apress\Drive3\BookStoreArchive2_log.LDF

state_desc size max_size    growth is_percent_growth
---------- ---- ----------- ------ -----------------
ONLINE     512  -1          1280   0
ONLINE     64   268435456   64     0
ONLINE     128  1280        128    0
ONLINE     128  640         128    0

How It Works

The sys.database_files catalog view stores information about a database’s files. This is a database-level view, so it contains information about the files in the current database only.

The sys.master_files catalog view stores information about all of the database files on an instance. In addition to all of the columns that are in the sys.database_files view, sys.master_files also contains the database_id in order to identify the database that each file belongs to.

The queries in this recipe returned the logical and physical name for each file, along with the type of file, the status of the file, the size and max_size of the file, and the growth settings for that file. If the growth is by percent, then the is_percent_growth column will return 1, and the data in the growth column will be the percentage growth. The size, max_size, and growth columns (for fixed-size growths) will show the size in number of 8K pages. A max_size value of -1 indicates that the file will be able to grow until all of the disk space is used.
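
If you prefer to see these values in megabytes rather than in pages, you can perform the conversion directly in the query. The following is a minimal sketch against the same catalog view; the column aliases are purely illustrative:

SELECT  name,
        size / 128.0 AS size_mb,
        CASE WHEN max_size = -1 THEN NULL
             ELSE max_size / 128.0 END AS max_size_mb,
        CASE WHEN is_percent_growth = 1
             THEN CAST(growth AS varchar(10)) + '%'
             ELSE CAST(growth / 128.0 AS varchar(20)) + ' MB'
        END AS growth_setting
FROM    BookStoreArchive.sys.database_files;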

26-3. Removing a Data File or a Log File

Problem

You need to remove a data file or transaction log file from a database.

Solution

Utilize the ALTER DATABASE statement to remove data files or transaction log files from a database:

ALTER DATABASE BookStoreArchive REMOVE FILE BookStoreArchive2;

Running this command produces the following message:

The file 'BookStoreArchive2' has been removed.

How It Works

The ALTER DATABASE statement removed the specified logical file name from the database. You might want to do this if you are relocating a database from one drive to another by creating a new file on one drive and then dropping the old file from the other.

The syntax for dropping a file is as follows:

ALTER DATABASE database_name
REMOVE FILE logical_file_name

where database_name is the name of an existing database, and logical_file_name is the name of the logical file to be removed from the database.

The logical file being removed must be empty (no data and no active transactions), and it cannot be the primary data file or primary transaction log file. You can use DBCC SHRINKFILE with the EMPTYFILE parameter to empty a file and move any data within it to another file.
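
In this recipe the BookStoreArchive2 file was still empty, so it could be removed directly. When a file does contain data, the usual pattern is to empty it first and then remove it. The following is a sketch of that pattern only; it is not meant to be rerun here, since the file has already been removed:

USE BookStoreArchive;
GO
-- Relocate any allocated pages in this file to the other files in the same filegroup.
DBCC SHRINKFILE ('BookStoreArchive2', EMPTYFILE);
GO
-- The emptied file can now be removed.
ALTER DATABASE BookStoreArchive REMOVE FILE BookStoreArchive2;
GO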

26-4. Relocating a Data File or a Log File

Problem

You need to move a data or transaction log file from one physical location to another—for example, from one drive to another.

Solution

Utilize the ALTER DATABASE statement to move data files or transaction log files belonging to a database. The first step is as follows:

ALTER DATABASE BookStoreArchive
MODIFY FILE
(NAME = 'BookStoreArchive', FILENAME = 'O:\Apress\BookStoreArchive.mdf')
GO

Upon executing this statement, the following message is returned:

The file "BookStoreArchive" has been modified in the system catalog. The new path will be used the next time the database is started.

Note  This does not physically move the specified file. Within SQL Server, the location for the file has just been updated. The file will still need to be moved to its proper location using operating system file copy routines.

How It Works

The ALTER DATABASE statement updated the specified logical file name to a new file name. As the returned message indicated, this new path will be used when the database is next started. This can occur by stopping and starting the SQL Server instance or by taking the database offline and then bringing it back online. After the SQL Server instance has been shut down or the database has been taken offline, you will still have to move this file to its new location before starting up the SQL Server instance or bringing the database back online. The database can be taken offline, and then be brought back online, with the following commands:

USE master;
GO
-- This next statement will close all open connections for users that are not sysadmins
ALTER DATABASE BookStoreArchive SET RESTRICTED_USER WITH ROLLBACK IMMEDIATE;
GO
-- This next statement will close the database
ALTER DATABASE BookStoreArchive SET OFFLINE;
GO
-- Move BookStoreArchive.mdf file from N:\Apress to O:\Apress now.
-- On my Windows 7 PC, I had to use Administrator access to move the file.
-- On other operating systems, you may have to modify file/folder permissions
-- to prevent an access denied error.

USE master;
GO
ALTER DATABASE BookStoreArchive SET ONLINE;
GO
ALTER DATABASE BookStoreArchive SET MULTI_USER WITH ROLLBACK IMMEDIATE;
GO

The ALTER DATABASE BookStoreArchive SET RESTRICTED_USER WITH ROLLBACK IMMEDIATE; statement sets the database so that only users who are members of the db_owner database role, or of the dbcreator or sysadmin server roles, can connect to it. Any statements currently being run by other connections are canceled and rolled back. The database is then taken offline.

After the database file has been physically moved to its new location, the database is brought back online, and then the database is opened back up to all users.
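
Once the database is back online, you can confirm that the catalog now points at the new location by checking sys.master_files; a quick sketch:

SELECT  name, physical_name, state_desc
FROM    sys.master_files
WHERE   database_id = DB_ID('BookStoreArchive');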

26-5. Changing a File’s Logical Name

Problem

You need to change the logical name of a file in a database.

Solution

Utilize the ALTER DATABASE statement to rename the logical name of a file belonging to a database, as follows:

SELECT  name
FROM    BookStoreArchive.sys.database_files;

ALTER DATABASE BookStoreArchive
MODIFY FILE
(NAME = 'BookStoreArchive',
NEWNAME = 'BookStoreArchive_Data');

SELECT  name
FROM    BookStoreArchive.sys.database_files;

These statements return the following result sets and message:

name
---------------------------------------------------
BookStoreArchive
BookStoreArchive_log
BookStoreArchive2Log

The file name 'BookStoreArchive_Data' has been set.
name
---------------------------------------------------
BookStoreArchive_Data
BookStoreArchive_log
BookStoreArchive2Log

How It Works

The ALTER DATABASE statement allows you to change the logical name of a file belonging to the database without taking the database offline. The logical name of a database file doesn’t affect the functionality of the database itself, allowing you to change the name for consistency and naming-convention purposes. For example, if you restore a database from a backup using a new database name, you may want the file’s logical name to match the new database name.

The syntax of the ALTER DATABASE statement to change the logical name is as follows:

ALTER DATABASE database_name
MODIFY FILE
(NAME = logical_file_name, NEWNAME = new_logical_name);

where database_name is the name of an existing database, logical_file_name is the logical name of the file to be renamed, and new_logical_name is the new logical file name.

26-6. Increasing the Size of a Database File

Problem

You have a scheduled downtime for your database. During this downtime, you want to increase the database’s size to prevent autogrowth operations until your next scheduled downtime.

Solution

Utilize the ALTER DATABASE statement to increase the size of a file belonging to a database:

SELECT name, size FROM BookStoreArchive.sys.database_files;

ALTER DATABASE BookStoreArchive
MODIFY FILE
(NAME = 'BookStoreArchive_Data',
 SIZE = 5MB);

SELECT name, size FROM BookStoreArchive.sys.database_files;

These statements return the following result sets:

name                   size
---------------------- ----
BookStoreArchive_Data  512
BookStoreArchive_log   64
BookStoreArchive2Log   128

name                   size
---------------------- ----
BookStoreArchive_Data  640
BookStoreArchive_log   64
BookStoreArchive2Log   128

How It Works

The MODIFY FILE clause of the ALTER DATABASE statement allows you to increase the size of a file. In the previous example, the size of the BookStoreArchive_Data file was changed from 4MB to 5MB. If you specify the same file size, or lower, you will receive this error message:

Msg 5039, Level 16, State 1, Line 1
MODIFY FILE failed. Specified size is less than or equal to current size.

Note  The size column of the sys.database_files system view reports the quantity of 8KB pages. You will need to convert this number to an appropriate size (MB, GB, etc.).

The syntax of the ALTER DATABASE statement to increase a file size or to modify the file’s growth/maxsize settings is as follows:

ALTER DATABASE database_name
MODIFY FILE
(
NAME = logical_file_name
[ , SIZE = size [ KB | MB | GB | TB ] ]
[ , MAXSIZE = { max_size [ KB | MB | GB | TB ] | UNLIMITED } ]
[ , FILEGROWTH = growth_increment [ KB | MB | % ] ]
)

Table 26-2 shows the arguments of this syntax.

Table 26-2. ALTER DATABASE...MODIFY FILE Arguments

Argument

Description

database_name

The name of the existing database

logical_file_name

The logical file name to change size or growth options for

size [ KB | MB | GB | TB ]

The new size (must be larger than the existing size) of the file based on the sizing attribute of choice (kilobytes, megabytes, gigabytes, terabytes)

{ max_size [ KB | MB | GB | TB ] | UNLIMITED }

The new maximum allowable size of the file based on the chosen sizing attributes. If UNLIMITED is chosen, the file can grow to the available space of the physical drive.

growth_increment [ KB | MB | % ]

The new amount that the file size increases when space is required. You can designate either the number of kilobytes or megabytes or the percentage of existing file size. If you select 0, file growth will not occur.

These changes take effect immediately. If you are changing the size of a data file and the local security permission “Perform Volume Maintenance Tasks” (PVMT) has not been granted to the SQL Server service account, or if you are changing the size of a log file, then the space added to the file must be zero-initialized. While this is occurring, other database activity is paused. To minimize the impact of growth on database operations, it is a best practice to grow the files manually during scheduled maintenance periods, and to grow them by enough that an automatic growth will not be necessary before the next maintenance period. If the PVMT permission has been granted to the SQL Server service account, data-file growths are nearly instantaneous (log-file growths always require zero-initialization). If the PVMT permission is newly granted to the service account, the SQL Server instance must be restarted in order to pick up the change.
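
Before growing a file manually, it can also help to confirm how much free space remains on the underlying volume. The following is a minimal sketch using sys.dm_os_volume_stats (available in SQL Server 2008 R2 SP1 and later); the conversion to megabytes is only for readability:

SELECT  mf.name,
        mf.physical_name,
        vs.volume_mount_point,
        vs.available_bytes / 1048576 AS free_mb_on_volume
FROM    sys.master_files AS mf
        CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) AS vs
WHERE   mf.database_id = DB_ID('BookStoreArchive');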

26-7. Adding a Filegroup

Problem

You want to add a new filegroup to your database.

Solution

Utilize the ALTER DATABASE statement to add a filegroup to a database, as follows:

ALTER DATABASE BookStoreArchive
ADD FILEGROUP FG2;
GO

How It Works

The ALTER DATABASE was utilized to add a filegroup to a database. The syntax is as follows:

ALTER DATABASE database_name
ADD FILEGROUP filegroup_name

where database_name is the name of an existing database, and filegroup_name is the name of the new filegroup being added.

You might want to add a new filegroup for a multitude of reasons. Some of these include the following:

  • Putting read-only tables into a read-only filegroup
  • Moving data that must be restored first into a separate filegroup in order to bring your application back up faster in the event of a disaster. Filegroups can be backed up and restored individually, which may enable your core business functions to get back online while the restoration of other filegroups proceeds (a filegroup backup sketch follows this list).
  • Relocating the database for disk maintenance.
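
As a sketch of the filegroup backup mentioned in the list above, the following backs up only the FG2 filegroup. It assumes the database is using the full (or bulk-logged) recovery model, that the filegroup already contains at least one file (Recipe 26-8 adds one), and that the backup path exists; adjust these to your environment:

BACKUP DATABASE BookStoreArchive
FILEGROUP = 'FG2'
TO DISK = 'C:\Apress\BookStoreArchive_FG2.bak';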

26-8. Adding a File to a Filegroup

Problem

You want to add a new file to a filegroup.

Solution

Utilize the ALTER DATABASE statement to add a new file to a specified filegroup, as follows:

ALTER DATABASE BookStoreArchive
ADD FILE
(  NAME = 'BW2',
FILENAME = 'N:\Apress\FG2_BookStoreArchive.NDF' ,
SIZE = 1MB ,
MAXSIZE = 50MB,
FILEGROWTH = 5MB )
TO FILEGROUP FG2;

How It Works

Just like in Recipe 26-1, this added a new file to the database. The difference is the specification of the filegroup that the file should be added to. Without this specification, the file would be added to the default filegroup.

26-9. Setting the Default Filegroup

Problem

You want to change the default filegroup so that new tables will be added to the files in that filegroup.

Solution

Utilize the ALTER DATABASE statement to set a filegroup as the default filegroup for a database, as follows:

ALTER DATABASE BookStoreArchive
MODIFY FILEGROUP FG2 DEFAULT;
GO

This query returns the following message:

The filegroup property 'DEFAULT' has been set.

How It Works

The ALTER DATABASE statement was used to set the default filegroup for a database. The default filegroup is the filegroup to which new objects will be added if a filegroup is not specified. Only one filegroup can be the default filegroup at any point in time. The syntax for this statement is as follows:

ALTER DATABASE database_name
MODIFY FILEGROUP filegroup_name DEFAULT

where database_name is the name of an existing database and filegroup_name is the name of an existing filegroup within the specified database.
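
You can verify which filegroup is currently the default by checking the is_default column of sys.filegroups; a quick sketch:

SELECT  name, is_default
FROM    BookStoreArchive.sys.filegroups;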

26-10. Adding Data to a Specific Filegroup

Problem

You want to add a new table to a specific filegroup.

Solution

In the CREATE TABLE statement, specify the filegroup that the table is to be added to, as follows:

CREATE TABLE dbo.Test
       (
        TestID  INT IDENTITY,
        Column1 INT,
        Column2 INT,
        Column3 INT
       )
ON     FG2;

How It Works

The ON clause specified the partition scheme or filegroup that the table was to be built in.

Note  If the CREATE TABLE statement also specifies the creation of a clustered index on a different partition or filegroup, the table will be created on the partition or filegroup specified by the clustered index.
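
To confirm which filegroup a table ended up in, you can join sys.indexes to sys.data_spaces (a heap appears with index_id 0). This is a minimal sketch for the dbo.Test table created above:

USE BookStoreArchive;
GO
SELECT  i.index_id,
        i.type_desc,
        ds.name AS filegroup_name
FROM    sys.indexes AS i
        JOIN sys.data_spaces AS ds
            ON ds.data_space_id = i.data_space_id
WHERE   i.object_id = OBJECT_ID('dbo.Test');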

26-11. Moving Data to a Different Filegroup

Problem

You need to remove a table from one filegroup and place it in a different filegroup.

Solution #1

If the table does not have a clustered index, add a clustered index or constraint to the table, specifying the new filegroup:

ALTER TABLE dbo.Test
  ADD CONSTRAINT PK_Test PRIMARY KEY CLUSTERED (TestId)
  ON [PRIMARY];

Solution #2

If the table does have a clustered index that is enforcing a constraint, drop and recreate the clustered constraint, specifying the new filegroup:

CREATE TABLE dbo.Test2
       (
        TestID INT IDENTITY
                   CONSTRAINT PK__Test2 PRIMARY KEY CLUSTERED,
        Column1 INT,
        Column2 INT,
        Column3 INT
       )
ON     FG2;
GO

ALTER TABLE dbo.Test2
DROP CONSTRAINT PK__Test2;

ALTER TABLE dbo.Test2
ADD CONSTRAINT PK__Test2 PRIMARY KEY CLUSTERED (TestId)
ON [PRIMARY];

Solution #3

If the table has a clustered index that is not enforcing a constraint, rebuild the index using the DROP EXISTING clause and specify the filegroup that it should be moved to, as follows:

CREATE TABLE dbo.Test3
       (
        TestID INT IDENTITY,
        Column1 INT,
        Column2 INT,
        Column3 INT
       )
ON     FG2;
GO

CREATE CLUSTERED INDEX IX_Test3 ON dbo.Test3 (TestId)
ON FG2;
GO

CREATE CLUSTERED INDEX IX_Test3 ON dbo.Test3 (TestId)
WITH (DROP_EXISTING = ON)
ON [PRIMARY];
GO

How It Works

Since a clustered index contains, at the leaf level, all the data for the table, moving the clustered index to a different filegroup moves the table to the new filegroup as well. In the same manner, adding a clustered index to a table that doesn’t have one moves the data from the table into the clustered index and thus into the filegroup specified by the index. If the clustered index is enforcing a constraint, the constraint will generally need to be dropped and recreated in order to move the table; an index that backs a constraint can be rebuilt only if everything about the new index is identical to the current index and only the filegroup that the index is on is being changed. If dropping and recreating the constraint is the only method available to you, do this during a maintenance period so that you can ensure that data won’t be entered that would violate the constraint while it is absent.

In the first solution, the dbo.Test table did not have a clustered index, so one was created on it with the ALTER TABLE statement, specifying the filegroup to put the index on. Creating the clustered index on a different filegroup moved the table to the other filegroup.

In the second solution, a new table was created on filegroup FG2 with a clustered index on a primary-key constraint. To move this table, the constraint was first dropped with the ALTER TABLE statement, creating a table without any clustered index. The clustered primary-key constraint was then recreated on the desired filegroup, thus moving the table to that filegroup.

In the third solution, a table and a clustered index were created on FG2. Since the clustered index was not enforcing a constraint, this table could be moved to the new filegroup by utilizing the CREATE INDEX statement and by specifying the DROP_EXISTING = ON clause along with the filegroup to put the index on.

Tip  For more information on utilizing the ALTER TABLE statement, see the “Managing Tables” chapter. For more information on utilizing indexes and the CREATE INDEX statement, see the “Managing Indexes” chapter.

26-12. Removing a Filegroup

Problem

You want to remove an empty filegroup from your database.

Solution

Utilize the ALTER DATABASE statement to remove filegroups from a database:

ALTER DATABASE BookStoreArchive
MODIFY FILEGROUP [PRIMARY] DEFAULT;
GO

ALTER DATABASE BookStoreArchive
REMOVE FILE BW2;
GO

ALTER DATABASE BookStoreArchive
REMOVE FILEGROUP FG2;
GO

These statements return the following messages:

The filegroup property 'DEFAULT' has been set.
The file 'BW2' has been removed.
The filegroup 'FG2' has been removed.

How It Works

A filegroup cannot be removed while it still contains any files. Furthermore, you cannot remove the last file from the default filegroup. Therefore, the first ALTER DATABASE statement was necessary to change the default filegroup back to the PRIMARY filegroup. Since the filegroup name PRIMARY is a keyword, it had to be enclosed in brackets. The second ALTER DATABASE statement removed the empty file from the filegroup (see Recipe 26-3). The third ALTER DATABASE statement removed the filegroup. The syntax is as follows:

ALTER DATABASE database_name
REMOVE FILEGROUP filegroup_name

where database_name is the name of the existing database and filegroup_name is the name of the existing and empty filegroup to be removed.

26-13. Making a Database or a Filegroup Read-Only

Problem #1

You have historical data in your database that cannot have any modifications made to it. However, the data needs to be available for querying.

Problem #2

Your entire database contains historical data, and it cannot have any modifications made to it. However, the data needs to be available for querying.

Solution #1

Move the historical data to a separate filegroup, and then set that filegroup to be read-only. See the following:

ALTER DATABASE BookStoreArchive SET RESTRICTED_USER WITH ROLLBACK IMMEDIATE;
GO

ALTER DATABASE BookStoreArchive
ADD FILEGROUP FG3;
GO

ALTER DATABASE BookStoreArchive
ADD FILE
(  NAME = 'ArchiveData',
FILENAME = 'N:\Apress\BookStoreArchiveData.NDF' ,
SIZE = 1MB ,
MAXSIZE = 10MB,
FILEGROWTH = 1MB )
TO FILEGROUP [FG3];
GO
-- move historical tables to this filegroup

ALTER DATABASE BookStoreArchive
MODIFY FILEGROUP FG3 READ_ONLY;
GO

ALTER DATABASE BookStoreArchive SET MULTI_USER;
GO

Solution #2

Since the entire database consists of the historical data, you can set the entire database to READ_ONLY with this statement:

ALTER DATABASE BookStoreArchive SET READ_ONLY;
GO

If you ran this statement, please set the database back to multi-user mode with this statement so that the remaining recipes will work:

ALTER DATABASE BookStoreArchive SET READ_WRITE;
GO

How It Works

In Solution #1, a new filegroup was created on this database, and a file was added to this filegroup. The archived data was then moved into this filegroup. Finally, the filegroup was set to READ_ONLY. When changing the status of a filegroup, you cannot have other users in the database, so the database is first put into restricted-user mode. Once all work has been finished, it is opened back up to all users. The filegroup can be set back to a read-write status by executing this statement (after putting the database into restricted-user mode again):

ALTER DATABASE BookStoreArchive
MODIFY FILEGROUP FG3 READ_WRITE;

In Solution #2, the entire database was set to a READ_ONLY status. You can set it back to a read-write status with this statement:

ALTER DATABASE BookStoreArchive SET READ_WRITE;
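
Either way, the current state is easy to confirm from the catalog views; this is a quick sketch using the is_read_only columns exposed by sys.filegroups and sys.databases:

SELECT  name, is_read_only
FROM    BookStoreArchive.sys.filegroups;

SELECT  name, is_read_only
FROM    sys.databases
WHERE   name = 'BookStoreArchive';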

26-14. Viewing Database Space Usage

Problem

You need to know how much space is being used by the objects in the database.

Solution #1

Utilize the sp_spaceused stored procedure to obtain information about space usage within the database and transaction log, as follows:

EXECUTE sp_spaceused;

Executing the sp_spaceused stored procedure without any parameters returns the following result set:

database_name     database_size      unallocated space
----------------- ------------------ ------------------
BookStoreArchive  7.50 MB            3.88 MB

reserved           data               index_size         unused
------------------ ------------------ ------------------ ------------------
2168 KB            824 KB             1128 KB            216 KB

Solution #2

Utilize the sp_spaceused stored procedure to obtain information about space usage for a specific object within a database, as follows:

EXECUTE sp_spaceused 'dbo.test';

Executing the sp_spaceused stored procedure with an object name returns the following result set:

name      rows   reserved           data               index_size         unused
--------- ------ ------------------ ------------------ ------------------ ------
Test      0      0 KB               0 KB               0 KB               0 KB

Solution #3

Utilize DBCC SQLPERF to obtain space-used information about all transaction logs on your SQL Server instance, as follows:

DBCC SQLPERF(LOGSPACE);

Executing this returns the following result set (results will contain a row for each database on your SQL Server instance that this command is being run on):

Database Name       Log Size (MB) Log Space Used (%) Status
------------------- ------------- ------------------ -----------
master              2.242188      32.40418           0
tempdb              0.4921875     86.0119            0
model               0.7421875     84.47369           0
msdb                0.7421875     71.31579           0
AdventureWorks2014  113.9922      3.003564           0
BookStoreArchive    1.484375      30.49342           0

DBCC execution completed. If DBCC printed error messages, contact your system administrator.

Solution #4

Query the system views/dynamic-management views to obtain the allocation information for the database files:

SELECT  sdf.physical_name,
        su.allocated_extent_page_count / 128.0 allocated_mb,
        su.unallocated_extent_page_count / 128.0 unallocated_mb,
        su.total_page_count / 128.0 total_size_mb
FROM    sys.database_files sdf
        JOIN sys.dm_db_file_space_usage su
            ON sdf.file_id = su.file_id;

This query returns the following information:

physical_name                               allocated_mb unallocated_mb total_size_mb
------------------------------------------- ------------ -------------- -------------
C:\Apress\Drive2\BookStoreArchive.mdf       2.312500     2.687500       5.000000
C:\Apress\Drive1\BookStoreArchiveData.NDF   0.062500     0.937500       1.000000

How It Works

The sp_spaceused system-stored procedure returns information about the specified object, including the number of rows in the object, how much space the data and indexes are using, and any unused space. If an object isn’t specified, the information returned is about the database: the size, unallocated space, data space, index space, and unused space. The syntax for this procedure is as follows:

sp_spaceused [ [ @objname = ] 'objname' ]
[ , [ @updateusage = ] 'updateusage' ]

Table 26-3 describes the parameters of this procedure.

Table 26-3. sp_spaceused Parameters

Parameter

Description

'objname'

This parameter defines the optional object name (table, for example) to view space usage. If not designated, the entire database’s space-usage information is returned.

'updateusage'

This parameter is used with a specific object and accepts either true or false. If true, DBCC UPDATEUSAGE is used to update space-usage information in the system tables.

In Solution #3, DBCC SQLPERF was used to obtain transaction log space-usage statistics for all databases. (It can also be used to reset wait and latch statistics.) The syntax for DBCC SQLPERF is as follows:

DBCC SQLPERF
(
     [ LOGSPACE ]
     |
     [ "sys.dm_os_latch_stats" , CLEAR ]
     |
     [ "sys.dm_os_wait_stats" , CLEAR ]
)
     [WITH NO_INFOMSGS ]

Table 26-4 briefly describes this DBCC command’s arguments.

Table 26-4. DBCC SQLPERF Arguments

Parameter

Description

LOGSPACE

Returns the current size of the transaction log and the percentage of log space used for each database. You can use this information to monitor the amount of space being used in a transaction log.

"sys.dm_os_latch_stats", CLEAR

Resets the latch statistics. For more information, see sys.dm_os_latch_stats at https://msdn.microsoft.com/en-us/library/ms175066.aspx.

"sys.dm_os_wait_stats", CLEAR

Resets the wait statistics. For more information, see sys.dm_os_wait_stats at https://msdn.microsoft.com/en-us/library/ms179984.aspx.

WITH NO_INFOMSGS

When included in the command, WITH NO_INFOMSGS suppresses informational messages from the DBCC output that have severity levels from 0 through 10.

In Solution #4, the system and dynamic management views (DMV) were queried to obtain allocation-usage information for the database files in the current database. The sys.dm_db_file_space_usage DMV can additionally return information about which filegroup the file is in, and the used-page counts can be broken down into what is used by the version store, internal objects, and mixed extents, as well as by user objects. This DMV reports the number of pages; since a page is 8KB, the query divides the page count by 128 to convert to MB.
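
For example, the following sketch breaks the reserved page counts reported by sys.dm_db_file_space_usage down by consumer and converts them to megabytes; run it from within the database you are interested in:

USE BookStoreArchive;
GO
SELECT  file_id,
        user_object_reserved_page_count / 128.0 AS user_objects_mb,
        internal_object_reserved_page_count / 128.0 AS internal_objects_mb,
        version_store_reserved_page_count / 128.0 AS version_store_mb,
        mixed_extent_page_count / 128.0 AS mixed_extents_mb
FROM    sys.dm_db_file_space_usage;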

26-15. Shrinking the Database or a Database File

Problem

You need to shrink either one database file or the entire database.

Solution #1

Utilize DBCC SHRINKDATABASE to shrink an entire database. We will first expand some of the files in the database, and then we will use DBCC SHRINKDATABASE to shrink all of the files in the database (we will use sp_spaceused to show the information before and after executing DBCC SHRINKDATABASE). See the following:

ALTER DATABASE BookStoreArchive
MODIFY FILE (NAME = 'BookStoreArchive_log', SIZE = 100MB);

ALTER DATABASE BookStoreArchive
MODIFY FILE (NAME = 'BookStoreArchive_Data', SIZE = 200MB);
GO

USE BookStoreArchive;
GO

EXECUTE sp_spaceused;
GO

DBCC SHRINKDATABASE ('BookStoreArchive', 10);
GO

EXECUTE sp_spaceused;
GO

These statements produce the following result sets and messages:

database_name    database_size      unallocated space
---------------- ------------------ ------------------
BookStoreArchive 302.00 MB          198.88 MB

reserved           data               index_size         unused
------------------ ------------------ ------------------ ------------------
2296 KB            944 KB             1176 KB            176 KB

DbId   FileId      CurrentSize MinimumSize UsedPages   EstimatedPages
------ ----------- ----------- ----------- ----------- --------------
8      1           512         512         288         288
8      2           1656        64          1656        64
8      4           128         128         128         128

DBCC execution completed. If DBCC printed error messages, contact your system administrator.
database_name    database_size      unallocated space
---------------- ------------------ ------------------
BookStoreArchive 17.94 MB           1.88 MB

reserved           data               index_size         unused
------------------ ------------------ ------------------ ------------------
2168 KB            824 KB             1128 KB            216 KB

Solution #2

Utilize DBCC SHRINKFILE to shrink one file in the database. Here we will expand one file in the database and then use DBCC SHRINKFILE to shrink that file. Again, we will use sp_spaceused to view the database space information before and after shrinking the file. See the following:

ALTER DATABASE BookStoreArchive
MODIFY FILE (NAME = 'BookStoreArchive_Log', SIZE = 200MB);
GO

USE BookStoreArchive;
GO

EXECUTE sp_spaceused;
GO

DBCC SHRINKFILE ('BookStoreArchive_Log', 2);
GO

EXECUTE sp_spaceused;
GO

These statements produce the following result sets and messages:

database_name      database_size      unallocated space
------------------ ------------------ ------------------
BookStoreArchive   206.00 MB          2.76 MB

reserved           data               index_size         unused
------------------ ------------------ ------------------ ------------------
2296 KB            944 KB             1176 KB            176 KB

DbId   FileId      CurrentSize MinimumSize UsedPages   EstimatedPages
------ ----------- ----------- ----------- ----------- --------------
8      2           1656        64          1656        64

DBCC execution completed. If DBCC printed error messages, contact your system administrator.
database_name      database_size      unallocated space
------------------ ------------------ ------------------
BookStoreArchive   18.94 MB           2.76 MB

reserved           data               index_size         unused
------------------ ------------------ ------------------ ------------------
2296 KB            944 KB             1176 KB            176 KB

How It Works

DBCC SHRINKDATABASE shrinks the data and log files in your database. In the first example, data and log files were both increased to a larger size. After that, the DBCC SHRINKDATABASE command was used to reduce them down to a target free-space size of 10 percent:

DBCC SHRINKDATABASE (BookStoreArchive, 10)

After execution, the command returned a result set showing the current size (in 8KB pages), minimum size (in 8KB pages), currently used 8KB pages, and estimated 8KB pages that SQL Server could shrink the file down to.

The syntax for DBCC SHRINKDATABASE is as follows:

DBCC SHRINKDATABASE
( 'database_name' | database_id | 0
[ ,target_percent ]
[ , { NOTRUNCATE | TRUNCATEONLY } ] ) [ WITH NO_INFOMSGS ]

Table 26-5 describes the arguments for this command.

Table 26-5. DBCC SHRINKDATABASE Arguments

Argument

Description

'database_name' | database_id | 0

You can designate a specific database name or the database ID of the database to shrink or, if 0 is specified, the current database your query session is connected to will be shrunk.

target_percent

The target percentage designates the free space remaining in the database file after the shrinking event.

NOTRUNCATE | TRUNCATEONLY

NOTRUNCATE performs the data movements needed to create free space but retains the freed space in the file without releasing it to the operating system. If NOTRUNCATE is not designated, the free file space is released to the operating system. TRUNCATEONLY frees up space without relocating data within the files. If not designated, data pages are reallocated within the files to free up space, which can lead to extensive I/O.

WITH NO_INFOMSGS

This argument prevents informational messages from being returned from the DBCC command.

In the second solution, one of the log files was increased to a larger size. This time, the DBCC SHRINKFILE command was used to shrink that individual file down to a specified size (in megabytes):

DBCC SHRINKFILE ('BookStoreArchive_Log', 2);

The syntax for DBCC SHRINKFILE is as follows:

DBCC SHRINKFILE (
{ ' file_name ' | file_id }
{ [ , EMPTYFILE]
| [ [ , target_size ] [ , { NOTRUNCATE | TRUNCATEONLY } ] ]
} ) [ WITH NO_INFOMSGS ]

Table 26-6 describes the arguments for this command.

Table 26-6. DBCC SHRINKFILE Arguments

Argument

Description

' file_name ' | file_id

This option defines the specific logical file name or file ID to shrink.

EMPTYFILE

This argument moves all data off the file so that it can be dropped using ALTER DATABASE and REMOVE FILE.

target_size

This option specifies the target size of the file, in megabytes, expressed as an integer. Leaving this blank instructs SQL Server to shrink the file to its default (creation) size.

NOTRUNCATE | TRUNCATEONLY

NOTRUNCATE relocates allocated pages from within the file to the front of the file but does not free the space to the operating system. Target size is ignored when used with NOTRUNCATE. TRUNCATEONLY causes unused space in the file to be released to the operating system but does so only with free space found at the end of the file. No pages are rearranged or relocated. Target size is also ignored with the TRUNCATEONLY option. Use this option if you must free up space on the database file with minimal impact on database performance (rearranging pages on an actively utilized production database can cause performance issues, such as slow query response time).

WITH NO_INFOMSGS

This argument prevents informational messages from being returned from the DBCC command.

This command shrinks the specified physical file. In this example, we specified a log file. Transaction log files are shrunk by removing inactive virtual log files. The transaction log for any database is managed as a set of virtual log files (VLFs). VLFs are created when the transaction log is created or undergoes expansion, and the quantity and size of the new VLFs are based upon the size of the growth of the transaction log file, with a minimum size of 256KB.

Within the transaction log is the “active” logical portion of the log. This is the area of the transaction log containing active transactions. This active portion does not usually match the physical bounds of the file, but will instead “round-robin” from VLF to VLF. Once a VLF no longer contains active transactions, it can be marked as reusable through a BACKUP LOG operation or automated system truncation, which makes the VLFs available for new log records.

It needs to be pointed out that when SQL Server talks about truncating the transaction log, the transaction log is not actually truncated; the process will mark zero or more VLFs as reusable. This is an example of misused verbiage in SQL Server documentation.

DBCC SHRINKFILE or DBCC SHRINKDATABASE will make their best effort to remove inactive VLFs from the end of the physical file. SQL Server will also attempt to add “dummy” rows to push the active logical log toward the beginning of the physical file—so sometimes issuing a BACKUP LOG after the first execution of the DBCC SHRINKFILE command and then issuing the DBCC SHRINKFILE command again will allow you to free up the originally requested space.
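
A sketch of that pattern follows. It assumes the database is using the full recovery model and that a full database backup has already been taken (otherwise the BACKUP LOG statement will fail); the backup path is hypothetical:

BACKUP LOG BookStoreArchive
TO DISK = 'C:\Apress\BookStoreArchive_log.trn';
GO
DBCC SHRINKFILE ('BookStoreArchive_log', 2);
GO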

Database transaction log files should be sized so that they will not need to grow during normal operations. This size needs to be able to encompass the regular data-modification activity on the database, as well as periodic maintenance (specifically index rebuilds).

Database data files, when autogrowth is enabled, can expand because of index rebuilds or data-modification activity. You may have extra space in the database because of those data modifications and index rebuilds. If you don’t need to free up the unused space, you should allow the database to keep it reserved. However, if you do need the unused space and want to free it up, use DBCC SHRINKDATABASE or DBCC SHRINKFILE. It is a best practice to manually grow your data files during scheduled maintenance periods, and they should be grown to a size such that they will not need to grow automatically until the next manual growth during the next scheduled maintenance period.

Caution  When either DBCC SHRINKDATABASE or DBCC SHRINKFILE is run against a data file, pages from the end of the file are moved to unallocated space, starting from the beginning of the file. This action causes the pages of the affected data structures to become fragmented. Additionally, due to the high amount of disk I/O activity, this is an expensive operation. For these reasons, shrinking activities should only be performed when absolutely necessary. If you find that the database/database files are being shrunk on a regular basis (or even through a job), then the size of the files needs to be re-evaluated and the files properly sized so that the regular shrinking can be stopped. Shrinking should only be performed on rare occasions, when unplanned or unanticipated activity has caused the files to expand to unacceptable levels such that the space must be reclaimed.

26-16. Checking the Consistency of Allocation Structures

Problem

You want to test a database’s disk-space-allocation structures for consistency.

Solution

Utilize DBCC CHECKALLOC to check page usage and allocation within the database:

DBCC CHECKALLOC ('BookStoreArchive');

This statement produces the following messages. (Since this actually produces more than 500 lines of output, this result set as shown is greatly abridged.)

DBCC results for 'BookStoreArchive'.
***************************************************************
Table sys.sysrscols                Object ID 3.
Index ID 1, partition ID 196608, alloc unit ID 196608 (type In-row data). FirstIAM (1:157). Root (1:158). Dpages 12.
Index ID 1, partition ID 196608, alloc unit ID 196608 (type In-row data). 14 pages used in 1 dedicated extents.
Total number of extents is 1.
***************************************************************
Table sys.sysrowsets                Object ID 5.
Index ID 1, partition ID 327680, alloc unit ID 327680 (type In-row data). FirstIAM (1:131). Root (1:270). Dpages 1.
Index ID 1, partition ID 327680, alloc unit ID 327680 (type In-row data). 4 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
...
File 3. The number of extents = 1, used pages = 6, and reserved pages = 8.
           File 3 (number of mixed extents = 0, mixed pages = 0).
    Object ID 99, index ID 0, partition ID 0, alloc unit ID 6488064 (type Unknown), index extents 1, pages 6, mixed extent pages 0.
The total number of extents = 36, used pages = 257, and reserved pages = 288 in this database.
       (number of mixed extents = 21, mixed pages = 168) in this database.
CHECKALLOC found 0 allocation errors and 0 consistency errors in database 'BookStoreArchive'.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

How It Works

DBCC CHECKALLOC checks page usage and allocation in the database and will report on any errors that are found (this command is automatically included in the execution of DBCC CHECKDB, so if you are already running CHECKDB periodically, there is no need to also run CHECKALLOC). The syntax is as follows:

DBCC CHECKALLOC (
[ 'database_name' | database_id | 0 ]
[ , NOINDEX
| { REPAIR_ALLOW_DATA_LOSS | REPAIR_FAST | REPAIR_REBUILD } ] )
[ WITH { [ ALL_ERRORMSGS ]
[ , NO_INFOMSGS ]
[ , TABLOCK ]
[ , ESTIMATEONLY ]
}
]

Table 26-7 describes the arguments of this command.

Table 26-7. DBCC CHECKALLOC Arguments

Argument

Description

'database_name' | database_id | 0

This defines the database name or database ID that you want to check for errors. When 0 is selected, the current database is used.

NOINDEX

When NOINDEX is used, nonclustered indexes are not included in the checks. This is a backward-compatible option that has no effect on DBCC CHECKALLOC.

REPAIR_ALLOW_DATA_LOSS | REPAIR_FAST | REPAIR_REBUILD

REPAIR_ALLOW_DATA_LOSS attempts a repair of the table or indexed view, with the risk of losing data in the process. REPAIR_FAST and REPAIR_REBUILD are maintained for backward compatibility only.

ALL_ERRORMSGS

When ALL_ERRORMSGS is chosen, every error found will be displayed. If this option isn’t designated, a maximum of 200 error messages can be displayed.

NO_INFOMSGS

NO_INFOMSGS suppresses all informational messages from the DBCC output.

TABLOCK

When TABLOCK is selected, an exclusive table lock is placed on the table instead of using an internal database snapshot, thus potentially decreasing query concurrency in the database.

ESTIMATEONLY

This provides the estimated space needed by the tempdb database to execute the command.

Caution  This DBCC command has several REPAIR options. Microsoft recommends that you solve data-integrity issues by restoring the database from the last good backup rather than resorting to a REPAIR option. If restoring from backup is not an option, the REPAIR option should be used only as a last resort. Depending on the REPAIR option selected, data loss can and will occur, and the problem may still not be resolved.

The output includes information about pages used and extents for each index. The key piece of information is in the next-to-last line, where you can see the reporting of the number of allocation and consistency errors encountered in the database being checked. If it reports anything other than 0 allocation errors and 0 consistency errors, then the errors need to be investigated and the corruption corrected.

When DBCC CHECKALLOC is executed, an internal database snapshot is created to maintain transactional consistency during the operation. If for some reason a database snapshot can’t be created or if TABLOCK is specified, an exclusive database lock is acquired during the execution of the command (thus potentially hurting database query concurrency). Unless you have a good reason not to, you should allow SQL Server to issue an internal database snapshot so that concurrency in your database is not impacted.
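
Because the default output runs to hundreds of informational lines, it is common to suppress them so that any allocation or consistency errors stand out; a minimal sketch:

DBCC CHECKALLOC ('BookStoreArchive') WITH NO_INFOMSGS;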

26-17. Checking Allocation and Structural Integrity

Problem

You want to check the integrity of all objects in a database.

Solution

Use DBCC CHECKDB to check the allocation and structural integrity of all objects in the database, as follows:

DBCC CHECKDB ('BookStoreArchive');

Executing this command produces the following messages (as in the previous recipe, this output can be quite large, so only abridged results are being displayed):

DBCC results for 'BookStoreArchive'.
Service Broker Msg 9675, State 1: Message Types analyzed: 14.
Service Broker Msg 9676, State 1: Service Contracts analyzed: 6.
...
DBCC results for 'sys.sysrscols'.
There are 883 rows in 12 pages for object "sys.sysrscols".
DBCC results for 'sys.sysrowsets'.
There are 127 rows in 2 pages for object "sys.sysrowsets".
...
DBCC results for 'Test'.
There are 0 rows in 0 pages for object "Test".
...
DBCC results for 'sys.queue_messages_1977058079'.
There are 0 rows in 0 pages for object "sys.queue_messages_1977058079".
DBCC results for 'sys.queue_messages_2009058193'.
There are 0 rows in 0 pages for object "sys.queue_messages_2009058193".
DBCC results for 'sys.queue_messages_2041058307'.
There are 0 rows in 0 pages for object "sys.queue_messages_2041058307".
DBCC results for 'sys.filestream_tombstone_2073058421'.
There are 0 rows in 0 pages for object "sys.filestream_tombstone_2073058421".
DBCC results for 'sys.syscommittab'.
There are 0 rows in 0 pages for object "sys.syscommittab".
DBCC results for 'sys.filetable_updates_2105058535'.
There are 0 rows in 0 pages for object "sys.filetable_updates_2105058535".
CHECKDB found 0 allocation errors and 0 consistency errors in database 'BookStoreArchive'.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

How It Works

The DBCC CHECKDB command checks the integrity of objects in a database. Running DBCC CHECKDB periodically against your databases is a good maintenance practice. Weekly execution is usually sufficient; however, the optimal frequency depends on the activity and size of the database in question. If possible, DBCC CHECKDB should be executed during periods of light or no database activity. Executing DBCC CHECKDB in this manner will allow DBCC CHECKDB to finish faster and keep other processes from being slowed down by its overhead.
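
For a scheduled job, a typical form keeps the output manageable while still reporting every error found; a minimal sketch:

DBCC CHECKDB ('BookStoreArchive') WITH NO_INFOMSGS, ALL_ERRORMSGS;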

When executing DBCC CHECKDB, an internal database snapshot is created to maintain transactional consistency during the operation. If for some reason a database snapshot cannot be created (or the TABLOCK option is specified), shared table locks are held for table checks and exclusive database locks for allocation checks. (One of the reasons a snapshot cannot be created is the presence of read-only filegroups; if you set the FG3 filegroup to READ_ONLY in Recipe 26-13, set it back to READ_WRITE before running this example.)

As part of its execution, DBCC CHECKDB executes other DBCC commands that are discussed elsewhere in this chapter, including DBCC CHECKTABLE, DBCC CHECKALLOC, and DBCC CHECKCATALOG. In addition to this, CHECKDB verifies the integrity of the Service Broker data in the database, the contents of indexed views, and the link-level consistency between table metadata and FILESTREAM file-system directories and files.

The syntax for DBCC CHECKDB is as follows:

DBCC  CHECKDB
(
        'database_name'  |  database_id  |  0
        [  ,  NOINDEX
        |  {  REPAIR_ALLOW_DATA_LOSS
        |  REPAIR_FAST
        |  REPAIR_REBUILD
        }  ]
)
        [  WITH  {
        [  ALL_ERRORMSGS    ]
        [  ,  [EXTENDED_LOGICAL_CHECKS]  ]
        [  ,  [  NO_INFOMSGS  ]  ]
        [  ,  [  TABLOCK  ]    ]
        [  ,  [  ESTIMATEONLY  ]  ]
        [  ,  {  PHYSICAL_ONLY  |  DATA_PURITY  }  ]
        }
        ]

Table 26-8 describes the arguments of this command.

Table 26-8. DBCC CHECKDB Arguments

Argument

Description

'database_name' | database_id | 0

This defines the database name or database ID that you want to check for errors. When 0 is selected, the current database is used.

NOINDEX

Nonclustered indexes are not included in the integrity checks when this option is selected.

REPAIR_ALLOW_DATA_LOSS | REPAIR_FAST | REPAIR_REBUILD

REPAIR_ALLOW_DATA_LOSS attempts a repair of the table or indexed view, with the risk of losing data in the process. REPAIR_FAST is maintained for backward compatibility only, and REPAIR_REBUILD performs fixes without risk of data loss.

ALL_ERRORMSGS

When ALL_ERRORMSGS is chosen, every error found will be displayed (instead of just the default 200 error message limit). If you should happen to run CHECKDB and receive more than 200 error messages, you should rerun it with this option so that you can ascertain the full extent of errors in the database.

EXTENDED_LOGICAL_CHECKS

When EXTENDED_LOGICAL_CHECKS is chosen, it enables logical consistency checks on spatial and XML indexes, as well as on indexed views. This option can impact performance significantly and should be used sparingly.

NO_INFOMSGS

NO_INFOMSGS suppresses all informational messages from the DBCC output.

TABLOCK

When TABLOCK is selected, an exclusive database lock is used instead of an internal database snapshot. Using this option decreases concurrency with other queries being executed against objects in the database.

ESTIMATEONLY

This argument provides the estimated space needed by the tempdb database to execute the command.

PHYSICAL_ONLY | DATA_PURITY

The PHYSICAL_ONLY argument limits the integrity checks to physical issues only, skipping logical checks. DATA_PURITY is selected for use on upgraded databases (pre–SQL Server 2005 databases); it instructs DBCC CHECKDB to detect column values that do not conform to the data type (for example, if an integer value has a bigint-sized value stored in it). Once all bad values in the upgraded database are cleaned up, SQL Server maintains the column-value integrity moving forward.

Caution  This DBCC command has several REPAIR options. Microsoft recommends that you solve data-integrity issues by restoring the database from the last good backup rather than resorting to a REPAIR option. If restoring from backup is not an option, the REPAIR option should be used only as a last resort. Depending on the REPAIR option selected, data loss can and will occur, and the problem may still not be resolved.

Despite all of these syntax options, the most common form of executing this command is also the simplest. The example for this recipe executes DBCC CHECKDB against the BookStoreArchive database. For thorough integrity and data checking of your database, the default is often suitable:

DBCC CHECKDB('BookStoreArchive');

As with the previous recipe, it is the next-to-last line of output that is the most important, where CHECKDB reports on the number of allocation and consistency errors found.

DBCC CHECKDB performs its validation checks against disk-based tables only. If your database is using In-Memory OLTP, you will need to back up the database and test the restore to ensure that the memory-optimized table structures are not corrupted. If issues arise in a memory-optimized table, you will need to restore from the last good backup.

26-18. Checking the Integrity of Tables in a Filegroup

Problem

You want to perform CHECKDB on a database, but you want to limit it to running against a specific filegroup.

Solution

Utilize DBCC CHECKFILEGROUP to perform CHECKDB operations against a specific filegroup:

USE BookStoreArchive;
GO
DBCC CHECKFILEGROUP ('PRIMARY');
GO

This returns the following (abridged) results:

DBCC results for 'BookStoreArchive'.
DBCC results for 'sys.sysrscols'.
There are 883 rows in 12 pages for object "sys.sysrscols".
DBCC results for 'sys.sysrowsets'.
There are 127 rows in 2 pages for object "sys.sysrowsets".
...
DBCC results for 'sys.syscommittab'.
There are 0 rows in 0 pages for object "sys.syscommittab".
DBCC results for 'sys.filetable_updates_2105058535'.
There are 0 rows in 0 pages for object "sys.filetable_updates_2105058535".
CHECKFILEGROUP found 0 allocation errors and 0 consistency errors in database 'BookStoreArchive'.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

How It Works

The DBCC CHECKFILEGROUP command is very similar to DBCC CHECKDB, but limits its integrity and allocation checking to objects within a single filegroup. For very large databases (VLDBs), performing a DBCC CHECKDB operation may be time prohibitive. If you use user-defined filegroups in your database, you can employ DBCC CHECKFILEGROUP to perform your weekly (or periodic) checks instead—spreading out filegroup checks across different days.
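
For example, a user-defined filegroup can be checked on its own schedule. This sketch assumes the FG3 filegroup created in Recipe 26-13 still exists in the BookStoreArchive database:

USE BookStoreArchive;
GO
DBCC CHECKFILEGROUP ('FG3') WITH NO_INFOMSGS;
GO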

When this command is executed, an internal database snapshot is created to maintain transactional consistency during the operation. If for some reason a database snapshot can’t be created (or the TABLOCK option is specified), shared table locks are created by the command for table checks, as well as an exclusive database lock for the allocation checks.

Again, if errors are found by DBCC CHECKFILEGROUP, Microsoft recommends that you solve any discovered issues by restoring from the last good database backup. Unlike other DBCC commands in this chapter, DBCC CHECKFILEGROUP doesn’t have repair options, so you would need to utilize DBCC CHECKDB to resolve them (although repair options are not recommended by Microsoft anyway).

The syntax is as follows:

DBCC CHECKFILEGROUP
(
    [ { 'filegroup' | filegroup_id | 0 } ]
    [ , NOINDEX ]
)
    [ WITH
        {
            [ ALL_ERRORMSGS | NO_INFOMSGS ]
            [ , [ TABLOCK ] ]
            [ , [ ESTIMATEONLY ] ]
        }
    ]

Table 26-9 describes the arguments of this command.

Table 26-9. DBCC CHECKFILEGROUP Arguments

Argument

Description

'filegroup' | filegroup_id | 0

This defines the filegroup name or filegroup ID that you want to check. If 0 is designated, the primary filegroup is used.

NOINDEX

When NOINDEX is designated, nonclustered indexes are not included in the integrity checks.

ALL_ERRORMSGS

When ALL_ERRORMSGS is chosen, all errors are displayed in the output, instead of the default 200 message limit.

NO_INFOMSGS

NO_INFOMSGS suppresses all informational messages from the DBCC output.

TABLOCK

When TABLOCK is selected, an exclusive database lock is used instead of using an internal database snapshot (using this option decreases concurrency with other database queries but speeds up the DBCC command execution).

ESTIMATEONLY

ESTIMATEONLY provides the estimated space needed by the tempdb database to execute the command.

As with the previous recipes, it is the next-to-last line of output that is the most important, where the number of allocation and consistency errors are reported.

26-19. Checking the Integrity of Specific Tables and Indexed Views

Problem

You want to check for integrity issues within a specific table or indexed view.

Solution #1

Utilize DBCC CHECKTABLE to check a specific table or indexed view for integrity issues. (This solution utilizes the AdventureWorks2014 database.) See the following:

DBCC CHECKTABLE ('Production.Product');

Executing this command produces the following messages:

DBCC results for 'Production.Product'.
There are 504 rows in 13 pages for object "Production.Product".
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

Solution #2

Utilize DBCC CHECKTABLE with the optional WITH ESTIMATEONLY clause to obtain an estimate of the space required in the tempdb database for checking the specified table.

DBCC CHECKTABLE ('Sales.SalesOrderDetail') WITH ESTIMATEONLY;

Executing this command produces the following messages:

Estimated TEMPDB space (in KB) needed for CHECKTABLE on database AdventureWorks2014 = 1154.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

Solution #3

Utilize DBCC CHECKTABLE to check a specified index:

DECLARE @IndexID INTEGER;
SELECT  @IndexID = index_id
FROM    sys.indexes
WHERE   object_id = OBJECT_ID('Sales.SalesOrderDetail')
AND     name = 'IX_SalesOrderDetail_ProductID';

DBCC CHECKTABLE ('Sales.SalesOrderDetail', @IndexID) WITH PHYSICAL_ONLY;

Executing this command produces the following messages:

DBCC execution completed. If DBCC printed error messages, contact your system administrator.

How It Works

To identify issues in a specific table or indexed view, you can use the DBCC CHECKTABLE command. (If you want to run it for all tables and indexed views in the database, use DBCC CHECKDB instead, which performs DBCC CHECKTABLE for each table in your database.)
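To illustrate that relationship, the following is a minimal sketch (not part of the recipe) that runs DBCC CHECKTABLE against every user table in the current database, approximating the per-table portion of DBCC CHECKDB:

-- Sketch only: iterate over user tables and run DBCC CHECKTABLE on each.
-- Table names containing quotes or other special characters would need
-- additional handling.
DECLARE @TableName nvarchar(300),
        @Sql       nvarchar(1000);

DECLARE TableCursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT SCHEMA_NAME(schema_id) + N'.' + name
    FROM   sys.tables
    WHERE  is_ms_shipped = 0;

OPEN TableCursor;
FETCH NEXT FROM TableCursor INTO @TableName;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @Sql = N'DBCC CHECKTABLE (''' + @TableName + N''') WITH NO_INFOMSGS;';
    EXECUTE (@Sql);
    FETCH NEXT FROM TableCursor INTO @TableName;
END;
CLOSE TableCursor;
DEALLOCATE TableCursor;

For routine maintenance, DBCC CHECKDB remains the simpler and more complete choice; this sketch is only meant to show how the two commands relate.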

When DBCC CHECKTABLE is executed, an internal database snapshot is created to maintain transactional consistency during the operation. If for some reason a database snapshot can’t be created, a shared table lock is applied to the target table or indexed view instead (thus potentially hurting database query concurrency against the target objects). DBCC CHECKTABLE checks for errors regarding data page linkages, pointers, verification that rows in a partition are actually in the correct partition, and more.
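If you are willing to trade concurrency for a faster check, or a snapshot cannot be used, the TABLOCK option (described later in Table 26-10) can be supplied; a minimal example:

DBCC CHECKTABLE ('Production.Product') WITH TABLOCK;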

The syntax is as follows:

DBCC CHECKTABLE
(
    table_name | view_name
    [ , { NOINDEX | index_id }
     |, { REPAIR_ALLOW_DATA_LOSS | REPAIR_FAST | REPAIR_REBUILD }
    ]
)
    [ WITH
{ [ ALL_ERRORMSGS ]
          [ , EXTENDED_LOGICAL_CHECKS ]
          [ , NO_INFOMSGS ]
          [ , TABLOCK ]
          [ , ESTIMATEONLY ]
          [ , { PHYSICAL_ONLY | DATA_PURITY } ]
        }
    ]

Table 26-10 describes the arguments of this command.

Table 26-10. DBCC CHECKTABLE Arguments

Argument

Description

'table_name' | 'view_name'

This defines the table or indexed view you want to check.

NOINDEX

This keyword instructs the command to not check nonclustered indexes.

index_id

This specifies the specific ID of the index to be checked (if you are checking a specific index).

REPAIR_ALLOW_DATA_LOSS | REPAIR_FAST | REPAIR_REBUILD

REPAIR_ALLOW_DATA_LOSS attempts a repair of the table or indexed view, with the risk of losing data in the process. REPAIR_FAST is no longer used and is kept for backward compatibility only. REPAIR_REBUILD performs repairs and index rebuilds that carry no risk of data loss.

ALL_ERRORMSGS

When ALL_ERRORMSGS is chosen, every error found during the command execution will be displayed.

EXTENDED_LOGICAL_CHECKS

EXTENDED_LOGICAL_CHECKS enables logical consistency checks on spatial and XML indexes, as well as on indexed views. This option can impact performance significantly and should be used sparingly.

NO_INFOMSGS

NO_INFOMSGS suppresses all informational messages from the DBCC output.

TABLOCK

When TABLOCK is selected, a shared table lock is placed on the table instead of using an internal database snapshot. Using this option decreases concurrency with other database queries accessing the table or indexed view.

ESTIMATEONLY

ESTIMATEONLY provides the estimated space needed by the tempdb database to execute the command (but doesn’t actually execute the integrity checking).

PHYSICAL_ONLY

PHYSICAL_ONLY limits the integrity checks to physical issues only, skipping logical checks.

DATA_PURITY

This argument is used on upgraded databases (databases created prior to SQL Server 2005); it instructs DBCC CHECKTABLE to detect column values that are not valid for the column's data type (for example, an int column storing a value that is out of range for the int data type). Once all bad values in the upgraded database are cleaned up, SQL Server maintains column-value integrity moving forward.

Image Caution  This DBCC command has several REPAIR options. Microsoft recommends that you solve data-integrity issues by restoring the database from the last good backup rather than resorting to a REPAIR option. If restoring from backup is not an option, the REPAIR option should be used only as a last resort. Depending on the REPAIR option selected, data loss can and will occur, and the problem may still not be resolved.

In the first example, the integrity of the AdventureWorks2014.Production.Product table was examined for integrity issues.

In the second example, an estimate of tempdb space required for a check on the AdventureWorks2014.Sales.SalesOrderDetail table was returned. This allows you to know ahead of time if a specific CHECKTABLE operation requires more space than you have available.

The third example examined a specific index for physical errors only (not logical errors). To specify an index, you must pass in the index_id, so we first have to query the sys.indexes system view to obtain this value.
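The index_id can also be supplied as a literal. For example, because a clustered index (when one exists) always has an index_id of 1, the clustered index of the table could be checked directly, as in this minimal sketch:

DBCC CHECKTABLE ('Sales.SalesOrderDetail', 1) WITH PHYSICAL_ONLY;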

26-20. Checking Constraint Integrity

Problem

You want to check a specific table or constraint for any violations in CHECK or FOREIGN KEY constraints.

Solution

Utilize DBCC CHECKCONSTRAINTS to validate that CHECK or FOREIGN KEY constraints in a table are valid. (This solution utilizes the AdventureWorks2014 database.)

In this example, we are going to disable a check constraint, then enter data that violates this constraint. To view the existing constraint definition, the following query can be run:

SELECT  definition
FROM    sys.check_constraints
WHERE   name = 'CK_WorkOrder_EndDate';
GO

This returns the following:

definition
---------------------------------------------
([EndDate]>=[StartDate] OR [EndDate] IS NULL)
Next, disable the constraint, update a row so that it violates the constraint's rule, reenable the constraint, and then run DBCC CHECKCONSTRAINTS:

-- Disable the constraint
ALTER TABLE Production.WorkOrder NOCHECK CONSTRAINT CK_WorkOrder_EndDate;
GO
-- Set an EndDate to earlier than a StartDate to violate the constraint
UPDATE Production.WorkOrder
SET EndDate = '2001-01-01T00:00:00'
WHERE WorkOrderID = 1;
GO
-- Enable the constraint
ALTER TABLE Production.WorkOrder CHECK CONSTRAINT CK_WorkOrder_EndDate;
GO
DBCC CHECKCONSTRAINTS ('Production.WorkOrder');
GO

This code produces the following messages:

Table                     Constraint              Where
------------------------- ----------------------- ------------------------------------------
[Production].[WorkOrder]  [CK_WorkOrder_EndDate]  [StartDate] = '2005-07-04 00:00:00.000'
                                                  AND [EndDate] = '2001-01-01 00:00:00.000'

DBCC execution completed. If DBCC printed error messages, contact your system administrator.

How It Works

DBCC CHECKCONSTRAINTS alerts you to any CHECK or FOREIGN KEY constraint violations found in a specific table or constraint. This command allows you to return the violating data so that you can correct the constraint violation accordingly (although this command does not catch constraints that have been disabled using NOCHECK unless ALL_CONSTRAINTS is used). The syntax is as follows:

DBCC CHECKCONSTRAINTS
[ ( 'table_name' | table_id | 'constraint_name' | constraint_id ) ]
    [ WITH
        { ALL_CONSTRAINTS | ALL_ERRORMSGS }
        [ , NO_INFOMSGS ]
    ]

Table 26-11 describes the arguments of this command.

Table 26-11. DBCC CHECKCONSTRAINTS Arguments

Argument

Description

'table_name' | table_id | 'constraint_name' | constraint_id

This defines the table name, table ID, constraint name, or constraint ID that you want to validate. If a specific object isn’t designated, all the objects in the database will be evaluated.

ALL_CONSTRAINTS | ALL_ERRORMSGS

When ALL_CONSTRAINTS is selected, all constraints (enabled or disabled) are checked. When ALL_ERRORMSGS is selected, all rows that violate constraints are returned in the result set (instead of the default maximum of 200 rows).

NO_INFOMSGS

NO_INFOMSGS suppresses all informational messages from the DBCC output.
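For example, to evaluate every constraint in the current database, including any that are currently disabled, while suppressing informational messages, the options can be combined as follows (a minimal sketch):

DBCC CHECKCONSTRAINTS WITH ALL_CONSTRAINTS, NO_INFOMSGS;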

In this recipe, the check constraint named CK_WorkOrder_EndDate on the Production.WorkOrder table was disabled, using the ALTER TABLE...NOCHECK CONSTRAINT command:

ALTER TABLE Production.WorkOrder NOCHECK CONSTRAINT CK_WorkOrder_EndDate;

This disabled constraint restricted values in the EndDate column from being less than the date in the StartDate column. After disabling the constraint, a row was updated to violate this check constraint’s rule:

UPDATE Production.WorkOrder SET EndDate = '2001-01-01T00:00:00' WHERE WorkOrderID = 1;

The constraint was then reenabled:

ALTER TABLE Production.WorkOrder CHECK CONSTRAINT CK_WorkOrder_EndDate;

The DBCC CHECKCONSTRAINTS command was then executed against the table:

DBCC CHECKCONSTRAINTS ('Production.WorkOrder');

When the command was run, it returned the data that failed the validation. Now that we know that the table has invalid data, the data can be corrected and validated, as follows:

UPDATE Production.WorkOrder
SET EndDate = '2011-06-13T00:00:00.000'
WHERE WorkOrderID = 1;
GO

DBCC CHECKCONSTRAINTS ('Production.WorkOrder');
GO

This code returned the following message:

DBCC execution completed. If DBCC printed error messages, contact your system administrator.

DBCC CHECKCONSTRAINTS validates only enabled constraints; however, it does not enable a constraint or mark it as trusted. You can see that the constraint is not marked as trusted with the following query (the constraint was reenabled earlier so that DBCC CHECKCONSTRAINTS would be able to validate it):

SELECT  name,
        is_disabled,
        is_not_trusted
FROM    sys.check_constraints
WHERE   name = 'CK_WorkOrder_EndDate';

This returns the following result set:

name                  is_disabled is_not_trusted
--------------------- ----------- --------------
CK_WorkOrder_EndDate  0           1

Because the invalid data has been corrected, the constraint can now be marked as trusted by reenabling it with the WITH CHECK option, which validates the existing data:

ALTER TABLE Production.WorkOrder WITH CHECK CHECK CONSTRAINT CK_WorkOrder_EndDate;

This query will now return:

name                  is_disabled is_not_trusted
--------------------- ----------- --------------
CK_WorkOrder_EndDate  0           0

Image Note  Unlike several other database integrity DBCC commands, DBCC CHECKCONSTRAINTS is not run within DBCC CHECKDB, so you must execute it as a stand-alone process if you need to identify data constraint violations in the database.
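Because this check is not part of the regular DBCC CHECKDB run, it can be helpful to know which constraints are candidates for validation. The following query is a minimal sketch that lists CHECK and FOREIGN KEY constraints that are currently disabled or not trusted:

SELECT  OBJECT_SCHEMA_NAME(parent_object_id) AS SchemaName,
        OBJECT_NAME(parent_object_id)        AS TableName,
        name                                 AS ConstraintName,
        is_disabled,
        is_not_trusted
FROM    sys.check_constraints
WHERE   is_disabled = 1
        OR is_not_trusted = 1
UNION ALL
SELECT  OBJECT_SCHEMA_NAME(parent_object_id),
        OBJECT_NAME(parent_object_id),
        name,
        is_disabled,
        is_not_trusted
FROM    sys.foreign_keys
WHERE   is_disabled = 1
        OR is_not_trusted = 1;

Any constraint returned by this query could then be validated with DBCC CHECKCONSTRAINTS and, if the data is clean, marked as trusted with ALTER TABLE...WITH CHECK CHECK CONSTRAINT.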

26-21. Checking System Table Consistency

Problem

You want to check for consistency in and between system tables in your database.

Solution

Execute DBCC CHECKCATALOG against the database to verify consistency in and between system tables, as follows:

DBCC CHECKCATALOG ('BookStoreArchive');

Assuming no errors are found, the following message is returned:

DBCC execution completed. If DBCC printed error messages, contact your system administrator.

How It Works

DBCC CHECKCATALOG checks for consistency in and between system tables. The syntax is as follows:

DBCC CHECKCATALOG
[ ( 'database_name' | database_id | 0 ) ] [ WITH NO_INFOMSGS ]

Table 26-12 describes the arguments of this command.

Table 26-12. DBCC CHECKCATALOG Arguments

Argument

Description

'database_name' | database_id | 0

This defines the database name or database ID to be checked for errors. When 0 is selected, the current database is used.

NO_INFOMSGS

NO_INFOMSGS suppresses all informational messages from the DBCC output.

In this recipe, the system catalog data was checked in the BookStoreArchive database. If any errors were identified, they would be returned in the command output. DBCC CHECKCATALOG doesn’t have repair options, so if any errors are found, then a restore from the last good database backup may be your only repair option.
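For example, the command can also be pointed at the current database by passing 0 (or by omitting the argument entirely) and suppressing informational messages:

USE BookStoreArchive;
GO
DBCC CHECKCATALOG (0) WITH NO_INFOMSGS;
GO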

When DBCC CHECKCATALOG is executed, an internal database snapshot is created to maintain transactional consistency during the operation. If for some reason a database snapshot cannot be created, an exclusive database lock is acquired during the execution of the command (thus potentially hurting database query concurrency).

Image Note  CHECKCATALOG is executed automatically within a DBCC CHECKDB command, so a separate execution is not necessary, unless you want to investigate only system table consistency issues.
