Databases involve lots of I/O. When you have a lot of I/O, you inherently run the risk of corruption. Your primary defense against database corruption is to take regular backups of your database and to periodically test that these backups can be restored. You need to look out for database corruption, however, and SQL Server provides tools you can use to check the consistency of your database, as well as to resolve consistency issues if backups are not available. This chapter will look at the options you have for both checking and fixing consistency issues.
Consistency Errors
Consistency errors can occur in user databases or system databases, leaving tables, databases, or even entire instances in an inaccessible state. Consistency errors can occur for many reasons, including hardware failures and issues with the Database Engine. The following sections discuss the types of error that can occur, how to detect these errors, and what to do if your system databases become corrupt.
Understand Consistency Errors
Several types of database consistency error can occur; each causes a query to fail or a session to be disconnected, and a message to be written to the SQL Server error log. The most common errors are detailed in the following sections.
605 Error
A 605 error can point to one of two issues, depending on the error severity. If the severity is level 12, then it indicates a dirty read. A dirty read is a transactional anomaly that occurs when you are using the Read Uncommitted isolation level or the NOLOCK query hint. It occurs when a transaction reads a row that never existed in the database, due to another transaction being rolled back. Transactional anomalies will be discussed in more detail in Chapter 18. To resolve this issue, either rerun the query until it succeeds or rewrite the query to avoid the use of the Read Uncommitted isolation level or the NOLOCK query hint.
The 605 error may indicate a more serious issue, however, and often it indicates a hardware failure. If the severity level is 21, then the page may be damaged, or the wrong page may be being returned by the operating system. If this is the case, then you need to either restore from a backup or use DBCC CHECKDB to fix the issue. (DBCC CHECKDB is discussed later in this chapter.) You should also have the Windows administrators and storage team check for possible hardware or disk-level issues.
823 Error
An 823 error occurs when SQL Server attempts to perform an I/O operation and the Windows API that it uses to perform this action returns an error to the Database Engine. An 823 error is almost always associated with a hardware or driver issue.
If an 823 error occurs, then you should use DBCC CHECKDB to check the consistency of the rest of the database and any other databases that reside on the same volume. You should liaise with your storage team to resolve the issue with the storage. Your Windows administrator should also check the Windows event log for correlated error messages. Finally, you should either restore the database from a backup or use DBCC CHECKDB to “fix” the issue.
824 Error
If the call to the Windows API succeeds but there are logical consistency issues with the data returned, then an 824 error is generated. Just like an 823 error, an 824 error usually means that there is an issue with the storage subsystem. If an 824 error is generated, then you should follow the same course of action as you do when an 823 error is generated.
5180 Error
A 5180 error occurs when a file ID is discovered that is not valid. File IDs are stored in page pointers, as well as in system pages at the beginning of each file. This error is usually caused by a corrupt pointer within a page, but it can potentially also indicate an issue with the Database Engine. If you experience this error, you should restore from a backup or run DBCC CHECKDB to fix the error.
7105 Error
A 7105 error occurs when a row within a table references a large object (LOB) structure that does not exist. This can happen because of a dirty read in the same manner as a 605 severity 12 error, or it can happen as the result of a corrupt page. The corruption can either be in the data page that points to the LOB structure or in a page of the LOB structure itself.
If you encounter a 7105 error, then you should run DBCC CHECKDB to check for errors. If you don’t find any, then the error is likely the result of a dirty read. If you find errors, however, then either restore the database from a backup or use DBCC CHECKDB to fix the issue.
Detecting Consistency Errors
SQL Server provides mechanisms for verifying the integrity of pages as they are read from and written to disk. It also provides a log of corrupt pages that helps you identify the type of error that has occurred, how many times it has occurred, and the current status of the page that has become corrupt. These features are discussed in the following sections.
Page Verify Option
A database-level option called Page Verify determines how SQL Server checks for page corruption that the I/O subsystem causes when it is reading and writing pages to disk. It can be configured as CHECKSUM, which is the default option, TORN_PAGE_DETECTION, or NONE.
The recommended setting for Page Verify is CHECKSUM. When this option is selected, every time a page is written, a CHECKSUM value is calculated against the entire page and saved in the page header. A CHECKSUM value is a hash of the page contents; because the hashing function is deterministic, the same contents always produce the same value. The value is then recalculated when the page is read into the buffer cache and compared to the value stored in the header.
When TORN_PAGE_DETECTION is specified, whenever a page is written to disk, the first 2 bytes of every 512-byte sector of the page are written to the page’s header. When the page is subsequently read into memory, these values are checked to ensure that they are the same. The flaw here is obvious; it is perfectly possible for a page to be corrupt, and for this corruption not to be noticed, because it is not within the bytes that are checked. TORN_PAGE_DETECTION is a deprecated feature of SQL Server, which means that it will not be available in future versions. You should avoid using it. If Page Verify is set to NONE, then SQL Server performs no page verification whatsoever. This is not good practice.
Checking the Page Verify Option
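The current setting can be checked by querying the `sys.databases` catalog view. The following sketch lists any databases that are not using the recommended CHECKSUM option:

```sql
-- List databases whose Page Verify setting is not CHECKSUM
SELECT name,
       page_verify_option_desc
FROM sys.databases
WHERE page_verify_option_desc <> 'CHECKSUM';
```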
Changing the Page Verify option does not cause the CHECKSUM to be created against the data pages immediately. The CHECKSUM is only generated when the pages are written back to disk after being modified.
Reconfiguring All Databases to Use CHECKSUM
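One way to script this, assuming you want to target every database that is not already using CHECKSUM, is to concatenate the required ALTER DATABASE statements into a single dynamic SQL batch:

```sql
-- Build one dynamic SQL batch that switches every non-compliant
-- database to CHECKSUM, without resorting to a cursor
DECLARE @SQL NVARCHAR(MAX) = N'';

SELECT @SQL = @SQL + N'ALTER DATABASE ' + QUOTENAME(name)
            + N' SET PAGE_VERIFY CHECKSUM WITH NO_WAIT; '
FROM sys.databases
WHERE page_verify_option_desc <> 'CHECKSUM';

EXEC(@SQL);
```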
You can use this technique any time you require a script to perform an operation against multiple databases. The code is far more efficient than using a cursor and promotes good practice by allowing DBAs to lead by example. You are always telling your developers not to use the cursor, right?
Suspect Pages
suspect_pages Columns
| Column | Description |
| --- | --- |
| database_id | The ID of the database that contains the suspect page |
| file_id | The ID of the file that contains the suspect page |
| page_id | The ID of the page that is suspect |
| event_type | The nature of the event that caused the page to be flagged as suspect |
| error_count | An incremental counter that records the number of times that the event has occurred |
| last_update_date | The date and time that the row was last updated |
Event Types
| event_type | Description |
| --- | --- |
| 1 | 823 or 824 error |
| 2 | Bad checksum |
| 3 | Torn page |
| 4 | Restored |
| 5 | Repaired |
| 7 | Deallocated by DBCC CHECKDB |
After recording the suspect page in the suspect_pages table, SQL Server updates the row once you have fixed the issue, either by restoring the page from a backup or by using DBCC CHECKDB. It also increments the error count every time an error with the same event_type is encountered. You should monitor this table for new and updated entries, and you should also periodically delete rows that have an event_type of 4 or 5, to stop the table from becoming full.
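Since rows with an event_type of 4 or 5 represent issues that have already been resolved, a periodic cleanup might look like the following sketch:

```sql
-- Remove rows for pages that have already been restored (4) or
-- repaired (5), so the suspect_pages table does not fill up
DELETE FROM msdb.dbo.suspect_pages
WHERE event_type IN (4, 5);
```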
Page restores will be discussed in Chapter 12.
Corrupting a Page
DBCC WRITEPAGE is used here for educational purposes only. It is undocumented and extremely dangerous. It should never be used on a production system and should be used on any database only with extreme caution.
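With that warning firmly in mind, a corruption can be simulated in a throwaway test database with something like the sketch below. The database name, page ID, offset, and byte value are purely illustrative, and because the command is undocumented, the syntax may vary between versions:

```sql
-- !!! Educational use only — never run this against a database that matters !!!
-- The database must be in single-user mode for a direct write
ALTER DATABASE TestCorruption SET SINGLE_USER;

-- Overwrite 1 byte at offset 2000 of page 1:100 with the value 0x61.
-- The final argument (1) writes directly to disk, bypassing the checksum
-- recalculation, so the damage is detected the next time the page is read
DBCC WRITEPAGE ('TestCorruption', 1, 100, 2000, 1, 0x61, 1);

ALTER DATABASE TestCorruption SET MULTI_USER;
```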
Querying suspect_pages
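Once a corruption has been logged, the table can be interrogated with a query such as the following, which resolves the database ID to a name:

```sql
-- Return suspect pages alongside the databases they belong to
SELECT DB_NAME(database_id) AS database_name,
       file_id,
       page_id,
       event_type,
       error_count,
       last_update_date
FROM msdb.dbo.suspect_pages;
```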
We will fix the error later in this chapter, but that involves losing data that was stored on the page. If a backup is available, then a page restore is a better option than a repair in this scenario.
Consistency Issues for Memory-Optimized Tables
Corruption usually occurs during a physical I/O operation, so you can be forgiven for thinking that memory-optimized tables are immune to corruption, but this is a fallacy. As you may remember from Chapter 7, although memory-optimized tables reside in memory, a copy of the tables—and depending on your durability settings, a copy of your data—is kept in physical files. This is to ensure that the tables and data are still available after a restart of the instance. These files can be subject to corruption. It is also possible for data to become corrupt in memory, due to issues such as a faulty RAM chip.
Unfortunately, the repair options of DBCC CHECKDB are not supported against memory-optimized tables. However, when you take a backup of a database that contains a memory-optimized filegroup, a checksum validation is performed against the files within this filegroup. It is therefore imperative that you not only take regular backups, but also that you regularly check that they can be restored successfully. This is because your only option, in the event of a corrupted memory-optimized table, is to restore from the last known good backup.
System Database Corruption
If system databases become corrupt, your instance can be left in an inaccessible state. The following sections discuss how to respond to corruption in the Master database and the Resource database.
Corruption of the Master Database
If the Master database becomes corrupted, it is possible that your instance will be unable to start. If this is the case, then you need to rebuild the system databases and then restore the latest copies from backups. Chapter 12 discusses strategies for database backups in more detail, but this highlights why backing up your system databases is important. In the event that you need to rebuild your system databases, you will lose all instance-level information, such as Logins, SQL Server Agent jobs, Linked Servers, and so on, if you are not able to restore from a backup. Even knowledge of the user databases within the instance will be lost and you will need to reattach the databases.
System Database Rebuild Parameters
| Parameter | Description |
| --- | --- |
| /ACTION | Specifies RebuildDatabase as the action to perform. |
| /INSTANCENAME | Specifies the name of the instance that contains the corrupt system database. |
| /Q | Stands for quiet. Use this parameter to run setup without any user interaction. |
| /SQLCOLLATION | An optional parameter that you can use to specify a collation for the instance. If you omit it, the collation of the Windows OS is used. |
| /SAPWD | If your instance uses mixed-mode authentication, use this parameter to specify the password for the SA account. |
| /SQLSYSADMINACCOUNTS | Use this parameter to specify which accounts should be made sysadmins of the instance. |
Rebuilding System Databases
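Run from the root of the installation media, a rebuild might be invoked as follows. The instance name, sysadmin account, and password are placeholders; substitute your own values:

```
setup.exe /Q /ACTION=REBUILDDATABASE /INSTANCENAME=MSSQLSERVER
          /SQLSYSADMINACCOUNTS=MYDOMAIN\SQLAdmins /SAPWD=StrongPa$$w0rd1
```

Once the rebuild completes, restore your system database backups, if you have them, to recover instance-level objects.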
Reattaching a Database
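If system database backups were not available, each user database must be reattached from its existing files. A sketch, with illustrative database name and file paths:

```sql
-- Reattach a user database after a system database rebuild
-- (the database name and file paths are illustrative)
CREATE DATABASE Chapter10
ON (FILENAME = 'F:\Data\Chapter10.mdf'),
   (FILENAME = 'F:\Logs\Chapter10.ldf')
FOR ATTACH;
```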
Corruption of the Resource Database or Binaries
It is possible for the instance itself to become corrupt. This can include corrupt Registry keys or the Resource database becoming corrupt. If this happens, then find the repair utility that ships with the SQL Server installation media. To invoke this tool, select Repair from the Maintenance tab of the SQL Server Installation Center.
After you select the instance that needs to be repaired, the following page of the wizard runs an additional rules check to ensure that the required features can be repaired. Finally, on the Ready To Repair page, you see a summary of the actions that are to be performed. After choosing to repair, you see the repair progress report. Once the repair completes, a Summary page displays, which provides you with the status of each operation that was performed and also a link to a log file that you may wish to review if you need to perform troubleshooting.
Instance Repair Parameters
| Parameter | Description |
| --- | --- |
| /ACTION | Specifies Repair as the action to perform. |
| /INSTANCENAME | Specifies the name of the instance to be repaired. |
| /Q | Stands for quiet. Use this parameter to run setup without any user interaction. |
| /ENU | An optional parameter that you can use on a localized operating system to specify that the English version of SQL Server should be used. |
| /FEATURES | An optional parameter you can use to specify a list of components to repair. |
| /HIDECONSOLE | An optional parameter that causes the console window to be suppressed. |
Repairing an Instance
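The same repair can be invoked from the command line of the installation media; a minimal unattended invocation, with a placeholder instance name, might look like this:

```
setup.exe /Q /ACTION=Repair /INSTANCENAME=MSSQLSERVER
```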
DBCC CHECKDB
DBCC CHECKDB is a utility that can be used to both discover corruption and also fix the errors. When you run DBCC CHECKDB, by default it creates a database snapshot and runs the consistency checks against this snapshot. This provides a transactionally consistent point from which the checks can occur while at the same time reducing contention in the database. It can check multiple objects in parallel to improve performance, but this depends on the number of cores that are available and the MAXDOP setting of the instance.
Checking for Errors
DBCC CHECKDB Arguments
| Argument | Description |
| --- | --- |
| NOINDEX | Specifies that integrity checks should be performed on heap and clustered index structures, but not on nonclustered indexes. |
| EXTENDED_LOGICAL_CHECKS | Forces logical consistency checks of XML indexes, indexed views, and spatial indexes to be performed. |
| NO_INFOMSGS | Prevents informational messages from being returned in the results. This can reduce noise when you are searching for an issue, since only errors and warnings with a severity level greater than 10 are returned. |
| TABLOCK | DBCC CHECKDB creates a database snapshot and runs its consistency checks against this structure to avoid taking out locks in the database, which cause contention. Specifying this option changes that behavior so that instead of creating a snapshot, SQL Server takes out a temporary exclusive lock on the database, followed by exclusive locks on the structures that it is checking. In the event of high write load, this can reduce the time it takes to run DBCC CHECKDB, but at the expense of contention with other processes that may be running. It also causes the system table metadata validation and Service Broker validation to be skipped. |
| ESTIMATEONLY | When this argument is specified, no checks are performed; instead, the space required in TempDB to perform the checks is calculated, based on the other arguments specified. |
| PHYSICAL_ONLY | When this argument is used, DBCC CHECKDB is limited to performing allocation consistency checks on the database, consistency checks on system catalogs, and validation on each page of every table within the database. This option cannot be used in conjunction with DATA_PURITY. |
| DATA_PURITY | Specifies that column integrity checks are carried out, such as ensuring that values are within their data type boundaries. For use with databases that have been upgraded from SQL Server 2000 or below only. For any newer databases, or SQL Server 2000 databases that have already been scanned with DATA_PURITY, the checks happen by default. |
| ALL_ERRORMSGS | For backward compatibility only. Has no effect on SQL Server 2022 databases. |
DBCC CHECKDB is a very intensive process that can consume a large amount of CPU and I/O resources. Therefore, it is advisable to run it during a maintenance window to avoid causing performance issues for applications. The Database Engine automatically decides how many CPU cores to assign to DBCC CHECKDB, based on the instance-level setting for MAXDOP and the amount of throughput on the server when the process begins. If you expect load to increase during the window in which DBCC CHECKDB will be running, however, then you can throttle the process to a single core by turning on Trace Flag 2528. This flag should be used with caution, because it causes DBCC CHECKDB to take much longer to complete. If a snapshot is not generated, either because you have specified TABLOCK or because there was not enough space on disk to generate a snapshot, then it also causes each table to be locked for a much longer period.
Checking TempDB Space Required for DBCC CHECKDB
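Before scheduling the checks, the ESTIMATEONLY argument can be used to gauge the TempDB space they will need. The database name below is illustrative:

```sql
-- Calculate the TempDB space required, without performing any checks
DBCC CHECKDB ('Chapter10') WITH ESTIMATEONLY;
```

The output reports the estimated space needed for the planned checks, rather than any consistency results.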
Running DBCC CHECKDB
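A typical scheduled check, run against an illustrative database and suppressing informational messages so that only errors and warnings are returned, might look like this:

```sql
-- Nightly consistency check; NO_INFOMSGS keeps the output
-- limited to errors and warnings
DBCC CHECKDB ('Chapter10') WITH NO_INFOMSGS;
```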
SQL Server Agent jobs are discussed fully in Chapter 24.
Fixing Errors
When we use DBCC CHECKDB to repair a corruption in the database, we need to specify an additional argument that determines the repair level to use. The options available are REPAIR_REBUILD or REPAIR_ALLOW_DATA_LOSS. REPAIR_REBUILD is, of course, the preferred option, and it can be used to resolve issues that will not cause data loss, such as bad page pointers, or corruption inside a nonclustered index. REPAIR_ALLOW_DATA_LOSS attempts to fix all errors it encounters, but as its name suggests, this may involve data being lost.
You should use REPAIR_ALLOW_DATA_LOSS only if no backup is available from which to restore the data.
In our case, we are informed that we need to use the REPAIR_ALLOW_DATA_LOSS option. If we try to use the REPAIR_REBUILD option instead, we receive the following message from DBCC CHECKDB:
Repairing Corruption with DBCC CHECKDB
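The repair options require the database to be in single-user mode. A sketch of the full sequence, using an illustrative database name:

```sql
-- Repairs require single-user mode; ROLLBACK IMMEDIATE
-- disconnects any existing sessions
ALTER DATABASE Chapter10 SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

-- Attempt the repair, accepting that data on corrupt pages may be lost
DBCC CHECKDB ('Chapter10', REPAIR_ALLOW_DATA_LOSS);

-- Return the database to normal use
ALTER DATABASE Chapter10 SET MULTI_USER;
```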
Emergency Mode
If your database files are damaged to the extent that your database is inaccessible and unrecoverable, even by using the REPAIR_ALLOW_DATA_LOSS option, and you do not have usable backups, then your last resort is to run DBCC CHECKDB in emergency mode using the REPAIR_ALLOW_DATA_LOSS option. Remember, emergency mode is a last resort option for repairing your databases, and if you cannot access them through this mode, you will not be able to access them via any other means. When you perform this action with the database in emergency mode, DBCC CHECKDB treats pages that are inaccessible due to corruption as if they do not have errors in an attempt to recover data.
This operation can also back up databases that are inaccessible due to log corruption. This is because it attempts to force the transaction log to recover, even if it encounters errors. If this fails, it rebuilds the transaction log. Of course, this may lead to transaction inconsistencies, but as mentioned, this is an option of last resort.
Finding the Transaction Log Path
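The physical location of a database's transaction log can be found by querying `sys.master_files`, as in this sketch (the database name is illustrative):

```sql
-- Locate the physical path of a database's transaction log file
SELECT name,
       physical_name
FROM sys.master_files
WHERE database_id = DB_ID('Chapter10')
  AND type_desc = 'LOG';
```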
DBCC CHECKDB in Emergency Mode
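A sketch of the sequence, again using an illustrative database name, assuming the database is currently inaccessible:

```sql
-- Last resort: place the database in emergency mode and attempt a repair
ALTER DATABASE Chapter10 SET EMERGENCY;
ALTER DATABASE Chapter10 SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

-- In emergency mode, REPAIR_ALLOW_DATA_LOSS treats unreadable pages
-- as error free and rebuilds the transaction log if it cannot recover it
DBCC CHECKDB ('Chapter10', REPAIR_ALLOW_DATA_LOSS);

-- If the repair succeeds, bring the database back online
ALTER DATABASE Chapter10 SET MULTI_USER;
ALTER DATABASE Chapter10 SET ONLINE;
```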
If running DBCC CHECKDB in emergency mode fails, then there is no other way that the database can be repaired.
Other DBCC Commands for Corruption
A number of other DBCC commands perform a subset of the work carried out by DBCC CHECKDB. These are discussed in the following sections.
DBCC CHECKCATALOG
In SQL Server, the system catalog is a collection of metadata that describes the database and the data held within it. When DBCC CHECKCATALOG is run, it performs consistency checks on this catalog. The command is run as part of DBCC CHECKDB but can also be run as a command in its own right. When run in its own right, it accepts the same arguments as DBCC CHECKDB, with the exception of PHYSICAL_ONLY and DATA_PURITY, which are not available for this command.
DBCC CHECKALLOC
DBCC CHECKALLOC performs consistency checks against the disk allocation structures within a database. It is run as part of DBCC CHECKDB but can also be run as a command in its own right. When run in its own right, it accepts many of the same arguments as DBCC CHECKDB, with the exception of PHYSICAL_ONLY, DATA_PURITY, and REPAIR_REBUILD, which are not available for this command. The output is by table, index, and partition.
DBCC CHECKTABLE
DBCC CHECKTABLE is run against every table and indexed view in a database, as part of DBCC CHECKDB. However, it can also be run as a separate command in its own right against a specific table and the indexes of that table. It performs consistency checks against that specific table, and if any indexed views reference the table, it also performs cross table consistency checks. It accepts the same arguments as DBCC CHECKDB, but with it, you also need to specify the name or ID of the table that you want to check.
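Checking a single table, by name, might look like this (the table name is illustrative):

```sql
-- Check the consistency of one table and its indexes
DBCC CHECKTABLE ('dbo.Orders') WITH NO_INFOMSGS;
```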
I have witnessed people split their tables into two buckets and replace DBCC CHECKDB with a run of DBCC CHECKTABLE against half of their tables on alternate nights. This not only leaves gaps in what is being checked, but also means that a new database snapshot is generated for every table that is checked, as opposed to one snapshot being generated for all of the checks. This can lead to longer overall run times.
DBCC CHECKFILEGROUP
DBCC CHECKFILEGROUP performs consistency checks on the system catalog, the allocation structures, tables, and indexed views within a specified filegroup. There are some limitations to this, however, when a table has indexes that are stored on a different filegroup. In this scenario, the indexes are not checked for consistency. This still applies if it is the indexes that are stored on the filegroup that you are checking, but the corresponding base table is on a different filegroup.
If you have a partitioned table that is stored on multiple filegroups, DBCC CHECKFILEGROUP only checks the consistency of the partition(s) stored on the filegroup being checked. The arguments for DBCC CHECKFILEGROUP are the same as those for DBCC CHECKDB, with the exception of DATA_PURITY, which is not valid; you also cannot specify any repair options. In addition, you need to specify the filegroup name or ID.
DBCC CHECKIDENT
DBCC CHECKIDENT Arguments
| Argument | Description |
| --- | --- |
| Table name | The name of the table to be checked. |
| NORESEED | Returns the maximum value of the IDENTITY column and the current identity value, but does not reseed the column, even if required. |
| RESEED | Reseeds the current identity value to the maximum IDENTITY value in the table. |
| New reseed value | Used with RESEED; specifies a new seed for the identity value. This should be used with caution, since setting the identity value lower than the maximum value in the table can cause errors to be generated if there is a primary key or unique constraint on the IDENTITY column. |
| WITH NO_INFOMSGS | Causes informational messages to be suppressed. |
DBCC CHECKIDENT
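Both forms can be illustrated against a hypothetical table with an IDENTITY column:

```sql
-- Report the current identity value and the maximum value stored
-- in the column, without changing anything
DBCC CHECKIDENT ('dbo.Orders', NORESEED);

-- Reseed the current identity value to match the maximum value in the table
DBCC CHECKIDENT ('dbo.Orders', RESEED);
```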
DBCC CHECKCONSTRAINTS
DBCC CHECKCONSTRAINTS Arguments
| Argument | Description |
| --- | --- |
| Table or constraint | Specifies either the name or ID of the constraint you wish to check, or specifies the name or ID of a table to check all enabled constraints on that table. Omitting this argument causes all enabled constraints on all tables within the database to be checked. |
| ALL_CONSTRAINTS | If DBCC CHECKCONSTRAINTS is being run against an entire table or entire database, then this option forces disabled constraints to be checked as well as enabled ones. |
| ALL_ERRORMSGS | By default, if DBCC CHECKCONSTRAINTS finds rows that violate a constraint, it returns the first 200 of these rows. Specifying ALL_ERRORMSGS causes all rows violating the constraint to be returned, even if this number exceeds 200. |
| NO_INFOMSGS | Causes informational messages to be suppressed. |
DBCC CHECKCONSTRAINTS
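A thorough check of every constraint in the current database, including disabled ones, might look like this:

```sql
-- Check all constraints, including disabled ones,
-- and suppress informational messages
DBCC CHECKCONSTRAINTS WITH ALL_CONSTRAINTS, NO_INFOMSGS;
```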
After running DBCC CHECKDB, or other DBCC commands to repair corruption, it is good practice to run DBCC CHECKCONSTRAINTS. This is because the repair options of the DBCC commands do not take constraint integrity into account.
Marking a Constraint as Trusted
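Once the data has been verified, or cleaned up, a constraint can be re-enabled and revalidated so that the optimizer trusts it again. The table and constraint names below are illustrative; note the deliberate double CHECK, which both validates existing rows and enables the constraint:

```sql
-- WITH CHECK validates existing rows; CHECK CONSTRAINT re-enables it.
-- Together they mark the constraint as trusted again
ALTER TABLE dbo.Orders
    WITH CHECK CHECK CONSTRAINT FK_Orders_Customers;
```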
Consistency Checks on VLDBs
If you have VLDBs (very large databases), then it may be difficult to find a maintenance window long enough to run DBCC CHECKDB, and running it while users or ETL processes are connected is likely to cause performance issues. You may also encounter a similar issue if you have a large estate of smaller databases that use common infrastructure, such as a SAN or a private cloud. Ensuring that your databases are consistent, however, should be near the top of your agenda, so you should try to find a strategy that achieves both your maintenance and performance goals. The following sections discuss strategies that you may choose to adopt to achieve this balance.
DBCC CHECKDB with PHYSICAL_ONLY
One strategy that you can adopt is to run DBCC CHECKDB regularly, ideally nightly, using the PHYSICAL_ONLY option, and then run a complete check on a periodic but less frequent basis, ideally weekly. When you run DBCC CHECKDB with the PHYSICAL_ONLY option, consistency checks are carried out on the system catalogs and allocation structures, and each page of every table is scanned and validated. The net result of this is that corruption caused by I/O errors is trapped, but other issues, such as logical consistency errors, are not identified. This is why it is important to still run a full scan weekly.
Backing Up WITH CHECKSUM and DBCC CHECKALLOC
If you are in a position where all of your databases receive a full backup every night and all are configured with a Page Verify option of CHECKSUM, then an alternative approach to the one mentioned in the previous section is to add the WITH CHECKSUM option to your full backups, followed by a DBCC CHECKALLOC, in place of the nightly DBCC CHECKDB with PHYSICAL_ONLY. The DBCC CHECKALLOC command, which is actually a subset of DBCC CHECKDB, validates the allocation structures within the database. When the full backups are taken WITH CHECKSUM, this fulfills the requirement to scan and verify each page of every table for I/O errors. Just like running DBCC CHECKDB with PHYSICAL_ONLY, this identifies any corruption caused by I/O operations and flags any bad checksums. Any page errors that occurred in memory, however, are not identified. This means that, like the PHYSICAL_ONLY strategy, you still require a full run of DBCC CHECKDB once a week to trap logical consistency errors or corruption that occurred in memory. This option is very useful if you have an environment with common infrastructure and you are already performing full nightly backups of all databases, since you reduce the overall amount of nightly I/O. This comes at the expense of a longer backup window, however, and it increases the resources used during that time.
Splitting the Workload
Another strategy for VLDBs may be to split the load of DBCC CHECKDB over multiple nights. For example, if your VLDB has multiple filegroups, then you could run DBCC CHECKFILEGROUP against half of the filegroups on Monday, Wednesday, and Friday, and against the other half of the filegroups on Tuesday, Thursday, and Saturday. You could reserve Sunday for a full run of DBCC CHECKDB. A full run of DBCC CHECKDB is still advised on a weekly basis, since DBCC CHECKFILEGROUP does not perform checks, such as validating Service Broker objects.
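An alternate-night job for such a schedule might look like the following sketch; the database and filegroup names are illustrative, and the second job would name the remaining filegroups:

```sql
-- Monday/Wednesday/Friday job: check the first half of the filegroups
USE Chapter10;

DBCC CHECKFILEGROUP ('PRIMARY') WITH NO_INFOMSGS;
DBCC CHECKFILEGROUP ('FileGroupA') WITH NO_INFOMSGS;
```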
If your issue is common infrastructure, as opposed to VLDBs, then you can adapt this concept so that you run DBCC CHECKDB against a subset of databases on alternate nights. This can be a little more complex, since to avoid swamping the SAN, the databases should be segregated intelligently, based on size, as opposed to a random 50/50 split. This often involves centralizing the maintenance routines on a central server, known as a Central Management Server (CMS). A CMS is a SQL Server instance that sits in the middle of your SQL Server estate and provides centralized monitoring and maintenance. For intelligent maintenance such as this, the CMS will often control maintenance routines by scheduling PowerShell scripts, which interact with metadata stored on the CMS to decide which maintenance jobs to run.
Offloading to a Secondary Server
The final strategy for reducing the load on production systems caused by DBCC CHECKDB is to offload the work to a secondary server. If you decide to take this approach, it involves taking a full backup of the VLDB and then restoring it on a secondary server before you run DBCC CHECKDB on the secondary server. This approach has several disadvantages, however. First, and most obviously, it means that you have the expense of procuring and maintaining a secondary server, just for the purpose of running consistency checks. This makes it the most expensive of the options discussed (unless of course, you reuse an existing server, such as a UAT server). Also, if you find corruption, you will not know if the corruption was generated on the production system and copied over in the backup or if the corruption was actually generated on the secondary server. This means that if errors are found, you still have to run DBCC CHECKDB on the production server.
Summary
Many types of corruption can occur in SQL Server. These include pages that have been damaged at the file system level, logical consistency errors, and corrupt pages that have a bad checksum. Pages can also be damaged in memory, which would not be identified through a checksum.
Three page verification options can be selected for a database. The NONE option leaves you totally exposed to issues and is regarded as bad practice. The TORN_PAGE_DETECTION option is deprecated and should not be used, since it only checks the first 2 bytes in every 512-byte sector. The final option is CHECKSUM. This is the default option and should always be selected.
Details of damaged pages are stored in a table called dbo.suspect_pages in the MSDB database. Here, the error count is increased every time the error is encountered, and the event type of the page is updated to indicate that it has been repaired or restored, as appropriate.
If a system database, especially Master, becomes corrupt, you may be unable to start your instance. If this is the case, then you can rectify the issue by running setup with the ACTION parameter set to RebuildDatabase. Alternatively, if the instance itself has become corrupt, then you can run setup with the ACTION parameter set to Repair. This resolves issues such as corrupt Registry keys or corruption to the Resource database.
DBCC CHECKDB is a command that you should run on a regular basis to check for corruption. If you find corruption, then you can also use this command to fix the issue. There are two repair modes that are available, depending on the nature of the corruption: REPAIR_REBUILD or REPAIR_ALLOW_DATA_LOSS. You should only use the REPAIR_ALLOW_DATA_LOSS option as a last resort in the event that no backup is available from which to restore the database or corrupt pages. This is because the REPAIR_ALLOW_DATA_LOSS option is liable to deallocate the corrupt pages, causing all data on these pages to be lost.
Other DBCC commands can be used to perform a subset of DBCC CHECKDB functionality. These include DBCC CHECKTABLE, which can validate the integrity of a specific table, and DBCC CHECKCONSTRAINTS, which you can use to verify the integrity of foreign keys and check constraints, especially after you run DBCC CHECKDB with a repair option.
For VLDBs or estates that share infrastructure, running DBCC CHECKDB can be an issue because of performance impact and resource utilization. You can mitigate this by adopting a strategy that offers a trade-off between maintenance and performance goals. These strategies include splitting the workload, offloading the workload to a secondary server, or running only subsets of the checking functionality on a nightly basis and then performing a full check on a weekly basis.