© The Author(s), under exclusive license to APress Media, LLC, part of Springer Nature 2023
P. A. Carter, Pro SQL Server 2022 Administration, https://doi.org/10.1007/978-1-4842-8864-1_11

11. Auditing and Ledger

Peter A. Carter, Southampton, UK

Passive security refers to the practice of logging user activity in order to guard against the threat of repudiation. This is important because, if an attack is launched by a privileged user, it allows appropriate disciplinary or even legal action to be taken. SQL Server provides SQL Server Audit to assist with implementing passive security. SQL Server 2022 also introduces Ledger, which uses blockchain technology to make data tamper-evident, allowing for streamlined auditing processes. In this chapter, we will discuss the new ledger functionality, how to implement it, and how to use it to discover who has changed data. We will also explore the concepts involved in auditing before demonstrating how to implement SQL Server Audit, including the creation of custom audit events.

Ledger

Any DBA who has been involved in a regulatory audit will know how troublesome and laborious proving the authenticity of your data can be. SQL Server 2022 addresses this issue by introducing blockchain technology, which can not only provide a change history for your data but also prove that your data has not been tampered with, providing nonrepudiation in the event that tampering has occurred, even by accounts with administrative access. Because ledger functionality is built on blockchain technology, it can also be used to fulfill use cases such as secure supply chain processes between companies.

SQL Server 2022 introduces two types of ledger table to provide this functionality: append-only ledger tables and updateable ledger tables. These table types, along with ledger databases, digest management, and ledger management, will be discussed in the following sections.

Append-Only Ledger Tables

Append-only ledger tables do not allow any UPDATE or DELETE statements to be issued against them. This makes them perfect for tables that store data such as the transactional history of a credit account.

While the table will probably be created by a developer, it is important for a DBA involved with audit requirements to understand the process. Therefore, Listing 11-1 first creates the Chapter11 database and then creates an append-only table, within the database, called dbo.AccountTransactions. The script uses the new GENERATED ALWAYS AS syntax in the column definitions to create two system columns, which will hold the ID of the transaction that inserted the row and the sequence number of that insert within the transaction. If you omit these column definitions for an append-only ledger table, then SQL Server will create them automatically, using the names ledger_start_transaction_id and ledger_start_sequence_number, respectively. The script also uses the WITH clause to specify that the table will be an append-only ledger table. Lastly, the script inserts some data into the table.
CREATE DATABASE Chapter11
GO
USE Chapter11
GO
CREATE TABLE dbo.AccountTransactions (
    AccountTransactionID INT              NOT NULL    IDENTITY    PRIMARY KEY,
    CustomerID           INT              NOT NULL,
    TxDescription        NVARCHAR(256)    NOT NULL,
    TxAmount             DECIMAL(6,2)     NOT NULL,
    NewBalance           DECIMAL(6,2)     NOT NULL,
    account_tx_transaction_id   BIGINT GENERATED ALWAYS AS TRANSACTION_ID START HIDDEN NOT NULL,
    account_tx_sequence_number  BIGINT GENERATED ALWAYS AS SEQUENCE_NUMBER START HIDDEN NOT NULL
) WITH (LEDGER = ON (APPEND_ONLY = ON)) ;
INSERT INTO dbo.AccountTransactions (CustomerID, TxDescription, TxAmount, NewBalance)
VALUES (5, 'Card Payment', 5.00, 995.00),
       (5, 'Card Payment', 250.00, 745.00),
       (5, 'Repayment', 255.00, 1000.00),
       (5, 'Card Payment', 20.00, 980.00) ;
Listing 11-1

Create an Append-Only Ledger Table

So let’s see what happens if we try to tamper with the data in this table and update a row. This is demonstrated in Listing 11-2.
UPDATE dbo.AccountTransactions
SET TxAmount = 5000.00
WHERE TxDescription = 'Repayment' ;
Listing 11-2

Attempt to Modify Data in Table

The result of running this query is displayed in Figure 11-1.
Figure 11-1

Result of trying to modify data in append-only table

If we wanted to check who had inserted a row into the table, or when a row was inserted, we could make use of the system-populated columns that we created in the table and join them to a catalog view called sys.database_ledger_transactions, which records information about all DML (data manipulation language) statements that have been run against ledger tables. The script in Listing 11-3 demonstrates how to return the time that each row was inserted and the account that inserted it.
SELECT
        dlt.commit_time
      , dlt.principal_name
      , atx.*
      , atx.account_tx_sequence_number
FROM dbo.AccountTransactions atx
INNER JOIN sys.database_ledger_transactions dlt
      ON atx.account_tx_transaction_id = dlt.transaction_id
Listing 11-3

Discover Who Inserted Data

The results of this query are shown in Figure 11-2. You can see that it is a simple task to identify who inserted each row and when, giving a forensic capability.
Figure 11-2

Results of row insertion discovery

Once a table has been configured as append-only, the feature cannot be turned off. This prevents a highly privileged user from turning the feature off, updating or deleting rows, and then turning the feature back on.

Ledger Database

As previously mentioned, it will usually be the responsibility of a developer to create tables, and there is a risk of human error: a developer may forget to configure a table as a ledger table. For databases in which all tables should be ledger tables, this risk can be mitigated with the use of a ledger database.

When a ledger database is used, all tables within the database automatically become ledger tables, even if LEDGER = ON is not specified when creating the table. The default behavior is for tables to be created as updateable ledger tables, but developers can use the WITH(LEDGER = ON (APPEND_ONLY = ON)) syntax to specify that a table should be append-only. Developers can also specify the LEDGER = ON syntax explicitly to control the names of the ledger columns, the ledger view, or the history table. If you attempt to create a table in a ledger database with the LEDGER = OFF syntax, the table creation will fail.

The script in Listing 11-4 creates a ledger database called Chapter11Ledger. It then creates two tables. The GoodsIn table is automatically created as an updateable ledger table, while the AccountTransactions table is created as an append-only table.
CREATE DATABASE Chapter11Ledger WITH LEDGER = ON ;
GO
USE Chapter11Ledger
GO
CREATE TABLE dbo.GoodsIn (
     ID               INT                NOT NULL   IDENTITY   PRIMARY KEY,
     StockID          INT                NOT NULL,
     QtyOrdered       INT                NOT NULL,
     QtyReceived      INT                NOT NULL,
     ReceivedBy       NVARCHAR(128)      NOT NULL,
     ReceivedDate     DATETIME2          NOT NULL,
     Damaged          BIT                NOT NULL
) ; --Will automatically be created as an updateable ledger table
CREATE TABLE dbo.AccountTransactions (
    AccountTransactionID  INT             NOT NULL   IDENTITY  PRIMARY KEY,
    CustomerID            INT             NOT NULL,
    TxDescription         NVARCHAR(256)   NOT NULL,
    TxAmount              DECIMAL(6,2)    NOT NULL,
    NewBalance            DECIMAL(6,2)    NOT NULL
) WITH(LEDGER = ON (APPEND_ONLY = ON)) ; --Will be created as an append-only ledger table
Listing 11-4

Create a Ledger Database
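To confirm the behavior described above, we can attempt to opt a table out of ledger functionality within the ledger database. The following sketch uses a purely illustrative table name; the statement fails with an error, because LEDGER = OFF is not permitted in a ledger database.

```sql
USE Chapter11Ledger
GO
--Fails: tables in a ledger database cannot be created with LEDGER = OFF
CREATE TABLE dbo.NonLedgerAttempt (
    ID INT NOT NULL PRIMARY KEY
) WITH (LEDGER = OFF) ;
```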

Figure 11-3 shows how ledger tables are displayed within SQL Server Management Studio’s Object Explorer. You will notice that SSMS v19 adds folders for dropped ledger columns and dropped ledger tables. You will also notice that the updateable ledger table provides a drill-through to the history table.
Figure 11-3

Ledger tables in SSMS

The ledger status of databases and tables can also be viewed through the sys.databases and sys.tables catalog views. The script in Listing 11-5 demonstrates how to return the ledger status at the database level for the Chapter11 and Chapter11Ledger databases. It then demonstrates how to return the ledger table status for each table across the two databases.
SELECT
        name
      , is_ledger_on
FROM sys.databases
WHERE name LIKE 'Chapter11%'
SELECT
        'Chapter11' AS DatabaseName
      , name
      , ledger_type_desc
      , ledger_view_id
      , is_dropped_ledger_table
FROM Chapter11.sys.tables
UNION ALL
SELECT
        'Chapter11Ledger'
      , name
      , ledger_type_desc
      , ledger_view_id
      , is_dropped_ledger_table
FROM Chapter11Ledger.sys.tables
Listing 11-5

Return Ledger Status Information

Database Ledger and Digest Management

Ledger works by using blockchain technology to incrementally capture the state of a database over time. When DML statements are executed against ledger tables, the query engine extends the execution plan to record the transaction ID and the ordinal sequence number, within the transaction, of each INSERT and/or DELETE operation. The new version of the row is then serialized, and a hash of the serialized content is appended, at the transaction level, to a Merkle tree. A separate Merkle tree is created for each table updated within the transaction and, in addition to the row versions, metadata is also stored about the updated columns (such as data type, ordinal column position, etc.) and the transaction (such as transaction ID, timestamp, etc.).

Tip

A deep discussion of Merkle Trees is beyond the scope of this book, but further detail can be found at https://en.wikipedia.org/wiki/Merkle_tree.

These Merkle Trees are added to a tamper-evident data structure called a block. A block is a hashed representation of all hashed transactions that have occurred since the block was opened. The most recent block is known as the database digest.

A block closes when any one of the following conditions is met:
  • A database digest is automatically generated.

  • A database digest is manually generated.

  • The block contains 100,000 transactions.

When a block is closed, the following operations are performed on the closed block:
  • All transactions are gathered, from both the sys.database_ledger_transactions catalog view (which was discussed in the previous section) and the in-memory queue of transactions that have not yet been flushed to the catalog view.

  • The Merkle tree of the transactions within the block is computed, along with the hash of its content.

  • The block is persisted to the sys.database_ledger_blocks catalog view.

It is very important to note that the trustworthiness of the database depends on the trustworthiness of the digests. Therefore, generated digests should always be stored on trusted WORM (write once read many) storage devices.

To assist with this, SQL Server 2022 provides integration with Azure Confidential Ledger and Azure Blob Storage with immutability. Automatic digest generation can only be configured if this Azure integration is used. When digests are generated automatically with Azure integration, a digest is generated every 30 seconds, provided that there has been at least one transaction impacting ledger tables within that time. To configure this, you will need to create a CREDENTIAL that uses the URL of the Blob storage container as its name, 'SHARED ACCESS SIGNATURE' as its identity, and the shared access signature token as its secret.
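A minimal sketch of creating such a credential is shown below. The storage account name, container name, and SAS token are placeholders, not values from this book; substitute your own storage details. The subsequent step of pointing the database at the digest storage endpoint is not shown here, so consult the SQL Server ledger documentation for that syntax.

```sql
--Sketch only: the storage account, container, and SAS token are placeholders
CREATE CREDENTIAL [https://mystorageaccount.blob.core.windows.net/sqldbledgerdigests]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = 'sv=2021-08-06&ss=b&srt=co&sp=rwlac&sig=<token>' ;
```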

Caution

You should develop an (ideally automated) process for rotating the shared access signature key simultaneously in both Azure and SQL Server.

The command in Listing 11-6 demonstrates how to manually generate a digest for the Chapter11Ledger database and then view the persisted block in sys.database_ledger_blocks.
--Manually generate a digest
EXEC sys.sp_generate_database_ledger_digest
--View the persisted digest
SELECT *
FROM sys.database_ledger_blocks
Listing 11-6

Generate a Digest and View Persisted Block

Listing 11-7 shows the JSON document that was generated. It includes the hash of the block, as well as metadata about the block.
{
  "database_name":"Chapter11Ledger",
  "block_id":0,
  "hash":"0x5D529C2464D66909F2F20ED418D48D8542B049347C897705930E32BA46278825",
  "last_transaction_commit_time":"2022-08-07T11:19:42.7500000",
  "digest_time":"2022-08-07T19:45:56.5364972"
}
Listing 11-7

Generated JSON

The script in Listing 11-8 compares two digests to verify the Chapter11Ledger database and ensure that the data has not been tampered with.

Tip

You will notice that the first activity of the script is to turn on Snapshot isolation. This is a requirement for database verification. Transaction isolation levels are discussed in Chapter 19.

ALTER DATABASE Chapter11Ledger SET ALLOW_SNAPSHOT_ISOLATION ON
GO
EXECUTE sp_verify_database_ledger N'
[
    {
      "database_name":"Chapter11Ledger",
      "block_id":0,
      "hash":"0x5D529C2464D66909F2F20ED418D48D8542B049347C897705930E32BA46278825",
      "last_transaction_commit_time":"2022-08-07T11:19:42.7500000",
      "digest_time":"2022-08-07T19:45:56.5364972"
    },
    {
      "database_name":"Chapter11Ledger",
      "block_id":0,
      "hash":"0x5D529C2464D66909F2F20ED418D48D8542B049347C897705930E32BA46278825",
      "last_transaction_commit_time":"2022-08-07T11:19:42.7500000",
      "digest_time":"2022-08-07T19:56:44.7114967"
    }
]';
Listing 11-8

Verify a Database

Figure 11-4 illustrates a successful database verification.
Figure 11-4

Successful database verification

Figure 11-5 illustrates a failed database verification, which implies the data has been tampered with.
Figure 11-5

Failed database verification

If you are using automatic digest generation and storage, then you can ingest the locations from the sys.database_ledger_digest_locations catalog view. When converted to a JSON document using the FOR JSON clause, the locations in this catalog view can be passed to the sys.sp_verify_database_ledger_from_digest_storage stored procedure, to verify a database. This is demonstrated in Listing 11-9.
DECLARE @digest_locations NVARCHAR(MAX)
SET @digest_locations = (
      SELECT *
      FROM sys.database_ledger_digest_locations
      FOR JSON AUTO, INCLUDE_NULL_VALUES
);
EXEC sys.sp_verify_database_ledger_from_digest_storage @digest_locations ;
Listing 11-9

Verify a Database from Automatically Generated Digests

Caution

It is very important that you schedule database verification to happen on a regular basis. Doing so will alert you promptly to an attack. Details of using SQL Server Agent to automate maintenance can be found in Chapter 23.

Server Audit

SQL Server Audit provides DBAs with the ability to capture granular audits against instance-level and database-level activity and save this activity to a file, the Windows Security log, or the Windows Application log. The location where the audit data is saved is known as the target. The SQL Server Audit object sits at the instance level and defines the properties of the audit and the target. You can have multiple server audits in each instance. This is useful if you have to audit many events in a busy environment, since you can distribute the I/O by using a file as the target and placing each target file on a separate volume.

Choosing the correct target is important from a security perspective. If you choose the Windows Application log as a target, then any Windows user who is authenticated to the server is able to access it. The Security log is a lot more secure than the Application log but can also be more complex to configure for SQL Server Audit. The service account that is running the SQL Server service requires the Generate Security Audits user rights assignment within the server’s local security policy. Application-generated auditing also needs to be enabled for success and failure within the audit policy. The other consideration for the target is size. If you decide to use the Application log or Security log, then it is important that you consider, and potentially increase, the size of these logs before you begin using them for your audit. Also, work with your Windows administration team to decide on how the log will be cycled when full and if you will be archiving the log by backing it up to tape.

The SQL Server Audit can then be associated with one or more server audit specifications and database audit specifications. These specifications define the activity that will be audited at the instance level and the database level, respectively. It is helpful to have multiple server or database audit specifications if you are auditing many actions, because you can categorize them to make management easier while still associating them with the same server audit. Each database within the instance needs its own database audit specification if you plan to audit activity in multiple databases.

Creating a Server Audit

When you create a server audit, you can use the options detailed in Table 11-1.
Table 11-1

Server Audit Options

Option

Description

FILEPATH

Only applies if you choose a file target. Specifies the file path where the audit logs will be generated.

MAXSIZE

Only applies if you choose a file target. Specifies the largest size that the audit file can grow to. The minimum size you can specify for this is 2MB.

MAX_ROLLOVER_FILES

Only applies if you choose a file target. When the audit file becomes full, you can either cycle that file or generate a new file. The MAX_ROLLOVER_FILES setting controls how many new files can be generated before they begin to cycle. The default value is UNLIMITED, but specifying a number caps the number of files to this limit. If you set it to 0, then there will only ever be one file, and it will cycle every time it becomes full. Any value above 0 indicates the number of rollover files that will be permitted. So, for example, if you specify 5, then there will be a maximum of six files in total.

MAX_FILES

Only applies if you choose a file target. As an alternative to MAX_ROLLOVER_FILES, the MAX_FILES setting specifies a limit for the number of audit files that can be generated, but when this number is reached, the logs will not cycle. Instead, the audit fails and events that cause an audit action to occur are handled based on the setting for ON_FAILURE.

RESERVE_DISK_SPACE

Only applies if you choose a file target. Preallocate space on the volume equal to the value set in MAXSIZE, as opposed to allowing the audit log to grow as required.

QUEUE_DELAY

Specify if audit events are written synchronously or asynchronously. If set to 0, events are written to the log synchronously. Otherwise, specify the duration in milliseconds that can elapse before events are forced to write. The default value is 1000 (1 second), which is also the minimum nonzero value.

ON_FAILURE

Specify what should happen if events that cause an audit action fail to be audited to the log. Acceptable values are CONTINUE, SHUTDOWN, or FAIL_OPERATION. When CONTINUE is specified, the operation is allowed to continue. This can lead to unaudited activity occurring. FAIL_OPERATION causes auditable events to fail, but allows other actions to continue. SHUTDOWN forces the instance to stop if auditable events cannot be written to the log.

AUDIT_GUID

Because server and database audit specifications link to the server audit through a GUID, there are occasions when an audit specification can become orphaned. These include when you attach a database to an instance, or when you implement technologies such as database mirroring. This option allows you to specify a specific GUID for the server audit, as opposed to having SQL Server generate a new one.
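As a hedged illustration of this option, the sketch below creates a server audit with an explicit GUID, as you might when rebuilding an audit on a mirroring partner so that existing audit specifications remain linked. The audit name, file path, and GUID value are all placeholders.

```sql
--Sketch: create a server audit with an explicit GUID, so that audit
--specifications (which link to the audit by GUID) are not orphaned.
--The name, path, and GUID below are placeholders.
CREATE SERVER AUDIT [Audit-ProSQLAdmin-Mirror]
TO FILE ( FILEPATH = N'C:\Audit\' )
WITH ( AUDIT_GUID = '33e055a9-1924-47c3-9798-e8875a5bcd42' ) ;
```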

It is also possible to create a filter on the server audit. This can be useful when your audit specification captures activity against an entire class of object, but you are only interested in auditing a subset. For example, you may configure a server audit specification to log any member changes to server roles; however, you are only actually interested in members of the sysadmin server role being modified. In this scenario, you can filter on the sysadmin role.

You can create a server audit through the GUI in SQL Server Management Studio by drilling through Security in Object Explorer and choosing New Audit from the Audits node. Figure 11-6 illustrates the Create Audit dialog box.
Figure 11-6

The General tab

You can see that we have decided to save our audit to a flat file, as opposed to a Windows log. Therefore, we need to specify the file-related parameters. We set our file to roll over and enforce the maximum size for any one file to be 512MB. We leave the default value of 1 second (1000 milliseconds) as a maximum duration before audit entries are forced to be written to the log and name the audit Audit-ProSQLAdmin.

On the Filter tab of the Create Audit dialog box, we specify that we wish to filter on object_name, so that only changes to the sysadmin role are audited.

Alternatively, we can use T-SQL to perform the same action. The script in Listing 11-10 creates the same server audit.
USE Master
GO
CREATE SERVER AUDIT [Audit-ProSQLAdmin]
TO FILE
(        FILEPATH = N'C:\Audit\'
        ,MAXSIZE = 512 MB
        ,MAX_ROLLOVER_FILES = 2147483647
        ,RESERVE_DISK_SPACE = OFF
)
WITH
(        QUEUE_DELAY = 1000
        ,ON_FAILURE = CONTINUE
)
WHERE object_name = 'sysadmin' ;
Listing 11-10

Creating a Server Audit

Creating a Server Audit Specification

To create the server audit specification through SSMS, we can drill through Security in Object Explorer and choose New Server Audit Specification from the Server Audit Specifications context menu. This will cause the Create Server Audit Specification dialog box to be displayed, as illustrated in Figure 11-7.
Figure 11-7

Server Audit Specification dialog box

You can see that we have selected the SERVER_ROLE_MEMBER_CHANGE_GROUP as the audit action type. This audits any additions or removals of the membership of server roles. Combined with the filter that we have put on the Server Audit object, however, the net result is that only changes to the sysadmin server role will be logged. We also selected the Audit-ProSQLAdmin audit from the Audit drop-down box to tie the objects together.

Alternatively, we can create the same server audit specification through T-SQL by running the command in Listing 11-11. In this command, we are using the FOR SERVER AUDIT clause to link the server audit specification to the Audit-ProSQLAdmin server audit, and the ADD clause to specify the audit action type to capture.
CREATE SERVER AUDIT SPECIFICATION [ServerAuditSpecification-ProSQLAdmin]
FOR SERVER AUDIT [Audit-ProSQLAdmin]
ADD (SERVER_ROLE_MEMBER_CHANGE_GROUP) ;
Listing 11-11

Creating the Server Audit Specification

Enabling and Invoking Audits

Even though we have created the server audit and server audit specification, we need to enable them before any data starts to be collected. We can achieve this by choosing Enable from the context menu of each of the objects in Object Explorer, or by altering the objects and setting their STATE = ON in T-SQL. This is demonstrated in Listing 11-12.
ALTER SERVER AUDIT [Audit-ProSQLAdmin] WITH (STATE = ON) ;
ALTER SERVER AUDIT SPECIFICATION [ServerAuditSpecification-ProSQLAdmin]
WITH (STATE = ON) ;
Listing 11-12

Enabling Auditing

We now add the Danielle login to the serveradmin and sysadmin server roles using the script in Listing 11-13 so that we can check that our audit is working.
ALTER SERVER ROLE serveradmin ADD MEMBER Danielle ;
ALTER SERVER ROLE sysadmin ADD MEMBER Danielle ;
Listing 11-13

Triggering the Audit

We expect that our server audit specification’s definition has captured both actions, but that the WHERE clause has filtered out the first action we applied to the server audit. If we view the audit log by selecting View Audit Log from the context menu of the Audit-ProSQLAdmin server audit in Object Explorer, as illustrated in Figure 11-8, we can see that this is working as expected and review the audit entry that has been captured.
Figure 11-8

Audit Log File Viewer

We can see that a granular level of information has been captured. Most notably, this information includes the full statement that caused the audit to fire, the database and object involved, the target login, and the login that ran the statement.
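Viewing the log through SSMS is convenient, but the audit file can also be queried with T-SQL, which is useful for automated reporting. The sketch below assumes the file target path used earlier in this chapter and reads all audit files in that folder using the sys.fn_get_audit_file function.

```sql
--Read the audit files generated under C:\Audit
--(path assumed from the earlier server audit example)
SELECT
        event_time
      , action_id
      , server_principal_name
      , target_server_principal_name
      , object_name
      , statement
FROM sys.fn_get_audit_file('C:\Audit\*', DEFAULT, DEFAULT) ;
```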

Updateable Ledger Tables

Just like append-only ledger tables, updateable ledger tables provide nonrepudiation and table history. The difference is that they also allow UPDATE and DELETE statements to be issued against the table. To make previous row versions available, they use a history table, which records every DML statement that has modified the table. A custom name can be provided for this history table when the updateable ledger table is created. If a custom name is not provided, then it defaults to the format MSSQL_LedgerHistoryFor_[Object_ID], where Object_ID is the object ID of the updateable ledger table.

Tip

For consistency, history tables are also created for append-only ledger tables. Because UPDATE and DELETE statements are prohibited, however, they have limited value.

SQL Server also creates a view for each ledger table, which offers a simple view of your table history. Once again, a custom name can be provided for the view when the ledger table is created. If a custom name is not provided, then it will be created with the naming convention [TableName]_Ledger, where TableName is the name of the ledger table.

Tip

The ability to provide custom names for the history table and view also allows you to place them in a different schema. This can be helpful from a security perspective, to grant your Cyber team or Legal team access to table history. Further information around schemas and security can be found in Chapter 10.

Updateable ledger tables require additional columns within each table. These columns can be created manually, using the GENERATED ALWAYS AS syntax discussed in the previous section, or if the column definitions are omitted, they will be created automatically, using the column names detailed in Table 11-2.
Table 11-2

Columns Required for Updateable Ledger Tables

Column

Data Type

Description

ledger_start_transaction_id

BIGINT

The transaction ID of the transaction that inserted the row

ledger_end_transaction_id

BIGINT

The transaction ID of the transaction that deleted the row

ledger_start_sequence_number

BIGINT

The operation sequence number, within the transaction, that created a row version

ledger_end_sequence_number

BIGINT

The operation sequence number, within the transaction, that deleted a row version

The script in Listing 11-14 creates an updateable ledger table within the Chapter11 database and inserts some initial data, before making modifications to the data. In this example, we specify custom names for the ledger view and history table, but we omit the ledger columns. This means that they will be created automatically, with default names. You will notice that the table also specifies SYSTEM_VERSIONING = ON. This is a requirement and must always be set when creating an updateable ledger table.

Note

Different activities are simulated as being run by different users, using the EXECUTE AS clause. This creates an interesting repudiation point that we will discuss a little later.

Tip

To run the script in Listing 11-14, you will need Logins and Users called Andrew and GoodsInApplication. These users will need permissions to perform INSERT, UPDATE, and DELETE operations against the ledger tables. Please see Chapter 10 for information on how to create Logins and Users. Alternatively, you could change the EXECUTE AS clauses to use any other Logins which exist and have the appropriate permissions to the ledger tables.

USE Chapter11
GO
--Create Updateable Ledger Table
CREATE TABLE dbo.GoodsIn (
     ID            INT              NOT NULL     IDENTITY      PRIMARY KEY,
     StockID       INT              NOT NULL,
     QtyOrdered    INT              NOT NULL,
     QtyReceived   INT              NOT NULL,
     ReceivedBy    NVARCHAR(128)    NOT NULL,
     ReceivedDate  DATETIME2        NOT NULL,
     Damaged       BIT              NOT NULL
) WITH (
      SYSTEM_VERSIONING = ON ( --Updateable ledger tables must always have system_versioning turned on
            HISTORY_TABLE = dbo.GoodsIn_Ledger_History --Specify custom name for history table
      ),
      LEDGER = ON (
            LEDGER_VIEW = dbo.GoodsIn_Ledger_View --Specify custom name for ledger view
      )
) ;
GO
EXECUTE AS LOGIN = 'GoodsInApplication' --Simulate activity by application
--Insert Initial Data
INSERT INTO dbo.GoodsIn (StockID, QtyOrdered, QtyReceived, ReceivedBy, ReceivedDate, Damaged)
VALUES(17, 25, 25, 'Pete', '20220807 10:59', 0),
      (6, 20, 19, 'Brian', '20220810 15:01', 0),
      (17, 20, 20, 'Steve', '20220810 16:56', 1),
      (36, 10, 10, 'Steve', '20220815 18:11', 0),
      (36, 10, 10, 'Steve', '20220815 18:12', 1),
      (1, 85, 85, 'Andrew', '20220820 10:27', 0) ;
--Duplicate row deleted by application
DELETE FROM dbo.GoodsIn WHERE ID = 4 ;
REVERT --Stop impersonating GoodsInApplication
EXECUTE AS LOGIN = 'Andrew' --Simulate activity by user
--Row updated by Andrew, from outside the application (RED FLAG)
UPDATE dbo.GoodsIn
SET QtyReceived = 70
WHERE ID = 6 ;
REVERT --Stop impersonating Andrew
EXECUTE AS LOGIN = 'GoodsInApplication' --Simulate activity by application
--Rows updated by application
UPDATE dbo.GoodsIn
SET Damaged = 1
WHERE ID = 1 ;
INSERT INTO dbo.GoodsIn (StockID, QtyOrdered, QtyReceived, ReceivedBy, ReceivedDate, Damaged)
VALUES (17, 25, 25, 'Pete', '20220822 08:59', 0)
REVERT --Stop impersonating GoodsInApplication
Listing 11-14

Create an Updateable Ledger Table

To avoid searching for the view and history table that are associated with the GoodsIn updateable ledger table, we can use the query in Listing 11-15 to return the object names.
SELECT
        t.name AS Updateable_Ledger_Table
      , h.name AS History_Table
      , v.name AS Ledger_View
FROM sys.tables AS t
INNER JOIN sys.tables AS h
      ON h.object_id = t.history_table_id
JOIN sys.views v
      ON v.object_id = t.ledger_view_id
WHERE t.name = 'GoodsIn' ;
Listing 11-15

Discover Ledger View and History Table Names

Tip

This query could be expanded to meet your needs. For example, if you decide to store history tables in a different schema, you could join to sys.schemas to include the schema name.

The script in Listing 11-16 expands on this query, using dynamic SQL, to return the contents of the updateable ledger table, the history table, and the ledger view.
DECLARE @SQL NVARCHAR(MAX)
SET @SQL = (
SELECT
        'SELECT * FROM ' + t.name --Updateable Ledger Table
    + ' SELECT * FROM ' + h.name --History Table
    + ' SELECT * FROM ' + v.name + ' ORDER BY ledger_transaction_id, ledger_sequence_number ' --Ledger View
FROM sys.tables AS t
INNER JOIN sys.tables AS h
      ON h.object_id = t.history_table_id
JOIN sys.views v
      ON v.object_id = t.ledger_view_id
WHERE t.name = 'GoodsIn' )
EXEC(@SQL)
Listing 11-16

View Updateable Ledger Table, Ledger View, and History Table

The results of this query are displayed in Figure 11-9.
Figure 11-9

Results of viewing ledger table objects

You can see from the results that the history table contains one row for each transaction or batch that has modified the table. In our example, we did not use explicit transactions. Therefore, the demarcation is by batch. Because of our use of the EXECUTE AS clause, the updates are divided into three batches and, hence, three transactions.

The ledger view provides a view of the data within the table as it has changed over time. There is no UPDATE operation recorded. Instead, for updates, two rows appear in the view: one for the insert of the new version of the row and another for the deletion of the old row version. For example, there are two rows with the Transaction ID 1073 containing rows for the “before” and “after” versions of the row that was updated by Andrew.

As is the case with append-only ledger tables, you can also view the principal who ran the updates against an updateable ledger table and the time that the transaction was committed. This is demonstrated in Listing 11-17.
SELECT
        lv.ID
      , lv.StockID
      , lv.QtyOrdered
      , lv.QtyReceived
      , lv.ReceivedBy
      , lv.ReceivedDate
      , lv.Damaged
      , lv.ledger_operation_type_desc
      , dlt.commit_time
      , dlt.principal_name
FROM dbo.GoodsIn_Ledger_View lv
INNER JOIN sys.database_ledger_transactions dlt
      ON dlt.transaction_id = lv.ledger_transaction_id
Listing 11-17

Discover Who Modified the Table

You may remember that earlier in this section, I mentioned an interesting repudiation consideration around the use of impersonation. The results of the preceding query, shown in Figure 11-10, demonstrate that this is not actually a concern. Despite the use of EXECUTE AS to impersonate different users, for all updates, the login we used to connect to SQL Server has been recorded as the principal that modified the data.
Figure 11-10

Results of viewing who modified data

Just like append-only ledger tables, once a table has been configured as an updateable ledger table, that feature cannot be turned off. This prevents privileged users from turning the feature off, updating data, and then reenabling the feature. Another interesting consideration for updateable ledger tables is what happens if the whole table is dropped. For example, what if a privileged user copies the data out of the GoodsIn table, drops the table, and then reinserts the modified data? The simple answer is that DROP does not actually drop a ledger table; it simply renames it, along with its associated objects. For example, let's use the script in Listing 11-18 to drop the GoodsIn table before rerunning the query that discovers ledger object names.
USE Chapter11
GO
DROP TABLE dbo.GoodsIn
GO
SELECT
        t.name AS Updateable_Ledger_Table
      , h.name AS History_Table
      , v.name AS Ledger_View
FROM sys.tables AS t
INNER JOIN sys.tables AS h
      ON h.object_id = t.history_table_id
JOIN sys.views v
      ON v.object_id = t.ledger_view_id ;
Listing 11-18

Dropping Ledger Objects

The results, shown in Figure 11-11, show that instead of being deleted, the objects have simply been renamed and can still be queried.
Figure 11-11

Viewing dropped ledger objects
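If you need to locate dropped ledger tables programmatically, sys.tables exposes an is_dropped_ledger_table column in SQL Server 2022 that you can filter on. A minimal sketch:

```sql
--List ledger tables that have been dropped (renamed) in the current database
SELECT name
     , object_id
     , ledger_type_desc
FROM sys.tables
WHERE is_dropped_ledger_table = 1 ;
```

The name column will show the renamed objects, such as the renamed GoodsIn table.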

Caution

This does not stop an entire database from being dropped and a new database, containing modified data, being created in its place. Ledger digests, discussed later in this chapter, protect against this scenario.
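Protection against whole-database tampering comes from ledger digests. As an illustration, a digest for the current state of the database can be generated with the sys.sp_generate_database_ledger_digest procedure, and the database can later be verified against a stored digest with sys.sp_verify_database_ledger. The JSON value below is a hypothetical placeholder; substitute a digest that you generated and stored in trusted storage:

```sql
USE Chapter11
GO
--Generate a digest for the current state of the database.
--The procedure returns a JSON document containing the latest
--block ID and hash, which should be stored in trusted storage.
EXEC sys.sp_generate_database_ledger_digest ;
GO
--Verify the database against a previously stored digest.
--The digest JSON below is purely illustrative.
EXECUTE sys.sp_verify_database_ledger N'
{
    "database_name": "Chapter11",
    "block_id": 0,
    "hash": "0x0000000000000000000000000000000000000000000000000000000000000000",
    "last_transaction_commit_time": "2023-01-01T00:00:00.0000000",
    "digest_time": "2023-01-01T00:00:01.0000000"
}' ;
```

If the database has been tampered with, the verification will fail, even if the tampering was performed by a privileged user.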

Database Audit Specifications

A database audit specification is similar to a server audit specification but specifies audit requirements at the database level, as opposed to at the instance level. In order to demonstrate this functionality, we map the Danielle login to a user in this database and assign SELECT permissions to the SensitiveData table. We also create a new server audit, called Audit-Chapter11, which we use as the audit to which our database audit specification attaches. These actions are performed in Listing 11-19. Before executing the script, change the file path to match your own configuration.
USE Master
GO
--Create Chapter11Audit Database
CREATE DATABASE Chapter11Audit
GO
USE Chapter11Audit
GO
CREATE TABLE dbo.SensitiveData (
    ID               INT            IDENTITY    PRIMARY KEY    NOT NULL,
    SensitiveText    NVARCHAR(256)  NOT NULL
) ;
--Create Server Audit
USE master
GO
CREATE SERVER AUDIT [Audit-Chapter11]
TO FILE
(        FILEPATH = N'C:\Audit'
        ,MAXSIZE = 512 MB
        ,MAX_ROLLOVER_FILES = 2147483647
        ,RESERVE_DISK_SPACE = OFF
)
WITH
(        QUEUE_DELAY = 1000
        ,ON_FAILURE = CONTINUE
) ;
USE Chapter11Audit
GO
--Create database user from Danielle Login
CREATE USER Danielle FOR LOGIN Danielle WITH DEFAULT_SCHEMA=dbo ;
GO
GRANT SELECT ON dbo.SensitiveData TO Danielle ;
Listing 11-19

Creating the Chapter11Audit Database

We now look to create a database audit specification that captures any INSERT statements made against the SensitiveData table by any user but also captures SELECT statements run specifically by Danielle.

We can create the database audit specification in SQL Server Management Studio by drilling through the Chapter11Audit database ➤ Security and selecting New Database Audit Specification from the context menu of Database Audit Specifications. This invokes the Create Database Audit Specification dialog box, as illustrated in Figure 11-12.
Figure 11-12

Database Audit Specification dialog box

You can see that we named the database audit specification DatabaseAuditSpecification-Chapter11-SensitiveData and linked it to the Audit-Chapter11 server audit using the drop-down list. In the lower half of the screen, we specified two audit action types, INSERT and SELECT. Because we specified an object class of OBJECT, as opposed to the other available options of DATABASE or SCHEMA, we also need to specify the object name of the table that we want to audit. Because we only want Danielle’s SELECT activity to be audited, we add this user to the Principal field for the SELECT action type, but we add the Public role as the principal for the INSERT action type. This is because all database users will be members of the Public role, and hence, all INSERT activity will be captured, regardless of the user.

Tip

You can display a complete list of audit class types by running the query SELECT * FROM sys.dm_audit_class_type_map. You can find a complete list of auditable actions by running the query SELECT * FROM sys.dm_audit_actions.

We can create the same database audit specification in T-SQL by using the CREATE DATABASE AUDIT SPECIFICATION statement, as demonstrated in Listing 11-20.
USE Chapter11Audit
GO
CREATE DATABASE AUDIT SPECIFICATION [DatabaseAuditSpecification-Chapter11-SensitiveData]
FOR SERVER AUDIT [Audit-Chapter11]
ADD (INSERT ON OBJECT::dbo.SensitiveData BY public),
ADD (SELECT ON OBJECT::dbo.SensitiveData BY Danielle) ;
Listing 11-20

Creating the Database Audit Specification

Just as we would with a server audit specification, we need to enable the database audit specification before any information starts to be collected. The script in Listing 11-21 enables both Audit-Chapter11 and DatabaseAuditSpecification-Chapter11-SensitiveData.
USE Chapter11Audit
GO
ALTER DATABASE AUDIT SPECIFICATION [DatabaseAuditSpecification-Chapter11-SensitiveData]
WITH (STATE = ON) ;
GO
USE Master
GO
ALTER SERVER AUDIT [Audit-Chapter11] WITH (STATE = ON) ;
Listing 11-21

Enabling the Database Audit Specification

To test security, SQL Server allows you to impersonate a user. To do this, you must be a sysadmin or be granted the impersonate permissions on the user in question. The script in Listing 11-22 impersonates the user Danielle in order to check that the auditing is successful. It does this by using the EXECUTE AS USER command. The REVERT command switches the security context back to the user who ran the script.
USE Chapter11Audit
GO
GRANT INSERT, UPDATE ON dbo.sensitiveData TO Danielle ;
GO
INSERT INTO dbo.SensitiveData (SensitiveText)
VALUES ('testing') ;
GO
UPDATE dbo.SensitiveData
SET SensitiveText = 'Boo'
WHERE ID = 2 ;
GO
EXECUTE AS USER ='Danielle'
GO
INSERT dbo.SensitiveData (SensitiveText)
VALUES ('testing again') ;
GO
UPDATE dbo.SensitiveData
SET SensitiveText = 'Boo'
WHERE ID = 1 ;
GO
REVERT
Listing 11-22

Testing Security with Impersonation

Auditing the Audit

With the auditing that we have implemented up to this point, there is a security hole. If an administrator with the permissions to manage server audit has ill intent, then it is possible for them to change the audit specification before performing a malicious action and then finally reconfigure the audit to its original state in order to remove any trace of their activity.

Server audit allows you to protect against this threat, however, by giving you the ability to audit the audit itself. If you add the AUDIT_CHANGE_GROUP to your server audit specification or database audit specification, then any changes to the specification are captured.

Using the Audit-Chapter11 server audit and the DatabaseAuditSpecification-Chapter11-SensitiveData database audit specification as an example, we are auditing any INSERT statements, by any user, to the SensitiveData table. To avoid a privileged user with ill intent inserting data into this table without traceability, we can use the script in Listing 11-23 to add the AUDIT_CHANGE_GROUP. Note that we have to disable the database audit specification before we make the change and then reenable it.
USE Chapter11Audit
GO
ALTER DATABASE AUDIT SPECIFICATION [DatabaseAuditSpecification-Chapter11-SensitiveData]
WITH (STATE=OFF) ;
GO
ALTER DATABASE AUDIT SPECIFICATION [DatabaseAuditSpecification-Chapter11-SensitiveData]
ADD (AUDIT_CHANGE_GROUP) ;
GO
ALTER DATABASE AUDIT SPECIFICATION [DatabaseAuditSpecification-Chapter11-SensitiveData]
WITH(STATE = ON) ;
GO
Listing 11-23

Adding AUDIT_CHANGE_GROUP

After executing this script, any changes we make to the auditing are captured. If you view the audit log, you can see that the removal of the INSERT audit on the SensitiveData table by the Administrator login has been recorded.
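The audit log can also be read with T-SQL, via the sys.fn_get_audit_file function. A minimal sketch, assuming the audit files are written to the C:\Audit folder used when the server audit was created:

```sql
--Read all audit files in the folder. The DEFAULT arguments read
--from the first file and the first record onward.
SELECT event_time
     , action_id
     , succeeded
     , server_principal_name
     , database_name
     , object_name
     , statement
FROM sys.fn_get_audit_file('C:\Audit\*', DEFAULT, DEFAULT)
ORDER BY event_time ;
```

This is often more convenient than the Log File Viewer when you need to filter or aggregate audit events.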

Summary

SQL Server 2022 introduces Ledger, which uses blockchain technology to prove the trustworthiness of your data. This can fulfill use cases such as satisfying audit requirements and creating supply chain management processes between companies.

There are two types of ledger table available: append-only ledger tables and updateable ledger tables. Append-only ledger tables permit INSERT statements but do not permit UPDATE or DELETE statements. Nonrepudiation is ensured by logging the time and security principal that made each insert.

Updateable ledger tables allow INSERT, UPDATE, and DELETE statements. They use a history table to store each row version, and again, nonrepudiation is ensured by logging the time and security principal of each transaction.

A database digest contains the hash of the latest block in the blockchain. Digests can be generated automatically, if you integrate with Azure, or they can be generated manually. Once your digests are generated, you should use them to regularly verify the trustworthiness of your database.

Server audit allows fine-grained auditing of activity at both the instance and database levels. It also includes the ability to audit the audit itself, thus removing the threat of a privileged user bypassing the audit with malicious intent. You can save audits to a file in the operating system and control permissions through NTFS. Alternatively, you can save audits to the Windows Security log or Windows Application log.
