Ch-Ch-Changes

It's not only the legend that is Mr. David Bowie who sings about changes. In SQL Server 2014, In-Memory OLTP objects were unchangeable after creation. If we needed to change the structure of a memory-optimized table, we had to resort to dropping and re-creating the object with the new, updated structure.

For many developers and customers, this was a deal breaker. Every SQL Server developer is used to adding and removing columns or indexes without restriction. With the advent of agile software development and related practices such as continuous integration, the ability to change a running application is something many developers expect.

Now it is possible to do just that. We will continue this chapter with the same database as in Chapter 11, Introducing SQL Server In-Memory OLTP, and use the following code to create a simple memory-optimized table:

USE master 
GO 
CREATE DATABASE InMemoryTest 
    ON  
    PRIMARY(NAME = [InMemoryTest_disk],  
               FILENAME = 'C:\temp\InMemoryTest_disk.mdf', SIZE = 100MB),  
    FILEGROUP [InMemoryTest_inmem] CONTAINS MEMORY_OPTIMIZED_DATA 
               (NAME = [InMemoryTest_inmem],  
               FILENAME = 'C:\temp\InMemoryTest_inmem') 
   LOG ON (NAME = [InMemoryTest_log], FILENAME = 'C:\temp\InMemoryTest_log.ldf', SIZE = 100MB) 
   COLLATE Latin1_General_100_BIN2; 
GO  
USE InMemoryTest 
GO  
CREATE TABLE InMemoryTable 
( 
   UserId INT NOT NULL, 
   UserName VARCHAR(20) COLLATE Latin1_General_CI_AI NOT NULL, 
   LoginTime DATETIME2 NOT NULL, 
   LoginCount INT NOT NULL, 
   CONSTRAINT PK_UserId  PRIMARY KEY NONCLUSTERED (UserId), 
   INDEX HSH_UserName HASH (UserName) WITH (BUCKET_COUNT=10000) 
    
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY=SCHEMA_AND_DATA); 
GO 
INSERT INTO dbo.InMemoryTable 
        ( UserId, UserName , LoginTime, LoginCount ) 
VALUES ( 1, 'Mickey Mouse', '2016-01-01', 1 ); 
GO 

The list of supported ALTER TABLE operations for memory-optimized tables in SQL Server 2017 is as follows:

  • Changing, adding, and removing columns
  • Adding and removing indexes
  • Adding and removing constraints
  • Changing the bucket count
  • Renaming with sp_rename, which is fully supported for memory-optimized tables

First up, we will add a column to the demo table, as shown in the following code. Note how the DDL statement is no different from adding a column to a disk-based table:

USE InMemoryTest; 
GO  
ALTER TABLE dbo.InMemoryTable ADD NewColumn INT NULL; 
GO 

We can also remove columns, using the following code. Again, the DDL is unchanged:

USE InMemoryTest; 
GO  
ALTER TABLE dbo.InMemoryTable DROP COLUMN NewColumn; 
GO 

We are also able to add indexes after a table has already been created, using the following code:

USE InMemoryTest; 
GO 
ALTER TABLE dbo.InMemoryTable ADD INDEX HSH_LoginTime NONCLUSTERED HASH (LoginTime) WITH (BUCKET_COUNT = 250); 
GO

Until now, the DDL statements have looked familiar. However, adding an index to a memory-optimized table is performed using ALTER TABLE, rather than CREATE INDEX. The same applies to dropping or altering an index:

USE InMemoryTest; 
GO  
ALTER TABLE dbo.InMemoryTable ALTER INDEX HSH_LoginTime REBUILD WITH (BUCKET_COUNT=10000); 
GO  
ALTER TABLE dbo.InMemoryTable DROP INDEX HSH_LoginTime; 
GO 

In the last chapter, we discovered that the indexes on a memory-optimized table (Bw-trees for range indexes, and buckets of linked row chains for hash indexes) provide the access paths to the actual row data in memory. As such, they are more an extension of the table definition (similar to constraints) than an index on a disk-based table is. This is reflected in the requirement of issuing an ALTER TABLE command to add, remove, or alter an index on a memory-optimized table.

Altering the bucket count of an index (as shown in the preceding listing) is quite interesting. We always strive to implement new code with the plan of ensuring it is fit for purpose before we deploy to production. We also know that the first implementation of code rarely survives the first encounter with production usage. Predicting the correct bucket count is like predicting the future: the more information we have, the better our predictions can be, but who has full knowledge of how a new feature will be used? It is nearly impossible to get a 1:1 comparison of production to a development environment. As such, changing the bucket count is something that we will more than likely need to do. Later in this chapter we will be looking at the internal aspect of indexing memory-optimized tables, and will cover bucket counts in more detail.
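Before altering a bucket count, we can check how well the current one fits the data. The following query is a minimal sketch against the sys.dm_db_xtp_hash_index_stats DMV, showing bucket usage and chain lengths for the hash indexes in the current database:

```sql
USE InMemoryTest;
GO
-- Many empty buckets suggest the bucket count is too high;
-- long chains suggest it is too low.
SELECT OBJECT_NAME(his.object_id) AS table_name,
       i.name                     AS index_name,
       his.total_bucket_count,
       his.empty_bucket_count,
       his.avg_chain_length,
       his.max_chain_length
FROM sys.dm_db_xtp_hash_index_stats AS his
INNER JOIN sys.indexes AS i
    ON i.object_id = his.object_id
   AND i.index_id  = his.index_id;
GO
```

As a rough rule of thumb, an average chain length well above one row per bucket indicates the bucket count should be raised.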

To rename a memory-optimized table, we issue a simple sp_rename command, as shown in the following code:

USE InMemoryTest 
GO 
 
EXEC sp_rename 'dbo.InMemoryTable', 'InMemoryTable2'; 
GO 
EXEC sp_rename 'dbo.InMemoryTable2', 'InMemoryTable';

A further interesting addition is the ability to bundle multiple changes (specifically, multiple change types: columns and indexes) together in one ALTER statement, using the following code:

USE InMemoryTest 
GO  
ALTER TABLE dbo.InMemoryTable  
ADD ANewColumn INT NULL, 
   AnotherColumn TINYINT NULL, 
INDEX HSH_ANewColumn NONCLUSTERED HASH (ANewColumn) WITH (BUCKET_COUNT = 250); 

Adding multiple columns was possible in previous versions of SQL Server, but the ability to add an index in the same statement is new. This has to do with how the In-Memory OLTP engine creates and manages memory-optimized objects in general. In Chapter 11, Introducing SQL Server In-Memory OLTP, we discovered that memory-optimized objects are memory resident objects with matching access methods (in fact, compiled C code). Indexes for memory-optimized tables are part of the memory-optimized table insomuch as they require the C constructs and access methods to ensure that the SQL Server can work with them. Because these objects require compilation into machine code, they are somewhat static—even with the new ability to issue ALTER commands to them.

To overcome this logical limitation of unchangeable compiled code, SQL Server receives the desired changes inside the ALTER statement and proceeds to create a new version of the existing object. Upon creation of this copy, the desired changes are incorporated into the new version. If the object being changed is a table, the rows from the old version are then copied to the new version. The background process then compiles the access methods (including the changed columns) and the new version is ready for use. At this point, SQL Server dereferences the old version and redirects future calls to the new version.

As altering a table requires copying its entire contents from the old version to the new version, we must be mindful that we are doubling the memory requirements for the table for the duration of the ALTER transaction (and until the background garbage collection has cleaned up the old structure). Equally, ALTER commands for memory-optimized tables are offline operations, meaning that the memory-optimized table is blocked for the duration of the ALTER transaction.

As such, if we are manipulating a large table, we must ensure that we have enough memory available for the operation to succeed, and understand that the table will be blocked for the duration of the transaction. It may, therefore, be prudent to consider emptying extremely large tables before issuing the ALTER command, to allow the change to complete more quickly.
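To gauge whether enough memory is available before altering a large table, we can inspect its current footprint; a minimal sketch using the sys.dm_db_xtp_table_memory_stats DMV:

```sql
USE InMemoryTest;
GO
-- The ALTER needs roughly this much memory again while both
-- versions of the table exist side by side.
SELECT OBJECT_NAME(object_id) AS table_name,
       memory_allocated_for_table_kb,
       memory_used_by_table_kb,
       memory_allocated_for_indexes_kb,
       memory_used_by_indexes_kb
FROM sys.dm_db_xtp_table_memory_stats
WHERE object_id = OBJECT_ID('dbo.InMemoryTable');
GO
```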

Many ALTER statements require metadata changes only. Such changes can be processed in parallel and have a reduced impact on the transaction log, because only the metadata changes themselves are written through the log. Coupled with parallel processing, we can expect these metadata-only changes to execute extremely fast.

However, parallel processing is excluded for a few operations that require more than simple meta-data changes; these are as follows:

  • Altering or adding a column to use a LOB type such as nvarchar(max), varchar(max), or varbinary(max)
  • Adding or dropping a COLUMNSTORE index
  • Almost anything that affects an off-row column:
    • Causing an on-row column to move off-row
    • Causing an off-row column to move on-row
    • Creating a new off-row column
    • Exception: lengthening an already off-row column that has been logged in the optimized way

As well as forcing serial processing for the operations listed, making changes to these data types causes a complete copy of the table to be processed and written to the transaction log; this can fill up the transaction log and place extra load on the storage subsystem.
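Since such operations push a complete copy of the table through the transaction log, it can be worth checking the available log space beforehand; a minimal sketch using sys.dm_db_log_space_usage:

```sql
USE InMemoryTest;
GO
-- Check free space in the transaction log before issuing an ALTER
-- that rewrites the whole table through the log.
SELECT total_log_size_in_bytes / 1048576.0 AS total_log_size_mb,
       used_log_space_in_bytes / 1048576.0 AS used_log_space_mb,
       used_log_space_in_percent
FROM sys.dm_db_log_space_usage;
GO
```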

The removal of restrictions on altering tables extends to altering natively compiled stored procedures. The well-known ALTER PROCEDURE command can now be used to make changes to the previously created natively compiled stored procedures (this demo code creates the stored procedure with no content to allow the ALTER statement to then be run):

USE InMemoryTest 
GO 
CREATE PROCEDURE dbo.InMemoryInsertOptimized 
    @UserId INT, 
    @UserName VARCHAR(255), 
    @LoginTime DATETIME2, 
    @LoginCount INT 
WITH NATIVE_COMPILATION, SCHEMABINDING 
AS 
BEGIN ATOMIC WITH 
( 
   TRANSACTION ISOLATION LEVEL = SNAPSHOT, 
   LANGUAGE = N'English' 
) 
   RETURN 0; 
END; 
GO 
 
ALTER PROCEDURE dbo.InMemoryInsertOptimized 
    @UserId INT, 
    @UserName VARCHAR(255), 
    @LoginTime DATETIME2, 
    @LoginCount INT 
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER 
AS 
BEGIN ATOMIC WITH 
( 
   TRANSACTION ISOLATION LEVEL = SNAPSHOT, 
   LANGUAGE = N'English' 
) 
   -- Add an Insert 
   INSERT dbo.InMemoryTable 
   (UserId, UserName, LoginTime, LoginCount) 
    VALUES 
    (@UserId, @UserName, @LoginTime, @LoginCount); 
   RETURN 0; 
END; 
GO

The following aspects of an existing natively compiled stored procedure can be changed using the ALTER PROCEDURE syntax:

  • Parameters
  • EXECUTE AS
  • TRANSACTION ISOLATION LEVEL
  • LANGUAGE
  • DATEFIRST
  • DATEFORMAT
  • DELAYED_DURABILITY

However, it is important to note that using the ALTER command to turn a natively compiled stored procedure into a non-natively compiled one, or vice versa, is not supported.

If we wish to make such a change, we are required to perform a DROP PROCEDURE followed by a CREATE PROCEDURE. This should make sense, as we are moving between the In-Memory OLTP engine and the traditional relational engine, and are therefore re-creating the object in its entirety. It also means that any permissions assigned to the object must be re-assigned at (re)creation time.
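As a hypothetical illustration (the principal AppUser is an assumption, not part of the demo setup), converting the demo procedure to interpreted T-SQL would look like this, with permissions re-granted afterwards:

```sql
USE InMemoryTest;
GO
DROP PROCEDURE dbo.InMemoryInsertOptimized;
GO
-- Re-create as an interpreted (non-natively compiled) procedure
CREATE PROCEDURE dbo.InMemoryInsertOptimized
    @UserId INT,
    @UserName VARCHAR(255),
    @LoginTime DATETIME2,
    @LoginCount INT
AS
BEGIN
    INSERT dbo.InMemoryTable (UserId, UserName, LoginTime, LoginCount)
    VALUES (@UserId, @UserName, @LoginTime, @LoginCount);
    RETURN 0;
END;
GO
-- Permissions do not survive the DROP/CREATE and must be re-granted
GRANT EXECUTE ON dbo.InMemoryInsertOptimized TO AppUser; -- hypothetical principal
GO
```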

During the recompile process triggered by an ALTER command, the old version of the natively compiled stored procedure can still be executed. Once the altered stored procedure has been compiled, the old version is destroyed, and all subsequent calls of the stored procedure use the new definition. This allows an ALTER command to be issued without causing long waiting periods, but means that in-flight transactions may still execute the old code.
