
15. Columnstore Index Performance

Edward Pollack, Albany, NY, USA
The ultimate measure of performance for any data structure is the speed with which data can be retrieved. In columnstore indexes, the time required to return data is a function of two operations:
  • Metadata reads

  • Data reads

This chapter will walk through the steps needed to measure, assess, and tune columnstore index query performance, both for analytics and write operations. It will also introduce additional options for columnstore index usage and performance optimization.

Columnstore Metadata Reads

To determine what data to read and how to read it, the metadata for a columnstore index will be read prior to any columnstore index scans. This includes the metadata about the contents of rowgroups and segments. Consider the query in Listing 15-1.
SELECT
       SUM(Quantity) AS Total_Quantity,
       SUM([Total Excluding Tax]) AS Total_Excluding_Tax
FROM Fact.Sale_CCI_PARTITIONED
WHERE [Invoice Date Key] = '7/17/2015';
Listing 15-1

Example Query Used in Performance Analysis

Two columns are aggregated for a single value of Invoice Date Key. This table is both ordered and partitioned and therefore will benefit significantly from partition elimination and rowgroup elimination. When the query is executed, columnstore metadata is consulted to determine which segments need to be read.

The table Fact.Sale_CCI_PARTITIONED is partitioned on the Invoice Date Key column by year, with partitions assigned to the years 2013, 2014, 2015, 2016, and 2017. Listing 15-2 provides the definitions for both the partition scheme and function used in this demonstration.
CREATE PARTITION FUNCTION fact_Sale_CCI_years_function (DATE) AS RANGE RIGHT FOR VALUES
('1/1/2014', '1/1/2015', '1/1/2016', '1/1/2017');
CREATE PARTITION SCHEME fact_Sale_CCI_years_scheme AS PARTITION fact_Sale_CCI_years_function
TO (WideWorldImportersDW_2013_fg, WideWorldImportersDW_2014_fg, WideWorldImportersDW_2015_fg,
WideWorldImportersDW_2016_fg, WideWorldImportersDW_2017_fg)
Listing 15-2

Partition Function and Partition Scheme Definitions

The table contains data ranging from 2013 through 2016 and therefore will not make use of the last partition. When executed, the query in Listing 15-1 will immediately check the filter against the partition function and then use the partition scheme to determine where the data is located. Based on this information, it is determined that only data in the partition stored on WideWorldImportersDW_2015_fg will be needed.
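Partition elimination can be verified directly. The $PARTITION function evaluates a partition function for a given value; a quick sanity check such as the following (a sketch against the function defined in Listing 15-2) confirms which partition the filter in Listing 15-1 maps to:
-- With the RANGE RIGHT boundaries above, '7/17/2015' should map to
-- partition 3: the 2015 partition on WideWorldImportersDW_2015_fg.
SELECT $PARTITION.fact_Sale_CCI_years_function('7/17/2015') AS partition_number;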

Columnstore metadata is stored separately for each partition, and therefore when executed, this query needs to only consult metadata relevant to the partition containing data for the year 2015. Figure 15-1 shows the segment metadata for Invoice Date Key in this columnstore index.
Figure 15-1

Columnstore segment metadata for Invoice Date Key

The metadata shows an exceptionally well-ordered columnstore index. Note that min_data_id and max_data_id progressively increase as values advance through each segment within each partition. An encoding type of 2 represents dictionary encoding for a nonstring data type, in this case a DATE. The primary dictionary used for this column has an ID of zero.
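The metadata shown in Figure 15-1 can be retrieved with a query along these lines (a sketch joining sys.column_store_segments to the same catalog views used throughout this chapter; the column list is trimmed for brevity):
SELECT
       partitions.partition_number,
       column_store_segments.segment_id,
       column_store_segments.encoding_type,
       column_store_segments.primary_dictionary_id,
       column_store_segments.min_data_id,
       column_store_segments.max_data_id,
       column_store_segments.row_count
FROM sys.column_store_segments
INNER JOIN sys.partitions
ON column_store_segments.hobt_id = partitions.hobt_id
INNER JOIN sys.objects
ON objects.object_id = partitions.object_id
INNER JOIN sys.columns
ON columns.column_id = column_store_segments.column_id
AND columns.object_id = objects.object_id
WHERE objects.name = 'Sale_CCI_PARTITIONED'
AND columns.name = 'Invoice Date Key'
ORDER BY partitions.partition_number, column_store_segments.segment_id;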

The query in Listing 15-3 provides a bit of additional detail on the dictionary used for this column.
SELECT
       partitions.partition_number,
       objects.name,
       columns.name,
       column_store_dictionaries.type,
       column_store_dictionaries.entry_count,
       column_store_dictionaries.on_disk_size
FROM sys.column_store_dictionaries
INNER JOIN sys.partitions
ON column_store_dictionaries.hobt_id = partitions.hobt_id
INNER JOIN sys.objects
ON objects.object_id = partitions.object_id
INNER JOIN sys.columns
ON columns.column_id = column_store_dictionaries.column_id
AND columns.object_id = objects.object_id
WHERE objects.name = 'Sale_CCI_PARTITIONED'
AND columns.name = 'Invoice Date Key'
AND column_store_dictionaries.dictionary_id = 0;
Listing 15-3

Query to Return Information About a Columnstore Dictionary

The results provide additional detail about the dictionary used for this column, as shown in Figure 15-2.
Figure 15-2

Columnstore dictionary metadata

This shows that partitions 1–3 share a single dictionary that is a compact 1308 bytes. The scenario presented here is essentially the optimal columnstore index and query: the index is well ordered and partitioned, and the query aligns perfectly with that order to minimize the amount of metadata read as part of its execution.

While columnstore metadata may not seem large when compared to the data itself, it is important to remember that a columnstore index with one billion rows will have approximately one thousand rowgroups. Each of those rowgroups will contain one segment per column. For this table, which contains 21 columns, metadata will consist of about 21,000 segments per billion rows. Needing to read metadata on rowgroups, segments, dictionaries, and more can add up to a nontrivial amount of work for SQL Server. As with the data itself, metadata needs to be loaded into memory before it can be read. Therefore, maintaining an ordered table and partitioning (if needed) can ensure that excessive metadata is not read when executing analytic queries.
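To get a feel for the metadata volume of a specific table, the rowgroups and segments can simply be counted. A sketch using the demo table:
SELECT
       COUNT(*) AS rowgroup_count
FROM sys.column_store_row_groups
WHERE object_id = OBJECT_ID('Fact.Sale_CCI_PARTITIONED');

SELECT
       COUNT(*) AS segment_count
FROM sys.column_store_segments
INNER JOIN sys.partitions
ON column_store_segments.hobt_id = partitions.hobt_id
WHERE partitions.object_id = OBJECT_ID('Fact.Sale_CCI_PARTITIONED');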

Returning to the analytic query in Listing 15-1, it can be expected to execute quickly, with minimal metadata reads as well as minimal data reads. The STATISTICS IO output for the query is shown in Figure 15-3.
Figure 15-3

Statistics IO output for a sample analytic query

This example was an ideal scenario where segment elimination, rowgroup elimination, and partitioning all combined to provide exceptionally low reads and a vast majority of segments skipped. While the real world will not always provide such optimal examples of columnstore performance, this illustrates the upper bound against which to compare performance.
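The output in Figure 15-3 can be reproduced by enabling STATISTICS IO before executing the query from Listing 15-1. For columnstore scans, the output includes segment reads and segments skipped, which quantify rowgroup elimination directly:
SET STATISTICS IO ON;

SELECT
       SUM(Quantity) AS Total_Quantity,
       SUM([Total Excluding Tax]) AS Total_Excluding_Tax
FROM Fact.Sale_CCI_PARTITIONED
WHERE [Invoice Date Key] = '7/17/2015';

SET STATISTICS IO OFF;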

Columnstore Data Reads

Once columnstore metadata is read, the data itself is then read. It is critical to understand that the performance of queries against a columnstore index will ultimately be driven by the number of segments read when a query is executed. All other variables being equal, an analytic request that reads 1000 segments can be expected to perform roughly ten times slower than one that reads 100 segments. While columnstore indexes may at times be seen as magical tools sprinkled with pixie dust, this is the one area in which there is no magic: performance will be guided by IO.

Segment reads equate to page reads, which represent those IO operations. If the pages needed are already in memory, then the operations will consist of logical IO as segments are read directly from the buffer pool. If the required pages are not yet in memory (or are only partially in memory), then physical reads will be required to copy pages from storage into the buffer pool.

Segment elimination and rowgroup elimination are the tools that limit segment reads and help ensure that only the amount of data needed to satisfy a query is read. The quality of columnstore compression applied to data helps squeeze data into as few pages as possible, thereby reducing IO.

At their most basic level, columnstore indexes are bound by the same speed limitations that exist for rowstore tables. Reading pages from storage into the buffer pool takes time. Decompressing and reading data from the buffer pool takes additional time. Therefore, the fewer pages that need to be read, the faster columnstore index analytics can perform.

Other performance-boosting tools used by columnstore indexes (such as batch mode operation or filtered indexes) further improve query speeds by improving throughput or reducing IO. The focus throughout this stage of reading columnstore data is to control IO via whatever means possible. Figure 15-4 illustrates how data and metadata move from storage through to their use in query results.
Figure 15-4

Movement of columnstore data and metadata from storage to memory to usage by a query

While the entire columnstore index resides on a storage system, typically only a fraction of it will be maintained in the buffer pool. Segments remain compressed throughout this entire IO process until they are needed by a query, at which time they are decompressed, and data is returned to SQL Server.
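How much of a table currently resides in the buffer pool can be estimated from sys.dm_os_buffer_descriptors. The following is a sketch (run in the context of the database being measured); it counts only LOB allocation units, which is where compressed columnstore segments are stored:
SELECT
       objects.name AS table_name,
       COUNT(*) * 8 / 1024 AS buffer_pool_mb
FROM sys.dm_os_buffer_descriptors
INNER JOIN sys.allocation_units
ON allocation_units.allocation_unit_id = dm_os_buffer_descriptors.allocation_unit_id
AND allocation_units.type = 2 -- LOB_DATA
INNER JOIN sys.partitions
ON partitions.partition_id = allocation_units.container_id
INNER JOIN sys.objects
ON objects.object_id = partitions.object_id
WHERE dm_os_buffer_descriptors.database_id = DB_ID()
AND objects.name = 'Sale_CCI_PARTITIONED'
GROUP BY objects.name;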

Memory Sizing

All of the performance discussions thus far have focused on reducing IO. Despite all efforts to minimize unnecessary IO, larger columnstore indexes will ultimately require reading significant numbers of segments from storage into memory.

The next step in ensuring highly performant columnstore indexes is to accurately estimate the amount of memory that should be allocated to SQL Server to support analytic queries.

With too little memory, data will constantly be evicted from the buffer pool, only to be read back in when needed again soon after. The cost of reading data from storage (even fast storage) far exceeds that of the other steps in executing analytic queries.

Too much memory would represent unused computing resources, which translate into wasted money.

Allocating memory to SQL Server to support columnstore indexes requires a handful of steps:
  1. Measure the size of the columnstore index (a measurement sketch follows this list).

  2. Approximate how much of the index is comprised of hot data that should always reside in memory.

  3. Approximate how much of the index represents warm data. This data will be in memory for some analytic needs, but not all.

  4. Approximate data growth over time, which will adjust the numbers calculated earlier.

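For step 1, rowgroup physical stats provide a convenient measurement. The sketch below (assuming the partitioned demo table used earlier) sums compressed rowgroup sizes per partition, which maps naturally onto hot/warm/cold boundaries when partitions align with dates:
SELECT
       partition_number,
       SUM(size_in_bytes) / 1024 / 1024 AS size_mb,
       SUM(total_rows) AS total_rows
FROM sys.dm_db_column_store_row_group_physical_stats
WHERE object_id = OBJECT_ID('Fact.Sale_CCI_PARTITIONED')
GROUP BY partition_number
ORDER BY partition_number;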
Any data that is not hot or warm is expected to be cold and rarely used. It may still be valuable but is not accessed often enough to architect an entire system around it. Resources can be allocated at runtime by Azure, AWS, or other hosting services, if that level of flexibility is desired, but that is entirely optional and at the discretion of the organization, weighed against the importance of speedy access to cold data.

Consider a hypothetical columnstore index as shown in Listing 15-4.
2017: 10,000,000 rows (1GB) - cold/rarely used
2018: 50,000,000 rows (3GB) - cold/rarely used
2019: 100,000,000 rows (5GB) - cold/rarely used
2020: 250,000,000 rows (10GB) - warm/sometimes used
2021: 500,000,000 rows (18GB) - hot/often used
Listing 15-4

Row Counts and Size for Data in a Columnstore Index

This index shows a common trend in analytics by which the most recent year is heavily used for current reporting. The previous year is also accessed frequently for year-over-year analytics. All previous data is important and must be retained, but is not read very often when compared to newer data.

Based on the numbers provided, if the goal was to ensure that the most frequently used data was accounted for in memory allocation, then this columnstore index would warrant 18GB of memory to ensure that all 18GB of hot data can reside in memory, if needed.

If the 10GB of warm data were also important, then allocating some amount of memory up to 10GB would help ensure that data is not cycled in and out of memory too often and that it does not replace more important hot data in memory when requested. If resources were plentiful, then allocating the full 10GB would accomplish the task in its entirety. Otherwise, the organization responsible for this data would need to determine how important the warm data is and allocate some amount of additional memory up to 10GB to cover it. For example, if it was estimated that the latter half of the 2020 data would be requested on a somewhat regular basis, but the earlier half would be far less used, then 5GB would be a good estimate of memory to allocate to this block of warm data.

The remaining 9GB of data from 2017 to 2019 would not receive any memory allocation, as it is rarely read and would impact performance too infrequently to matter to most organizations. If infrequent analytics or reports are disruptive enough, an argument could be made for adding further memory to reduce that disruption, but this should be considered on a case-by-case basis.

This example also illustrates a fairly straightforward growth of about two times per year. If this rate of growth is expected to continue and the years with warm and hot data are expected to roll forward each year, then in a year, the current hot data (18GB) would become warm data, the current warm data (10GB) would become cold data, and next year’s data (~35GB) would become the new hot data. Therefore, year-over-year growth would be represented by
  • (35GB + 18GB) - (18GB + 10GB) = 25GB.

Note that the size of columnstore data used in memory will often not be the same as the total data size. A columnstore index with 20 columns may not see all columns used equally. It is quite possible that some columns may rarely get used. If this is the case, then memory estimations can adjust the amount needed downward to account for columns that are not needed in memory often. While these infrequently queried columns cannot be classified as cold data, they can be discounted from memory totals to ensure that memory estimates are not overinflated by segments that are rarely needed.

For example, if the 18GB of data for 2021 included 5GB of columns that are rarely queried, then the memory estimate for that data could be reduced from 18GB to as little as 13GB, if there is confidence in how infrequently those columns are used.
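Per-column segment sizes give this adjustment a concrete basis. A sketch in the style of Listing 15-3 sums on-disk segment size by column:
SELECT
       columns.name,
       SUM(column_store_segments.on_disk_size) / 1024 / 1024 AS size_mb
FROM sys.column_store_segments
INNER JOIN sys.partitions
ON column_store_segments.hobt_id = partitions.hobt_id
INNER JOIN sys.objects
ON objects.object_id = partitions.object_id
INNER JOIN sys.columns
ON columns.column_id = column_store_segments.column_id
AND columns.object_id = objects.object_id
WHERE objects.name = 'Sale_CCI_PARTITIONED'
GROUP BY columns.name
ORDER BY size_mb DESC;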

There is a great deal of estimation that goes into this work. At the absolute high end, a memory budget could encompass all space consumed by a columnstore index. At the lowest level, no memory could be allocated for these analytics. A realistic estimate will be somewhere in the middle and will be influenced by the performance required of this data and the availability/cost of computing resources.

Dictionary Size and Dictionary Pressure

Dictionary encoding is one of a multitude of ways in which SQL Server can improve columnstore compression effectiveness. Chapter 5 discussed encoding methods in detail and how dictionary encoding can be used by columnstore indexes to greatly reduce index size for columns that have repetitive values.

While dictionaries can be shared across many segments, a caveat to dictionary encoding is that each dictionary is capped at 16 megabytes. As a result, if a column has wide values that do not repeat often, a dictionary may fill up before a rowgroup can be completely written. The scenario in which a dictionary fills up is known as dictionary pressure. While a dictionary is limited to 16MB, dictionary pressure will often occur at sizes well below that cap, as the limit applies to the space the dictionary would consume if more entries were added, rather than the space currently used.

When a columnstore index experiences dictionary pressure, the rowgroup that is currently being written to will be closed and a new rowgroup will be created along with a new dictionary. This has two significant implications:
  • Dictionaries Consume Space: The more dictionaries that are needed to support a columnstore index, and the larger they are, the more storage space and memory are required to read and write to them.

  • Undersized Rowgroups Create Fragmentation: A rowgroup may contain up to 2^20 (1,048,576) rows. When rowgroups are undersized by dictionary pressure (trimmed well below 2^20 rows), more rowgroups, segments, and dictionaries need to be maintained by SQL Server. This in turn wastes resources and slows down analytics that need these tables.

Fortunately, dictionary pressure is easy to diagnose. Once identified, a solution can be crafted to resolve it. Consider the rowgroup metadata shown in Figure 15-5.
Figure 15-5

Example of dictionary pressure

In this example, row_group_id 25 shows a trim reason of DICTIONARY_SIZE and a row count of 419,432 rows. SQL Server was loading rows into this rowgroup when the dictionary reached a size where no more entries could be added without exceeding the 16MB cap. If this were the only rowgroup like this, then dictionary pressure may not be a serious issue. If many rowgroups were cut off at row counts this low, then dictionary pressure would without a doubt be inhibiting optimal performance and would be worth investigating and resolving.
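Rowgroups trimmed by dictionary pressure, like the one shown in Figure 15-5, can be listed directly from the rowgroup physical stats (a sketch against the demo table):
SELECT
       partition_number,
       row_group_id,
       total_rows,
       trim_reason_desc
FROM sys.dm_db_column_store_row_group_physical_stats
WHERE object_id = OBJECT_ID('Fact.Sale_CCI_PARTITIONED')
AND trim_reason_desc = 'DICTIONARY_SIZE'
ORDER BY partition_number, row_group_id;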

There are two primary causes of dictionary pressure:
  • Columns that are very wide

  • Columns with many distinct values

The most common scenarios involving dictionary pressure are derived from columns that are both wide and have many distinct values. There is no one-size-fits-all solution to the challenge of dictionary pressure, but the three most common solutions are also the simplest:
  • Normalize wide columns into a lookup (dimension) table.

  • Sort on a different column.

  • Partition the table.

Normalizing Wide Columns

The simplest solution, though it requires schema changes and may therefore be disruptive, is to normalize a wide column that has been identified as the source of dictionary pressure. Normalizing replaces a wide column (often string data) with a smaller numeric column. The lookup table would likely be stored using a clustered rowstore index and a supporting nonclustered rowstore index to handle queries against the text data.

When considering a columnstore index for a large analytic table, pay close attention to any wide columns, and as part of testing, run through the exercise of creating a columnstore index on the data. Once created, examine rowgroup row counts and whether dictionary pressure is occurring. If rowgroups are undersized by 10% or more, then consider normalizing the wide column if it is the culprit.

Lookup tables can contain other metadata that supports efficient searches against it, such as hashes, search criteria, or details that can be indexed and used to quickly reduce row counts before the text column needs to be interrogated.
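As a sketch of this approach, suppose a hypothetical wide ProductDescription column is the source of pressure. It can be moved into a lookup table keyed by a narrow integer, with a persisted hash column supporting efficient lookups against the wide text:
-- Hypothetical lookup table; names and sizes are illustrative only.
CREATE TABLE Dimension.Product_Description_Lookup
(      ProductDescriptionKey INT NOT NULL IDENTITY(1,1)
              CONSTRAINT PK_Product_Description_Lookup PRIMARY KEY CLUSTERED,
       ProductDescription NVARCHAR(1000) NOT NULL,
       -- A hash avoids indexing the wide string directly.
       ProductDescriptionHash AS CAST(HASHBYTES('SHA2_256', ProductDescription) AS BINARY(32)) PERSISTED,
       INDEX IX_Product_Description_Lookup_Hash NONCLUSTERED (ProductDescriptionHash));
The fact table would then store ProductDescriptionKey (an INT) in place of the wide string, which compresses into small, highly repetitive segments and relieves the dictionary of its widest values.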

Add or Change the Columnstore Sorting Column

Sometimes the way in which data is sorted within a columnstore index lends itself to poor dictionary encoding. If a columnstore index is unsorted, or is sorted on a column that tends to group dissimilar values together, then wider columns might become the source of dictionary pressure.

The column that a columnstore index is ordered on should always be the one that is most frequently used to filter queries. If there are a variety of common search criteria, then there may be other sorting options that provide better compression and performance.

This solution to dictionary pressure requires before-and-after testing of common analytic queries to ensure that one problem (dictionary pressure) is not being traded for another (poor analytic query performance).

Partitioning

One key element of partitioned columnstore indexes is that each partition contains its own distinct columnstore index. For example, a table that is broken into ten partitions will contain a separate columnstore structure in each partition. That includes separate dictionaries, as well!

When columnstore index row counts become large (hundreds of millions or billions of rows), partitioning can provide distinct performance and maintenance benefits in SQL Server. In addition to partition elimination, an added performance improvement can be seen in the separation of dictionaries across partitions. This solution is especially effective when the values for wide columns change slowly over time. In these scenarios, the data from one day to the next will have many repeated values, but across months or years will not.

When partitioning a table, be sensitive to the size of the partitions that are created. Because each represents a distinct, separate columnstore index, it is important that each be large enough to benefit from the key properties of columnstore indexes. Therefore, ensure that each partition has at least tens of millions of rows. Similarly, partitions that are exceptionally large (billions of rows) may suffer the same problems as an unpartitioned table with regard to dictionary pressure.

Testing is important in confirming the benefits of partitioning in a columnstore index. Generally speaking, if a table contains hundreds of millions of rows (or more), being able to subdivide it into smaller chunks using the columnstore ordering column as the partitioning column will allow performance benefits to be realized. If dictionary pressure exists, then this can become an effective way to manage it by splitting up dictionaries across partitions.

Chapter 11 dives into partitioning in detail, including demonstrations of how to implement it and how it can provide a host of benefits in addition to resolving dictionary pressure challenges.

In general, dictionaries are very efficient at storing their lookup data and columnstore indexes benefit greatly by using them. Dictionary pressure should be seen as an out-of-the-ordinary situation and only investigated if it is causing blatant performance problems.

Columnstore Indexes on Temporary Tables

For scenarios when analytic data needs to be crunched on the fly using temporary data, there can be value in adding columnstore indexes to temporary objects. This is a unique use case that will rarely be needed, but can provide exceptional value to ETL processes or any analytics in which an intermediary or temporary data set requires many expensive queries to be executed against it.

Adding a columnstore index to a temporary table works exactly as it does against a permanent table. Listing 15-5 shows an example of how to accomplish this task.
CREATE TABLE #Sales_Temp_Data
(      [Sale Key] BIGINT NOT NULL,
       [Customer Key] INT NOT NULL,
       [Invoice Date Key] DATE NOT NULL,
       Quantity INT NOT NULL,
       [Total Excluding Tax] DECIMAL(18,2) NOT NULL);
INSERT INTO #Sales_Temp_Data
       ([Sale Key], [Customer Key], [Invoice Date Key], Quantity, [Total Excluding Tax])
SELECT
       [Sale Key], [Customer Key], [Invoice Date Key], Quantity, [Total Excluding Tax]
FROM Fact.Sale_CCI_PARTITIONED;
CREATE CLUSTERED COLUMNSTORE INDEX CCI_Sales_Temp_Data ON #Sales_Temp_Data;
Listing 15-5

Query That Creates a Temporary Table, Populates It, and Adds a Columnstore Index

Once added, the newly indexed temporary table may be queried as effectively as a columnstore indexed permanent table, as seen in Listing 15-6.
SELECT
       COUNT(*)
FROM #Sales_Temp_Data;
SELECT
       SUM(Quantity) * SUM([Total Excluding Tax])
FROM #Sales_Temp_Data
WHERE [Invoice Date Key] >= '1/1/2015'
AND [Invoice Date Key] < '1/1/2016';
Listing 15-6

Example Queries Against a Temporary Table with a Columnstore Index

These queries execute relatively quickly, returning the requested results. Figure 15-6 shows the STATISTICS IO output from that pair of queries.
Figure 15-6

IO for queries against a temporary table with a columnstore index

Note that the filtered query read almost half of the segments in the table, despite needing only about a quarter of its rows. When this temporary table was populated, no data order was enforced. As a result, data was inserted in whatever order SQL Server happened to read it from the source table, which was not optimal. Depending on how extensively the temporary table will be used for analytics, taking the extra step of ordering its data before building the columnstore index may or may not be worth the time and resources required. That value lies in whether the analytic processes complete quickly enough and whether they consume too many computing resources along the way.
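If ordering is deemed worthwhile, one common pattern (sketched below) is to build a clustered rowstore index on the filter column first and then convert it to a clustered columnstore index with DROP_EXISTING; MAXDOP = 1 helps preserve the sorted order during conversion:
-- Order the data via a clustered rowstore index...
CREATE CLUSTERED INDEX CCI_Sales_Temp_Data ON #Sales_Temp_Data ([Invoice Date Key]);
-- ...then convert it to a clustered columnstore index, retaining that order.
CREATE CLUSTERED COLUMNSTORE INDEX CCI_Sales_Temp_Data ON #Sales_Temp_Data
WITH (DROP_EXISTING = ON, MAXDOP = 1);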

Nonclustered columnstore indexes may also be created on temporary tables if there is a need to slice data using both transactional and analytic methods. One benefit of doing so is that the clustered rowstore index can enforce data order on the table, even as further writes are made to it. Another benefit is that the column list for the columnstore index can be customized, allowing a small subset of columns to serve analytics while the remainder supports other operations. Listing 15-7 shows the syntax for creating nonclustered columnstore indexes on a temporary table.
CREATE TABLE #Sales_Temp_Data
(      [Sale Key] BIGINT NOT NULL,
       [Customer Key] INT NOT NULL,
       [Invoice Date Key] DATE NOT NULL,
       Quantity INT NOT NULL,
       [Total Excluding Tax] DECIMAL(18,2) NOT NULL);
INSERT INTO #Sales_Temp_Data
       ([Sale Key], [Customer Key], [Invoice Date Key], Quantity, [Total Excluding Tax])
SELECT
       [Sale Key], [Customer Key], [Invoice Date Key], Quantity, [Total Excluding Tax]
FROM Fact.Sale_CCI_PARTITIONED;
CREATE CLUSTERED INDEX CI_Sales_Temp_Data ON #Sales_Temp_Data ([Sale Key]);
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_Sales_Temp_Data ON #Sales_Temp_Data ([Invoice Date Key], Quantity, [Total Excluding Tax]);
Listing 15-7

Illustrating the Use of Nonclustered Columnstore Indexes on Temporary Tables

The syntax for nonclustered columnstore indexes is identical for temporary tables and permanent tables, and once created, they can be used in the same fashion.

Note that columnstore indexes are not allowed on table variables. The T-SQL in Listing 15-8 shows an attempt to do so.
DECLARE @Sales_Temp_Data TABLE
(      [Sale Key] BIGINT NOT NULL,
       [Customer Key] INT NOT NULL,
       [Invoice Date Key] DATE NOT NULL,
       Quantity INT NOT NULL,
       [Total Excluding Tax] DECIMAL(18,2) NOT NULL);
INSERT INTO @Sales_Temp_Data
       ([Sale Key], [Customer Key], [Invoice Date Key], Quantity, [Total Excluding Tax])
SELECT
       [Sale Key], [Customer Key], [Invoice Date Key], Quantity, [Total Excluding Tax]
FROM Fact.Sale_CCI_PARTITIONED;
CREATE CLUSTERED COLUMNSTORE INDEX CCI_Sales_Temp_Data ON @Sales_Temp_Data;
Listing 15-8

A Script That Attempts to Add a Columnstore Index to a Table Variable

This query won’t even compile and will immediately generate the error shown in Figure 15-7 when parsed.
Figure 15-7

Parsing error when trying to create a columnstore index on a table variable

SQL Server provides the courtesy of raising an error during parsing, before the table variable is created and populated.

Using columnstore indexes on temporary tables will not be a frequently applied use case, but specific processes that rely heavily on crunching temporary data may be able to use them effectively. Performance testing is the ultimate arbiter of whether this is a helpful solution or one that does not provide enough value to be worthwhile.

Memory-Optimized Columnstore Indexes

A common use case for columnstore indexes is real-time operational analytics. Nonclustered columnstore indexes provide the ability to execute analytic queries against OLTP/rowstore tables efficiently. By being able to choose the columns to index, the columnstore index can be used to target problematic OLAP queries in scenarios where analytics cannot easily be off-loaded to other systems.

Memory-optimized tables are the ultimate in OLTP solutions. They provide exceptional speed for heavily transactional workloads while eliminating the latching and locking that typically accompany contentious workloads. They can be targeted with columnstore indexes, too, allowing analytics to be run against highly transactional tables in memory. This is a niche use case that can provide exceptional speed for the select few scenarios discussed in this section.

Demonstrating Memory-Optimized Columnstore Indexes

Before diving into columnstore index queries, it is important to configure a database with the necessary filegroup, file, and snapshot isolation setting needed to allow memory-optimized tables to be created. Listing 15-9 provides the necessary configuration changes to allow memory-optimized tables in SQL Server.
ALTER DATABASE WideWorldImporters ADD FILEGROUP WideWorldImporters_moltp CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE WideWorldImporters ADD FILE (name='WideWorldImporters_moltp', filename='C:\SQLData\WideWorldImporters_moltp') TO FILEGROUP WideWorldImporters_moltp;
ALTER DATABASE WideWorldImporters SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT = ON;
Listing 15-9

Enabling Memory-Optimized Tables in SQL Server

Only one memory-optimized filegroup is allowed per database. If a database already has one, then trying to add another will result in an error like the one shown in Figure 15-8.
Figure 15-8

Error encountered when adding a memory-optimized filegroup to a database that already has one

With these basic configuration steps complete, we can experiment with a memory-optimized table that contains a columnstore index. The script in Listing 15-10 shows the creation of an example table.
CREATE TABLE Sales.Orders_MOLTP
(      OrderID INT NOT NULL CONSTRAINT PK_Orders_MOLTP PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 150000),
       CustomerID INT NOT NULL,
       SalespersonPersonID INT NOT NULL,
       PickedByPersonID INT NULL,
       ContactPersonID INT NOT NULL,
       BackorderOrderID INT NULL,
       OrderDate DATE NOT NULL,
       ExpectedDeliveryDate DATE NOT NULL,
       CustomerPurchaseOrderNumber NVARCHAR(20) NULL,
       IsUndersupplyBackordered BIT NOT NULL,
       Comments NVARCHAR(MAX) NULL,
       DeliveryInstructions NVARCHAR(MAX) NULL,
       InternalComments NVARCHAR(MAX) NULL,
       PickingCompletedWhen DATETIME2(7) NULL,
       LastEditedBy INT NOT NULL,
       LastEditedWhen DATETIME2(7) NOT NULL,
       INDEX CCI_Orders_MOLTP CLUSTERED COLUMNSTORE)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
Listing 15-10

Initial (Imperfect) Creation of a Memory-Optimized Table with a Columnstore Index

This definition appears valid, but when executed, an error is returned, as shown in Figure 15-9.
Figure 15-9

Error due to LOB columns included in memory-optimized columnstore index definition

Columns of MAX length are not allowed in a memory-optimized columnstore index. To allow its creation, these columns need to be normalized into another table or shrunk to a smaller size. Listing 15-11 shows a revised script where the MAX length columns have been reduced in size to accommodate this limitation.
CREATE TABLE Sales.Orders_MOLTP
(      OrderID INT NOT NULL CONSTRAINT PK_Orders_MOLTP PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 150000),
       CustomerID INT NOT NULL,
       SalespersonPersonID INT NOT NULL,
       PickedByPersonID INT NULL,
       ContactPersonID INT NOT NULL,
       BackorderOrderID INT NULL,
       OrderDate DATE NOT NULL,
       ExpectedDeliveryDate DATE NOT NULL,
       CustomerPurchaseOrderNumber NVARCHAR(20) NULL,
       IsUndersupplyBackordered BIT NOT NULL,
       Comments NVARCHAR(2000) NULL,
       DeliveryInstructions NVARCHAR(2000) NULL,
       InternalComments NVARCHAR(2000) NULL,
       PickingCompletedWhen DATETIME2(7) NULL,
       LastEditedBy INT NOT NULL,
       LastEditedWhen DATETIME2(7) NOT NULL,
       INDEX CCI_Orders_MOLTP CLUSTERED COLUMNSTORE)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
Listing 15-11

Second (Imperfect) Creation of a Memory-Optimized Table with a Columnstore Index

With the NVARCHAR(MAX) columns replaced with NVARCHAR(2000), the script is executed, resulting in a new error, as seen in Figure 15-10.
Figure 15-10

Error due to a memory-optimized columnstore index containing more than 8060 bytes per row

While this error message appears to provide a glimmer of hope that this table definition would work if it is executed on a version of SQL Server later than SQL Server 2014, the result is still failure. After some review, a table definition is crafted that accommodates these limitations and allows a table to (finally) be created, as seen in Listing 15-12.
CREATE TABLE Sales.Orders_MOLTP
(      OrderID INT NOT NULL CONSTRAINT PK_Orders_MOLTP PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 150000),
       CustomerID INT NOT NULL,
       SalespersonPersonID INT NOT NULL,
       PickedByPersonID INT NULL,
       ContactPersonID INT NOT NULL,
       BackorderOrderID INT NULL,
       OrderDate DATE NOT NULL,
       ExpectedDeliveryDate DATE NOT NULL,
       CustomerPurchaseOrderNumber NVARCHAR(20) NULL,
       IsUndersupplyBackordered BIT NOT NULL,
       Comments NVARCHAR(500) NULL,
       DeliveryInstructions NVARCHAR(250) NULL,
       InternalComments NVARCHAR(500) NULL,
       PickingCompletedWhen DATETIME2(7) NULL,
       LastEditedBy INT NOT NULL,
       LastEditedWhen DATETIME2(7) NOT NULL,
       INDEX CCI_Orders_MOLTP CLUSTERED COLUMNSTORE)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
Listing 15-12

Working Table Creation for a Memory-Optimized Columnstore Index

Success! This table contains a handful of features that are not present on disk-based tables:
  • Nonclustered hash index is used as the primary key.

  • MEMORY_OPTIMIZED = ON.

  • DURABILITY = SCHEMA_AND_DATA.

The durability setting determines if this table’s data can be recovered when the server is restarted or if only the schema is persisted. SCHEMA_ONLY is significantly faster as there is no need to persist data to disk storage. In addition, startup is faster as no data needs to be loaded into the table for it to be available. SCHEMA_ONLY is typically used for tables that contain temporary data such as session, ETL, or transient information that is not needed again once processed. SCHEMA_ONLY does not support columnstore indexes, though, and therefore is out of scope for any further discussion here.

Note that the columnstore index is labeled as a clustered columnstore index, but it is not truly a clustered index. The primary storage mechanism for a memory-optimized table is always the set of rows in memory. The columnstore index is an additional structure on top of the memory-optimized object that is also persisted to disk storage. These caveats mean that SQL Server incurs quite a bit of overhead to maintain a columnstore index alongside a memory-optimized table.

Also worthy of highlighting is the fact that this memory-optimized clustered columnstore index contains numerous wide columns that are not ideal for dictionary encoding. CustomerPurchaseOrderNumber, Comments, DeliveryInstructions, and InternalComments are string columns that are unlikely to be repetitive. Therefore, they are not likely to compress well and may cause dictionary pressure, resulting in undersized rowgroups and further reduction in columnstore efficiency. This is not a deal-breaker, but is essential to understand when considering implementing a memory-optimized columnstore index. Tables that are built primarily for OLTP workloads will often contain text data that is essential for transactional processing, but that may be suboptimal for analytics. One possibility that could resolve a situation like this would be to split the table into two, with the string columns and supporting data in one table and the numbers and metrics in the other. This would allow one to be given a memory-optimized columnstore index and the other to remain with its memory-optimized rowstore structure.

Nonclustered columnstore indexes are not supported on memory-optimized tables, and therefore the option to choose which columns to include in the index is not available.

With a memory-optimized columnstore index created, the script in Listing 15-13 can be executed to populate it with the same data as in the disk-based table Sales.Orders.
INSERT INTO Sales.Orders_MOLTP
(      OrderID, CustomerID, SalespersonPersonID, PickedByPersonID, ContactPersonID, BackorderOrderID, OrderDate, ExpectedDeliveryDate,
    CustomerPurchaseOrderNumber, IsUndersupplyBackordered, Comments, DeliveryInstructions, InternalComments, PickingCompletedWhen,
    LastEditedBy, LastEditedWhen)
SELECT
       OrderID, CustomerID, SalespersonPersonID, PickedByPersonID, ContactPersonID, BackorderOrderID, OrderDate, ExpectedDeliveryDate,
    CustomerPurchaseOrderNumber, IsUndersupplyBackordered, Comments, DeliveryInstructions, InternalComments, PickingCompletedWhen,
    LastEditedBy, LastEditedWhen
FROM Sales.Orders;
Listing 15-13

Script to Populate a Memory-Optimized Columnstore Index with Data

With table creation and population complete, the sizes of the original Sales.Orders table can be compared to its memory-optimized counterpart. Note the index contents of each table:

Sales.Orders:
  • A one-column primary key index

  • Four single-column indexes

  • One four-column index

Sales.Orders_MOLTP:
  • A one-column primary key index

  • A columnstore index (on all columns)

Figure 15-11

Size comparison of a disk-based table vs. a memory-optimized table with a clustered columnstore index

Note in Figure 15-11 the significant size difference between the disk-based table (25MB) and the memory-optimized table (117MB). That is a hefty space penalty and underscores the fact that mapping a memory-optimized structure to a columnstore structure is even more complex an operation than mapping a nonclustered rowstore index to a clustered columnstore index. Before continuing, one additional memory-optimized table will be created, as shown in Listing 15-14.
CREATE TABLE Sales.Orders_MOLTP_NO_CCI
(      OrderID INT NOT NULL CONSTRAINT PK_Orders_MOLTP_NO_CCI PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 150000),
       CustomerID INT NOT NULL,
       SalespersonPersonID INT NOT NULL,
       PickedByPersonID INT NULL,
       ContactPersonID INT NOT NULL,
       BackorderOrderID INT NULL,
       OrderDate DATE NOT NULL INDEX IX_Orders_MOLTP_NO_CCI_OrderDate NONCLUSTERED,
       ExpectedDeliveryDate DATE NOT NULL,
       CustomerPurchaseOrderNumber NVARCHAR(20) NULL,
       IsUndersupplyBackordered BIT NOT NULL,
       Comments NVARCHAR(500) NULL,
       DeliveryInstructions NVARCHAR(250) NULL,
       InternalComments NVARCHAR(500) NULL,
       PickingCompletedWhen DATETIME2(7) NULL,
       LastEditedBy INT NOT NULL,
       LastEditedWhen DATETIME2(7) NOT NULL)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO
Listing 15-14

Table Creation for a Memory-Optimized Table Without a Columnstore Index

This table is identical to the previously created memory-optimized table, except that the columnstore index has been swapped out for a nonclustered index on OrderDate. The new table’s total size is 20MB, which is about one-sixth of the size of the columnstore table, representing a significant space savings.
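The sizes shown in Figure 15-11 can be approximated for the memory-optimized tables with a query such as the following sketch (sp_spaceused covers the disk-based table):
SELECT
       OBJECT_NAME(object_id) AS table_name,
       memory_used_by_table_kb / 1024 AS table_mb,
       memory_used_by_indexes_kb / 1024 AS index_mb
FROM sys.dm_db_xtp_table_memory_stats
WHERE object_id IN (OBJECT_ID('Sales.Orders_MOLTP'), OBJECT_ID('Sales.Orders_MOLTP_NO_CCI'));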

Consider a test analytic query that is executed against all three test tables, as shown in Listing 15-15.
SELECT
       COUNT(*) AS OrderCount,
       COUNT(DISTINCT(CustomerID)) AS DistinctCustomerCount
FROM Sales.Orders
WHERE OrderDate >= '1/1/2015'
AND OrderDate < '4/1/2015';
SELECT
       COUNT(*) AS OrderCount,
       COUNT(DISTINCT(CustomerID)) AS DistinctCustomerCount
FROM Sales.Orders_MOLTP_NO_CCI
WHERE OrderDate >= '1/1/2015'
AND OrderDate < '4/1/2015';
SELECT
       COUNT(*) AS OrderCount,
       COUNT(DISTINCT(CustomerID)) AS DistinctCustomerCount
FROM Sales.Orders_MOLTP
WHERE OrderDate >= '1/1/2015'
AND OrderDate < '4/1/2015';
Listing 15-15

Test Analytic Query Against Each Orders Table

Results are returned from each query. The execution plans can be seen in Figure 15-12.
Figure 15-12

Execution plans for a test analytic query against multiple tables

The disk-based execution plan suggests a covering index to avoid a clustered index scan that incurs a total of 1383 reads. The memory-optimized table without a columnstore index uses an index seek to return results, whereas the memory-optimized table with a columnstore index uses a columnstore index scan to return the same results.

The columnstore index scan is by far the least expensive and fastest option of the three example tables. That being said, the cost of maintaining a memory-optimized columnstore index is not trivial. There are a variety of limitations imposed on this structure, beyond what has already been introduced:
  • ALTER TABLE cannot be used on a memory-optimized columnstore index to add columns, create indexes, change data types, or many of the other common DDL changes made after a table is initially created.

  • The columnstore index is replicated to disk as a matter of necessity, to help speed up database recovery after a server restart.

  • Just as with disk-based tables, mixing clustered columnstore and nonclustered rowstore indexes removes SQL Server’s ability to automatically apply Vertipaq optimization.

  • Computed columns are not allowed on memory-optimized columnstore indexes.

  • Because memory-optimized columnstore indexes are only allowed as clustered indexes, filters may not be applied to them.

  • Index maintenance is extremely limited. Rebuilds may only be used to adjust hash index bucket counts (if needed), and reorganize operations are not allowed. sys.sp_memory_optimized_cs_migration can be used to process the contents of the delta store, if needed (a usage sketch follows this list).

  • Columnstore archive compression is not allowed on memory-optimized columnstore indexes.
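For the delta store migration procedure mentioned above, usage is a single call that takes the table's object ID (a sketch; test behavior before relying on it):
DECLARE @object_id INT = OBJECT_ID('Sales.Orders_MOLTP');
EXEC sys.sp_memory_optimized_cs_migration @object_id = @object_id;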

The list of caveats related to memory-optimized columnstore indexes is significant and makes it clear that this is not a feature that should be used without careful consideration. The ideal scenario for adding a columnstore index to a memory-optimized table is as follows:
  • OLTP table with high concurrency.

  • OLTP table that is identified as an exceptional candidate for migration to a memory-optimized table.

  • Table is targeted with frequent analytic queries that cause heavy scans and unacceptable resource consumption.

  • OLTP and OLAP operations cannot be split into separate data sources.

Even if a table meets these criteria, be sure to test thoroughly and be able to quantify that the performance benefits of the memory-optimized columnstore index outweigh the limitations and drawbacks. This is a unique architectural feature that will only be applicable to a small fraction of high-volume, highly concurrent OLTP tables. More often than not, other alternatives will be more feasible, such as
  • Disk-based table with a clustered rowstore index and a nonclustered columnstore index.

  • Memory-optimized table with additional covering nonclustered indexes.

  • Separate OLTP/OLAP data processing/storage systems.

Optimization Strategies

Generally speaking, optimization is a process that should begin with solutions that address the most common use cases. Tweaking and implementing additional layers of complexity should only occur when one or more of the following are true:
  • Other solutions have failed to provide sufficient performance gains.

  • The situation can be classified as an exception to the rules or an edge case.

  • Testing and QA of new strategies demonstrate worthwhile gains (in performance, resource savings, or both).

There is no one-size-fits-all solution to any set of architectural challenges, and therefore it is important to use caution when applying techniques introduced in this chapter (or anywhere, for that matter). Keep it simple, and only when simple fails should less simple solutions be considered.
