Testing the nonclustered columnstore index

The following code creates an NCCI on the fact table, this time without a filter, so all data is included in the NCCI:

CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_FactTest 
ON dbo.FactTest 
(SaleKey, CustomerKey,  
 Customer, CityKey, City, 
 DateKey, StockItemKey, 
 Product, Quantity, 
 TotalAmount, Profit); 
GO 
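For contrast, a filtered NCCI would include only the rows that satisfy its WHERE predicate, so the index would be smaller but could not cover queries that touch the excluded rows. The following is just an illustrative sketch; the index name and the DateKey cutoff are my own hypothetical choices, not values from this chapter:

```sql
-- Illustrative only: a filtered NCCI covering part of the data.
-- The index name and the DateKey predicate are hypothetical.
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_FactTest_Filtered
ON dbo.FactTest
(SaleKey, CustomerKey,
 Customer, CityKey, City,
 DateKey, StockItemKey,
 Product, Quantity,
 TotalAmount, Profit)
WHERE DateKey >= '20160101';
GO
```

Filtered nonclustered columnstore indexes are supported from SQL Server 2016 on; the unfiltered index created above avoids the coverage limitation by including all rows.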

So how much space is used by the test fact table now? Let's check it again with the following code:

EXEC sys.sp_spaceused N'dbo.FactTest', @updateusage = N'TRUE'; 
GO 

The result is as follows:

Name          rows     reserved   data       index_size  unused 
------------  -------  ---------  ---------  ----------  -------
dbo.FactTest  2279810  529680 KB  498528 KB  29432 KB    1720 KB

Note the numbers. The index size is about 17 times smaller than the data size! And remember, the reported data size also includes the non-leaf levels of the clustered index, so the actual compression ratio is even better than 17:1. This is impressive. So what does this mean for queries? Before measuring the improvements, I want to show how you can make SQL Server ignore the NCCI you just created by using a specific hint in the OPTION clause of the SELECT statement. Here is a simple query that ignores the NCCI:

SET STATISTICS IO ON; 
SELECT f.StockItemKey, 
 SUM(f.TotalAmount) AS Sales 
FROM dbo.FactTest AS f 
WHERE f.StockItemKey < 30 
GROUP BY f.StockItemKey 
ORDER BY f.StockItemKey 
OPTION (ignore_nonclustered_columnstore_index); 

Because the NCCI was ignored, the query still performed a full clustered index scan with 63,301 logical reads. However, you can see from the execution plan that this time row mode operators were used, as shown in the following screenshot:

Row mode processing operators

This differs from the execution plan SQL Server used for the same query when the NCCI was empty, when SQL Server used batch mode operators. The ignore hint really means that SQL Server completely ignores the NCCI. You can verify that this also holds for the other two queries, the complex one and the point one, by checking their execution plans when the NCCI is ignored:

-- Complex query 
SELECT f.SaleKey, 
  f.CustomerKey, f.Customer, cu.[Buying Group], 
  f.CityKey, f.City, ci.Country, 
  f.DateKey, d.[Calendar Year], 
  f.StockItemKey, f.Product,  
  f.Quantity, f.TotalAmount, f.Profit 
FROM dbo.FactTest AS f 
  INNER JOIN Dimension.Customer AS cu 
    ON f.CustomerKey = cu.[Customer Key] 
  INNER JOIN Dimension.City AS ci 
    ON f.CityKey = ci.[City Key] 
  INNER JOIN Dimension.[Stock Item] AS s 
    ON f.StockItemKey = s.[Stock Item Key] 
  INNER JOIN Dimension.Date AS d 
    ON f.DateKey = d.Date 
OPTION (ignore_nonclustered_columnstore_index); 
-- Point query 
SELECT CustomerKey, Profit 
FROM dbo.FactTest 
WHERE CustomerKey = 378 
OPTION (ignore_nonclustered_columnstore_index); 
SET STATISTICS IO OFF; 

For both queries, SQL Server did a full scan of the fact table, with around 63,000 logical reads each. Now let's finally see how the queries can benefit from the NCCI. The first query is the simple query again:

SET STATISTICS IO ON; 
SELECT f.StockItemKey, 
 SUM(f.TotalAmount) AS Sales 
FROM dbo.FactTest AS f 
WHERE f.StockItemKey < 30 
GROUP BY f.StockItemKey 
ORDER BY f.StockItemKey; 

As you can see from the following screenshot, a columnstore index scan was used, and only four segments were read, with 2,001 LOB reads. Remember that columnstore indexes are stored in blobs in SQL Server. Compare this to the number of logical reads when the NCCI was ignored. You can also verify for yourself that batch mode operators were used:

Batch mode processing operators

You can check how many segments in total are occupied by the NCCI with the following query using the sys.column_store_segments catalog view:

SELECT ROW_NUMBER() 
       OVER (ORDER BY s.column_id, s.segment_id) AS rn, 
 COL_NAME(p.object_id, s.column_id) AS column_name, 
 s.segment_id, s.row_count,  
 s.min_data_id, s.max_data_id, 
 s.on_disk_size AS disk_size 
FROM sys.column_store_segments AS s    
INNER JOIN sys.partitions AS p    
    ON s.hobt_id = p.hobt_id    
INNER JOIN sys.indexes AS i    
    ON p.object_id = i.object_id   
WHERE i.name = N'NCCI_FactTest' 
ORDER BY s.column_id, s.segment_id; 

Here is the abbreviated result of this query:

rn column_name segment_id row_count min_data_id max_data_id disk_size
-- ----------- ---------- --------- ----------- ----------- ---------
1  SaleKey     0          1048576   1000001     10106307    4194888
2  SaleKey     1          336457    1001951     10185925    1346560
3  SaleKey     2          441851    1106610     10227981    1768336
4  SaleKey     3          452926    1001228     10213964    1812656
5  CustomerKey 0          1048576   0           402         773392
...
44 Profit      3          452926    -64500      920000      25624
45 NULL        0          1048576   0           3539        1678312
...
48 NULL        3          452926    0           3879        725640

The total number of segments used is 48. SQL Server created four rowgroups and then one segment per column inside each rowgroup, plus one segment per rowgroup for the rowgroup dictionary. You can also see the number of rows and the disk space used per segment. In addition, the min_data_id and max_data_id columns hold the minimum and maximum value in each segment. The SQL Server query optimizer uses this information for early segment elimination.
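If you prefer a rowgroup-level overview rather than the per-segment detail, you can query the sys.dm_db_column_store_row_group_physical_stats dynamic management view (available since SQL Server 2016). The following is a minimal sketch; the column list is trimmed to the essentials:

```sql
-- Rowgroup-level overview of the NCCI (sketch; selected columns only)
SELECT i.name AS index_name,
       rg.row_group_id,
       rg.state_desc,
       rg.total_rows,
       rg.size_in_bytes
FROM sys.dm_db_column_store_row_group_physical_stats AS rg
INNER JOIN sys.indexes AS i
    ON rg.object_id = i.object_id
   AND rg.index_id = i.index_id
WHERE rg.object_id = OBJECT_ID(N'dbo.FactTest')
ORDER BY rg.row_group_id;
```

On the test fact table, you should see the four compressed rowgroups whose row counts match the segment row counts shown above.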

You can also execute the other two queries:

-- Complex query 
SELECT f.SaleKey, 
  f.CustomerKey, f.Customer, cu.[Buying Group], 
  f.CityKey, f.City, ci.Country, 
  f.DateKey, d.[Calendar Year], 
  f.StockItemKey, f.Product,  
  f.Quantity, f.TotalAmount, f.Profit 
FROM dbo.FactTest AS f 
  INNER JOIN Dimension.Customer AS cu 
    ON f.CustomerKey = cu.[Customer Key] 
  INNER JOIN Dimension.City AS ci 
    ON f.CityKey = ci.[City Key] 
  INNER JOIN Dimension.[Stock Item] AS s 
    ON f.StockItemKey = s.[Stock Item Key] 
  INNER JOIN Dimension.Date AS d 
    ON f.DateKey = d.Date; 
-- Point query 
SELECT CustomerKey, Profit 
FROM dbo.FactTest 
WHERE CustomerKey = 378; 
SET STATISTICS IO OFF; 

For the complex query, the number of LOB reads is 7,128; for the point query, it is 2,351. In both cases, a columnstore index scan was used to read the data from the fact table. The point query needed fewer LOB reads than the complex query because it refers to fewer columns, and SQL Server retrieves only the columns needed to satisfy the query. Still, the results for the point query are not overly exciting. You should be able to get much lower IO with a B-tree nonclustered index, especially a covering one.
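To illustrate that last point, here is a sketch of a covering B-tree nonclustered index for the point query; the index name is my own illustrative choice. With the predicate column as the key and Profit in the INCLUDE list, the point query could be satisfied by an index seek with just a handful of logical reads:

```sql
-- Hypothetical covering index for the point query (index name is illustrative)
CREATE NONCLUSTERED INDEX NCI_FactTest_CustomerKey
ON dbo.FactTest (CustomerKey)
INCLUDE (Profit);
GO
```

The trade-off, of course, is that such an index helps only queries on CustomerKey, whereas the NCCI speeds up analytical queries over arbitrary column combinations.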

In my case, the complex query used a serial plan. Note that, depending on the hardware resources and concurrent work, you might get a different execution plan. SQL Server sees eight logical processors in my virtual machine, so you might have expected a parallel plan. SQL Server 2016 is much more conservative about using parallel plans than previous versions, which is better for the majority of queries. If you really need a parallel execution plan, you could lower the database compatibility level to SQL Server 2014 (level 120), as the following code shows:

USE master; 
GO 
ALTER DATABASE WideWorldImportersDW SET COMPATIBILITY_LEVEL = 120; 
GO 

Now you can try to execute the complex query again:

USE WideWorldImportersDW; 
SET STATISTICS IO ON; 
SELECT f.SaleKey, 
  f.CustomerKey, f.Customer, cu.[Buying Group], 
  f.CityKey, f.City, ci.Country, 
  f.DateKey, d.[Calendar Year], 
  f.StockItemKey, f.Product,  
  f.Quantity, f.TotalAmount, f.Profit 
FROM dbo.FactTest AS f 
  INNER JOIN Dimension.Customer AS cu 
    ON f.CustomerKey = cu.[Customer Key] 
  INNER JOIN Dimension.City AS ci 
    ON f.CityKey = ci.[City Key] 
  INNER JOIN Dimension.[Stock Item] AS s 
    ON f.StockItemKey = s.[Stock Item Key] 
  INNER JOIN Dimension.Date AS d 
    ON f.DateKey = d.Date; 
SET STATISTICS IO OFF; 

This time, SQL Server used a parallel plan on my computer. Before continuing, reset the compatibility level to SQL Server 2017 (level 140):

USE master; 
GO 
ALTER DATABASE WideWorldImportersDW SET COMPATIBILITY_LEVEL = 140; 
GO 