
11. Query Optimizer Basics

Andrew Carruthers, Birmingham, UK

Performance tuning is often regarded as a “black art,” available only to those with experience gained over many years of hands-on work. This chapter offers insights into how Snowflake works “under the hood” and provides tools to better understand its behavior and identify options to improve performance.

Chapter 3 discusses micro-partitions and cluster keys, which are critical to performance for large-volume data sets. This chapter delves into the fundamentals driving performance, starting with clustering. We cannot proactively propose solutions without understanding how clustering is implemented and knowing a few techniques to cluster our tables optimally. I provide summary guidance, or “rules of thumb,” to facilitate design or later remediation.

By understanding how Snowflake processes queries to enable efficient data access, we can choose the most appropriate storage and warehouse strategy to reduce costs. Our next section focuses on query performance, where we investigate various scenarios in preparation for your real-world situations.

This chapter is neither exhaustive nor prescriptive in approach but offers a starting point in determining performance bottlenecks and ways to investigate and remediate them. Your real-world experience will vary from the scenarios presented.

Naturally, there is always much more to discover. I trust this brief chapter provides sufficient information and context and delivers motivation for your further investigations.

Clustering

We know each micro-partition holds between 50 MB and 500 MB of data organized in hybrid columnar format, optimizing compression according to the column data type. But how does Snowflake determine the order in which data is organized in micro-partitions? In the absence of an explicit cluster key, Snowflake simply uses the order in which data arrives in the source files, filling micro-partitions to the target size. In other words, data order is not changed, analyzed, or modified during load.

For many low-volume data sets, clustering by key value is not particularly relevant. For example, in those scenarios where all data for an object fit into a single micro-partition, no micro-partition pruning is possible. But for those large data sets spanning several or many micro-partitions with appropriate clustering keys declared, micro-partition pruning is possible, assuming query predicates match the cluster key.

This section focuses on cluster keys, recognizing that performance is also a function of CPU and memory. We later revisit warehouses.

What Is a Clustering Key?

A clustering key comprises a subset of attributes or expressions on a table, declared in the least selective to most selective order, with the express intent of co-locating data and designed to match the commonly used query predicates. Superficially a clustering key is “like” an index. But the similarity soon breaks down because only a single clustering key can be declared for a table. Unlike an index, all attributes are stored along with the clustered attributes. Clustering key documentation is at https://docs.snowflake.com/en/user-guide/tables-clustering-keys.html .
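For orientation, a cluster key can be declared when a table is created or added, changed, or dropped later with ALTER TABLE. The following is a minimal sketch using a hypothetical sales_history table; the column choices are illustrative only.
-- Declare a cluster key at creation time (least selective attribute first)
CREATE TABLE sales_history
( sale_date   DATE,
  region      VARCHAR,
  customer_id NUMBER,
  amount      NUMBER(12,2)
)
CLUSTER BY ( region, sale_date );

-- Add or change the cluster key on an existing table
ALTER TABLE sales_history CLUSTER BY ( sale_date );

-- Remove the cluster key entirely
ALTER TABLE sales_history DROP CLUSTERING KEY;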

Cluster Key Operations

Declaring cluster keys uses serverless compute. To see the evidence, switch roles to ACCOUNTADMIN and click the Account tab, where usage includes AUTOMATIC_CLUSTERING consumption costs, as shown in Figure 11-1.

Figure 11-1. Serverless compute costs (6 warehouses, 12.26 credits used, 5.991 GB average storage used, with a per-warehouse breakdown of credits consumed)

Despite ALTER TABLE completing without delay, work continues in the background. To determine whether reclustering is complete, check the average_depth attribute in the result set for system$clustering_information, and when the average_depth value remains constant, reclustering is complete.
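Because system$clustering_information returns its result as a JSON string, the check can also be scripted rather than read by eye. A minimal sketch, using a hypothetical table named my_table; re-run it until the value stops changing.
-- Extract average_depth from the clustering information JSON
SELECT PARSE_JSON ( system$clustering_information ( 'my_table' ) ):average_depth::FLOAT AS average_depth;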

Default Clustering

To introduce clustering, we depart from our immediate investigation and use the Snowflake-supplied database SNOWFLAKE_SAMPLE_DATA, which contains tables with high data volumes set up for precisely this kind of investigation, and look at their clustering information.
USE ROLE      sysadmin;
USE DATABASE  snowflake_sample_data;
USE WAREHOUSE COMPUTE_WH;
USE SCHEMA    snowflake_sample_data.tpch_sf1000;
In case you are curious, the lineitem table contains 5,999,989,709 rows, and because the metadata repository holds the row count, no warehouse is used, and the result is returned instantaneously.
SELECT system$clustering_information ( 'lineitem' );
The returned JSON is shown next. Note average_depth, where lower numbers indicate better clustering, and total_constant_partition_count, where higher numbers indicate better clustering. In this example, the lineitem table is well organized. Note that some result set rows were removed to reduce space.
{
  "cluster_by_keys" : "LINEAR(L_SHIPDATE)",
  "total_partition_count" : 10336,
  "total_constant_partition_count" : 8349,
  "average_overlaps" : 0.6908,
  "average_depth" : 1.4082,
  "partition_depth_histogram" : {
    "00000" : 0,
    "00001" : 8310,
    "00002" : 599,
    "00003" : 844,
    "00004" : 417,
    "00005" : 149,
    "00006" : 17,
...
  }
}

Take a few moments to examine partition_depth_histogram, which tells us the disposition of the data. The preceding example confirms the table is well clustered, with high partition counts at depth 00001 tailing off to low partition counts at depth 00006.

Now we know how to interpret the clustering information result set. Let’s now look at a less well-optimized table, partsupp.
SELECT system$clustering_information ( 'partsupp' );
Note that average_depth is higher and total_constant_partition_count is zero, indicating the partsupp table is poorly organized. The “notes” attribute provides another clue: the clustering key should use low cardinality attributes rather than high cardinality attributes, declaring the least selective attribute first and the most selective attribute last. Note that some result set rows were removed to reduce space.
{
  "cluster_by_keys" : "LINEAR(PS_SUPPKEY)",
  "notes" : "Clustering key columns contain high cardinality key PS_SUPPKEY which might result in expensive re-clustering. Consider reducing the cardinality of clustering keys. Please refer to https://docs.snowflake.net/manuals/user-guide/tables-clustering-keys.html for more information.",
  "total_partition_count" : 2315,
  "total_constant_partition_count" : 0,
  "average_overlaps" : 1.8721,
  "average_depth" : 2.0043,
  "partition_depth_histogram" : {
    "00000" : 0,
    "00001" : 4,
    "00002" : 2303,
    "00003" : 2,
    "00004" : 6,
...
  }
}

Just as we did before, take a few moments to examine the partition_depth_histogram, which tells us the disposition of the data. The preceding example confirms the table is poorly clustered, showing low partition counts to high partition counts and then low partition counts.

In contrast, the next query fails with this error message: “Invalid clustering keys or table NATION is not clustered.” This occurs because the table is not clustered; it contains only 25 rows.
SELECT system$clustering_information ( 'nation' );

Clustering depth documentation is at https://docs.snowflake.com/en/user-guide/tables-clustering-micropartitions.html#clustering-depth .

Identify Cluster Key

In general, and as your real-world testing will confirm, some rules of thumb apply when declaring cluster keys.
  • Fewer attributes in the key are better; preferably only one or two attributes.

  • Order cluster key attributes from lowest cardinality first to highest cardinality last.

  • Apply expressions to reduce the cardinality of keys, for example, DATE_TRUNC()

Putting a high cardinality attribute before a low cardinality attribute reduces the effectiveness of clustering on the second attribute.
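As an example of the third rule, an expression can collapse a high cardinality attribute into something coarser before it participates in the cluster key. The table and column here are hypothetical.
-- Cluster on the day rather than the raw timestamp to reduce cardinality
ALTER TABLE web_events CLUSTER BY ( DATE_TRUNC ( 'DAY', event_timestamp ) );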

As each real-world scenario differs, the methodology described in the following subsections identifies cluster keys.

Step 1. Identify Access Paths

By profiling your queries, identify the most often used access paths to the object.

To do this using the Snowflake user interface, navigate to the History tab. Using the available filters, click the query ID corresponding to the SQL text of interest, then click the Profile tab.
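If you prefer SQL to the user interface, the Account Usage query history exposes the same information programmatically, subject to some latency and to access to the SNOWFLAKE database (full contents are visible to ACCOUNTADMIN by default). A minimal sketch; the ILIKE filter is purely illustrative.
-- Recent queries touching partsupp, worst performers first
SELECT query_text,
       total_elapsed_time / 1000 AS elapsed_seconds,
       partitions_scanned,
       partitions_total
FROM   snowflake.account_usage.query_history
WHERE  query_text ILIKE '%partsupp%'
AND    start_time > DATEADD ( 'DAY', -7, CURRENT_TIMESTAMP() )
ORDER  BY total_elapsed_time DESC
LIMIT  20;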

For our investigation, we continue our use of supplied sample data.
SELECT * FROM partsupp WHERE ps_supplycost > 100;
For our admittedly poor example query, Figure 11-2 shows the partitions scanned equal the partitions total, indicating no partition pruning has occurred.

Figure 11-2. No partition pruning (partitions scanned and partitions total are both 2,315)

This happens either because we are not using the cluster key as a filter column in the query predicates or because the table is not clustered.

Check the clustering information to identify the cluster key attributes.
SELECT system$clustering_information ( 'partsupp' );
The returned JSON confirms our sample query does not use the cluster key (ps_suppkey).
...
"cluster_by_keys" : "LINEAR(PS_SUPPKEY)"
...

Step 2. Gather Baseline Metrics

Using the History tab, we can sort by any displayed column. We should check for queries with long runtimes and gather information to inform our decision-making. Figure 11-3 shows where our query spent the most time for a single query execution.

Figure 11-3. Query profile overview (total execution time 12 minutes 30 seconds: processing 74%, local disk I/O 0%, remote disk I/O 15%, synchronization 1%, initialization 11%)

Step 3. Assign Cluster Key

With the preceding information in hand, we should try another key aligned to the predicates used in the poorly performing queries identified earlier and retest. For this test, we need real data to work with, which we create in the next section.

However, reclustering uses serverless compute, and operations must complete before results can be checked. Repeating the earlier comments: to determine whether reclustering is complete, check the average_depth attribute in the result set for system$clustering_information; when the average_depth value remains constant, reclustering is complete.

Step 4. Retest

As with every change, our results must be tested and proven good.

As the next section demonstrates, we should make a single change at a time and be methodical about how we approach testing.

Set Up Test Data

For the rest of this chapter, we need to make changes to tables. As we cannot manipulate the shared database tables, we will create our own test objects.
USE ROLE      sysadmin;
USE DATABASE  test;
USE WAREHOUSE COMPUTE_WH;
USE SCHEMA    public;
CREATE TABLE test.public.partsupp_1
AS
SELECT * FROM snowflake_sample_data.tpch_sf1000.partsupp;
Check the cluster key.
SELECT system$clustering_information ( 'partsupp_1' );
Unsurprisingly, there is no cluster key, so let’s declare one using an expression.
ALTER TABLE partsupp_1 CLUSTER BY (ps_supplycost > 100);

After confirming reclustering has completed, recheck the clustering information, where you will see a problem. Note that some result set rows were removed to reduce space.

Better clustering is indicated by

average_depth = low

total_constant_partition_count = high
{
  "cluster_by_keys" : "LINEAR(ps_supplycost > 100)",
  "total_partition_count" : 1825,
  "total_constant_partition_count" : 1768,
  "average_overlaps" : 2.0055,
  "average_depth" : 3.0055,
  "partition_depth_histogram" : {
    "00000" : 0,
    "00001" : 1764,
...
    "00064" : 61
  }
}

In this example, we have partially achieved our desired effect.

average_depth desired = low, actual = medium (3.0055)

total_constant_partition_count desired = high, actual = high (1768)

Also, note that most micro-partitions fall into a single histogram bucket, which is not what we wish to achieve. Let’s apply a new cluster key based upon our presumption that both ps_partkey and ps_suppkey are suitable candidates, but first let’s find out their cardinality.
SELECT COUNT(DISTINCT ps_partkey) count_ps_partkey,
       COUNT(DISTINCT ps_suppkey) count_ps_suppkey
FROM   partsupp_1;

ps_partkey = 200,000,000

ps_suppkey = 10,000,000

Following our general rules, we apply the least selective attribute first, noting that neither attribute is low cardinality and that this test is for illustration purposes only.
ALTER TABLE partsupp_1 CLUSTER BY (ps_suppkey, ps_partkey);
After confirming reclustering is complete, recheck clustering.
SELECT system$clustering_information ( 'partsupp_1' );
Superficially, the result looks good: a fairly even distribution in the partition_depth_histogram.
{
  "cluster_by_keys" : "LINEAR(ps_suppkey, ps_partkey)",
  "notes" : "Clustering key columns contain high cardinality key PS_SUPPKEY, PS_PARTKEY which might result in expensive re-clustering. Consider reducing the cardinality of clustering keys. Please refer to https://docs.snowflake.net/manuals/user-guide/tables-clustering-keys.html for more information.",
  "total_partition_count" : 2022,
  "total_constant_partition_count" : 0,
  "average_overlaps" : 13.6973,
  "average_depth" : 8.7319,
  "partition_depth_histogram" : {
    "00000" : 0,
    "00001" : 0,
    "00002" : 0,
    "00003" : 24,
    "00004" : 224,
    "00005" : 501,
    "00006" : 478,
    "00007" : 346,
    "00008" : 135,
    "00009" : 51,
    "00010" : 2,
    "00011" : 2,
    "00012" : 0,
    "00013" : 0,
    "00014" : 0,
    "00015" : 14,
    "00016" : 22,
    "00032" : 139,
    "00064" : 84
  }
}

A closer look reveals the truth.

average_depth desired = low, actual = medium (8.7319)

total_constant_partition_count desired = high, actual = low (0)

But the notes attribute provides useful information.

Let’s try again using a single key attribute; the remaining attributes are not part of the key and are stored as normal attributes.
ALTER TABLE partsupp_1 CLUSTER BY (ps_suppkey);
After confirming reclustering is complete, recheck clustering.
SELECT system$clustering_information ( 'partsupp_1' );
Better, but not perfect clustering, noting some result set rows removed to reduce space.
{
  "cluster_by_keys" : "LINEAR(ps_suppkey)",
  "notes" : "Clustering key columns contain high cardinality key PS_SUPPKEY which might result in expensive re-clustering. Consider reducing the cardinality of clustering keys. Please refer to https://docs.snowflake.net/manuals/user-guide/tables-clustering-keys.html for more information.",
  "total_partition_count" : 1977,
  "total_constant_partition_count" : 0,
  "average_overlaps" : 4.3551,
  "average_depth" : 3.4401,
  "partition_depth_histogram" : {
    "00000" : 0,
    "00001" : 8,
    "00002" : 258,
    "00003" : 871,
    "00004" : 581,
    "00005" : 214,
    "00006" : 45,
...
  }
}

average_depth desired = low, actual = low (3.4401)

total_constant_partition_count desired = high, actual = low (0)

Our results are arguably better than those delivered for the source table where we began our investigation. In reality, they are only as good as the improvement in performance we experience in real-world usage. Caveat emptor. Under the circumstances, this is probably the best we can achieve.

Materialized Views

Having discussed materialized views in Chapter 10, we consider them worthy of a second mention.

Materialized views can be clustered using a different cluster key. As they are maintained behind the scenes using serverless compute, they are guaranteed to be consistent with the underlying table. Therefore, materialized views represent an alternative way to implement clustering, suited to access paths other than our primary use case, albeit incurring storage and maintenance costs.
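A minimal sketch of the idea follows, built on our partsupp_1 test table with a hypothetical secondary access path on ps_supplycost; the column selection and cluster key are illustrative only, and materialized views require Enterprise Edition or higher.
-- Materialized view clustered differently from the base table
CREATE MATERIALIZED VIEW partsupp_1_by_cost
CLUSTER BY ( ps_supplycost )
AS
SELECT ps_partkey, ps_suppkey, ps_supplycost
FROM   partsupp_1;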

Automatic Clustering

To be clear, not every table benefits from clustering. Only very large tables, in the order of several terabytes and larger, benefit from clustering. If data volumes and testing indicate adding a cluster key is beneficial, then consider enabling automatic clustering. Automatic clustering is where Snowflake utilizes serverless compute and manages all future maintenance on an as-needed basis. More information is at https://docs.snowflake.com/en/user-guide/tables-auto-reclustering.html#automatic-clustering .

Note

Automatic clustering is enabled by default when a cluster key is declared for a table.

Automatic clustering does not attempt to provide the best performance possible but instead aims to improve clustering depth to achieve both predictable and acceptable performance.

As we might expect, once enabled, we cannot influence automatic clustering behavior except by disabling it. There are no other levers available to apply.

To determine if automatic clustering is enabled on a table, look for the automatic_clustering attribute that is either ON or OFF.
SHOW TABLES LIKE 'partsupp_1';
To disable automatic clustering.
ALTER TABLE partsupp_1 SUSPEND RECLUSTER;
And to re-enable automatic clustering.
ALTER TABLE partsupp_1 RESUME RECLUSTER;
For roles with the MONITOR USAGE privilege, the following query identifies credit usage on a per-object basis, noting that full view contents are only visible to ACCOUNTADMIN by default. Further information is at https://docs.snowflake.com/en/sql-reference/functions/automatic_clustering_history.html#automatic-clustering-history .
SELECT *
FROM TABLE(information_schema.automatic_clustering_history
          ( date_range_start => dateadd ( H, -24, current_timestamp )));

The corresponding view can be found in Account Usage. Note that the latency may be up to 3 hours. More information is at https://docs.snowflake.com/en/sql-reference/account-usage/automatic_clustering_history.html#automatic-clustering-history-view .
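A minimal sketch of querying the Account Usage view, assuming the ACCOUNTADMIN role or equivalent granted access; the seven-day window is arbitrary.
-- Credits consumed by automatic clustering, per table, over the last week
SELECT database_name,
       schema_name,
       table_name,
       SUM ( credits_used ) AS total_credits
FROM   snowflake.account_usage.automatic_clustering_history
WHERE  start_time > DATEADD ( 'DAY', -7, CURRENT_TIMESTAMP() )
GROUP  BY database_name, schema_name, table_name
ORDER  BY total_credits DESC;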

Factors affecting credit consumption include the number and cardinality of key(s), how many micro-partitions are involved in reclustering, the size of the reclustered table, and the frequency and pattern of DML operations.

Clustering Recommendations

As previously stated, clustering is not for every table. I recommend starting with tables of 1 TB or more. For smaller tables, you may see some benefit from adding a cluster key, but proceed on a case-by-case basis.

Query predicates (WHERE, JOIN, GROUP BY columns) are a good indicator of appropriate cluster keys to add. We also have materialized views available to provide alternate clustering for the same table, and the optimizer may select from materialized views in preference to referenced tables.

In general, these principles apply.
  • Begin with one or two cluster key attributes and aim to keep the total number of attributes small, preferably fewer than three.

  • Always order cluster key attributes from least selective to most selective, that is, low cardinality to high cardinality.

  • Monitor table activity, as DML operations can skew micro-partition content over time; a monitoring sketch follows this list.

  • Consider enabling automatic clustering, but note that resource monitors do not currently track its credit consumption.
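For ongoing monitoring, the system$clustering_depth function returns the average clustering depth for a table, either for the declared cluster key or for a candidate set of columns. A minimal sketch against our test table; the candidate column in the second query is purely illustrative.
-- Average clustering depth for the declared cluster key
SELECT system$clustering_depth ( 'partsupp_1' );

-- Average clustering depth for a hypothetical candidate key under evaluation
SELECT system$clustering_depth ( 'partsupp_1', '(ps_supplycost)' );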

Query Performance

Having looked at clustering, let’s now look at how to identify poorly performing queries and discover remediation techniques. Documentation is at https://docs.snowflake.com/en/user-guide/ui-query-profile.html#query-operator-details .

Warehouse Considerations

As discussed in Chapter 3, three broad principles apply to scaling warehouses, as shown in Figure 11-4.

Figure 11-4. Warehouse scaling options: up (increase T-shirt size, improve query performance, process more data, run more complex queries), out (add compute clusters, increase users, support more concurrent queries), and across (workload isolation, eliminate contention)

Our starting point is to identify poorly performing queries. For this, we revert to the Snowflake-supplied database SNOWFLAKE_SAMPLE_DATA, as it contains tables with high data volumes set up for precisely this kind of investigation.
USE ROLE      sysadmin;
USE DATABASE  snowflake_sample_data;
USE WAREHOUSE compute_wh;
USE SCHEMA    snowflake_sample_data.tpcds_sf10tcl;
Note

Snowflake retains cached results for 24 hours, extending the retention each time the same SQL re-runs before the purge, up to a maximum of 31 days.

To establish a baseline for subsequent runs of the same query, we must ensure cached results are not reused; for this, we set use_cached_result to FALSE. More information is at https://docs.snowflake.com/en/user-guide/querying-persisted-results.html#using-persisted-query-results .
ALTER SESSION SET use_cached_result = FALSE;
Our warehouse compute_wh is X-Small, and we now set it to auto-suspend after 60 seconds.
ALTER WAREHOUSE compute_wh SET auto_suspend = 60;

We can check warehouse settings from the Snowflake user interface Warehouses tab.

Query Runtime and Memory Spills

This section addresses scaling up from Figure 11-4.

For this investigation, we resize our existing warehouse compute_wh; don't forget to resize it back to X-Small at the end of this exercise. We do not recommend resizing as a general practice.

To understand memory spilling to disk, we run the same SQL using different warehouse sizes and capture metrics from the query profile.
SELECT i_product_name,
       SUM(cs_list_price)  OVER (PARTITION BY cs_order_number ORDER BY i_product_name ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) list_running_sum
FROM   catalog_sales, date_dim, item
WHERE  cs_sold_date_sk = d_date_sk
AND    cs_item_sk      = i_item_sk
AND    d_year IN (2000)
AND    d_moy  IN (1,2,3,4,5,6)
LIMIT 100;
Navigate to the History tab. Refresh the screen if necessary, noting the X-Small size. Then click the Query ID for the most recent SQL statement invocation. Total Duration is 3 minutes and 37 seconds in my case. Click Profile. In Profile Overview (see Figure 11-5), note that micro-partition pruning results in just over 6% of the partitions being selected. Spilling shows where intermediate query results overflow memory and are written to disk. As expected, we have 0% scanned from the cache.

Figure 11-5. Sample query profile overview 1 (total execution time 3 minutes 37 seconds, with percentages for processing, local disk I/O, and so on, plus statistics for I/O, pruning, and spilling)

For our next run, we resize to Small, double the X-Small T-shirt size.
ALTER WAREHOUSE compute_wh SET warehouse_size = 'SMALL';
And re-run the same query, then repeat the same steps. Navigate to the History tab; refresh the screen if necessary. Note that the size is Small, and then click Query ID for the most recent SQL statement invocation. Total Duration is now 1 minute, 44 seconds in my case. Click Profile. In Profile Overview (see Figure 11-6), spilling is much lower. A new item, Network, shows the processing time when waiting for the network data transfer. As expected, we have 0% scanned from the cache, and all other values remain constant.

Figure 11-6. Sample query profile overview 2 (total execution time 1 minute 44 seconds, with statistics for I/O, network, pruning, and spilling)

For our next run, we resize to Medium, which is four times the X-Small T-shirt size.
ALTER WAREHOUSE compute_wh SET warehouse_size = 'MEDIUM';

And re-run the same query, and then repeat the same steps. Navigate to the History tab; refresh the screen if necessary. Note that the size is Medium, and then click Query ID for the most recent SQL statement invocation. Total Duration is now 59.1 seconds in my case. Click Profile. In Profile Overview, spilling has been reduced to 6.47 GB.

Our last run is with a Large size warehouse, which is eight times the X-Small T-shirt size .
ALTER WAREHOUSE compute_wh SET warehouse_size = 'LARGE';

Re-run the same query and then repeat the same steps. Navigate to the History tab; refresh the screen if necessary. Note that the size is Large, and then click Query ID for the most recent SQL statement invocation. Total Duration is now 29.3 seconds in my case. Click Profile. In Profile Overview, note the absence of spilling.

For this sample query, we can conclude that either a Medium or a Large warehouse is a reasonable compromise, depending upon performance criteria. For your information, I re-ran using an X-Large warehouse (16 times the X-Small T-shirt size) with a reduction in runtime to 14 seconds.
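If you would rather compare the runs without clicking through the user interface, the Account Usage query history also records spilling. A minimal sketch, assuming access to the SNOWFLAKE database and that the ILIKE filter matches our example query text.
-- Compare runtime and spilling across the different warehouse sizes
SELECT warehouse_size,
       total_elapsed_time / 1000 AS elapsed_seconds,
       bytes_spilled_to_local_storage,
       bytes_spilled_to_remote_storage
FROM   snowflake.account_usage.query_history
WHERE  query_text ILIKE 'SELECT i_product_name%'
ORDER  BY start_time DESC;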

One final consideration relates to cost. At the beginning of this section, we set auto-suspend to 60 seconds. Snowflake's minimum warehouse runtime charge is 60 seconds each time a warehouse starts or resumes. Therefore, we must consider the cost implications of running larger warehouses because each increase in T-shirt size doubles the per-second cost, albeit with reduced runtime.

Before concluding this investigation, we must return our warehouse size to X-Small.
ALTER WAREHOUSE compute_wh SET warehouse_size = 'X-SMALL';

Concurrency

This section addresses scaling out from Figure 11-4.

Multi-cluster scaling is enabled by setting max_cluster_count greater than 1. When max_cluster_count is greater than min_cluster_count, the warehouse runs in Auto-Scale mode. When max_cluster_count and min_cluster_count are set to the same value greater than 1, the warehouse runs in Maximized mode, meaning the warehouse always initializes the stated number of clusters on startup.

Scaling out determines when and how Snowflake turns additional clusters in the warehouse on and off, up to the max_cluster_count, according to query demand, maximizing query responsiveness and concurrency. There is no additional cost to implementing Auto-Scale mode, as a single cluster is initially instantiated on demand. As demand rises, more clusters are instantiated, then shut down as demand falls. In contrast, Maximized mode always instantiates the stated number of clusters on startup and does not shut clusters down as demand falls. Therefore, the initial cost is higher but may be acceptable for consistently repeatable performance. There are no hard and fast rules, simply guidelines according to the importance of response times, concurrency demands, and performance. We pay according to requirements.
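The scaling policy offers one further lever in Auto-Scale mode: STANDARD favors starting additional clusters to minimize queuing, while ECONOMY favors conserving credits by keeping running clusters fully loaded. A minimal sketch creating a multi-cluster warehouse in Auto-Scale mode; the warehouse name is hypothetical, and multi-cluster warehouses require Enterprise Edition or higher.
-- Auto-Scale mode: min_cluster_count < max_cluster_count
CREATE WAREHOUSE IF NOT EXISTS reporting_wh
  WAREHOUSE_SIZE    = 'SMALL'
  MIN_CLUSTER_COUNT = 1
  MAX_CLUSTER_COUNT = 4
  SCALING_POLICY    = 'ECONOMY'
  AUTO_SUSPEND      = 60
  AUTO_RESUME       = TRUE;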

Note

Snowflake implements a hard limit of 10 concurrent clusters, with each cluster capable of handling 8 concurrent requests.

To understand concurrency, first implement Auto-Scale mode by setting max_cluster_count greater than min_cluster_count. In this example, a maximum of 4 concurrent X-Small clusters can handle a total of 32 concurrent queries.
ALTER WAREHOUSE compute_wh
SET   warehouse_size    = 'X-SMALL',
      max_cluster_count = 4,
      min_cluster_count = 1;

Although the creation of multiple parallel execution threads is well beyond the scope of this chapter, you can get useful information on the effects of concurrent execution, SQL queueing, and clustering at https://community.snowflake.com/s/article/Putting-Snowflake-s-Automatic-Concurrency-Scaling-to-the-Test . I do not endorse third-party products. I simply offer this link to add value to your own investigations.

Assuming your investigations are complete, reset compute_wh to its default cluster settings.
ALTER WAREHOUSE compute_wh
SET   warehouse_size    = 'X-SMALL',
      max_cluster_count = 1,
      min_cluster_count = 1,
      scaling_policy    = 'STANDARD';

Workload Segregation

This section addresses scaling across from Figure 11-4.

You have seen the effect of scaling up by increasing warehouse size to improve query response times, and we have discussed the effect of adding clusters to scale out. This section addresses scaling across and explains why we should consider declaring multiple separate warehouses, often of the same size.

To a point, adding more discretely named warehouses also addresses concurrency concerns, but the aim is more subtle. When we scale across, we segment billing and cross-charging by warehouse usage, where each department's requirements for cost effectiveness and response times may differ. We also increase the probability of cache reuse, as SQL queries issued within a department may have been run recently, before the warehouse quiesces. A further benefit is reduced resource contention, as warehouse usage is restricted to a subset of the user base and is therefore more tunable to a narrower workload than might otherwise be possible.
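A minimal sketch of what scaling across might look like in practice; the warehouse and role names are purely illustrative.
-- Separate same-sized warehouses per department for billing and contention isolation
CREATE WAREHOUSE IF NOT EXISTS finance_wh   WAREHOUSE_SIZE = 'SMALL' AUTO_SUSPEND = 60 AUTO_RESUME = TRUE;
CREATE WAREHOUSE IF NOT EXISTS marketing_wh WAREHOUSE_SIZE = 'SMALL' AUTO_SUSPEND = 60 AUTO_RESUME = TRUE;

-- Restrict each warehouse to its own department role
GRANT USAGE ON WAREHOUSE finance_wh   TO ROLE finance_role;
GRANT USAGE ON WAREHOUSE marketing_wh TO ROLE marketing_role;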

The case for scaling across is less clear-cut than for scaling up and out, but it becomes stronger in larger organizations and is left for your further investigation.

Summary

This chapter began by diving into clustering. You learned how to define a cluster key and determine better clustering as indicated by average_depth = low and total_constant_partition_count = high with a descending number of partitions across a shallow depth.

We then identified the costs associated with automatic clustering and determined how to disable and enable it, noting that the aim is to improve clustering depth to achieve both predictable and acceptable performance.

Ending our discussion on clustering by providing recommendations, we discussed query performance and spilling data from memory to disk, commenting on the cost implications of increasing warehouse size.

Implementing concurrency falls outside the available space, though I hope you find the discussion useful. Finally, the comments on workload segregation may not resonate with smaller organizations.

Having looked at query performance, let’s next look at data management.
