As provisioned and by default, a Snowflake account is configured for public Internet access. To properly secure our Snowflake account, we should define our security posture, understand where security controls are defined, select those appropriate for our purposes, match and set corresponding Snowflake configuration options, and above all, implement appropriate monitoring to ensure the options set remain until we make an informed change.
It is possible to use Snowflake without touching any security controls, and if you are reading this book purely to accelerate your use of Snowflake, this section may not appear relevant. But security is central to everything in Snowflake, so I argue it is essential to fully understand the why, how, and what of security: without a full understanding of Snowflake security, an incomplete security posture is probable, mistakes are more likely, and the worst outcomes more certain.
Cybersecurity has always been relevant and should be at the top of our agenda. Preventing attacks is becoming ever more important. One recent estimate is that 80% of all organizations experience some form of cyberattack each year, and media headlines frequently expose companies and government departments whose data has been scrambled.
Cybersecurity is everyone’s problem—all day, every day.
You should also be aware that Snowflake is constantly being enhanced; new features introduce new security options, and the devil is always in the details. This chapter dives into code. Finally! At last, I hear you say. Something practical for the technophiles to enjoy. I hope!
We take a holistic approach, looking at how to define security controls and sources of information to seed our security posture, from which we can build our monitoring. After implementing selected controls, we must monitor to ensure settings are not inadvertently changed or changed by bad actors.
Sections in this chapter point you to the documentation for features outside Snowflake. I aim to give pointers and information for you to investigate further and provide a starting point recognizing each organization has different tooling and approaches.
Finally, given this chapter’s limited space, it is impossible to cover every feature or recent change to behavior (if any).
Security Features
By design, Snowflake has security at its heart. Every aspect of Snowflake has been built with security controls embedded. RBAC is discussed in this chapter, noting fundamental differences in how the Snowflake RBAC model works compared to other RDBMS vendors.
Snowflake-specific security features provide a layered defense, but only if we implement controls and monitoring effectively. We can implement the best security posture possible, but if we don’t monitor to ensure our controls remain set and raise alerts when we detect a breach, then we might be OK, but we can’t be sure. Try explaining that to your organization’s CIO, regulator, or criminal investigators.
The distinction is very important. Some legacy databases use the words account and user interchangeably. In Snowflake parlance, an account is the outermost container provisioned by Snowflake. The user is either an end user or service user who may be entitled to use objects in Snowflake, but only via a role. There is no capability in Snowflake to grant entitlement directly to a user.
RBAC is covered in the next chapter, which is worth a deep read for two reasons. First, you are most likely migrating from a legacy database vendor where security has not been baked in from the ground up. Second, misconceptions are harder to change than new concepts are to establish. I thought I understood RBAC, but it wasn’t until I examined the subject in depth that I realized I had missed some vital information.
System Access
To access any system, you must know the endpoint: the location where the service is provisioned. We are all familiar with Uniform Resource Locators (URLs) and use them daily without thought. Underpinning each URL is an Internet Protocol (IP) address, and each IP address may expose many ports, the most common of which are port 80 (Hypertext Transfer Protocol, HTTP) and port 443 (Hypertext Transfer Protocol Secure, HTTPS). Look in your browser and note the prefix. An example of HTTPS is https://docs.snowflake.com/en/user-guide/admin-security-fed-auth.html . Most sites now use HTTPS by default. Classless Inter-Domain Routing (CIDR) is touched upon later; be aware of the /0 network mask when Snowflake network policies are discussed. Networking is complicated and a subject in its own right. I have introduced some terms, of which HTTPS and CIDR are the most relevant.
When accessing systems, we must approve each user. In most organizations, this is a formal process requiring the user to submit an access request with their manager’s approval. The request is then processed by another team, who provisions the user either directly on the system or via a centralized provisioning tool for automated creation, with notification back to the user when the request is complete.
Once a user has been notified of their system access, we must consider how they actually connect to the system. This process is called authentication. You know that usernames and passwords are inherently insecure; we fool ourselves if we think everyone remembers a unique generated password for each system they interact with daily. We must look at authenticating users by other means.
One of the tools available to us is HTTPS: imagine a secure pipe connecting your desktop to a known endpoint through which data flows. HTTPS relies upon a security certificate. We don’t need to know the certificate’s contents as it is all handled “behind the scenes” for us, but now you know. Authentication can also be implemented automatically by single sign-on (SSO), discussed later, where we no longer need to log in via username and password. Instead, we rely upon membership in Active Directory (AD) or Okta groups to determine whether we should be allowed to log in.
Entitlement is all about “who can do what.” Our organizations have many hundreds of systems. One large European bank has about 10,000 different systems. Their HR department has approximately 147 different systems, and their equities trading department has about 90 different systems.
We need a single place to record entitlement on a role profile basis. In other words, if I perform job function X within my organization, I need access to systems A, B, and C, with entitlement to perform actions J, K, and L in each system. As can be seen, not only is networking complex, but both authentication and entitlement are complicated too.
Fortunately, we can also use Microsoft Active Directory Federated Services (ADFS) , Okta, and a few others to manage group memberships that map to entitlements in our systems. If only every system could integrate with ADFS or Okta, and user roles were static in our HR systems, we would have a chance of automating user provisioning and entitlement at the point of joining, moving positions internally, and leaving our organizations.
Security Posture
No system can ever be said to meet its objectives without proper requirements for validation. Snowflake security is no exception.
Organizations often have a well-defined, formally articulated security posture, actively maintained as threats evolve and new features with a security profile are released. Our security posture must include these aims: preventing data breaches and allowing only appropriately entitled data access. Organizations hold lots of sensitive, confidential, and highly restricted data of immense value to competitors, bad actors, and the curious. Financial penalties incurred by data breaches are limited only by the governing jurisdiction, and the damage is not restricted to money; reputation can be more important than a financial penalty, and trust relationships formed over many years are too easily eroded.
Data breaches are not just leaks from internal systems to external locations. Our business colleagues rely upon the integrity of the data contained in our systems, using the information and outcomes to make business critical decisions, so we must ensure our data is not tampered with by bad actors or made accessible via a “back door” into our networks. We may occasionally identify data set usage with inappropriate controls, unauthorized usage, or insecure storage. In this case, we must raise the breach to our cybersecurity colleagues for recording and remediation.
We must also ensure our systems are available to all correctly authorized, entitled customers and system interfaces, wherever possible utilizing sophisticated authentication mechanisms proven to be less susceptible to attack. Username and password combinations are our least-favored approach to authentication. How many readers use the same password on multiple systems?
We also have legal and regulatory requirements, which adapt to meeting known threats and preparing defenses against perceived threats.
Cybersecurity is an ever-changing and increasingly more complex game of cat and mouse. But working long hours, running remediation calls with senior management attention, and conducting support calls every hour is no fun. Been there, done that, and for those unfamiliar with the scenario, please remain unfamiliar by staying on top of your security posture.
Attack Vectors
How often, when moving to a new position in an organization, has entitlement not been removed, with new entitlement granted for the new position?
Do managers always recertify their staff on time?
Are all employee position entitlements always provisioned by Active Directory group (or equivalent) membership?
Have we provisioned a separate Snowflake environment for production? See the “Change Bundles” section in this chapter for more on this issue of separate environments.
At what frequency do we scan our infrastructure to ensure our security posture remains intact?
When did we last conduct a penetration test?
You may be wondering how relevant these questions are to ensuring our Snowflake environment is secure. You will find out later in this chapter.
Prevention Not Cure
As the adage says, “Prevention is better than cure.” If things go wrong, we at least have some tools immediately available to recover under certain scenarios. Snowflake keeps an immutable one-year query history to assist investigations. The Time Travel feature covers any point in history for up to 90 days; if it is not enabled, please do so immediately for your production environments. Finally, Snowflake provisions Fail-safe for an additional seven days of data recovery, noting the need to engage Snowflake support for Fail-safe assistance.
Our first step is to identify the assets to protect. Some organizations have a central catalog of all systems, technologies, vendors, and products. But many don’t, and for those who do, is the product catalog up to date?
Recently, the Log4j zero-day exploit focused minds. More sophisticated organizations also record software versions and have a robust patching and testing cycle to ensure their estate remains current and patches don’t break their systems. With regard to Snowflake, no vulnerabilities to the Log4j zero-day exploit were found, and immediate communications were issued to reassure all customers.
Essential maintenance is the practice of updating code by applying patches. It is preventative in nature and, fortunately for us, all taken care of by Snowflake’s weekly patch updates, but see the section on change bundles later in this chapter.
Protection involves physical and logical prevention of unauthorized access to hardware, networks, offices, infrastructure, and so on. Protection also establishes guardrails and software specifically designed to prevent unauthorized access, such as firewalls, anti-virus, network policies, multi-factor authentication, break-glass for privileged account use, regular environment scanning, and a host of other preventative measures, of which a subset are applicable for Snowflake.
Alerting relates to monitoring the protections applied and raising an alert when a threshold is reached, or a security posture is breached to inform the monitoring team as soon as possible and enable rapid response. The faster we detect and respond to an alert, the quicker we can recover from the situation.
Snowflake, the subject of this book, is the asset we have identified to protect. While our AWS, Azure, and GCP accounts are equally important, they are largely outside this book’s scope, though you will find the same principles apply.
Returning to our focus on Snowflake, where can we find an agreed global suite of standards for reducing cyber risks to Snowflake?
Welcome to the National Institute of Standards and Technology (NIST) .
National Institute of Standards and Technology (NIST)
NIST is a U.S. Department of Commerce organization that publishes standards for cybersecurity www.nist.gov/cyberframework .
Snowflake maintains a comprehensive documented security program based on NIST 800-53 (or industry recognized successor framework), under which Snowflake implements and maintains physical, administrative, and technical safeguards designed to protect the confidentiality, integrity, availability, and security of the Service and Customer Data (the “Security Program”).
NIST is a large site; finding what you need isn’t trivial. A great starting point for defining Snowflake security controls, directly referenced in the Snowflake security addendum, is https://csrc.nist.gov/Projects/risk-management/sp800-53-controls/release-search#!/controls?version=5.1
Select Security Controls → All Controls to view the complete list, which at the time of writing runs to 322 separate controls. Naturally, only a subset is appropriate for securing our Snowflake accounts. Each control must be considered in the context of the capabilities Snowflake delivers.
Now that we have identified a trustworthy security control source and reviewed the content, our next step is to identify corresponding Snowflake controls. We then protect our Snowflake account by implementing identified changes, then apply effective monitoring to detect breaches with alerting to inform our support team, who will respond, fix, and remediate, leading to effective service recovery. After which, we can conduct our post-mortem and management reporting .
Our First Control
Our control is to limit permitted activities to prescribed situations and circumstances.
Our control incorporates Snowflake’s best-practice recommendations.
Our control is to be documented, with implementation and monitoring implied to be delivered by an independent development team.
There is to be a periodic review of the control policy.
From what you know of Snowflake and the preceding interpretation, you might say the first control (of many) relates to the use of Snowflake-supplied administrative roles and, therefore, could be used to put guardrails around the use of the most highly privileged Snowflake-supplied ACCOUNTADMIN role. If our organization has enabled ORGADMIN role, we might consider extending this control to cover both ACCOUNTADMIN and ORGADMIN roles. Alternatively, we might create a second control with different criteria .
Does this control affect Personally Identifiable Information (PII)?
Does this control affect commercially sensitive information?
In defining control, we must also implement effective monitoring and alerting. This is covered in Chapter 6, which proposes a pattern-based suite of tools.
Detect when the control is breached with an alert being raised, recognizing there are legitimate use cases when this occurs.
Review alerts raised and determine appropriate action, whether to accept as legitimate, otherwise investigate, escalate, remediate, and repair.
Record each alert along with the response and periodically report to management.
SNOWFLAKE_NIST_AC1
Scope and Permitted Usage:
The Snowflake-supplied role ACCOUNTADMIN must not be used for general administration on a day-to-day basis but is reserved for those necessary operations where no other role can be used. Use of the ACCOUNTADMIN role is expected to be pre-planned for system maintenance or system monitoring activities only and is to be pre-notified to the Operational Support team before use.
Snowflake Best Practice:
Snowflake recommends the ACCOUNTADMIN role be assigned to at least two named individuals to prevent loss of role availability to the organization. We suggest a third, generic user secured under break-glass conditions also be assigned the ACCOUNTADMIN role.
Implementation:
It is not possible to prevent the usage of ACCOUNTADMIN role by entitled users.
Review Period:
This policy is to be reviewed annually.
Sensitive Data:
This control does not relate to sensitive data.
Monitoring:
Any use of ACCOUNTADMIN role is to be detected within 5 minutes and notified by email to your_email_group@your_organization.com
Action:
The Operational Support team identifies whether the ACCOUNTADMIN use is warranted; this might be for essential maintenance or a software release, both under change control. For all other ACCOUNTADMIN uses, identify the user and usage, terminate the session, escalate to the line manager, conduct an investigation, remediate, and repair.
The exact wording differs according to the requirements, but this is an outline of a typical action.
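By way of illustration, the detection behind the monitoring requirement might be sketched as follows. The five-minute window matches the control above; the INFORMATION_SCHEMA table function is chosen over the ACCOUNT_USAGE views because the latter can lag by 45 minutes or more, which would miss the detection target. The exact query shape is an assumption to adapt.

```sql
-- Sketch only: detect ACCOUNTADMIN activity within the last five minutes.
-- Requires sufficient privileges to see other users' query history.
SELECT user_name,
       role_name,
       start_time,
       query_text
FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY(
       END_TIME_RANGE_START => DATEADD('minute', -5, CURRENT_TIMESTAMP())))
WHERE role_name = 'ACCOUNTADMIN'
ORDER BY start_time DESC;
```

A scheduled task could run this query and route any rows to your_email_group@your_organization.com via your alerting tooling.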
Note in our initial requirements the implicit assumption of the control being defined by one group, with implementation being devolved to a second. This is good practice and in accordance with the segregation of roles and responsibilities.
Wash, rinse, and repeat for every required control. In conjunction with Snowflake subject matter experts (SMEs) , your cybersecurity professionals typically define the appropriate controls.
Snowflake Security Options
Once our controls have been defined, we need to find ways to implement and later monitor, alert, and respond. This section addresses typical controls while explaining “why” we would want to implement each one. The list is not exhaustive. As Snowflake matures, new controls become evident, and as you see later, there is a gap in Snowflake monitoring recommendations.
Network Policies
Our first control is, in my opinion, the most important one to implement. Network policies restrict access to Snowflake to specific IP ranges. Depending upon our security posture and how we use our Snowflake accounts, we may take differing views on whether to implement network policies. For highly regulated organizations utilizing Snowflake for internal business use only, a network policy is mandatory; for organizations allowing third-party logins from the public Internet, probably not.
Whether required or not, knowing about network policies is wise. They provide defense-in-depth access control via a blocked IP list (applied first) and an allowed IP list (applied second), mitigating the risks of inappropriate access and data loss.
For this discussion , let’s assume we require two network policies; the first ring-fences our Snowflake account allowing access to known IP ranges only, and the second enables access for a specific service user from a known IP range. The corresponding control might be expressed as “All Snowflake Accounts must prevent unauthorized access from locations outside of <your organization> internal network boundary except those from a cybersecurity approved location.”
Snowflake interprets CIDR ranges with a /0 mask as 0.0.0.0/0, effectively allowing open public Internet access.
Our network policies must, therefore, not have any CIDR ranges with /0.
When creating a network policy, Snowflake will not let you activate a policy that blocks the IP address of your currently connected session, so you cannot inadvertently lock yourself out. Also, when using network policies, the optional BLOCKED_IP_LIST is applied first for any connecting session, after which the ALLOWED_IP_LIST is applied.
We can now proceed with confidence in creating our network policies, knowing we cannot lock ourselves out. Our first task is to identify valid IP ranges to allow. These may be from a variety of tools and sources. Your cybersecurity team should know the list and approve your network policy implementation. Naturally, cybersecurity may wish to satisfy themselves that the network policy has correctly been applied after the first implementation and ensure effective monitoring after that. With our approved IP ranges available, we may only need to define the ALLOWED_IP_LIST.
While we may have many account network policies declared, we can only have one account network policy in force at a given time.
To enable access from a known, approved Internet location, we require a second network policy, this time for a specific connection. We can declare as many network policies as we wish, each with a specific focus in addition to the single active account network policy. An example may be connecting Power BI from Azure to Snowflake on AWS. The east-west connectivity is from a known IP range, which your O365 administrators will know.
When the Power BI service user attempts to log in, their IP is checked against the ALLOWED_IP_LIST range for the assigned network policy.
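To make the two policies concrete, a sketch of the SQL involved follows. All policy names, user names, and IP ranges are placeholders for your organization's approved values.

```sql
-- Sketch only: account-level policy ring-fencing Snowflake to known ranges.
CREATE NETWORK POLICY corporate_network_only
  ALLOWED_IP_LIST = ('192.168.0.0/16', '10.0.0.0/8')
  COMMENT = 'SNOWFLAKE_NIST_NP1: approved corporate ranges only';

-- Only one account-level policy can be active at a time.
ALTER ACCOUNT SET NETWORK_POLICY = corporate_network_only;

-- User-level policy for a service user connecting from a known range,
-- e.g., the Power BI service; a user-level policy overrides the account policy.
CREATE NETWORK POLICY power_bi_access
  ALLOWED_IP_LIST = ('203.0.113.0/24');

ALTER USER power_bi_svc SET NETWORK_POLICY = power_bi_access;
```

SHOW NETWORK POLICIES and DESCRIBE NETWORK POLICY confirm what has been declared and applied.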
Using these commands, we can manage our network policies. Monitoring and alerting are addressed later. There are a few hoops to jump through in common with implementing other monitoring patterns.
For monitoring in Chapter 6, this setting is referred to as SNOWFLAKE_NIST_NP1.
Preventing Unauthorized Data Unload
Our next control might be to prevent data from being unloaded to user-specified Internet locations. Of course, your security posture and use cases may require that data can be unloaded, in which case this control should be ignored. A user-specified Internet location can be any supported endpoint; effectively, the user determines where they wish to unload data, which for most organizations could be a primary source of data leaks.
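One plausible implementation, assuming the account-level parameter fits your requirement, uses PREVENT_UNLOAD_TO_INLINE_URL, which blocks COPY INTO statements that target ad hoc URLs rather than named stages.

```sql
-- Sketch: block unloads to arbitrary, user-specified URLs.
ALTER ACCOUNT SET PREVENT_UNLOAD_TO_INLINE_URL = TRUE;

-- Verify the setting remains in force as part of routine monitoring:
SHOW PARAMETERS LIKE 'PREVENT_UNLOAD_TO_INLINE_URL' IN ACCOUNT;
```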
For monitoring in Chapter 6, this setting is SNOWFLAKE_NIST_AC2.
Restricting Data Unload to Specified Locations
Our next control might restrict data unloads to specified, system-mapped Internet locations. Of course, your security posture and use cases may require that data can be unloaded to any user-defined location, in which case this control should be ignored. System-mapped Internet locations can be any supported endpoint mapped via a storage integration only, thus restricting data egress to known locations.
The advantages of implementing this control should be obvious. The rigor associated with software development ensures the locations are reviewed and approved before storage integrations are implemented.
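A sketch of one possible implementation uses two account parameters that force stages to be created and used via storage integrations, confining data egress to the locations those integrations map.

```sql
-- Sketch: require storage integrations for external stage creation and use.
ALTER ACCOUNT SET REQUIRE_STORAGE_INTEGRATION_FOR_STAGE_CREATION = TRUE;
ALTER ACCOUNT SET REQUIRE_STORAGE_INTEGRATION_FOR_STAGE_OPERATION = TRUE;

-- Verify both settings:
SHOW PARAMETERS LIKE 'REQUIRE_STORAGE_INTEGRATION%' IN ACCOUNT;
```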
For monitoring in Chapter 6, this setting is SNOWFLAKE_NIST_AC3.
Single Sign-On (SSO)
When our Snowflake account is provisioned, all users are provisioned with a username and password. You know that usernames and passwords are vulnerable to bad actors acquiring our credentials, potentially leading to data loss, reputational impact, and financial penalties. Our cybersecurity colleagues rightly insist we protect our credentials. SSO is one of the tools we can use where we no longer rely upon username and password but instead authenticate via centralized tooling.
Snowflake SSO documentation is at https://docs.snowflake.com/en/user-guide/admin-security-fed-auth-overview.html . Implementing SSO relies upon having federated authentication. This section explains the steps required to integrate ADFS and some troubleshooting information. Naturally, SSO integration varies according to available tooling, so apologies in advance to those using alternative SSO providers; space (and time) do not permit a wider examination of all available options.
Snowflake supports SSO over either Public or PrivateLink but not both at the same time; see the following for more information on PrivateLink.
Security Assertion Markup Language (SAML) is an open standard that allows identity providers (IdP) to pass authorization credentials to service providers (SPs). In our case, Snowflake is the service provider, and Microsoft ADFS is the identity provider. Due to the segregation of roles and responsibilities in organizations, setting up SSO requires both SME knowledge and administrative access to ADFS.
Step 1. Configure Identity Provider in ADFS
The first step is to generate an IdP certificate for your organization. For that, you might need to work with a subject matter expert with the right experience, and that expert might benefit from the guidance provided by Snowflake at https://docs.snowflake.com/en/user-guide/admin-security-fed-auth-configure-idp.html .
The IdP certificate you generate in this step is then used in step 3.
Step 2. Configure Snowflake Users
Step 3. Specify IdP Information
Note the label can only contain letters and numbers; it cannot contain spaces or underscores.
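As an illustration, the IdP information can be supplied to Snowflake with a SAML2 security integration along these lines. Every value shown is a placeholder for your ADFS deployment, and older accounts may instead use the legacy SAML_IDENTITY_PROVIDER account parameter.

```sql
-- Sketch only: issuer, SSO URL, and certificate are placeholders.
CREATE SECURITY INTEGRATION adfs_sso
  TYPE = SAML2
  ENABLED = TRUE
  SAML2_ISSUER = 'http://adfs.your_organization.com/adfs/services/trust'
  SAML2_SSO_URL = 'https://adfs.your_organization.com/adfs/ls'
  SAML2_PROVIDER = 'ADFS'
  SAML2_X509_CERT = 'MIIC...'                          -- IdP certificate from step 1
  SAML2_SP_INITIATED_LOGIN_PAGE_LABEL = 'ADFSSSO'      -- letters and numbers only
  SAML2_ENABLE_SP_INITIATED = TRUE;
```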
Step 4. Enable SSO
Enter credentials for your organization domain login, which should authenticate against your IdP and allow access to Snowflake.
Troubleshooting
Occasionally, things go wrong with configuring SSO. This section provides troubleshooting information. I cannot cover every scenario but provide information on tools to help diagnose the root cause.
Identify the error_code from the information presented at
https://docs.snowflake.com/en/user-guide/errors-saml.html .
Further information is at https://docs.snowflake.com/en/user-guide/admin-security-fed-auth-use.html#using-sso-with-aws-privatelink-or-azure-private-link .
Multi-Factor Authentication (MFA)
MFA is not currently enabled by default. Each user must configure MFA manually. Snowflake documentation covering all aspects of MFA is at https://docs.snowflake.com/en/user-guide/security-mfa.html .
Snowflake strongly recommends users with an ACCOUNTADMIN role be required to use MFA.
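To back that recommendation with monitoring, a query such as the following lists users who have not enrolled in Duo MFA, using the EXT_AUTHN_DUO flag in the ACCOUNT_USAGE.USERS view; note that ACCOUNT_USAGE views are subject to latency, and the query shape is a sketch to adapt.

```sql
-- Sketch: find active users not yet enrolled in MFA.
SELECT name
FROM SNOWFLAKE.ACCOUNT_USAGE.USERS
WHERE deleted_on IS NULL
  AND ext_authn_duo = FALSE;
```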
Download and install Duo Mobile onto your phone. During enrollment, a QR code is presented; scan it with Duo Mobile. Note you may be prompted to upgrade your phone’s operating system. Once enrolled, refresh the Preferences option, and you will see your phone number displayed.
System for Cross-domain Identity Management (SCIM)
SCIM integration automates the exchange of identity information between two endpoints. Without it, you cannot automate the creation and removal of Snowflake users and roles maintained in Active Directory. Supported IdPs are Microsoft Azure AD and Okta.
I use the term endpoint to describe access points to any network that malicious actors can exploit.
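For Azure AD, the Snowflake side of the integration can be sketched as follows; the role and integration names follow Snowflake's documented Azure example but remain choices for your organization.

```sql
-- Sketch: provisioner role and SCIM security integration for Azure AD.
CREATE ROLE IF NOT EXISTS aad_provisioner;
GRANT CREATE USER ON ACCOUNT TO ROLE aad_provisioner;
GRANT CREATE ROLE ON ACCOUNT TO ROLE aad_provisioner;
GRANT ROLE aad_provisioner TO ROLE accountadmin;

CREATE SECURITY INTEGRATION aad_provisioning
  TYPE = SCIM
  SCIM_CLIENT = 'AZURE'
  RUN_AS_ROLE = 'AAD_PROVISIONER';

-- Generate the access token to paste into the Azure AD provisioning setup;
-- by default, the token expires after six months and must be regenerated.
SELECT SYSTEM$GENERATE_SCIM_ACCESS_TOKEN('AAD_PROVISIONING');
```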
For the remaining steps, refer to Azure documentation at https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/snowflake-provisioning-tutorial , update Azure AD with the token generated in Figure 4-11 then test.
PrivateLink
PrivateLink requires Snowflake support assistance and may take up to two working days to provision. Please refer to the documentation at https://docs.snowflake.com/en/user-guide/admin-security-privatelink.html for further information. Note that your corporate network configuration may require changes to allow connectivity.
If you experience connectivity issues, you may also need to use SnowCD for diagnosis and open ports on firewalls, specifically 80 and 443. Further information is at https://docs.snowflake.com/en/user-guide/snowcd.html#snowcd-connectivity-diagnostic-tool .
Data Encryption
Data encryption is a primary means of ensuring our data remains safe. Snowflake utilizes the underlying cloud service provider’s storage, which for AWS is S3 buckets. We have not yet discussed how our data is protected in S3; we assume everyone knows why it must be.
Snowflake takes security seriously, very seriously indeed. What is covered in the next few pages cannot do justice in explaining the immense work Snowflake has put into security. This whitepaper requires registration to download and is well worth investing the effort to read www.snowflake.com/resource/industrial-strength-security-by-default/ .
Tri-Secret Secure
The root key, maintained by the Snowflake hardware security module
The account master key, individually assigned to each Snowflake account
The table master key, individually assigned to each object storing data
The file key, individually assigned to each S3 file
Periodic Rekeying
One benefit of rekeying is the total duration for which a key is actively used is limited, thus making any external attack far more difficult to perpetrate. Furthermore, periodic rekeying allows Snowflake to increase encryption key sizes and utilize improved encryption algorithms since the previous key generation occurred. Rekeying ensures that all customer data, new and old, is encrypted with the latest security technology.
Key rotation replaces active keys with new keys on a 30-day basis, retiring the old account master key and table master keys automatically: behind the scenes, no fuss, no interaction required, all managed transparently by Snowflake. Periodic rekeying goes a step further, re-encrypting data whose retired keys are more than a year old with fresh keys. Periodic rekeying requires Enterprise Edition or higher. Further information is at https://docs.snowflake.com/en/user-guide/security-encryption.html#encryption-key-rotation .
Following NIST recommendations, Snowflake ensures all customer data, regardless of when the data was stored, remains encrypted with the latest security technology.
We may find our internal data classifications and protection requirements mandate periodic rekeying as an essential guardrail, especially where data classification information is unavailable. Setting periodic rekeying is a best practice and should be adopted wherever possible.
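Assuming Enterprise Edition or higher and the ACCOUNTADMIN role, enabling and verifying the setting is a one-liner each; a sketch:

```sql
-- Sketch: enable periodic rekeying of data protected by retired keys.
ALTER ACCOUNT SET PERIODIC_DATA_REKEYING = TRUE;

-- Verify the setting remains in force as part of routine monitoring:
SHOW PARAMETERS LIKE 'PERIODIC_DATA_REKEYING' IN ACCOUNT;
```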
Naturally, we would want to monitor the account setting remains in force, which I discuss in Chapter 6.
Customer Managed Key (CMK)/Bring Your Own Key (BYOK)
A further optional guardrail is available for customers using Business Critical Edition and higher. You have seen how Snowflake protects our data using a hierarchy of keys, and there is one further level of protection offered: the ability for the customer to add their own key, otherwise known as CMK or BYOK. The advantage of implementing CMK is the ability, at the customer’s discretion, to disable the account where the CMK has been deployed by disabling the key. Note that the key is required to access data within the account, and without it, Snowflake cannot help.
Before getting into the details of implementing CMK, you must consider how the key will be managed. The CMK, once generated, must be stored securely. Not all organizations have a team to manage encryption keys, so how can a locally managed key be created and maintained securely?
1. Generate the key.
   a. Create a custom policy to restrict deletion of the key.
   b. Create a custom IAM role and attach the policy.
   c. Create the Key Management Service (KMS) CMK as:
      i. Symmetric KMS key
      ii. Name and description
      iii. Labels
      iv. Choose the IAM role to be attached
      v. Set usage permissions
      vi. Review and create the CMK
2. Share the KMS CMK ARN with Snowflake.
   a. Raise a support ticket.
3. Snowflake provides key policy code to be added to the key policy.
4. Snowflake confirms account rekeying is complete.
Naturally, this guide cannot be prescriptive and does not inform controls around AWS account security.
S3 Security Profile and Scanning
Strictly speaking, AWS S3 bucket security is not a Snowflake-specific issue, but it is mentioned here in the context of external stages to provide a holistic view of application security. Referenced throughout this book, S3 is a gateway into Snowflake.
Every organization should have a policy or template security setting for S3 buckets, with appropriate security-scanning tools such as Check Point Software Technologies’ CloudGuard. Naturally, the configuration and use of any scanning tools are beyond the scope of this book but are mentioned for completeness because artifacts loaded into S3 must also be protected from unauthorized access.
Penetration Test (Pen Test)
The subject of penetration testing occasionally reappears in organizations where new team members, management, and oversight look to reaffirm we have all our controls in place and pen testing is up to date. From a delivery perspective, pen testing is paradoxically out of our hands. It is not enough to make this statement without explaining why, and this section provides context and reasoning.
Understanding the objectives of pen testing provides part of the answer, with this definition from www.ncsc.gov.uk/guidance/penetration-testing .
A method for gaining assurance in the security of an IT system by attempting to breach some or all that system’s security, using the same tools and techniques as an adversary might.
Stress testing the Snowflake environment is naturally in Snowflake Inc.’s best interest, and much continual effort is expended to ensure Snowflake remains secure. But in so doing, the tools and techniques must remain confidential. Any bad actor would love to know which tools and techniques are deployed, and where gaps in coverage may identify weakness or opportunity. The last thing any product vendor needs is a zero-day exploit. Does anyone remember Log4j?
But this section relates to pen tests. Apart from the generally available proofs, how can our organization be assured at a detailed level?
Contractual negotiations between organizations and Snowflake Inc. include a provision for disclosing details of pen tests conducted to named individuals. The named individuals should be cybersecurity specialists as the information disclosed is highly sensitive. Therefore, the recipient list must be kept short.
The final body of evidence we can rely upon is Snowflake’s field security CTOs, specialists available to explain those topics of interest to our cybersecurity colleagues in presentation and document formats.
With this explanation in mind, I trust there is sufficient evidence to satisfy our immediate concerns while providing information on how to dig deeper. Finally, more information can be found at www.snowflake.com/product/security-and-trust-center/ from which some of the preceding content is sourced.
Time Travel
A brief discussion of the Time Travel account security feature is important. As discussed in Chapter 3, at a minimum, you must ensure your production environments have the Time Travel feature set to 90 days. It is recommended that all other environments have it enabled for the occasional mishaps during development. Yes, we have all been there. For monitoring in Chapter 6, this setting is SNOWFLAKE_NIST_TT1.
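As a sketch, retention can be set at the account level so new databases inherit it, or per database; the database name below is a placeholder, and 90-day retention assumes Enterprise Edition or higher.

```sql
-- Sketch: 90-day Time Travel retention at the account level.
ALTER ACCOUNT SET DATA_RETENTION_TIME_IN_DAYS = 90;

-- Or per production database:
ALTER DATABASE prod_db SET DATA_RETENTION_TIME_IN_DAYS = 90;

-- Verify the setting as part of routine monitoring:
SHOW PARAMETERS LIKE 'DATA_RETENTION_TIME_IN_DAYS' IN ACCOUNT;
```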
Change Bundles
Change bundles introduce behavior changes that may affect your application code base and are pre-announced to registered users. See https://community.snowflake.com/s/article/Pending-Behavior-Change-Log .
Change bundles are applied at the account level; therefore, a single account holding both production and non-production environments should be given additional consideration before applying change bundles.
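The status checks referred to next can be sketched with Snowflake's system functions; the bundle name '2022_01' is a placeholder for the bundle announced in the behavior change log.

```sql
-- Sketch: check, enable, re-check, and (if testing fails) disable a bundle.
SELECT SYSTEM$BEHAVIOR_CHANGE_BUNDLE_STATUS('2022_01');  -- BEFORE status

SELECT SYSTEM$ENABLE_BEHAVIOR_CHANGE_BUNDLE('2022_01');

SELECT SYSTEM$BEHAVIOR_CHANGE_BUNDLE_STATUS('2022_01');  -- AFTER status

SELECT SYSTEM$DISABLE_BEHAVIOR_CHANGE_BUNDLE('2022_01'); -- roll back if needed
```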
Naturally, the AFTER status should return ENABLED, after which point testing can begin. Do not forget to remove all test objects and roles created beforehand.
Naturally, the AFTER status should return DISABLED.
Always ensure test cases are removed after testing.
Summary
This chapter began by identifying how and why cybersecurity attacks occur, available resources to identify security requirements, defining corresponding Snowflake cybersecurity controls, and some examples of implementing controls with sample code.
You also looked at several Snowflake guardrails provided to allow us to control our environments, along with a troubleshooting guide for SSO.
The discussion included explanations of underlying storage security, focusing on AWS S3. You also looked at penetration testing, explaining the security context, the actions Snowflake conducts behind the scenes on our behalf, and the means available to satisfy ourselves. Snowflake is inherently secure.
Finally, having dipped your toes into a very deep subject, and hopefully having been given a decent account of it, let’s move on to Chapter 5. Stay tuned!