Chapter 3: Securing Storage Services

In the previous chapter, we covered compute services. The second most commonly discussed resource type is storage – from object storage to block storage (also known as instance-attached storage) to file storage.

We use storage services to persist our data.

The following is a list of common threats that might impact our data when it is stored in the cloud:

  • Unauthorized access
  • Data leakage
  • Data exfiltration
  • Data loss

As a best practice, we should always use the following countermeasures when storing data in the cloud:

  • Access-control lists (ACLs; note that each cloud provider has its own implementation) and Identity and Access Management (IAM), to restrict access from our cloud environment to the storage service
  • Encryption both in transit and at rest, to ensure data confidentiality
  • Auditing to have a log of who has access to our data and what actions were performed on our data (for instance, uploads, downloads, updates, deletions, and more)
  • Backups or taking snapshots to allow us to restore deleted data or return to the previous version of our data (for example, in the event of ransomware encrypting our data)

This chapter will cover all types of storage services and provide you with best practices on how to securely connect and use each of them.

In this chapter, we will cover the following topics:

  • Securing object storage
  • Securing block storage
  • Securing file storage
  • Securing the Container Storage Interface (CSI)

Technical requirements

For this chapter, you are required to have a fundamental understanding of object storage, block storage, and file storage.

Securing object storage

Each cloud provider has its own implementation of object storage, but at the end of the day, the basic idea is the same:

  • Object storage is a type of storage designed to store large amounts of unstructured data.
  • Files (or objects) are stored inside buckets (logical containers, similar to directories).
  • Access to files on object storage is done through the HTTP(S) protocol – via a web API, command-line tools, or programmatically using SDKs.
  • Object storage is not meant to store operating systems or databases (please refer to the Securing block storage section).

Next, we are going to examine what the best practices are for securing object storage services from AWS, Azure, and GCP.

For more information, please refer to the following resource:

Object storage: https://en.wikipedia.org/wiki/Object_storage

Securing Amazon Simple Storage Service

Amazon Simple Storage Service (Amazon S3) is the Amazon object storage service.

Best practices for conducting authentication and authorization for Amazon S3

AWS controls access to S3 buckets and the objects inside them through a combination of mechanisms.

Access can be controlled at the entire bucket level (along with all objects inside this bucket) and on a specific object level (for example, let's suppose you would like to share a specific file with several of your colleagues).

AWS supports the following methods for managing access permissions to S3 buckets:

  • IAM policies: This allows you to set permissions for what actions are allowed or denied from an identity (for instance, a user, a group, or a role).
  • Bucket policies: This allows you to set permissions at the S3 bucket level – it applies to all objects inside a bucket.
  • S3 access points: This gives you the ability to grant access to S3 buckets to a specific group of users or applications.

Additionally, AWS can restrict the permissions of identities (regardless of the resource, S3 included) at the AWS organization level, using service control policies (SCPs).

The effective permissions on an S3 bucket are the combination of the SCPs, the identity permissions (IAM policies), the resource permissions (bucket policies), and the AWS KMS key policy (for example, one allowing access to an encrypted object), assuming the user is not explicitly denied by any of them.

Here is a list of best practices to follow:

  • Create an IAM group, add users to the IAM group, and grant the required permissions on the target S3 bucket to the target IAM group.
  • Use IAM roles for services (such as applications or non-human identities) that require access to S3 buckets.
  • Restrict access for IAM users/groups to a specific S3 bucket, rather than using wildcard permissions for all S3 buckets in the AWS account.
  • Remove default bucket owner access permissions to S3 buckets.
  • Use IAM policies for applications (or non-human identities)/service-linked roles that need access to S3 buckets.
  • Enable MFA delete for S3 buckets to avoid the accidental deletion of objects from a bucket.
  • Grant minimal permissions to S3 buckets (that is, a specific identity on a specific resource with specific conditions).
  • Grant the Amazon S3 log delivery group write permissions in the bucket ACL so that it can deliver server access logs (for further analysis).
  • For data that you need to retain for long periods (due to regulatory requirements), use the S3 object lock to protect the data from accidental deletion.
  • Encrypt data at rest using Amazon S3-Managed Encryption Keys (SSE-S3). This is explained in more detail in Chapter 7, Applying Encryption in Cloud Services.
  • For sensitive environments, encrypt data at rest using Customer-Provided Encryption Keys (SSE-C). This is explained, in more detail, in Chapter 7, Applying Encryption in Cloud Services.
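
To make the least-privilege items above concrete, the following is a sketch of an identity policy that grants read-only access to a single bucket. The bucket name, group name, policy name, and statement ID are hypothetical placeholders:

```shell
# Sketch: a least-privilege IAM policy allowing read-only access to one
# hypothetical bucket (example-bucket); written to a local file first
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowReadOnlyOnSingleBucket",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}
EOF
# Attach it to an IAM group (group and policy names are placeholders):
# aws iam put-group-policy --group-name storage-users \
#     --policy-name S3ReadOnlyExample --policy-document file://policy.json
```

Note how the Resource element names both the bucket ARN (for s3:ListBucket) and the objects inside it (for s3:GetObject), rather than a wildcard over all buckets.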

For more information, please refer to the following resources:

Identity and access management in Amazon S3:

https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-access-control.html

IAM policies, bucket policies, and ACLs:

https://aws.amazon.com/blogs/security/iam-policies-and-bucket-policies-and-acls-oh-my-controlling-access-to-s3-resources/

How IAM roles differ from resource-based policies:

https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_compare-resource-policies.html

Amazon S3 Preventative Security Best Practices:

https://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html#security-best-practices-prevent

Setting default server-side encryption behavior for Amazon S3 buckets:

https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-encryption.html

Consider encryption of data at rest:

https://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html#server-side

Best practices for securing network access to Amazon S3

Because Amazon S3 is a managed service, it is located outside the customer's Virtual Private Cloud (VPC). It is important to protect access to the Amazon S3 service.

Here is a list of best practices to follow:

  • Unless there is a business requirement to share data publicly (such as static web hosting), keep all Amazon S3 buckets (all tiers) private.
  • To secure access from your VPC to Amazon S3, use AWS PrivateLink. This keeps traffic inside the AWS backbone, through a secure channel, using an interface VPC endpoint.
  • For sensitive environments, use bucket policies to enforce access to an S3 bucket from a specific VPC endpoint or a specific VPC.
  • Use bucket policies to enforce the use of transport encryption (HTTPS only).
  • For sensitive environments, use bucket policies to require TLS version 1.2 as the minimum.
  • Encrypt data at rest using SSE-S3 (as explained in Chapter 7, Applying Encryption in Cloud Services).
  • For sensitive environments, encrypt data at rest using SSE-C (as explained in Chapter 7, Applying Encryption in Cloud Services).
  • Consider using presigned URLs for scenarios where you need to allow external user access (with specific permissions, such as file download) to an S3 bucket for a short period, without the need to create an IAM user.
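
The transport-related items above can be expressed as a bucket policy. The following sketch denies any request made over plain HTTP or with a TLS version below 1.2; the bucket name and filename are placeholders:

```shell
# Sketch: bucket policy denying unencrypted transport and TLS < 1.2
cat > tls-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyInsecureTransport",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ],
      "Condition": {"Bool": {"aws:SecureTransport": "false"}}
    },
    {
      "Sid": "DenyOldTls",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ],
      "Condition": {"NumericLessThan": {"s3:TlsVersion": "1.2"}}
    }
  ]
}
EOF
# Apply it to the bucket:
# aws s3api put-bucket-policy --bucket example-bucket \
#     --policy file://tls-policy.json
```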

For more information, please refer to the following resources:

Internetwork traffic privacy:

https://docs.aws.amazon.com/AmazonS3/latest/userguide/inter-network-traffic-privacy.html

AWS PrivateLink for Amazon S3:

https://docs.aws.amazon.com/AmazonS3/latest/userguide/privatelink-interface-endpoints.html

Protecting data using encryption:

https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingEncryption.html

Setting default server-side encryption behavior for Amazon S3 buckets:

https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-encryption.html

Consider encryption of data at rest:

https://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html#server-side

Enforce encryption of data in transit:

https://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html#transit

Using S3 Object Lock:

https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock.html

Using presigned URLs:

https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-presigned-url.html

Best practices for conducting auditing and monitoring for Amazon S3

Auditing is a crucial part of data protection.

As with any other managed service, AWS allows you to enable logging and auditing using two built-in services:

  • Amazon CloudWatch: This is a service that allows you to log object storage activities and raise alarms based on predefined activities (such as excessive delete actions).
  • AWS CloudTrail: This is a service that allows you to monitor API activities (essentially, any action performed on Amazon S3).

Here is a list of best practices to follow:

  • Enable Amazon CloudWatch alarms for excessive S3 usage (for example, a high volume of GET, PUT, or DELETE operations on a specific S3 bucket).
  • Enable AWS CloudTrail for any S3 bucket to log any activity performed on Amazon S3 by any user, role, or AWS service.
  • Limit access to the CloudTrail logs to a minimal number of employees – preferably in the AWS management account, outside the scope of your end users – to avoid possible deletion of or changes to the audit logs.
  • Enable S3 server access logs to record all access activities, as a complement to AWS CloudTrail's API-based logging (for future forensic purposes).
  • Use Access Analyzer for S3 to locate S3 buckets with public access or S3 buckets that have access from external AWS accounts.
  • Enable file integrity monitoring to make sure files have not been changed.
  • Enable object versioning to avoid accidental deletion (and to protect against ransomware).
  • Use Amazon S3 inventory to monitor the status of S3 bucket replication (such as encryption on both the source and destination buckets).
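
Two of the practices above – object versioning and server access logging – can be enabled with the AWS CLI. This is a sketch; the bucket names are placeholders, and the commands assume credentials with the relevant permissions:

```shell
# Enable object versioning (protects against accidental deletion and
# ransomware) on a hypothetical bucket
aws s3api put-bucket-versioning \
    --bucket example-bucket \
    --versioning-configuration Status=Enabled

# Enable server access logging, delivered to a separate, hypothetical
# log bucket (keep audit logs out of the bucket being audited)
aws s3api put-bucket-logging \
    --bucket example-bucket \
    --bucket-logging-status '{"LoggingEnabled": {"TargetBucket": "example-log-bucket", "TargetPrefix": "access-logs/"}}'
```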

For more information, please refer to the following resources:

Logging Amazon S3 API calls using AWS CloudTrail:

https://docs.aws.amazon.com/AmazonS3/latest/userguide/cloudtrail-logging.html

Logging requests using server access logging:

https://docs.aws.amazon.com/AmazonS3/latest/userguide/ServerLogs.html

Amazon S3 Monitoring and Auditing Best Practices:

https://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html#security-best-practices-detect

Reviewing bucket access using Access Analyzer for S3:

https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-analyzer.html

Amazon S3 inventory:

https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-inventory.html

Summary

In this section, we learned how to secure the Amazon S3 service based on the AWS infrastructure. This included authentication and authorization, securing network access, data encryption (both in transit and at rest), and logging and auditing.

Securing Azure Blob storage

Azure Blob storage is the Azure object storage service.

Best practices for conducting authentication and authorization for Azure Blob storage

Azure controls authorization for Blob storage using Azure Active Directory.

For temporary access to Azure Blob storage (that is, for an application or a non-human interaction), you have the option to use shared access signatures (SAS).

Here is a list of best practices to follow:

  • Create an Azure AD group, add users to the AD group, and then grant the required permissions on the target Blob storage to the target AD group.
  • Use shared access signatures (SAS) to allow applications temporary access to Blob storage.
  • Grant minimal permissions to Azure Blob storage.
  • For data that you need to retain for long periods (due to regulatory requirements), use an immutable Blob storage lock to protect the data from accidental deletion.
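
As a sketch of temporary access with SAS, the following Azure CLI command issues a read-only user delegation SAS for a single container. The account name, container name, and expiry time are hypothetical placeholders:

```shell
# Sketch: read-only user delegation SAS for one container, valid until
# the given expiry; account and container names are placeholders
az storage container generate-sas \
    --account-name mystorageaccount \
    --name mycontainer \
    --permissions r \
    --expiry 2025-01-01T00:00Z \
    --auth-mode login \
    --as-user
```

The --as-user flag (with --auth-mode login) produces a user delegation SAS signed with Azure AD credentials rather than the storage account key, which is preferable when shared key authorization is disabled.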

For more information, please refer to the following resources:

Authorize access to blobs using Azure Active Directory:

https://docs.microsoft.com/en-us/azure/storage/common/storage-auth-aad

Prevent Shared Key authorization for an Azure Storage account:

https://docs.microsoft.com/en-us/azure/storage/common/shared-key-authorization-prevent?tabs=portal

Grant limited access to Azure Storage resources using shared access signatures (SAS):

https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview

Security recommendations for Blob storage:

https://docs.microsoft.com/en-us/azure/storage/blobs/security-recommendations

Best practices for securing network access to Azure Blob storage

Because Azure Blob storage is a managed service, it is located outside the customer's Virtual Network (VNet). It is important to protect access to the Azure Blob storage service.

Here is a list of best practices to follow:

  • Keep all Azure Blob storage (that is, all tiers) private.
  • To secure access from your VNet to the Azure Blob storage, use an Azure private endpoint, which avoids sending network traffic outside your VNet through a secure channel.
  • Enforce the use of transport encryption (HTTPS only) for all Azure Blob storage.
  • For sensitive environments, require a minimum of TLS version 1.2 for Azure Blob storage.
  • Deny network access to the Azure storage account by default, and only allow access based on predefined conditions, such as specific IP addresses.
  • Encrypt data at rest using Azure Storage encryption (this is enabled by default, using Microsoft-managed keys).
  • For sensitive environments (for example, those containing PII, credit card details, healthcare data, and more), encrypt data at rest using customer-managed keys (CMKs) stored inside Azure Key Vault (as explained in Chapter 7, Applying Encryption in Cloud Services).
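
Several of the items above map directly to Azure CLI commands. This is a sketch; the storage account, resource group, and IP range are hypothetical placeholders:

```shell
# Enforce HTTPS-only transport and a minimum of TLS 1.2
az storage account update \
    --name mystorageaccount \
    --resource-group MyResourceGroup \
    --https-only true \
    --min-tls-version TLS1_2

# Deny network access by default...
az storage account update \
    --name mystorageaccount \
    --resource-group MyResourceGroup \
    --default-action Deny

# ...then allow access only from a specific source IP range
az storage account network-rule add \
    --account-name mystorageaccount \
    --resource-group MyResourceGroup \
    --ip-address 203.0.113.0/24
```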

For more information, please refer to the following resources:

Configure Azure Storage firewalls and virtual networks:

https://docs.microsoft.com/en-us/azure/storage/common/storage-network-security?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json&tabs=azure-portal

Tutorial: Connect to a storage account using an Azure Private Endpoint:

https://docs.microsoft.com/en-us/azure/private-link/tutorial-private-endpoint-storage-portal

Require secure transfer to ensure secure connections:

https://docs.microsoft.com/en-us/azure/storage/common/storage-require-secure-transfer

Enforce a minimum required version of Transport Layer Security (TLS) for requests to a storage account:

https://docs.microsoft.com/en-us/azure/storage/common/transport-layer-security-configure-minimum-version?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json&tabs=portal

Azure Storage encryption for data at rest:

https://docs.microsoft.com/en-us/azure/storage/common/storage-service-encryption?toc=/azure/storage/blobs/toc.json

Customer-managed keys for Azure Storage encryption:

https://docs.microsoft.com/en-us/azure/storage/common/customer-managed-keys-overview?toc=/azure/storage/blobs/toc.json

Best practices for conducting auditing and monitoring for Azure Blob storage

Auditing is a crucial part of data protection.

Azure allows you to monitor blob storage using the following services:

  • Azure Monitor: This service logs access and audit events from Azure Blob storage.
  • Azure Security Center: This service allows you to monitor for compliance issues in the Azure Blob storage configuration.

Here is a list of best practices to follow:

  • Enable log alerts using the Azure Monitor service to track access to the Azure Blob storage and raise alerts (such as multiple failed access attempts to Blob storage in a short period of time).
  • Enable Azure storage logging to audit all authorization events for access to the Azure Blob storage.
  • Log anonymous successful access attempts to detect unauthorized access to the Azure Blob storage.
  • Enable Azure Defender for Storage to receive security alerts in the Azure Security Center console.
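
Azure Defender for Storage can be enabled at the subscription level with a single Azure CLI command. This is a sketch and assumes the CLI is logged in to the target subscription:

```shell
# Enable Azure Defender for Storage for the current subscription;
# alerts will surface in the Azure Security Center console
az security pricing create \
    --name StorageAccounts \
    --tier standard
```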

For more information, please refer to the following resources:

Monitoring Azure Blob storage:

https://docs.microsoft.com/en-us/azure/storage/blobs/monitor-blob-storage?tabs=azure-portal

Azure Storage analytics logging:

https://docs.microsoft.com/en-us/azure/storage/common/storage-analytics-logging

Log alerts in Azure Monitor:

https://docs.microsoft.com/en-us/azure/azure-monitor/alerts/alerts-unified-log

Summary

In this section, we learned how to secure the Azure Blob storage service based on the Azure infrastructure. This included authentication and authorization, securing network access, data encryption (both in transit and at rest), and logging and auditing.

Securing Google Cloud Storage

Google Cloud Storage is the GCP object storage service.

Best practices for conducting authentication and authorization for Google Cloud Storage

Access can be controlled at the entire bucket level (including all objects inside this bucket) or on a specific object level (for example, suppose you would like to share a specific file with several of your colleagues).

GCP supports the following methods to access a cloud storage bucket:

  • Uniform bucket-level access: This method sets permissions based on the Google Cloud IAM role (that is, user, group, domain, or public).
  • Fine-grained: This method sets permissions based on a combination of both Google Cloud IAM and an ACL – it applies to either the entire bucket level or to a specific object.

Here is a list of best practices to follow:

  • Create an IAM group, add users to the IAM group, and then grant the required permissions on the target cloud storage bucket to the target IAM group.
  • Use IAM policies for applications that require access to cloud storage buckets.
  • Grant minimal permissions to cloud storage buckets.
  • Use Security Token Service (STS) to allow temporary access to cloud storage.
  • Use HMAC keys to allow the service account temporary access to cloud storage.
  • Use signed URLs to allow an external user temporary access to cloud storage.
  • For data that you need to retain for long periods (due to regulatory requirements), use the bucket lock feature to protect the data from accidental deletion.
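
Two of the items above can be sketched with the gsutil tool. The bucket name, object name, and service account key file are hypothetical placeholders:

```shell
# Enforce uniform bucket-level access, so permissions come only from
# Cloud IAM (no per-object ACLs)
gsutil uniformbucketlevelaccess set on gs://example-bucket

# Issue a signed URL valid for 10 minutes, letting an external user
# download one object without needing a Google identity
gsutil signurl -d 10m service-account-key.json gs://example-bucket/report.pdf
```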

For more information, please refer to the following resources:

Identity and Access Management:

https://cloud.google.com/storage/docs/access-control/iam

Cloud Storage authentication:

https://cloud.google.com/storage/docs/authentication

Access control lists (ACLs):

https://cloud.google.com/storage/docs/access-control/lists

Retention policies and retention policy locks:

https://cloud.google.com/storage/docs/bucket-lock

Security Token Service API:

https://cloud.google.com/iam/docs/reference/sts/rest

HMAC keys:

https://cloud.google.com/storage/docs/authentication/hmackeys

Signed URLs:

https://cloud.google.com/storage/docs/access-control/signed-urls

4 best practices for ensuring privacy and security of your data in Cloud Storage:

https://cloud.google.com/blog/products/storage-data-transfer/google-cloud-storage-best-practices-to-help-ensure-data-privacy-and-security

Best practices for securing network access to Google Cloud Storage

Because Google Cloud Storage is a managed service, it is located outside the customer's VPC. It is important to protect access to Google Cloud Storage.

Here is a list of best practices to follow:

  • Use TLS for transport encryption (HTTPS only).
  • Keep all cloud storage buckets (all tiers) private.
  • Use VPC Service Controls to allow access from your VPC to Google Cloud Storage.
  • Encrypt cloud storage buckets using Google-managed encryption keys inside Google Cloud KMS (as explained in Chapter 7, Applying Encryption in Cloud Services).
  • For sensitive environments (for example, those containing PII, credit card information, healthcare data, and more), encrypt cloud storage buckets using a CMK inside Google Cloud KMS (as explained in Chapter 7, Applying Encryption in Cloud Services).
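
Setting a customer-managed Cloud KMS key as a bucket's default encryption key can be sketched as follows. The project, location, key ring, key, and bucket names are hypothetical placeholders:

```shell
# Set a customer-managed Cloud KMS key as the default encryption key
# for a hypothetical bucket; new objects will be encrypted with it
gsutil kms encryption \
    -k projects/example-project/locations/us-east1/keyRings/example-ring/cryptoKeys/example-key \
    gs://example-bucket
```

The Cloud Storage service account must first be granted the Cloud KMS CryptoKey Encrypter/Decrypter role on the key.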

For more information, please refer to the following resources:

Security, ACLs, and access control:

https://cloud.google.com/storage/docs/best-practices#security

The security benefits of VPC Service Controls:

https://cloud.google.com/vpc-service-controls/docs/overview

Enabling VPC accessible services:

https://cloud.google.com/vpc-service-controls/docs/manage-service-perimeters#add_a_service_to_the_vpc_accessible_services

Customer-supplied encryption keys:

https://cloud.google.com/storage/docs/encryption/customer-supplied-keys

Customer-managed encryption keys:

https://cloud.google.com/storage/docs/encryption/customer-managed-keys

Best practices for conducting auditing and monitoring for Google Cloud Storage

Auditing is a crucial part of data protection.

As with any other managed service, GCP allows you to enable logging and auditing using Google Cloud Audit Logs.

Here is a list of best practices to follow:

  • Admin activity audit logs are enabled by default and cannot be disabled.
  • Explicitly enable Data Access audit logs to log activities performed on Google Cloud Storage.
  • Limit the access to audit logs to a minimum number of employees to avoid possible deletion or any changes made to the audit logs.
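
Data Access audit logs for Cloud Storage are enabled through an auditConfigs entry in the project IAM policy. The following is a sketch of the required fragment; the project ID is a hypothetical placeholder:

```shell
# Sketch: auditConfigs fragment enabling Data Access audit logs for
# the Cloud Storage service (DATA_READ and DATA_WRITE events)
cat > policy-patch.yaml <<'EOF'
auditConfigs:
- auditLogConfigs:
  - logType: DATA_READ
  - logType: DATA_WRITE
  service: storage.googleapis.com
EOF
# Merge this fragment into the output of:
#   gcloud projects get-iam-policy example-project --format=yaml
# and apply the merged policy with:
#   gcloud projects set-iam-policy example-project merged-policy.yaml
```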

For more information, please refer to the following resources:

Cloud Audit Logs with Cloud Storage:

https://cloud.google.com/storage/docs/audit-logging

Usage logs & storage logs:

https://cloud.google.com/storage/docs/access-logs

Summary

In this section, we learned how to secure Google Cloud Storage based on the GCP infrastructure. This included authentication and authorization, securing network access, data encryption (both in transit and at rest), and logging and auditing.

Securing block storage

Block storage is a storage scheme similar to an on-premises storage area network (SAN).

It allows you to mount a volume (disk), format it to a common filesystem (such as NTFS for Windows or Ext4 for Linux), and store various files, databases, or entire operating systems.

Next, we are going to examine what the best practices are for securing block storage from AWS, Azure, and GCP.

For more information, please refer to the following resource:

Block-level storage:

https://en.wikipedia.org/wiki/Block-level_storage

Best practices for securing Amazon Elastic Block Store

Amazon Elastic Block Store (Amazon EBS) is the AWS block storage.

When working with EC2 instances, it is common to attach an additional volume to store your data (separately from the operating system volume). This is also known as block storage.

Amazon EBS can be attached to a single EC2 instance and can be accessed from within the operating system.

The traffic between your EC2 instance and your attached EBS volume is encrypted in transit (this is configured and controlled automatically by AWS).

Additionally, an EBS volume can be configured to encrypt data at rest for the rare scenario in which a potential attacker gains access to your EBS volume and wishes to access your data. The data itself (on the EBS volume and its snapshots) is only accessible by the EC2 instance that is connected to the EBS volume.

The following command uses the AWS CLI tool to create an encrypted EBS volume in a specific AWS availability zone:

aws ec2 create-volume \
    --size 80 \
    --encrypted \
    --availability-zone <AWS_AZ_code>

The following command uses the AWS CLI tool to enable EBS encryption by default in a specific AWS region:

aws ec2 enable-ebs-encryption-by-default --region <Region_Code>

Here is a list of best practices for EBS volumes:

  • Configure encryption by default for each region you are planning to deploy EC2 instances.
  • Encrypt both boot and data volumes.
  • Encrypt each EBS volume at creation time.
  • Encrypt EBS volume snapshots.
  • Use AWS Config to detect unattached EBS volumes.
  • Use an IAM policy to define who can attach, detach, or create a snapshot for EBS volumes to minimize the risk of data exfiltration.
  • Avoid configuring public access to your EBS volume snapshots – make sure all snapshots are encrypted.
  • For highly sensitive environments, encrypt EBS volumes using the customer master key (as explained in Chapter 7, Applying Encryption in Cloud Services).
  • Set names and descriptions for EBS volumes to better understand which EBS volume belongs to which EC2 instance.
  • Use tagging (that is, labeling) for EBS volumes to allow a better understanding of which EBS volume belongs to which EC2 instance.
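
The encryption items above can be sketched with two further AWS CLI commands. The snapshot ID and region are hypothetical placeholders:

```shell
# Verify that EBS encryption by default is enabled in the current region
aws ec2 get-ebs-encryption-by-default

# Create an encrypted copy of an existing (possibly unencrypted)
# snapshot; the snapshot ID and source region are placeholders
aws ec2 copy-snapshot \
    --source-snapshot-id snap-0123456789abcdef0 \
    --source-region us-east-1 \
    --encrypted
```

An existing unencrypted volume cannot be encrypted in place; the usual approach is to snapshot it, copy the snapshot with --encrypted, and create a new volume from the encrypted copy.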

For more information, please refer to the following resource:

Amazon EBS encryption:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html

Best practices for securing Azure managed disks

Azure managed disks are the Azure managed block-level storage service.

When working with VMs, it is common to attach an additional volume to store your data (that is, separately from the operating system volume). This is also known as block storage.

The following command uses the Azure CLI tool to encrypt a specific VM, in a specific resource group, using a unique customer key vault:

az vm encryption enable -g MyResourceGroup --name MyVM --disk-encryption-keyvault myKV

The following command uses the Azure CLI tool to show the encryption status of a specific VM in a specific resource group:

az vm encryption show --name "myVM" -g "MyResourceGroup"

Here is a list of best practices to follow:

  • Create encryption keys (inside the Azure Key Vault service) for each region you are planning to deploy VMs in.
  • For Windows machines, encrypt your data using BitLocker technology.
  • For Linux machines, encrypt your data using dm-crypt technology.
  • Encrypt both the OS and data volumes.
  • Encrypt each data volume at creation time.
  • Encrypt the VM snapshots.
  • Use an Azure private link service to restrict the export and import of managed disks to your Azure network.
  • For highly sensitive environments, encrypt data volumes using a CMK (as explained in Chapter 7, Applying Encryption in Cloud Services).
  • Set names for the Azure disk volumes to allow a better understanding of which disk volume belongs to which VM.
  • Use tagging (that is, labeling) for disk volumes to allow a better understanding of which disk volume belongs to which VM.
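
The naming and tagging items above can be sketched at disk creation time. The disk name, resource group, size, and tag values are hypothetical placeholders:

```shell
# Create a data disk whose name and tag record the owning VM, so that
# disk-to-VM ownership is visible in inventory and billing reports
az disk create \
    --name myVM-data-disk \
    --resource-group MyResourceGroup \
    --size-gb 128 \
    --tags owner-vm=myVM environment=production
```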

For more information, please refer to the following resource:

Server-side encryption of Azure Disk Storage:

https://docs.microsoft.com/en-us/azure/virtual-machines/disk-encryption

Best practices for securing Google Persistent Disk

Google Persistent Disk is the GCP block storage service.

The following command uses the Google Cloud SDK to encrypt a persistent disk, in a specific GCP project, in a specific region, using a specific encryption key:

gcloud compute disks create encrypted-disk \
    --kms-key projects/[KMS_PROJECT_ID]/locations/[REGION]/keyRings/[KEY_RING]/cryptoKeys/[KEY]

Here is a list of best practices to follow:

  • Encrypt both the OS and data volumes.
  • Encrypt each data volume at creation time.
  • Encrypt the machine instance snapshots.
  • For highly sensitive environments, encrypt persistent disks using a CMK inside Google Cloud KMS (as explained in Chapter 7, Applying Encryption in Cloud Services).
  • Set names for Google's persistent disks to allow you to have a better understanding of which persistent disk belongs to which machine instance.
  • Use tagging (that is, labeling) for persistent disks or snapshots to allow you to have a better understanding of which disk or snapshot belongs to which machine instance.
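
The labeling item above can be sketched with the Google Cloud SDK. The disk name, zone, and label values are hypothetical placeholders:

```shell
# Label a persistent disk with its owning instance, making disk-to-
# instance ownership easy to audit and filter on
gcloud compute disks add-labels example-data-disk \
    --zone us-east1-b \
    --labels owner-instance=example-instance
```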

For more information, please refer to the following resource:

Protect resources by using Cloud KMS keys:

https://cloud.google.com/compute/docs/disks/customer-managed-encryption

Summary

In this section, we learned how to secure block storage. Since block storage volumes are part of the common compute services (such as Amazon EC2, Azure VM, Google Compute Engine, and more), the only way to protect block storage volumes (and their snapshots) is to encrypt them, prior to using them.

Access to block storage volumes is done from within the guest operating system, and auditing is part of the capabilities of guest operating systems.

Securing file storage

File storage is a storage scheme similar to on-premises network-attached storage (NAS).

Each cloud provider has its own implementation of file storage, but at the end of the day, the basic ideas behind file storage services are as follows:

  • They offer support for common file sharing protocols (such as NFS and SMB/CIFS).
  • They have the ability to mount a volume from a managed file service into an operating system to store and retrieve files, for multiple VMs, in parallel.
  • They have the ability to control access permissions to the remote filesystem.
  • They enable automatic filesystem growth.

Next, we are going to examine the best practices for securing file storage services from AWS, Azure, and GCP.

For more information, please refer to the following resource:

Network-attached storage:

https://en.wikipedia.org/wiki/Network-attached_storage

Securing Amazon Elastic File System

Amazon Elastic File System (Amazon EFS) is the Amazon file storage service based on the NFS protocol.

Best practices for conducting authentication and authorization for Amazon EFS

AWS IAM is the service used to manage permissions for accessing Amazon EFS.

Here is a list of best practices to follow:

  • Avoid using the AWS root account to access AWS resources such as Amazon EFS.
  • Create an IAM group, add users to the IAM group, and then grant the required permissions on the target Amazon EFS to the target IAM group.
  • Use IAM roles for federated users, AWS services, or applications that need access to Amazon EFS.
  • Use IAM policies to grant the minimal required permissions to create EFS volumes or access and use Amazon EFS.
  • When using IAM policies, specify conditions (such as the source IP address) and the actions an end user is allowed to take on the target filesystem under those conditions.
  • Use resource-based policies to configure who can access the EFS volume and what actions this end user can take on the filesystem (for example, mount, read, write, and more).
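
A resource-based policy for EFS can be sketched as follows. The file system ID, account ID, and role name are hypothetical placeholders:

```shell
# Sketch: EFS file system policy allowing one IAM role to mount the
# file system and write to it
cat > efs-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::111122223333:role/app-role"},
      "Action": [
        "elasticfilesystem:ClientMount",
        "elasticfilesystem:ClientWrite"
      ]
    }
  ]
}
EOF
# Attach the policy to the file system (the ID is a placeholder):
# aws efs put-file-system-policy \
#     --file-system-id fs-0123456789abcdef0 \
#     --policy file://efs-policy.json
```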

For more information, please refer to the following resources:

Identity and access management for Amazon EFS:

https://docs.aws.amazon.com/efs/latest/ug/auth-and-access-control.html

Working with users, groups, and permissions at the Network File System (NFS) Level:

https://docs.aws.amazon.com/efs/latest/ug/accessing-fs-nfs-permissions.html

AWS managed policies for Amazon EFS:

https://docs.aws.amazon.com/efs/latest/ug/security-iam-awsmanpol.html

Security in Amazon EFS:

https://docs.aws.amazon.com/efs/latest/ug/security-considerations.html

Overview of managing access permissions to your Amazon EFS resources:

https://docs.aws.amazon.com/efs/latest/ug/access-control-overview.html

Best practices for securing network access to Amazon EFS

Because Amazon EFS is a managed service, it is located outside the customer's VPC. It is important to protect access to the Amazon EFS service.

Here is a list of best practices to follow:

  • Keep Amazon EFS (that is, all storage classes) private.
  • Use VPC security groups to control the access between your Amazon EC2 machines and the Amazon EFS mount volumes.
  • To secure access from your VPC to Amazon EFS, use AWS PrivateLink, which avoids sending network traffic outside your VPC, through a secure channel, using an interface VPC endpoint.
  • Use Amazon EFS access points to manage application access to the EFS volume.
  • Use STS to allow temporary access to Amazon EFS.
  • Use an IAM policy to enforce encryption at rest for Amazon EFS filesystems. You can do this by setting the value of elasticfilesystem:Encrypted to True inside the IAM policy condition.
  • For sensitive environments, use the EFS mount helper to enforce the use of encryption in transit using TLS version 1.2 when mounting an EFS volume.
  • Encrypt data at rest using AWS-managed CMK for Amazon EFS (as explained in Chapter 7, Applying Encryption in Cloud Services).
  • For sensitive environments, encrypt data at rest using a CMK (as explained in Chapter 7, Applying Encryption in Cloud Services).
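The encryption-at-rest bullet can be expressed as an identity-based IAM policy, as the documentation linked below describes. A minimal sketch, assuming it is attached to the identities allowed to create filesystems:

```python
import json

# A sketch of a policy that permits creating EFS filesystems only when
# encryption at rest is requested, via the elasticfilesystem:Encrypted
# condition key.
enforce_encryption = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowOnlyEncryptedFileSystems",
            "Effect": "Allow",
            "Action": "elasticfilesystem:CreateFileSystem",
            "Condition": {"Bool": {"elasticfilesystem:Encrypted": "true"}},
            "Resource": "*",
        }
    ],
}

print(json.dumps(enforce_encryption, indent=2))
```

With this in place, a CreateFileSystem call that does not request encryption is simply not allowed.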

For more information, please refer to the following resources:

Controlling network access to Amazon EFS file systems for NFS clients:

https://docs.aws.amazon.com/efs/latest/ug/NFS-access-control-efs.html

Working with Amazon EFS Access Points:

https://docs.aws.amazon.com/efs/latest/ug/efs-access-points.html

Amazon Elastic File System Network Isolation:

https://docs.aws.amazon.com/efs/latest/ug/network-isolation.html

Data encryption in Amazon EFS:

https://docs.aws.amazon.com/efs/latest/ug/encryption.html

Using IAM to enforce creating encrypted file systems:

https://docs.aws.amazon.com/efs/latest/ug/using-iam-to-enforce-encryption-at-rest.html

Best practices for conducting auditing and monitoring for Amazon EFS

Auditing is a crucial part of data protection.

As with any other managed service, AWS allows you to enable logging and auditing using two built-in services:

  • Amazon CloudWatch: This is a service that allows you to log access activities and raise alarms based on predefined activity patterns (such as excessive delete actions).
  • AWS CloudTrail: This is a service that allows you to monitor API activities (essentially, any action performed on Amazon EFS).

Here is a list of best practices to follow:

  • Enable Amazon CloudWatch alarms for excessive Amazon EFS usage (for example, a high volume of store or delete operations on a specific EFS volume).
  • Enable the use of AWS CloudTrail for any EFS volume to log any activity performed on the Amazon EFS API, including any activity conducted by a user, role, or AWS service.
  • Create a trail, using AWS CloudTrail, to log events for your EFS volumes, such as the requested action, the date and time, and the request parameters.
  • Limit access to the CloudTrail logs to a minimum number of employees, preferably in a separate AWS management account outside the scope of your end users, to avoid possible deletion of or changes to the audit logs.
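As an illustration of the first bullet, the following sketch shows the parameters you might pass to CloudWatch's put_metric_alarm call (via boto3) to flag unusually heavy write activity on one filesystem. The alarm name, filesystem ID, threshold, and SNS topic are placeholder assumptions:

```python
# Parameters for a CloudWatch alarm on the AWS/EFS DataWriteIOBytes metric.
# With boto3 you would pass this dict as:
#   boto3.client("cloudwatch").put_metric_alarm(**alarm)
alarm = {
    "AlarmName": "efs-high-write-volume",
    "Namespace": "AWS/EFS",
    "MetricName": "DataWriteIOBytes",
    "Dimensions": [{"Name": "FileSystemId", "Value": "fs-12345678"}],
    "Statistic": "Sum",
    "Period": 300,                # evaluate in 5-minute windows
    "EvaluationPeriods": 3,       # require 3 consecutive breaches
    "Threshold": 10 * 1024 ** 3,  # ~10 GiB written per window
    "ComparisonOperator": "GreaterThanThreshold",
    "AlarmActions": ["arn:aws:sns:us-east-1:111122223333:storage-alerts"],
}
```

Tune the metric, window, and threshold to your workload's normal baseline before relying on the alarm.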

For more information, please refer to the following resource:

Logging and Monitoring in Amazon EFS:

https://docs.aws.amazon.com/efs/latest/ug/logging-monitoring.html

Summary

In this section, we learned how to secure the Amazon EFS service based on the AWS infrastructure. This included authentication and authorization, securing network access, data encryption (both in transit and at rest), and logging and auditing.

Securing Azure Files

Azure Files is an Azure file storage service based on the SMB protocol.

Best practices for conducting authentication and authorization for Azure Files

Azure supports the following authentication and authorization mechanisms to control access to Azure Files:

  • Active Directory Domain Services (AD DS): This is the equivalent of on-premises Active Directory, using Kerberos authentication.
  • Azure Active Directory Domain Services (Azure AD DS): This is an add-on service to Azure AD that allows you to authenticate legacy protocols (SMB, in the case of Azure Files) with legacy authentication mechanisms (Kerberos, in the case of Azure Files).

Here is a list of best practices to follow:

  • Use identity-based authentication to grant minimal permissions at the share, directory, or file level on the Azure Files service.
  • For a cloud-native environment in Azure (with no on-premises VMs), make sure all VMs are joined to an Azure AD DS managed domain and that all VMs are connected to the same VNet as Azure AD DS.
  • Enable Active Directory authentication over SMB to allow domain-joined VMs to access Azure Files.
  • Avoid using storage account keys for authenticating to Azure Files.
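In ARM terms, identity-based authentication is switched on through a single storage account property. The following sketch assumes an Azure AD DS managed domain already exists; the value "AD" would select on-premises AD DS instead (and requires additional domain properties not shown here):

```python
# The directoryServiceOptions property of a storage account controls which
# directory service authenticates SMB access to Azure Files.
storage_account_update = {
    "properties": {
        "azureFilesIdentityBasedAuthentication": {
            "directoryServiceOptions": "AADDS"  # "AD", "AADDS", or "None"
        }
    }
}
```

You would apply this fragment through an ARM template, the Azure CLI, or an SDK when updating the storage account.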

For more information, please refer to the following resources:

Overview of Azure Files identity-based authentication options for SMB access:

https://docs.microsoft.com/en-us/azure/storage/files/storage-files-active-directory-overview

Planning for an Azure Files deployment:

https://docs.microsoft.com/en-us/azure/storage/files/storage-files-planning

Best practices for securing network access to Azure Files

Because Azure Files is a managed service, it is located outside the customer's VNet. It is important to protect access to the Azure Files service.

Here is a list of best practices to follow:

  • Since SMB is considered an insecure protocol, make sure all access to the Azure Files service from the on-premises network passes through a secure channel, such as a VPN tunnel or an ExpressRoute circuit.
  • To secure access from your VNet to Azure Files, use an Azure private endpoint, which avoids sending network traffic outside your VNet, through a secure channel.
  • Require the use of transport encryption (the secure transfer, or HTTPS-only, setting) for all Azure Files shares.
  • For sensitive environments, require a minimum TLS version of 1.2 on the storage account hosting your Azure Files shares.
  • Deny default network access to the Azure storage account and only allow access from a predefined set of IP addresses.
  • For data that you need to retain for long periods (due to regulatory requirements), enable the Azure Files soft delete feature to protect the data from accidental deletion.
  • Encrypt data at rest using Azure Key Vault (as explained in Chapter 7, Applying Encryption in Cloud Services).
  • For sensitive environments, encrypt data at rest using customer-managed keys stored inside Azure Key Vault (as explained in Chapter 7, Applying Encryption in Cloud Services).
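Several of the bullets above map directly to storage account properties. The following sketch shows them together as they appear in the ARM representation (the IP range is a placeholder); you would apply them through an ARM template, the Azure CLI, or an SDK:

```python
# Storage account hardening properties: HTTPS-only transfer, a minimum TLS
# version, and a network rule set that denies access by default and
# allow-lists a single IP range.
hardening = {
    "properties": {
        "supportsHttpsTrafficOnly": True,
        "minimumTlsVersion": "TLS1_2",
        "networkAcls": {
            "defaultAction": "Deny",
            "ipRules": [{"value": "203.0.113.0/24", "action": "Allow"}],
        },
    }
}
```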

For more information, please refer to the following resources:

Azure Files networking considerations:

https://docs.microsoft.com/en-us/azure/storage/files/storage-files-networking-overview

Prevent accidental deletion of Azure file shares:

https://docs.microsoft.com/en-us/azure/storage/files/storage-files-prevent-file-share-deletion

Require secure transfer to ensure secure connections:

https://docs.microsoft.com/en-us/azure/storage/common/storage-require-secure-transfer?toc=/azure/storage/files/toc.json

Enforce a minimum required version of Transport Layer Security (TLS) for requests to a storage account:

https://docs.microsoft.com/en-us/azure/storage/common/transport-layer-security-configure-minimum-version?toc=%2Fazure%2Fstorage%2Ffiles%2Ftoc.json&tabs=portal

Azure Storage encryption for data at rest:

https://docs.microsoft.com/en-us/azure/storage/common/storage-service-encryption?toc=/azure/storage/files/toc.json

Customer-managed keys for Azure Storage encryption:

https://docs.microsoft.com/en-us/azure/storage/common/customer-managed-keys-overview?toc=/azure/storage/files/toc.json

Best practices for conducting auditing and monitoring for Azure Files

Auditing is a crucial part of data protection.

Azure allows you to monitor Azure Files using the following services:

  • Azure Monitor: This logs access and audit events from Azure Files.
  • Advanced Threat Protection for Azure Storage: This allows you to detect anomalies or unusual activity in Azure Files and the Azure storage account.

Here is a list of best practices to follow:

  • Enable log alerts using the Azure Monitor service to track access to Azure Files and raise alerts (such as multiple failed access attempts to Azure Files in a short period of time).
  • Enable Azure Defender for Storage to receive security alerts inside the Azure Security Center console.
  • Enable Azure storage logging to audit all authorization events for access to the Azure storage.
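A log alert like the one in the first bullet is typically defined as a Log Analytics (KQL) query. The following sketch assumes Azure Files diagnostic logs are flowing into a Log Analytics workspace (the StorageFileLogs table); the time window and failure threshold are placeholder assumptions:

```python
# A KQL query that surfaces callers with repeated failed requests against
# Azure Files in the last 15 minutes; attach it to an Azure Monitor log alert.
kql = """
StorageFileLogs
| where TimeGenerated > ago(15m)
| where StatusText != "Success"
| summarize Failures = count() by CallerIpAddress, AuthenticationType
| where Failures > 10
"""
```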

For more information, please refer to the following resources:

Requests logged in logging:

https://docs.microsoft.com/en-us/azure/storage/common/storage-analytics-logging?toc=/azure/storage/files/toc.json

Monitoring Azure Files:

https://docs.microsoft.com/en-us/azure/storage/files/storage-files-monitoring?tabs=azure-portal

Summary

In this section, we learned how to secure Azure Files based on the Azure infrastructure. This included authentication and authorization, securing network access, data encryption (both in transit and at rest), and logging and auditing.

Securing Google Filestore

Google Filestore is a GCP file storage service based on the NFS protocol.

Best practices for conducting authentication and authorization for Google Filestore

Google Cloud IAM is the supported service for managing permissions to access Google Filestore.

Here is a list of best practices to follow:

  • Keep your Google Filestore instances private.
  • Create an IAM group, add users to the IAM group, and then grant the required permissions on the target Google Filestore instance to the target IAM group.
  • Use IAM roles to configure minimal permissions to any Google Filestore instance.
  • At the NFS level, use standard POSIX permissions (user and group IDs) to control access to files and directories on Filestore shares.

For more information, please refer to the following resources:

Security for server client libraries:

https://cloud.google.com/firestore/docs/security/iam

Get started with Cloud Firestore Security Rules:

https://firebase.google.com/docs/firestore/security/get-started

Writing conditions for Cloud Firestore Security Rules:

https://firebase.google.com/docs/firestore/security/rules-conditions

Best practices for securing network access to Google Filestore

Because Google Filestore is a managed service, it is located outside the customer's VPC. It is important to protect access to Google Filestore.

Here is a list of best practices to follow:

  • Use IP-based access control to restrict access to Google Filestore.
  • Create a Google Filestore instance on the same VPC as your clients.
  • Use VPC firewall rules to restrict which clients can access the Filestore instance (for example, by allowing only specific source IP ranges to reach the NFS ports).
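IP-based access control from the first bullet is configured through the share's NFS export options when the instance is created. A minimal sketch, with a placeholder client range:

```python
# Filestore nfsExportOptions: only the listed CIDR may mount the share,
# with read/write access and remote root squashed to an anonymous user.
nfs_export_options = [
    {
        "ipRanges": ["10.0.1.0/24"],
        "accessMode": "READ_WRITE",
        "squashMode": "ROOT_SQUASH",
    }
]
```

ROOT_SQUASH is the safer default for shared volumes; reserve NO_ROOT_SQUASH for administrative clients that genuinely need root on the share.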

For more information, please refer to the following resources:

Access Control:

https://cloud.google.com/filestore/docs/access-control

Configuring IP-based access control:

https://cloud.google.com/filestore/docs/creating-instances#configuring_ip-based_access_control

Configuring Firewall Rules:

https://cloud.google.com/filestore/docs/configuring-firewall

Architecture:

https://cloud.google.com/filestore/docs/architecture

Summary

In this section, we learned how to secure the Google Filestore service based on the GCP infrastructure – from authentication and authorization to securing network access.

Securing the CSI

The CSI is a standard for exposing block and file storage to container orchestration systems such as Kubernetes, with drivers available from various cloud providers.

For more information, please refer to the following resource:

Kubernetes Container Storage Interface (CSI) Documentation:

https://kubernetes-csi.github.io/docs/introduction.html

Securing CSI on AWS

Amazon Elastic Kubernetes Service (EKS) has a CSI driver for the following storage types:

  • Block storage: EBS
  • Managed NFS: EFS
  • Parallel filesystem (for HPC workloads): Amazon FSx for Lustre

Here is a list of best practices to follow:

  • When creating an IAM policy to connect to a CSI driver, specify the storage resource name instead of using a wildcard (*).
  • Use IAM roles for service accounts to restrict the permissions available to your pods.
  • Always use the latest CSI driver version for your chosen storage type.
  • When using the CSI driver for EBS volumes and their snapshots, always set (in the YAML configuration file) the value of encrypted to "true" and specify an AWS KMS key ID (kmsKeyId). This allows the CSI driver to encrypt volumes with a key from AWS KMS.
  • When using the CSI driver for EFS, always set (in the YAML configuration file) the value of encryptInTransit to "true".
  • Use AWS Secrets Manager with the Secrets Store CSI driver to store and retrieve secrets (such as tokens, SSH authentication keys, Docker configuration files, and more) to/from your EKS pods.
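The EBS-related bullets above translate into a StorageClass manifest. The following sketch expresses it as a Python dict mirroring the YAML; the KMS key ARN is a placeholder:

```python
# A StorageClass that makes the EBS CSI driver provision encrypted volumes
# using a specific AWS KMS key.
storage_class = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {"name": "ebs-encrypted"},
    "provisioner": "ebs.csi.aws.com",
    "parameters": {
        "encrypted": "true",
        "kmsKeyId": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",
    },
}
```

Any PersistentVolumeClaim that references this class then inherits encryption without the application needing to know about KMS at all.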

For more information, please refer to the following resources:

Amazon Elastic Block Store (EBS) CSI driver:

https://github.com/kubernetes-sigs/aws-ebs-csi-driver

Amazon EFS CSI Driver:

https://github.com/kubernetes-sigs/aws-efs-csi-driver

Amazon FSx for Lustre CSI Driver:

https://github.com/kubernetes-sigs/aws-fsx-csi-driver

How do I use persistent storage in Amazon EKS?:

https://aws.amazon.com/premiumsupport/knowledge-center/eks-persistent-storage/

How to use AWS Secrets & Configuration Provider with your Kubernetes Secrets Store CSI driver:

https://aws.amazon.com/blogs/security/how-to-use-aws-secrets-configuration-provider-with-kubernetes-secrets-store-csi-driver/

Introducing Amazon EFS CSI dynamic provisioning:

https://aws.amazon.com/blogs/containers/introducing-efs-csi-dynamic-provisioning/

Securing CSI on Azure

Azure Kubernetes Service (AKS) has a CSI driver for the following storage types:

  • Block storage: Azure Disk
  • Managed SMB and NFS: Azure Files

Here is a list of the best practices to follow:

  • Always use the latest CSI version for your chosen storage type.
  • Use Azure Key Vault with the Secrets Store CSI driver to store and retrieve secrets (such as tokens, SSH authentication keys, Docker configuration files, and more) to/from your AKS pods.
  • Use a private endpoint to connect your AKS cluster to Azure Files.
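The Key Vault integration above is driven by a SecretProviderClass object. The sketch below mirrors the YAML as a Python dict; the vault name, tenant ID, and secret name are placeholders:

```python
# A SecretProviderClass telling the Secrets Store CSI driver to mount a
# secret from Azure Key Vault into pods that reference this class.
secret_provider_class = {
    "apiVersion": "secrets-store.csi.x-k8s.io/v1",
    "kind": "SecretProviderClass",
    "metadata": {"name": "azure-kv-secrets"},
    "spec": {
        "provider": "azure",
        "parameters": {
            "keyvaultName": "my-key-vault",
            "tenantId": "00000000-0000-0000-0000-000000000000",
            # In the real manifest, objects is itself a YAML string.
            "objects": (
                "array:\n"
                "  - |\n"
                "    objectName: db-password\n"
                "    objectType: secret\n"
            ),
        },
    },
}
```

Pods then mount the secret as a CSI volume referencing this class, so the secret value never has to be baked into the pod spec.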

For more information, please refer to the following resources:

Azure Disk CSI driver for Kubernetes:

https://github.com/kubernetes-sigs/azuredisk-csi-driver

Azure Key Vault Provider for Secrets Store CSI Driver:

https://github.com/Azure/secrets-store-csi-driver-provider-azure

Use the Azure disk Container Storage Interface (CSI) drivers in Azure Kubernetes Service (AKS):

https://docs.microsoft.com/en-us/azure/aks/azure-disk-csi

Use Azure Files Container Storage Interface (CSI) drivers in Azure Kubernetes Service (AKS):

https://docs.microsoft.com/en-us/azure/aks/azure-files-csi

Securing CSI on GCP

Google Kubernetes Engine has a CSI driver for the following storage types:

  • Block storage: Google Compute Engine Persistent Disk
  • Object storage: Google Cloud Storage
  • Managed NFS: Google Cloud Filestore

Here is a list of best practices to follow:

  • Always use the latest CSI version for your chosen storage type.
  • When using the CSI driver for Google Persistent Disk, specify (in the YAML file) the disk-encryption-kms-key key to allow the CSI driver to use a customer-managed encryption key from Google Cloud KMS.
  • Use Cloud IAM roles to restrict access from your GKE cluster to Google Cloud Filestore.
  • Use Google Secret Manager with the Secrets Store CSI driver to store and retrieve secrets (such as tokens, SSH authentication keys, Docker configuration files, and more) to/from your GKE pods.
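The disk-encryption-kms-key bullet can be sketched as a StorageClass, again expressed as a Python dict mirroring the YAML; the project, key ring, and key names are placeholders:

```python
# A StorageClass that makes the Compute Engine persistent disk CSI driver
# encrypt newly provisioned disks with a customer-managed Cloud KMS key.
storage_class = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {"name": "pd-cmek"},
    "provisioner": "pd.csi.storage.gke.io",
    "parameters": {
        "type": "pd-ssd",
        "disk-encryption-kms-key": (
            "projects/my-project/locations/us-central1/"
            "keyRings/my-ring/cryptoKeys/my-key"
        ),
    },
}
```

Note that the CSI driver's service account needs permission to use the KMS key, or disk provisioning will fail.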

For more information, please refer to the following resources:

Google Compute Engine Persistent Disk CSI Driver:

https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver

Google Cloud Filestore CSI driver:

https://github.com/kubernetes-sigs/gcp-filestore-csi-driver

Google Cloud Storage CSI driver:

https://github.com/ofek/csi-gcs

Using the Compute Engine persistent disk CSI Driver:

https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/gce-pd-csi-driver

Create a CMEK protected attached disk:

https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek#create_a_cmek_protected_attached_disk

Google Secret Manager Provider for Secret Store CSI Driver:

https://github.com/GoogleCloudPlatform/secrets-store-csi-driver-provider-gcp

Summary

In this section, we learned how to secure a CSI, based on the AWS, Azure, and GCP infrastructures – from permissions and encryption to secret management.

Summary

In this chapter, we focused on the various storage services in AWS, Azure, and GCP, ranging from object storage to block storage, file storage, and, finally, container storage.

In each section, we learned how to manage identity management (for authentication and authorization), how to control network access (from access controls to network encryption), and how to configure auditing and logging.

In the next chapter, we will review the various network services in the cloud (including virtual networking, security groups and ACLs, DNS, CDN, VPN, DDoS protection, and WAF).
