This chapter focuses on one of the most common concerns when working with Azure: securing storage on the Azure platform. The focus here will be on implementing and managing storage from a security point of view, such as generating Shared Access Signature (SAS) tokens, managing access keys, configuring Azure Active Directory (AD) integration, and configuring access to Azure Files. We will also explore the storage replication options available to us in Azure and understand the management of a blob's life cycle.
In this chapter, we are going to cover the following main topics:
To follow along with the hands-on material, you will need the following:
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
Also run the following:
Install-Module -Name Az -Scope CurrentUser -Repository PSGallery -Force
You can restrict a storage account to a specific set of supported networks by configuring network rules, so that only applications that request data over those networks can access the storage account. Even when network rules are in effect, the application still needs to use proper authorization on the request. This authorization can be provided by Azure AD credentials (for blobs and queues), a SAS token, or a valid account access key.
By default, storage accounts are provisioned with a public endpoint, and thanks to the enhanced control Azure offers, network traffic can be limited to the trusted IP addresses and networks to which you have granted access. As good security practice, public access to the storage account's public endpoint should be denied by default. The network rules defined for the storage account apply across all protocols, including SMB and REST; therefore, an explicit rule needs to be defined to allow access. Additional exceptions can be configured that allow Azure services on the trusted services list to access the storage account, as well as allowing logging and metric access from any network (such as for Log Analytics).
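These restrictions can also be scripted. The following is a sketch using the Az.Storage module; the resource group, account name, IP address, and VNet name are placeholders for illustration:

```powershell
# Assumes an authenticated session (Connect-AzAccount); names below are placeholders.
# Set the default action to Deny, keeping the trusted Azure services exception
Update-AzStorageAccountNetworkRuleSet -ResourceGroupName "AZ104-Chapter7" `
    -Name "az104storageaccount" -DefaultAction Deny -Bypass AzureServices

# Allow your public IP address
Add-AzStorageAccountNetworkRule -ResourceGroupName "AZ104-Chapter7" `
    -Name "az104storageaccount" -IPAddressOrRange "203.0.113.10"

# Allow a subnet (it must have the Microsoft.Storage service endpoint enabled)
$subnet = (Get-AzVirtualNetwork -ResourceGroupName "AZ104-Chapter7" `
    -Name "StorageVNET").Subnets[0]
Add-AzStorageAccountNetworkRule -ResourceGroupName "AZ104-Chapter7" `
    -Name "az104storageaccount" -VirtualNetworkResourceId $subnet.Id
```

Once the default action is Deny, requests from networks not matching any rule are refused, while trusted Azure services remain able to connect.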
Top Tip
When integrating a resource with an Azure VNet, your VNet needs to exist within the same region as your resource.
In the following demonstration, we are going to configure network access to the storage account that we created in the previous chapter to restrict network access to a specific network in Azure, as well as allowing our public IP to communicate:
You have now completed this section on network restrictions on public endpoints. Should you wish to test connectivity with this, you can deploy a VM in the same VNet as the storage account and connect to the storage account from inside the VM. In the next section, we will discuss private endpoints.
Private endpoints provide a private interface for an Azure Storage account and can be used to eliminate public access. They offer enhanced security over a public endpoint because the storage account is no longer exposed publicly, preventing unauthorized access. When implementing a private endpoint, a Network Interface Card (NIC) is associated with the storage account and placed in a VNet; traffic for the storage account then traverses that VNet. Private endpoints are provided through a service called Private Link.
Top Tip
For scenarios requiring advanced security, you should disable all public access to the storage account and enable a private endpoint. All traffic should be directed through a firewall for integration and a Network Security Group (NSG) should be implemented on the subnet layer to restrict unauthorized access further.
In the following demonstration, we will attach a private endpoint to a storage account:
You have now successfully deployed a private endpoint. That brings us to the end of this section. We encourage you to play with this more in the next chapter, where you can follow along with a lab deployment. We will now discuss network routing on a storage account.
Top Tip
Take note that a private endpoint can also be provisioned on the creation of a storage account.
The default network routing preference option chosen for storage accounts and most Azure services will be for the Microsoft network. This is a high-performance, low-latency global connection to all services within Azure and serves as the fastest delivery service to any consuming service or user. This is due to Microsoft configuring several points of presence within their global network. The closest endpoint to a client is always chosen. This option costs slightly more than traversing the internet. If you select Internet routing, then traffic will be routed in and out of the storage account outside the Microsoft network.
The following screenshot shows the setting under the Firewall and virtual networks tab on the Networking blade for your storage account:
You will note there is also an option to publish route-specific endpoints for the storage account. This can be used in scenarios where you might want the default network routing option to be configured for the Microsoft network, while providing internet endpoints or vice versa. These endpoints can be found in the Endpoints section of your storage account, as shown in the following screenshot:
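For reference, the routing preference and the publication of route-specific endpoints can also be set with PowerShell. A sketch, assuming a recent Az.Storage version and placeholder names:

```powershell
# Route traffic over the Microsoft global network (the default) and
# additionally publish internet routing endpoints for this account
Set-AzStorageAccount -ResourceGroupName "AZ104-Chapter7" -Name "az104storageaccount" `
    -RoutingChoice MicrosoftRouting -PublishInternetEndpoint $true
```

Swapping `-RoutingChoice` to `InternetRouting` (and publishing the Microsoft endpoints instead) covers the reverse scenario described above.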
From this list, you may copy the endpoints that are required. Now that we have briefly observed the configuration options available for network routing on storage accounts, in the next section, we will explore a PowerShell script for configuring a private endpoint on a storage account.
The following script creates a new private endpoint that is associated with an existing storage account. It is linked to the defined VNet and links to the first subnet within that VNet:
$storageAccount = Get-AzStorageAccount -ResourceGroupName "AZ104-Chapter7" -Name "az104xxxxxxxx"
$privateEndpointConnection = New-AzPrivateLinkServiceConnection -Name 'myConnection' -PrivateLinkServiceId ($storageAccount.Id) -GroupId 'file'
$vnet = Get-AzVirtualNetwork -ResourceGroupName "AZ104-Chapter7" -Name "StorageVNET"
## Disable private endpoint network policy ##
$vnet.Subnets[0].PrivateEndpointNetworkPolicies = "Disabled"
$vnet | Set-AzVirtualNetwork
## Create private endpoint ##
New-AzPrivateEndpoint -ResourceGroupName "AZ104-Chapter7" -Name "myPrivateEndpoint" -Location "westeurope" -Subnet ($vnet.Subnets[0]) -PrivateLinkServiceConnection $privateEndpointConnection
Once this code has been run, you will have successfully created a private endpoint for your storage account. It will be linked to the VNet and subnet you defined. You can navigate to the private endpoint to discover its private IP address, which will be used for internal communication to the service going forward.
That brings an end to this section. We have learned about VNet integration for the storage accounts and the different options available. In the next section, we will explore managing access keys.
We encourage you to read up on this topic further by using the following links:
Storage access keys are like passwords for your storage account, and Azure generates two of these when you provision your account: a primary key and a secondary key. Just like passwords, they need to be changed from time to time to reduce the risk of compromise. This practice is referred to as key rotation. In the following section, we will run through an example of how to access your keys and how to renew them.
In this demonstration, we will explore how to view access keys as well as how to renew them:
Now that you know how to access the storage access keys, we will look at how to rotate keys in the following exercise:
You have now completed a key rotation for a storage account. This helps prevent unauthorized access using leaked storage keys, and it is best practice to rotate these keys every 90 days. As a recommendation, rotate key2 first and update any dependent applications and services, then follow with key1. This staged process ensures that the primary key (key1) is not regenerated while business-critical services still depend on it, avoiding unnecessary downtime. The rotation process should still be properly planned and maintained through an appropriate change control process within your organization.
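The rotation procedure above can also be performed with the Az.Storage module; a sketch with placeholder names:

```powershell
# View the current access keys for the account
Get-AzStorageAccountKey -ResourceGroupName "AZ104-Chapter7" -Name "az104storageaccount"

# Regenerate the secondary key first, repoint dependent applications, then the primary
New-AzStorageAccountKey -ResourceGroupName "AZ104-Chapter7" -Name "az104storageaccount" -KeyName key2
New-AzStorageAccountKey -ResourceGroupName "AZ104-Chapter7" -Name "az104storageaccount" -KeyName key1
```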
Top Tip
As a best practice, keys should be rotated every 90 days to prevent unauthorized exposure to the account. This will also limit the potential attack window for compromised SAS tokens.
In the next section, we will explore SAS tokens.
SAS tokens are secure access tokens that provide delegated access to resources on your storage account. The storage service confirms the SAS token is valid in order to grant access. The construct of a SAS token includes the permissions granted on the token, the validity period, and a signature derived from the storage account's signing key. When creating a SAS token, several items need to be considered that govern the granular level of access granted, which are as follows:
There are three types of SAS supported by Azure Storage:
SAS tokens can take two forms, as detailed here:
Top Tip
Microsoft advises a best security practice is to use Azure AD credentials whenever possible.
Now that you have an understanding of the core components of a SAS, we will explore some exercises for creating and managing these.
In this demonstration, you will learn how to create a SAS token for sharing access to a storage account:
You now know how to generate a SAS token and connect to a storage account using the token. In the next section, we will explore storage access policies and how these enhance the concept of SAS tokens.
Top Tip
Allowed protocols should be limited to HTTPS on the SAS creation for enhanced security. The SAS start and end time should be limited as far as possible to the necessary time required for access.
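A SAS token can also be generated with PowerShell. The following sketch applies the tip above by limiting the protocol to HTTPS and keeping the validity window short; the account name, key, and permissions are illustrative:

```powershell
# Build a context from the account key, then issue a short-lived, HTTPS-only SAS
$ctx = New-AzStorageContext -StorageAccountName "az104storageaccount" `
    -StorageAccountKey "<storage-account-key>"
New-AzStorageAccountSASToken -Service Blob -ResourceType Service,Container,Object `
    -Permission "rl" -Protocol HttpsOnly `
    -StartTime (Get-Date) -ExpiryTime (Get-Date).AddHours(4) -Context $ctx
```

The returned token string can then be appended to the storage endpoint URL to authorize requests.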
A storage access policy provides an additional layer of control over SAS by introducing policies for managing the SAS token. SAS tokens can now be configured for a start and expiry time with the ability to revoke access after they have been issued. The following steps demonstrate the process for creating a storage access policy on a container:
You have now learned how to create a storage access policy. You will learn how to edit an existing policy in place through the following steps:
You have now learned how to modify an existing policy. Let's follow the given steps to remove an existing access policy:
You have just learned how to delete an access policy. That concludes this section, where we have learned what SAS tokens are and how they work. We have also explored storage access policies as well as how these enhance the management of SAS tokens. In the next section, we have provided additional reading material for you to learn more if desired.
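The create, modify, and remove tasks above can equally be scripted with the Az.Storage module; a sketch with placeholder container and policy names:

```powershell
$ctx = New-AzStorageContext -StorageAccountName "az104storageaccount" `
    -StorageAccountKey "<storage-account-key>"

# Create a stored access policy on a container, then issue a SAS that references it
New-AzStorageContainerStoredAccessPolicy -Container "mycontainer" -Policy "readpolicy" `
    -Permission rl -ExpiryTime (Get-Date).AddDays(7) -Context $ctx
New-AzStorageContainerSASToken -Name "mycontainer" -Policy "readpolicy" -Context $ctx

# Modify the policy later, or remove it; removal invalidates every SAS issued against it
Set-AzStorageContainerStoredAccessPolicy -Container "mycontainer" -Policy "readpolicy" `
    -Permission r -Context $ctx
Remove-AzStorageContainerStoredAccessPolicy -Container "mycontainer" -Policy "readpolicy" -Context $ctx
```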
We encourage you to read up on the topic further by using the following links:
Storage accounts can provide identity-based authentication through either Active Directory (on-premises) or Azure Active Directory Domain Services (AADDS). Both offer the ability to utilize the Kerberos authentication provided by Active Directory. The domain join is limited to a single forest; connecting multiple forests requires the configuration of domain trusts.
For the file share to provide authentication capabilities, it will join the respective directory service as a computer account object. There are three primary permissions (authorization) on the SMB share that you should be cognizant of:
In the following sections, we will investigate the steps involved in configuring Active Directory domain-joined Azure file shares and the allocation of permissions to these shares.
To authenticate through either directory service, several requirements are needed. The following diagram illustrates the requirements for an Active Directory integration:
We will now follow the process for configuring AD authentication on an Azure file share. In the section that follows this, we will explore configuring access to the file share and then mounting the file share. Finally, we will explore how to configure permissions on the share:
Import-Module -Name AzFilesHybrid
Join-AzStorageAccountForAuth -ResourceGroupName "AZ104-Chapter7" -StorageAccountName "storagename01" -Domain "domainname.com" -OrganizationalUnitName "OU=AzureShares,OU=Az104_Resources,DC=domainname,DC=com"
Your Azure file share should now be joined to your on-premises AD domain.
Top Tip
Should you receive an error for updating any module, such as the PowerShellGet module, you can run the following command to force an update. The module name can be changed accordingly:
Get-Module | Where-Object { $_.Name -like "*PowerShellGet*" } | Update-Module
In the next section, we will explore assigning share-level and file-level permissions, as well as mounting an SMB share on a Windows machine.
In the following section, we will explore assigning share and file permissions on the AD-joined storage from the previous exercise, as well as mounting the share and exploring how to validate the security.
In this section, we will look at the steps involved to assign share-level permissions:
You have just added contributor permissions for a user to your SMB share on Azure. This same process can be applied to the other SMB roles if desired. We will look at assigning file-level permission in the next section.
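The same role assignment can also be granted with PowerShell. A sketch; the user, share, and account names are placeholders, and the share-level scope format follows the Azure Files documentation:

```powershell
$account = Get-AzStorageAccount -ResourceGroupName "AZ104-Chapter7" -Name "storagename01"
# Scope the assignment to a single file share rather than the whole account
$scope = "$($account.Id)/fileServices/default/fileshares/myshare"
New-AzRoleAssignment -SignInName "user@domainname.com" `
    -RoleDefinitionName "Storage File Data SMB Share Contributor" -Scope $scope
```

Substituting the Reader or Elevated Contributor role names applies the other SMB share roles in the same way.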
In this section, we will look at the steps involved to mount an Azure file share on the test VM with AD credentials. It should be noted that port 445 will need to be open on the Windows server and SMB 3.x enabled (these should be open by default):
You have now successfully mounted the SMB share for your Azure files storage and also seen the effect placed on the share using permissions. In the next section, we will explore the effects of file-level permissions.
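For reference, the mount itself can be scripted on the VM. A sketch, assuming the storage account from the earlier exercise and a placeholder share name:

```powershell
# Confirm outbound SMB connectivity first (port 445 must be reachable)
Test-NetConnection -ComputerName "storagename01.file.core.windows.net" -Port 445

# Map the share as Z: using the logged-on AD identity (Kerberos)
New-PSDrive -Name Z -PSProvider FileSystem `
    -Root "\\storagename01.file.core.windows.net\myshare" -Persist
```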
In this section, we will look at the steps involved to assign share-level permissions:
You have now learned how to configure file-level ACLs for Azure Storage shares. This concludes the section for Azure AD authentication and integration for access to Azure file shares. In the next section, we provide additional reading material should you wish to learn more.
We encourage you to read up on this topic further by using the following links:
AzCopy is a utility that can be used for copying files to and from Azure Storage accounts. Authentication can be conducted using either an Active Directory account or a SAS token from storage. AzCopy provides many different functions, but the primary function is for file copying and is structured as azcopy copy [source] [destination] [flags].
You can download AzCopy from here: https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-v10.
In this exercise, you will copy data to your Azure blob using a SAS token:
You now have a copy of AzCopy on your machine ready to work with.
In this demonstration, we will copy data using the AzCopy utility and SAS tokens. This exercise can also be conducted using Azure AD credentials. Follow these steps to complete the exercise:
# Change all Variables Below
$SourceFilePath = "C:\AzCopy\file1.txt"
$StorageAccountName = "az104chap7acc06082021"
$ContainerName = "azcopydestination"
$SASToken = "?sv=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx%3D"
# Run AzCopy Command (note that $SASToken already includes the leading "?")
./azcopy.exe copy "$SourceFilePath" "https://$StorageAccountName.blob.core.windows.net/$($ContainerName)$($SASToken)"
Now that you have seen AzCopy in action, you will complete the same task copying files from a source container on a storage account to a destination container on the same storage account.
We will now demonstrate a similar copy task to the previous section except this time, you will be copying data from a source container on a storage account to a destination container on the same storage account. Note that this technique can also be used across storage accounts as the principle is the same. Follow these steps:
# Change all Variables Below
$StorageAccountName = "az104chap7acc06082021"
$SrcContainerName = "azcopysource"
$DestContainerName = "azcopydestination"
$SourceSASToken = "sp=rxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx%3D"
$DestSASToken = "sp=rxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx%3D"
# Run AzCopy Command
./azcopy.exe copy "https://$StorageAccountName.blob.core.windows.net/$($SrcContainerName)?$SourceSASToken" "https://$StorageAccountName.blob.core.windows.net/$($DestContainerName)?$DestSASToken" --overwrite=ifsourcenewer --recursive
You have just learned how to copy data between containers using AzCopy. That brings us to the end of this section, where we have learned what AzCopy is, how to download it, how it works, and how to copy data between containers. In the next section, we have provided additional reading material for you to learn more if desired; after that, we will discuss storage replication and life cycle management.
We encourage you to read up on the topic further by using the following links:
In the following section, we will explore the various storage replication and life cycle management features available to us in Azure. First, we will describe some key services and configurations you should be aware of.
The following section will explore the various storage replication services available for Azure Storage.
Azure File Sync is a service that can synchronize the data from on-premises file shares with Azure Files. This way, you can keep the flexibility, compatibility, and performance of an on-premises file server, but also store a copy of all the data on the file share in Azure. You can use any protocol that's available on Windows Server to access your data locally, including Server Message Block (SMB), Network File System (NFS), and File Transfer Protocol over TLS (FTPS).
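For orientation, the core File Sync resources can also be created with the Az.StorageSync module. The following is a rough sketch only; cmdlet parameters vary between module versions, and all names are placeholders:

```powershell
# Create the Storage Sync Service and a sync group within it
New-AzStorageSyncService -ResourceGroupName "AZ104-Chapter7" `
    -Name "myStorageSyncService" -Location "westeurope"
New-AzStorageSyncGroup -ResourceGroupName "AZ104-Chapter7" `
    -StorageSyncServiceName "myStorageSyncService" -Name "mySyncGroup"

# Run on the Windows Server after installing the Azure File Sync agent
Register-AzStorageSyncServer -ResourceGroupName "AZ104-Chapter7" `
    -StorageSyncServiceName "myStorageSyncService"
```

After registration, cloud and server endpoints are added to the sync group to begin synchronization, which we will walk through in the portal later in this chapter.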
Blob object replication provides the capability within Azure to replicate blob objects based on replication rules. The copy will run asynchronously between source and destination containers across two different storage accounts. Several rules can be configured for your desired outcome. Note that for replication to be enabled, blob versioning needs to be enabled.
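Object replication can also be configured through PowerShell. A sketch, assuming blob versioning is enabled on both accounts and using placeholder account and container names:

```powershell
# Versioning must be enabled on both the source and destination accounts
Update-AzStorageBlobServiceProperty -ResourceGroupName "AZ104-Chapter7" `
    -StorageAccountName "destaccount" -IsVersioningEnabled $true

# Define a replication rule and apply the policy on the destination account
$rule = New-AzStorageObjectReplicationPolicyRule -SourceContainer "srccontainer" `
    -DestinationContainer "destcontainer"
Set-AzStorageObjectReplicationPolicy -ResourceGroupName "AZ104-Chapter7" `
    -StorageAccountName "destaccount" -PolicyId "default" `
    -SourceAccount "sourceaccount" -Rule $rule
```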
This is a capability available for GPv2 storage accounts, blob storage accounts, and Azure Data Lake Storage. It allows the management of the blob life cycle through rule-based policies. Using this functionality, data can be automatically transitioned between tiers, as well as expired. The following actions can be applied to blobs based on your requirements: automated data tiering, blob snapshot management, blob version management, and blob deletion. Multiple rules can be created and can also be applied to a subset of blobs and containers through filters such as name prefixes. Pricing for blob life cycle management is based upon the tier operational charges discussed in the previous chapter; the service itself is free, and delete operations are also free. It should be noted that this is a great feature to assist in the optimization of your overall storage account costs by automatically transitioning data between tiers.
Blob data protection is a mechanism that assists with the recovery of data in the event of data deletion or being overwritten. The implementation of data protection is a proactive stance to securing data before an incident occurs. Azure Storage provides the capability of protecting data from being deleted or modified, as well as the restoration of data that has been deleted or modified. Soft delete for containers or blobs enables the preceding capability to restore data based on the period chosen to retain deleted data, where the default configuration is 7 days. When you restore a container, the blobs, as well as the versions and snapshots, are restored.
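Soft delete can be switched on with single cmdlets; a sketch with a placeholder account name and the default 7-day retention:

```powershell
# Enable soft delete for blobs and for containers, retaining deleted data for 7 days
Enable-AzStorageBlobDeleteRetentionPolicy -ResourceGroupName "AZ104-Chapter7" `
    -StorageAccountName "az104storageaccount" -RetentionDays 7
Enable-AzStorageContainerDeleteRetentionPolicy -ResourceGroupName "AZ104-Chapter7" `
    -StorageAccountName "az104storageaccount" -RetentionDays 7
```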
Blob versioning enables a blob to maintain several versions of the object, which can be used for restoring blob data as the version captures the current state of the blob upon being created or modified. This operation is run automatically when blob versioning is enabled.
Immutable storage, often referred to as Write Once, Read Many (WORM), can be configured on blob storage. This is often used to protect data from accidental deletion or overwrites. Many times, there are legal requirements to manage data in this manner. It is always advised to understand your organization's governance requirements regarding data to ensure you comply with the governance standards required and in place.
Immutable storage can be configured with two types of policies:
Top Tip
Container soft delete can only restore the entire container with all the contents, not individual blobs. To achieve blob-level recovery capability, soft delete for blobs should be enabled.
There are circumstances where you may delete a storage account and later identify that you need to recover the data. In some instances, the storage account can be recovered, provided it was deleted within the last 14 days. The following requirements would also need to be adhered to:
You can read more about this here: https://docs.microsoft.com/en-us/azure/storage/common/storage-account-recover.
Next, we will look at the creation and configuration of the Azure File Sync service.
In the next demonstration, we are going to configure Azure File Sync. You will need the following to be in place to follow this demonstration:
Top Tip
Opening the RDP port for a VM during creation is covered later in the book. This can be configured on VM creation in Azure.
Once the preceding resources are created, we can start with the creation of the Azure File Sync service in Azure and the installation of Azure File Sync on the Windows Server.
First, we will create the Azure File Sync service in Azure. Therefore, take the following steps:
This concludes the section about Azure file storage and the Azure File Sync service. In the next section, we are going to look at Azure Storage replication.
In the previous chapter, we uncovered the different replication options available to us in Azure, including Locally-redundant storage (LRS), Zone-redundant storage (ZRS), Geo-redundant storage (GRS), and Geo-zone-redundant storage (GZRS). In this section, we will explore changing the replication chosen for a deployed storage account. Follow the given steps to implement Azure storage replication:
You have now completed the configuration of the replication type for a storage account.
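The same change can be made in one line of PowerShell; a sketch with placeholder names. Note that some transitions (for example, into or out of ZRS) may require a conversion or migration rather than a simple SKU change:

```powershell
# Switch the account's redundancy, for example from LRS to geo-redundant storage
Set-AzStorageAccount -ResourceGroupName "AZ104-Chapter7" `
    -Name "az104storageaccount" -SkuName "Standard_GRS"
```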
Top Tip
For enhanced security, it is advised that the Secure transfer required option in the Configuration blade for a storage account is set to Enabled and that the Allow Blob public access option is set to Disabled.
In the following demonstration, you will learn how to configure blob object replication. To follow along, you will require two storage accounts:
You have now completed the configuration of blob object replication and have seen it in action. In the next section, we will explore blob life cycle management.
Top Tip
While it may be tempting to see object replication as a backup mechanism, it should not be relied upon in the same way as a backup service: the SLAs differ, and errors in the source will be replicated too. Also remember that data is copied asynchronously, meaning there is a delay before the destination copy is up to date.
The following exercise will demonstrate the configuration of blob life cycle management:
You have now created your first life cycle management rule. Next, we will explore how to implement a life cycle management policy using JSON code.
At times, it may be desired to implement your policy as code, especially where the reuse of policies is applicable. This approach drives better consistency and reduces the likelihood of errors:
{
  "rules": [
    {
      "enabled": true,
      "name": "move-to-cool",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": {
            "tierToCool": {
              "daysAfterModificationGreaterThan": 1
            }
          }
        },
        "filters": {
          "blobTypes": [
            "blockBlob"
          ],
          "prefixMatch": [
            "files/log"
          ]
        }
      }
    }
  ]
}
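The policy above can also be built and applied with the Az.Storage management policy cmdlets; a sketch mirroring the JSON rule, with a placeholder account name:

```powershell
# Tier block blobs under files/log to cool one day after modification
$action = Add-AzStorageAccountManagementPolicyAction -BaseBlobAction TierToCool `
    -DaysAfterModificationGreaterThan 1
$filter = New-AzStorageAccountManagementPolicyFilter -PrefixMatch "files/log" `
    -BlobType blockBlob
$rule = New-AzStorageAccountManagementPolicyRule -Name "move-to-cool" `
    -Action $action -Filter $filter
Set-AzStorageAccountManagementPolicy -ResourceGroupName "AZ104-Chapter7" `
    -StorageAccountName "az104storageaccount" -Rule $rule
```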
You know how to configure the life cycle management policy using JSON code. In the next section, we will explore the ability to disable and delete the rule.
You may want to delete a life cycle management rule. The following steps will guide you through the process of doing so:
That brings us to the end of the blob life cycle management section. In the next section, we will explore blob data protection.
Top Tip
Automated data tiering moves blobs to cooler tiers or deletes them. Associated actions within a single rule must follow a transitive implementation from hotter tiers to cooler tiers.
In the following exercise, you will explore configuring soft delete options as part of the data protection options available to you:
You now know how to configure blob data protection settings on your storage accounts. That brings an end to this section, in which we have learned about storage replication and life cycle management. In the next section, we have provided additional reading material for the configuration of storage replication and life cycle management.
We encourage you to read up on the topic further by using the following links:
In this chapter, we covered how to manage the security of storage within Azure by integrating storage with VNets, using private endpoints, working with SAS tokens, and configuring access and authentication. You also learned how to configure storage replication and blob life cycle management. You now have the skills to secure and manage Azure Storage.
In the next chapter, we will work through some labs to enhance your new skills for storage management and work through practical applications of storage.