8. Architecting secure applications on Azure

In the previous chapter, we discussed Azure data services. As we are dealing with sensitive data, security is a big concern. Security is, undoubtedly, the most important non-functional requirement for architects to implement. Enterprises put lots of emphasis on having their security strategy implemented correctly. In fact, security is one of the top concerns for almost every stakeholder in an application's development, deployment, and management. It becomes all the more important when an application is built for deployment to the cloud.

In order for you to understand how you can secure your applications on Azure depending upon the nature of the deployment, the following topics will be covered in this chapter:

  • Understanding security in Azure
  • Security at the infrastructure level
  • Security at the application level
  • Authentication and authorization in Azure applications
  • Working with OAuth, Azure Active Directory, and other authentication methods using federated identity, including third-party identity providers such as Facebook
  • Understanding managed identities and using them to access resources

Security

As mentioned before, security is an important element for any software or service. Adequate security should be implemented so that an application can only be used by people who are allowed to access it, and users should not be able to perform operations that they are not allowed to execute. Similarly, the entire request-response mechanism should be built using methods that ensure that only intended parties can understand messages, and to make sure that it is easy to detect whether messages have been tampered with or not.

Security in the cloud is even more important, for the following reasons. Firstly, organizations deploying their applications are not in full control of the underlying hardware and networks. Secondly, security has to be built into every layer, including hardware, networks, operating systems, platforms, and applications. Any omission or misconfiguration can render an application vulnerable to intruders. For example, you might have heard of the vulnerability that affected Zoom meetings, which let hackers record meetings even when the meeting host had disabled recording for attendees. Sources claim that millions of Zoom accounts have been sold on the dark web. The company has since taken action to address this vulnerability.

Security is a big concern nowadays, especially when hosting applications in the cloud, and can lead to dire consequences if mishandled. Hence, it's necessary to understand the best practices involved in securing your workloads. We are progressing in the area of DevOps, where development and operations teams collaborate effectively with the help of tools and practices, and security has been a big concern there as well.

To make security principles and practices a vital part of DevOps without affecting the overall productivity and efficiency of the process, a new culture known as DevSecOps has been introduced. DevSecOps helps us to identify security issues early in the development stage rather than mitigating them after shipping. When security is a key principle of every stage of the development process, DevSecOps also reduces the cost of hiring security professionals at a later stage to find flaws in the software.

Securing an application means that unknown and unauthorized entities are unable to access it. This also means that communication with the application is secure and not tampered with. This includes the following security measures:

  • Authentication: Authentication checks the identity of a user and ensures that the given identity can access the application or service. Authentication is performed in Azure using OpenID Connect, which is an authentication protocol built on OAuth 2.0.
  • Authorization: Authorization allows and establishes permissions that an identity can perform within the application or service. Authorization is performed in Azure using OAuth.
  • Confidentiality: Confidentiality ensures that communication between the user and the application remains secure. The payload exchange between entities is encrypted so that it will make sense only to the sender and the receiver, but not to others. The confidentiality of messages is ensured using symmetric and asymmetric encryption. Certificates are used to implement cryptography—that is, the encryption and decryption of messages.

    Symmetric encryption uses a single key, which is shared between the sender and the receiver, while asymmetric encryption uses a pair of private and public keys for encryption, which is more secure. SSH key pairs in Linux, which are used for authentication, are a very good example of asymmetric encryption.

  • Integrity: Integrity ensures that the payload and message exchange between the sender and the receiver is not tampered with. The receiver receives the same message that was sent by the sender. Digital signatures and hashes are the implementation mechanisms to check the integrity of incoming messages.
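The integrity check described in the last bullet can be sketched with a keyed hash (HMAC). This is an illustrative Python example of the general technique, not a specific Azure API; the shared key and messages are made up:

```python
import hashlib
import hmac

# Illustrative shared key; in practice this would be exchanged securely.
shared_key = b"sender-and-receiver-shared-secret"

def sign(message: bytes) -> str:
    """Sender computes a keyed hash (HMAC-SHA256) over the payload."""
    return hmac.new(shared_key, message, hashlib.sha256).hexdigest()

def verify(message: bytes, digest: str) -> bool:
    """Receiver recomputes the hash and compares in constant time."""
    return hmac.compare_digest(sign(message), digest)

message = b"transfer 100 units to account 42"
digest = sign(message)

print(verify(message, digest))                              # True: untampered
print(verify(b"transfer 900 units to account 7", digest))   # False: tampered
```

If an attacker alters the payload in transit, the recomputed hash no longer matches the digest sent alongside it, so the receiver can detect the tampering. Digital signatures work similarly but use asymmetric keys, so the receiver does not need the sender's secret.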

Security is a partnership between the service provider and the service consumer. Both parties have different levels of control over deployment stacks, and each should implement security best practices to ensure that all threats are identified and mitigated. We already know from Chapter 1, Getting started with Azure, that the cloud broadly provides three paradigms—IaaS, PaaS, and SaaS—and each of these has different levels of collaborative control over the deployment stack. Each party should implement security practices for the components under its control and within its scope. Failure to implement security at any layer in the stack or by any party would make the entire deployment and application vulnerable to attack. Every organization needs to have a life cycle model for security, just as for any other process. This ensures that security practices are continuously improved to avoid any security flaws. In the next section, we'll be discussing the security life cycle and how it can be used.

Security life cycle

Security is often regarded as a non-functional requirement for a solution. However, with the growing number of cyberattacks, it is now considered a functional requirement of every solution.

Every organization follows some sort of application life cycle management for their applications. When security is treated as a functional requirement, it should follow the same process of application development. Security should not be an afterthought; it should be part of the application from the beginning. Within the overall planning phase for an application, security should also be planned. Depending on the nature of the application, different kinds and categories of threats should be identified, and, based on these identifications, they should be documented in terms of scope and approach to mitigate them. A threat modeling exercise should be undertaken to illustrate the threat each component could be subject to. This will lead to designing security standards and policies for the application. This is typically the security design phase. The next phase is called the threat mitigation or build phase. In this phase, the implementation of security in terms of code and configuration is executed to mitigate security threats and risks.

A system cannot be considered secure until it is tested. Appropriate penetration tests and other security tests should be performed to identify threats whose mitigation has not been implemented or has been overlooked. The bugs found in testing are remediated, and the cycle continues throughout the life of the application. This process of application life cycle management, shown in Figure 8.1, should be followed for security:

A flow diagram showing the security life cycle, which moves through Planning, Threat Identification (Design), Threat Mitigation (Build), Testing, and Remediation.
Figure 8.1: Security life cycle

Planning, threat modeling, identification, mitigation, testing, and remediation are iterative processes that continue even when an application or service is operational. There should be active monitoring of entire environments and applications to proactively identify threats and mitigate them. Monitoring should also enable alerts and audit logs to help in reactive diagnosis, troubleshooting, and the elimination of threats and vulnerabilities.

The security life cycle of any application starts with the planning phase, which eventually leads to the design phase. In the design phase, the application's architecture is decomposed into granular components with discrete communication and hosting boundaries. Threats are identified based on their interaction with other components within and across hosting boundaries. Within the overall architecture, threats are mitigated by implementing appropriate security features, and once the mitigation is in place, further testing is done to verify whether the threat still exists. After the application is deployed to production and becomes operational, it is monitored for any security breaches and vulnerabilities, and either proactive or reactive remediation is conducted.

As mentioned earlier, different organizations have different processes and methods to implement the security life cycle; likewise, Microsoft provides complete guidance and information about the security life cycle, which is available at https://www.microsoft.com/securityengineering/sdl/practices. Using the practices that Microsoft has shared, every organization can focus on building more secure solutions. As we are progressing in the era of cloud computing and migrating our corporate and customer data to the cloud, learning how to secure that data is vital. In the next section, we will explore Azure security and the different levels of security, which will help us to build secure solutions in Azure.

Azure security

Azure provides all its services through datacenters in multiple Azure regions. These datacenters are interconnected within regions, as well as across regions. Azure understands that it hosts mission-critical applications, services, and data for its customers, so it must ensure that security is of the utmost importance for its datacenters and regions.

Customers deploy applications to the cloud based on their belief that Azure will protect their applications and data from vulnerabilities and breaches. Customers will not move to the cloud if this trust is broken, and so Azure implements security at all layers, as seen in Figure 8.2, from the physical perimeter of datacenters to logical software components. Each layer is protected, and even the Azure datacenter team does not have access to them:

A diagram illustrating the security features at different layers in Azure datacenters.
Figure 8.2: Security features at different layers in Azure datacenters

Security is of paramount importance to both Microsoft and Azure. Microsoft ensures that trust is built with its customers, and it does so by ensuring that its customers' deployments, solutions, and data are completely secure, both physically and virtually. People will not use a cloud platform if it is not physically and digitally secure.

To ensure that customers have trust in Azure, each activity in the development of Azure is planned, documented, audited, and monitored from a security perspective. The physical Azure datacenters are protected from intrusion and unauthorized access. In fact, even Microsoft personnel and operations teams do not have access to customer solutions and data. Some of the out-of-the-box security features provided by Azure are listed here:

  • Secure user access: A customer's deployment, solution, and data can only be accessed by the customer. Even Azure datacenter personnel do not have access to customer artifacts. Customers can allow access to other people; however, that is at the discretion of the customer.
  • Encryption at rest: Azure encrypts all of its management data at rest across a variety of enterprise-grade storage solutions that accommodate different needs. Microsoft also provides encryption at rest for managed services such as Azure SQL Database, Azure Cosmos DB, and Azure Data Lake. Since the data is encrypted at rest, it cannot be read by unauthorized parties. Azure also offers this capability to customers, who can encrypt their own data at rest.
  • Encryption in transit: Azure encrypts all data that flows across its network. It also ensures that its network backbone is protected from any unauthorized access.
  • Active monitoring and auditing: Azure monitors all its datacenters actively on an ongoing basis. It actively identifies any breach, threat, or risk, and mitigates them.

Azure meets country-specific, local, international, and industry-specific compliance standards. You can explore the complete list of Microsoft compliance offerings at https://www.microsoft.com/trustcenter/compliance/complianceofferings. Keep this as a reference while deploying compliant solutions in Azure. Now that we know the key security features in Azure, let's go ahead and take a deep dive into IaaS security. In the next section, we will explore how customers can leverage the security features available for IaaS in Azure.

IaaS security

Azure is a mature platform for deploying IaaS solutions. There are lots of users of Azure who want complete control over their deployments, and they typically use IaaS for their solutions. It is important that these deployments and solutions are secure, by default and by design. Azure provides rich security features to secure IaaS solutions. In this section, some of the main features will be covered.

Network security groups

The bare minimum of an IaaS deployment consists of virtual machines and virtual networks. A virtual machine might be exposed to the internet by applying a public IP to its network interface, or it might only be available to internal resources. Some of those internal resources might, in turn, be exposed to the internet. In either case, virtual machines should be secured so that unauthorized requests do not even reach them. Virtual machines should be protected by facilities that can filter requests on the network itself, rather than letting requests reach a virtual machine and relying on it to take action on them.

Ring-fencing is one such mechanism for protecting virtual machines. The fence can allow or deny requests depending on their protocol, origin IP, destination IP, origin port, and destination port. This feature is deployed using the Azure network security group (NSG) resource. NSGs are composed of rules that are evaluated for both incoming and outgoing requests. Depending on the evaluation of these rules, requests are either allowed or denied access.

NSGs are flexible and can be applied to a virtual network subnet or individual network interfaces. When applied to a subnet, the security rules are applied to all virtual machines hosted on the subnet. On the other hand, applying to a network interface affects requests to only a particular virtual machine associated with that network interface. It is also possible to apply NSGs to both network subnets and network interfaces simultaneously. Typically, this design should be used to apply common security rules at the network subnet level, and unique security rules at the network interface level. It helps to design modular security rules.

The flow for evaluating NSGs is shown in Figure 8.3:

A flow diagram illustrating the evaluation of NSGs, which starts when the Azure host receives traffic and finally ends either by dropping the packet or allowing it.
Figure 8.3: A flow diagram representing the evaluation of NSGs

When a request reaches an Azure host, the appropriate rules are loaded, depending on whether it's an inbound or outbound request, and executed against the request/response. If a rule matches, the request/response is either allowed or denied. Rule matching uses important request/response information, such as the source IP address, destination IP address, source port, destination port, and protocol used. Additionally, NSGs support service tags. A service tag denotes a group of IP address prefixes from a given Azure service. Microsoft manages the address prefixes and automatically updates them. This eliminates the hassle of updating the security rules every time there is an address prefix change.
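The rule evaluation just described can be sketched as a small Python model. This is a deliberately simplified illustration of priority-based, first-match evaluation with an implicit default deny; it is not Azure's implementation, and the rule fields, priorities, and addresses are made up:

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

@dataclass
class Rule:
    priority: int    # lower number = evaluated first
    source: str      # source CIDR prefix, e.g. "10.0.0.0/24"
    dest_port: int   # destination port
    protocol: str    # "Tcp", "Udp", or "*" for any
    action: str      # "Allow" or "Deny"

def evaluate(rules, src_ip, port, protocol):
    """Evaluate rules in priority order; the first match wins.
    Fall back to an implicit deny if no rule matches."""
    for rule in sorted(rules, key=lambda r: r.priority):
        if (ip_address(src_ip) in ip_network(rule.source)
                and rule.dest_port == port
                and rule.protocol in (protocol, "*")):
            return rule.action
    return "Deny"  # nothing matched: drop the packet

rules = [
    Rule(100, "10.0.0.0/24", 443, "Tcp", "Allow"),   # internal HTTPS
    Rule(200, "0.0.0.0/0", 3389, "Tcp", "Deny"),     # block RDP from anywhere
]

print(evaluate(rules, "10.0.0.5", 443, "Tcp"))       # Allow
print(evaluate(rules, "203.0.113.9", 3389, "Tcp"))   # Deny
```

Real NSGs also include default rules (for example, allowing intra-virtual-network traffic) and evaluate direction-specific rule sets, but the priority-ordered, first-match principle is the same.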

The set of service tags available for use is available at https://docs.microsoft.com/azure/virtual-network/service-tags-overview#available-service-tags. Service tags can be used with NSGs as well as with Azure Firewall. Now that you have learned about how NSGs work, let's take a look at the NSG design, which will help you determine the primary points you should consider while creating NSG rules to improve security.

NSG design

The first step in designing an NSG is to ascertain the security requirements of the resource. The following should be determined or considered:

  • Is the resource accessible from the internet only?
  • Is the resource accessible from both internal resources and the internet?
  • Is the resource accessible from internal resources only?
  • Based on the architecture of the solution being deployed, determine the dependent resources, load balancers, gateways, and virtual machines used.
  • Configure a virtual network and its subnet.

Using the results of these investigations, an adequate NSG design should be created. Ideally, there should be multiple network subnets for each workload and type of resource. It is not recommended to deploy both load balancers and virtual machines on the same subnet.

Taking project requirements into account, rules should be determined that are common for different virtual machine workloads and subnets. For example, for a SharePoint deployment, the front-end application and SQL servers are deployed on separate subnets, so rules for each subnet should be determined.

After common subnet-level rules are identified, rules for individual resources should be identified and applied at the network interface level. It is important to understand that NSG rules are stateful: if a rule allows an incoming request on a port, the corresponding response traffic on that port is automatically allowed without any additional configuration.

If resources are accessible from the internet, rules should be created with specific IP ranges and ports wherever possible, instead of allowing traffic from all the IP ranges (usually represented as 0.0.0.0/0). Careful functional and security testing should be executed to ensure that adequate and optimal NSG rules are opened and closed.
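The difference between allowing all traffic (0.0.0.0/0) and a specific range can be demonstrated with Python's standard ipaddress module. The addresses below are illustrative documentation ranges, not real corporate networks:

```python
from ipaddress import ip_address, ip_network

any_source = ip_network("0.0.0.0/0")          # matches every IPv4 address
office_only = ip_network("198.51.100.0/24")   # hypothetical corporate range

for src in ("198.51.100.17", "203.0.113.99"):
    addr = ip_address(src)
    print(src, addr in any_source, addr in office_only)
# 0.0.0.0/0 admits both addresses; the /24 admits only the office one.
```

A rule scoped to the /24 range rejects the unknown address outright, which is exactly the narrowing this section recommends.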

Firewalls

NSGs provide an external security perimeter for requests. However, this does not mean that virtual machines should not implement additional security measures. It is always better to implement security both internally and externally. Virtual machines, whether running Linux or Windows, provide a mechanism to filter requests at the operating system level. This is known as a firewall on both Windows and Linux.

It is advisable to implement firewalls for operating systems. They help build a virtual security wall that allows only requests that are considered trusted; any untrusted requests are denied access. There are physical firewall devices as well, but in the cloud, operating system firewalls are used. Figure 8.4 shows the firewall configuration for a Windows operating system:

The ‘Windows Firewall with Advanced Security’ window—showing the firewall configuration for a Windows operating system.
Figure 8.4: Firewall configuration

Firewalls filter network packets and identify incoming ports and IP addresses. Using the information from these packets, the firewall evaluates the rules and decides whether it should allow or deny access.

When it comes to Linux, there are different firewall solutions available. Some of the firewall offerings are specific to the distribution being used; for example, SUSE uses SuSEfirewall2 and Ubuntu uses ufw. The most widely used implementations are firewalld and iptables, which are available on every major distribution.

Firewall design

As a best practice, firewalls should be evaluated for individual operating systems. Each virtual machine has a distinct responsibility within the overall deployment and solution. Rules for these individual responsibilities should be identified and firewalls should be opened and closed accordingly.

While evaluating firewall rules, it is important to take NSG rules at both the subnet and individual network interface level into consideration. If this is not done properly, it is possible that a request is denied at the NSG level but allowed at the firewall level, and vice versa. If a request is allowed at the NSG level and denied at the firewall level, the application will not work as intended, while security risks increase if a request is denied at the NSG level but allowed at the firewall level.

A firewall helps you build multiple networks isolated by its security rules. Careful functional and security testing should be executed to ensure that adequate and optimal firewall rules are opened and closed.

It often makes sense to use Azure Firewall, a cloud-based network security service that complements NSGs. It is very easy to set up, provides central management for administration, and requires zero maintenance. Azure Firewall and NSGs combined can provide security between virtual machines, virtual networks, and even different Azure subscriptions. Having said that, if a solution requires an extra level of security, we can also consider implementing an operating system-level firewall. We'll be discussing Azure Firewall in more depth in one of the upcoming sections, Azure Firewall.

Application security groups

NSGs are applied at the virtual network subnet level or directly to individual network interfaces. While applying NSGs at the subnet level is often sufficient, there are times when it is not. Different types of workloads may exist within a single subnet, and each of them may require a different security group. It is possible to assign security groups to the individual network interface cards (NICs) of the virtual machines, but this can easily become a maintenance nightmare if there is a large number of virtual machines.

Azure has a relatively new feature known as application security groups. We can create application security groups and assign them directly to multiple NICs, even when those NICs belong to virtual machines in different subnets and resource groups. The functionality of application security groups is similar to NSGs, except that they provide an alternate way of assigning groups to network resources, providing additional flexibility in assigning them across resource groups and subnets. Application security groups can simplify NSGs; however, there is one main limitation. We can have one application security group in the source and destination of a security rule, but having multiple application security groups in a source or destination is not supported right now.

One of the best practices for creating rules is to always minimize the number of security rules that you need, to avoid maintenance of explicit rules. In the previous section, we discussed the usage of service tags with NSGs to eliminate the hassle of maintaining the individual IP address prefixes of each service. Likewise, when using application security groups, we can reduce the complexity of explicit IP addresses and multiple rules. This practice is recommended wherever possible. If your solution demands an explicit rule with an individual IP address or range of IP addresses, only then should you opt for it.

Azure Firewall

In an earlier section, we discussed using firewalls within a Windows or Linux operating system to allow or disallow requests and responses through particular ports and services. While operating system firewalls play an important role from a security point of view and must be implemented for any enterprise deployment, Azure provides a security resource known as Azure Firewall that has similar functionality, filtering requests based on rules and determining whether a request should be allowed or rejected.

The advantage of using Azure Firewall is that it evaluates a request before it reaches an operating system. Azure Firewall is a network resource and is a standalone service protecting resources at the virtual network level. Any resources, including virtual machines and load balancers, that are directly associated with a virtual network can be protected using Azure Firewall.

Azure Firewall is a highly available and scalable service that can protect not only HTTP-based requests but any kind of request coming into and going out of a virtual network, including FTP, SSH, and RDP. Azure Firewall can also span multiple Availability Zones during deployment for increased availability.

It is highly recommended that Azure Firewall is deployed for mission-critical workloads on Azure, alongside other security measures. It is also important to note that Azure Firewall should be used even if other services, such as Azure Application Gateway and Azure Front Door, are used, since all these tools have different scopes and features. Additionally, Azure Firewall provides support for service tags and threat intelligence. In the previous section, we discussed the advantages of using service tags. Threat intelligence can be used to generate alerts when traffic comes from or goes to known malicious IP addresses and domains, which are recorded in the Microsoft Threat Intelligence feed.

Reducing the attack surface area

NSGs and firewalls help with managing authorized requests to the environment. However, the environment should not be unnecessarily exposed to attacks. The surface area of the system should be just large enough to achieve its functionality, with everything else disabled, so that attackers cannot find loopholes or access areas that are open without any intended use, or open but not adequately secured. Security should be adequately hardened, making it difficult for any attacker to break into the system.

Some of the configurations that should be done include the following:

  • Remove all unnecessary users and groups from the operating system.
  • Identify group membership for all users.
  • Implement group policies using directory services.
  • Block script execution unless it is signed by trusted authorities.
  • Log and audit all activities.
  • Install anti-malware and antivirus software, schedule scans, and update definitions frequently.
  • Disable or shut down services that are not required.
  • Lock down the filesystem so only authorized access is allowed.
  • Lock down changes to the registry.
  • Configure the firewall according to the requirements.
  • PowerShell script execution should be set to Restricted or RemoteSigned. This can be done using the Set-ExecutionPolicy -ExecutionPolicy Restricted or Set-ExecutionPolicy -ExecutionPolicy RemoteSigned PowerShell commands.
  • Enable enhanced protection through Internet Explorer.
  • Restrict the ability to create new users and groups.
  • Remove internet access and implement jump servers for RDP.
  • Prohibit logging into servers using RDP through the internet. Instead, use site-to-site VPN, point-to-site VPN, or express routes to RDP into remote machines from within the network.
  • Regularly deploy all security updates.
  • Run the Security Compliance Manager tool on the environment and implement all of its recommendations.
  • Actively monitor the environment using Security Center and Operations Management Suite.
  • Deploy virtual network appliances to route traffic to internal proxies and reverse proxies.
  • All sensitive data, such as configuration, connection strings, and credentials, should be encrypted.

The aforementioned are some of the key points that should be considered from a security standpoint. The list will keep on growing, and we need to constantly improve security to prevent any kind of security breach.

Implementing jump servers

It is a good idea to remove internet access from virtual machines. It is also a good practice to limit remote desktop services' accessibility from the internet, but then how do you access the virtual machines at all? One good way is to only allow internal resources to RDP into virtual machines using Azure VPN options. However, there is also another way—using jump servers.

Jump servers are deployed in the demilitarized zone (DMZ), which means they are not on the network hosting the core solutions and applications, but on a separate network or subnet. The primary purpose of a jump server is to accept RDP requests from users and let them log in to it. From the jump server, users can navigate to other virtual machines using RDP. A jump server has access to two or more networks: one that has connectivity to the outside world, and another that is internal to the solution. The jump server implements all the security restrictions and provides a secure client to connect to other servers. Normally, access to email and the internet is disabled on jump servers.

An example of deploying a jump server with virtual machine scale sets (VMSSes), using Azure Resource Manager templates is available at https://azure.microsoft.com/resources/templates/201-vmss-windows-jumpbox.

Azure Bastion

In the previous section, we discussed implementing jump servers. Azure Bastion is a fully managed service that can be provisioned in a virtual network to provide RDP/SSH access to your virtual machines directly from the Azure portal over TLS. The Bastion host acts as a jump server and eliminates the need for public IP addresses on your virtual machines. The concept is the same as implementing a jump server; however, as a managed service, Bastion is operated entirely by Azure.

Since Bastion is a fully managed service from Azure and is hardened internally, we don't need to apply additional NSGs on the Bastion subnet. Also, since we are not attaching any public IPs to our virtual machines, they are protected against port scanning.

Application security

Web applications can be hosted within IaaS-based solutions on top of virtual machines, and they can be hosted within Azure-provided managed services, such as App Service. App Service is part of the PaaS deployment paradigm, and we will look into it in the next section. In this section, we will look at application-level security.

SSL/TLS

Secure Sockets Layer (SSL) is now deprecated and has been replaced by Transport Layer Security (TLS). TLS provides end-to-end security by means of cryptography. It supports two types of cryptography:

  • Symmetric: The same key is available to both the sender of the message and the receiver of the message, and it is used for both the encryption and decryption of the message.
  • Asymmetric: Every stakeholder has two keys—a private key and a public key. The private key remains on the server or with the user and remains a secret, while the public key is distributed freely to everyone. Holders of the public key use it to encrypt the message, which can only be decrypted by the corresponding private key. Since the private key stays with the owner, only they can decrypt the message. Rivest–Shamir–Adleman (RSA) is one of the algorithms used to generate these pairs of public-private keys.
    The keys are also available in certificates, popularly known as X.509 certificates, although certificates contain more details than just the keys and are generally issued by trusted certificate authorities.

TLS should be used by web applications to ensure that message exchange between users and the server is secure and confidential and that identities are being protected. These certificates should be purchased from a trusted certificate authority instead of being self-signed certificates.
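On the client side, modern TLS libraries enforce this guidance by default. As an illustration, Python's standard ssl module creates contexts that require the server's certificate to be presented, validated, and chained to a trusted certificate authority, which is why self-signed certificates are rejected out of the box:

```python
import ssl

# A default client context enforces the practices described above:
# the server must present a certificate that validates against
# the system's trusted certificate authorities, and the hostname
# must match the name in the certificate.
context = ssl.create_default_context()

print(context.verify_mode == ssl.CERT_REQUIRED)  # certificate validation is on
print(context.check_hostname)                    # hostname checking is on
print(context.minimum_version)                   # deprecated SSL versions refused
```

Disabling these checks (for example, to accept a self-signed certificate) removes exactly the confidentiality and identity guarantees this section describes, which is why certificates from a trusted authority should be used instead.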

Managed identities

Before we take a look at managed identities, it is important to know how applications were built without them.

The traditional way of application development is to use secrets, such as usernames, passwords, or SQL connection strings, in configuration files. Keeping these secrets in configuration files makes changing them easy and flexible without modifying code, and it helps us stick to the "open for extension, closed for modification" principle. However, this approach has a downside from a security point of view: the secrets can be viewed by anyone who has access to the configuration files, since they are generally listed there in plain text. There are a few hacks to encrypt them, but they aren't foolproof.
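As a minimal illustration of keeping secrets out of the source tree, configuration can be read from the environment at runtime rather than hardcoded. The variable name and connection string below are hypothetical placeholders; the first line only simulates what the hosting platform or a secrets store would supply:

```python
import os

# Simulate the deployment environment supplying the secret; in production this
# is injected by the hosting platform or a secrets store, never committed.
os.environ["SQL_CONNECTION_STRING"] = "Server=tcp:example.database.windows.net;Database=mydb"

def get_connection_string() -> str:
    """Read the connection string from the environment, failing fast if absent."""
    value = os.environ.get("SQL_CONNECTION_STRING")
    if not value:
        raise RuntimeError("SQL_CONNECTION_STRING is not configured")
    return value

print(get_connection_string().startswith("Server=tcp:"))  # → True
```

This removes the secret from version control, but anyone with access to the runtime environment can still read it, which is the gap that Azure Key Vault and managed identities close in the following paragraphs.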

A better way to use secrets and credentials within an application is to store them in a secrets repository such as Azure Key Vault. Azure Key Vault provides full security using the hardware security module (HSM), and the secrets are stored in an encrypted fashion with on-demand decryption using keys stored in separate hardware. Secrets can be stored in Key Vault, with each secret having a display name and key. The key is in the form of a URI that can be used to refer to the secret from applications, as shown in Figure 8.5:

Navigating to the Secrets blade from the left-hand navigation to view the secrets stored in the key vault.
Figure 8.5: Storing secrets inside a key vault

Within application configuration files, we can refer to the secret using the name or the key. However, there is another challenge now. How does the application connect to and authenticate with the key vault?
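Each secret in a key vault is addressable through a URI of the form https://{vault-name}.vault.azure.net/secrets/{secret-name}/{version}. A small sketch of composing such a reference follows; the vault and secret names are hypothetical:

```python
def secret_uri(vault: str, name: str, version: str = "") -> str:
    """Compose a Key Vault secret identifier URI.

    Omitting the version refers to the latest version of the secret.
    """
    base = f"https://{vault}.vault.azure.net/secrets/{name}"
    return f"{base}/{version}" if version else base

print(secret_uri("keyvaultbook", "sql-connection-string"))
# → https://keyvaultbook.vault.azure.net/secrets/sql-connection-string
```

Application configuration can then carry only this URI, not the secret value itself.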

Key vaults have access policies that define permissions to a user or group with regard to access to secrets and credentials within the key vault. The users, groups, or service applications that can be provided access are provisioned and hosted within Azure Active Directory (Azure AD). Although individual user accounts can be provided access using Key Vault access policies, it is a better practice to use a service principal to access the key vault. A service principal has an identifier, also known as an application ID or client ID, along with a password. The client ID, along with its password, can be used to authenticate with Azure Key Vault. This service principal can be allowed to access the secrets. The access policies for Azure Key Vault are granted in the Access policies pane of your key vault. In Figure 8.6, you can see that the service principal—https://keyvault.book.com—has been given access to the key vault called keyvaultbook:

Navigating to the Access policy blade from the left-hand navigation and checking the access granted for the service principals.
Figure 8.6: Granted access for a service principal to access a key vault

This brings us to another challenge: to access the key vault, we need to use the client ID and secret in our configuration files to connect to the key vault, get hold of the secret, and retrieve its value. This is almost equivalent to using a username, password, and SQL connection string within configuration files.

This is where managed identities can help. Azure launched managed service identities and later renamed them managed identities. Managed identities are identities managed by Azure. In the background, managed identities also create a service principal along with a password. With managed identities, there is no need to put credentials in configuration files.

Managed identities can only be used to authenticate with services that support Azure AD as an identity provider. Managed identities are meant only for authentication. If the target service does not provide role-based access control (RBAC) permission to the identity, the identity might not be able to perform its intended activity on the target service.

Managed identities come in two flavors:

  • System-assigned managed identities
  • User-assigned managed identities

System-assigned identities are generated by the service itself. For example, if an app service wants to connect to Azure SQL Database, it can generate the system-assigned managed identity as part of its configuration options. These managed identities also get deleted when the parent resource or service is deleted. As shown in Figure 8.7, a system-assigned identity can be used by App Service to connect to Azure SQL Database:

Navigating to the ‘Identity’ blade from ‘Settings’ and clicking on the ‘System assigned’ tab and enabling it for the App Service.

Figure 8.7: Enabling a system-assigned managed identity for App Service

User-assigned managed identities are created as standalone separate identities and later assigned to Azure services. They can be applied and reused with multiple Azure services since their life cycles do not depend on the resource they are assigned to.

Once a managed identity is created and RBAC or access permissions are given to it on the target resource, it can be used within applications to access the target resources and services.

Azure provides an SDK as well as a REST API to talk to Azure AD and get an access token for managed identities, and then use the token to access and consume the target resources.

The SDK comes as part of the Microsoft.Azure.Services.AppAuthentication NuGet package for C#. Once the access token is available, it can be used to consume the target resource.

The code needed to get the access token is as follows:

// Requires the Microsoft.Azure.Services.AppAuthentication NuGet package.
var tokenProvider = new AzureServiceTokenProvider();

// Get a token for Azure Key Vault:
string token = await tokenProvider.GetAccessTokenAsync("https://vault.azure.net");

Alternatively, to get a token for Azure SQL Database, use this:

string token = await tokenProvider.GetAccessTokenAsync("https://database.windows.net/");

It should be noted that the application code needs to run in the context of App Service or a function app because the identity is attached to them and is only available in code when it's run from within them.

The preceding code shows two different use cases together: accessing Azure Key Vault and accessing Azure SQL Database.
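Behind the scenes, on a virtual machine the SDK obtains the token from Azure's instance metadata service (IMDS), an endpoint reachable only from inside the Azure resource that owns the identity (App Service exposes an equivalent internal endpoint through environment variables). The request can be sketched as follows; it is constructed but deliberately not sent, since the endpoint only exists inside Azure:

```python
from urllib.parse import urlencode
from urllib.request import Request

# The IMDS endpoint that hands out managed identity tokens; it is only
# reachable from inside the Azure resource itself, never from the internet.
IMDS_TOKEN_ENDPOINT = "http://169.254.169.254/metadata/identity/oauth2/token"

def build_token_request(resource: str) -> Request:
    """Build (but do not send) the IMDS token request for a target resource."""
    query = urlencode({"api-version": "2018-02-01", "resource": resource})
    # The Metadata header is mandatory and blocks simple request forwarding.
    return Request(f"{IMDS_TOKEN_ENDPOINT}?{query}", headers={"Metadata": "true"})

req = build_token_request("https://vault.azure.net")
print(req.full_url)
```

Because the endpoint is local to the resource, no credential ever has to be shipped with the application; possession of the execution environment is the proof of identity.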

It is important to note that applications do not provide any information related to managed identities in code; the identities are managed completely through configuration. Developers, individual application administrators, and operators will not come across any credentials related to managed identities, and there is no mention of them in code either. Credential rotation is completely regulated by the resource provider that hosts the Azure service. The default rotation occurs every 46 days, although it's up to the resource provider to call for new credentials, so the provider could wait for more than 46 days.

In the next section, we will be discussing a cloud-native security information and event management (SIEM) solution: Azure Sentinel.

Azure Sentinel

Azure Sentinel is a SIEM and security orchestration, automation, and response (SOAR) solution provided as a standalone service that can be integrated with any custom deployment on Azure. Figure 8.8 shows some of the key features of Azure Sentinel:

The key features of Azure Sentinel—Collect, Detect, Investigate, and Respond.
Figure 8.8: Key features of Azure Sentinel

Azure Sentinel collects information logs from deployments and resources and performs analytics to find patterns and trends related to various security issues that are pulled from data sources.

There should be active monitoring of the environment, logs should be collected, and information should be culled from these logs as a separate activity from code implementation. This is where the SIEM service comes into the picture. There are numerous connectors that can be used with Azure Sentinel; each of these connectors will be used to add data sources to Azure Sentinel. Azure Sentinel provides connectors for Microsoft services such as Office 365, Azure AD, and Azure Threat Protection. The collected data will be fed to a Log Analytics workspace, and you can write queries to search these logs.

SIEM tools such as Azure Sentinel can be enabled on Azure to get all the logs from Log Analytics and Azure Security Center, which in turn can get them from multiple sources, deployments, and services. SIEM can then run its intelligence on top of this collected data and generate insights. It can generate reports and dashboards based on discovered intelligence for consumption, but it can also investigate suspicious activities and threats, and take action on them.

While Azure Sentinel may sound very similar in functionality to Azure Security Center, Azure Sentinel can do much more than Azure Security Center. Its ability to collect logs from other avenues using connectors makes it different from Azure Security Center.

PaaS security

Azure provides numerous PaaS services, each with its own security features. In general, PaaS services can be accessed using credentials, certificates, and tokens. PaaS services allow the generation of short-lived security access tokens. Client applications can send these security access tokens to represent trusted users. In this section, we will cover some of the most important PaaS services that are used in almost every solution.

Azure Private Link

Azure Private Link provides access to Azure PaaS services as well as Azure-hosted customer-owned/partner-shared services over a private endpoint in your virtual network. While using Azure Private Link, we don't have to expose our services to the public internet, and all traffic between our service and the virtual network goes via Microsoft's backbone network.

Azure Private Endpoint is the network interface that helps to privately and securely connect to a service powered by Azure Private Link. Since the private endpoint is mapped to an instance of the PaaS service rather than the entire service, users can connect only to that resource; connections to any other instance are denied, which protects against data leakage. Private Endpoint also lets you connect securely from on-premises networks via ExpressRoute or VPN tunnels. This eliminates the need to set up public peering or to pass through the public internet to reach the service.

Azure Application Gateway

Azure provides a Layer 7 load balancer known as Azure Application Gateway that can not only load balance but also route requests based on values in the URL. It also has a feature known as Web Application Firewall. Azure Application Gateway supports TLS termination at the gateway, so the back-end servers receive the traffic unencrypted. This has several advantages, such as better performance, better utilization of the back-end servers, and intelligent routing of packets. In the previous section, we discussed Azure Firewall and how it protects resources at the network level. Web Application Firewall, on the other hand, protects the deployment at the application level.

Any deployed application that is exposed to the internet faces numerous security challenges. Some of the important security threats are as follows:

  • Cross-site scripting
  • Remote code execution
  • SQL injection
  • Denial of Service (DoS) attacks
  • Distributed Denial of Service (DDoS) attacks

There are many more, though.

A large number of these attacks can be addressed by developers by writing defensive code and following best practices; however, it is not just the code that should be responsible for identifying these issues on a live site. Web Application Firewall configures rules that can identify such issues, as mentioned before, and deny requests.

It is advised to use Application Gateway Web Application Firewall features to protect applications from live security threats. Web Application Firewall will either allow the request to pass through it or stop it, depending on how it's configured.

Azure Front Door

Azure has launched a relatively new service known as Azure Front Door. The role of Azure Front Door is quite similar to that of Azure Application Gateway; however, there is a difference in scope. While Application Gateway works within a single region, Azure Front Door works at the global level across regions and datacenters. It has a web application firewall as well that can be configured to protect applications deployed in multiple regions from various security threats, such as SQL injection, remote code execution, and cross-site scripting.

Application Gateway can be deployed behind Front Door to address connection draining. Also, deploying Application Gateway behind Front Door will help with the load balancing requirement, as Front Door can only perform path-based load balancing at the global level. The addition of Application Gateway to the architecture will provide further load balancing to the back-end servers in the virtual network.

Azure App Service Environment

Azure App Service is deployed on shared networks behind the scenes. All SKUs of App Service use a virtual network, which can potentially be used by other tenants as well. In order to have more control and a secure App Service deployment on Azure, services can be hosted on dedicated virtual networks. This can be accomplished by using Azure App Service Environment (ASE), which provides complete isolation to run your App Service at a high scale. This also provides additional security by allowing you to deploy Azure Firewall, Application Security Groups, NSGs, Application Gateway, Web Application Firewall, and Azure Front Door. All App Service plans created in App Service Environment will be in an isolated pricing tier, and we cannot choose any other tier.

All the logs from this virtual network and compute can then be collated in Azure Log Analytics and Security Center, and finally with Azure Sentinel.

Azure Sentinel can then provide insights and execute workbooks and runbooks to respond to security threats in an automated way. Security playbooks can be run in Azure Sentinel in response to alerts. Every security playbook comprises measures that need to be taken in the event of an alert. The playbooks are based on Azure Logic Apps, and this will give you the freedom to use and customize the built-in templates available with Logic Apps.

Log Analytics

Log Analytics is a new analytics platform for managing cloud deployments, on-premises datacenters, and hybrid solutions.

It provides multiple modular solutions, each delivering a specific piece of functionality. For example, the security and audit solution helps ascertain a complete view of security for an organization's deployment. Similarly, there are many more solutions, such as automation and change tracking, that should be implemented from a security perspective. The Log Analytics security and audit services provide information in the following five categories:

  • Security domains: These provide the ability to view security records, malware assessments, update assessments, network security, identity and access information, and computers with security events. Access is also provided to the Azure Security Center dashboard.
  • Anti-malware assessment: This helps to identify servers that are not protected against malware and have security issues. It provides information about exposure to potential security problems and assesses the criticality of any risks, so users can take proactive action based on these recommendations. The Azure Security Center sub-categories provide information collected by Azure Security Center.
  • Notable issues: This quickly identifies active issues and grades their severity.
  • Detections: This category is in preview mode. It enables the identification of attack patterns by visualizing security alerts.
  • Threat intelligence: This helps to identify attack patterns by visualizing the total number of servers with outbound malicious IP traffic, the malicious threat type, and a map that shows where these IPs come from.

The preceding details, when viewed from the portal, are shown in Figure 8.9:

The ‘Security And Audit’ pane of Log Analytics, displaying the details about Security Domains, Notable issues, Detections, and Threat Intelligence.
Figure 8.9: Information being displayed in the Security And Audit pane of Log Analytics

Now that you have learned about security for PaaS services, let's explore how to secure data stored in Azure Storage.

Azure Storage

Storage accounts play an important part in the overall solution architecture. They can store important information, such as personally identifiable information (PII), business transactions, and other sensitive and confidential data. It is of the utmost importance that storage accounts are secure and only allow access to authorized users. Stored data should be encrypted and transmitted over secure channels. The storage, as well as the users and client applications consuming the storage account and its data, plays a crucial role in the overall security of that data. Data should be kept encrypted at all times; this also includes the credentials and connection strings used to connect to data stores.

Azure provides RBAC to govern who can manage Azure storage accounts. These RBAC permissions are given to users and groups in Azure AD. However, when an application to be deployed on Azure is created, it will have users and customers that are not available in Azure AD. To allow users to access the storage account, Azure Storage provides storage access keys. There are two types of access keys at the storage account level—primary and secondary. Users possessing these keys can connect to the storage account. These storage access keys are used in the authentication step when accessing the storage account. Applications can access storage accounts using either primary or secondary keys. Two keys are provided so that if the primary key is compromised, applications can be updated to use the secondary key while the primary key is regenerated. This helps minimize application downtime. Moreover, it enhances security by removing the compromised key without impacting applications. The storage key details, as seen on the Azure portal, are shown in Figure 8.10:

The storage key details displayed in the Azure portal.
Figure 8.10: Access keys for a storage account

Azure Storage provides four services—blob, files, queues, and tables—in an account. Each of these services also provides infrastructure for their own security using secure access tokens.

A shared access signature (SAS) is a URI that grants restricted access rights to Azure Storage services: blobs, files, queues, and tables. These SAS tokens can be shared with clients who should not be trusted with the entire storage account key to restrict access to certain storage account resources. By distributing an SAS URI to these clients, access to resources is granted for a specified period.

SAS tokens exist at both the storage account level and the individual blob, file, table, and queue levels. A storage account–level signature is more powerful and can allow and deny permissions at the individual service level. It can also be used in place of signatures at the individual resource service level.

SAS tokens provide granular access to resources, and permissions can be combined as well. These permissions include read, write, delete, list, add, create, update, and process. Moreover, the scope of resources can be determined while generating SAS tokens: it could be blobs, tables, queues, and files individually, or a combination of them. Storage account keys, by contrast, apply to the entire account and cannot be constrained to individual services, nor can they be constrained from the permissions perspective. It is also much easier to create and revoke SAS tokens than storage account access keys, and SAS tokens can be created for use for a certain period of time, after which they automatically become invalid.

It is to be noted that if storage account keys are regenerated, then any SAS tokens based on them will become invalid, and new SAS tokens should be created and shared with clients. In Figure 8.11, you can see the options to select the scope, permissions, start date, end date, allowed IP addresses, allowed protocols, and signing key to create an SAS token:

Selecting the ‘Shared access signature’ option from the left-hand navigation and creating an SAS token.
Figure 8.11: Creating an SAS token

If we are regenerating key1, which we used to sign the SAS token in the earlier example, then we need to create a new SAS token with key2 or the new key1.
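This dependency on the signing key exists because an SAS token is, at its core, a set of query parameters whose signature is an HMAC-SHA256 over a canonical string-to-sign, computed with one of the storage account keys. The sketch below shows only the signing step; the real Azure string-to-sign has a fixed, multi-field format defined by each service, and the account key here is a made-up value:

```python
import base64
import hashlib
import hmac
from urllib.parse import urlencode

def sign_sas(string_to_sign: str, account_key_b64: str) -> str:
    """HMAC-SHA256 the string-to-sign with the base64-decoded account key."""
    key = base64.b64decode(account_key_b64)
    digest = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(digest).decode("utf-8")

# Hypothetical values for illustration only.
account_key = base64.b64encode(b"not-a-real-storage-account-key").decode()
string_to_sign = "\n".join([
    "r",                        # permissions: read only
    "2021-01-01T00:00:00Z",     # start time
    "2021-01-02T00:00:00Z",     # expiry time
    "/blob/myaccount/container" # canonicalized resource
])
signature = sign_sas(string_to_sign, account_key)

token = urlencode({"sp": "r", "sig": signature})
print(token)
```

Regenerating the account key changes the HMAC key, which is exactly why every signature derived from the old key stops validating.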

Cookie stealing, script injection, and DoS attacks are common means used by attackers to disrupt an environment and steal data. Browsers and the HTTP protocol implement built-in mechanisms to prevent these malicious activities. Generally, anything cross-domain is not allowed by either HTTP or browsers: a script running in one domain cannot ask for resources from another domain. However, there are valid use cases where such requests should be allowed. For this, the HTTP protocol implements cross-origin resource sharing (CORS), which makes it possible to access resources across domains. Azure Storage allows CORS rules to be configured for blob, file, queue, and table resources; these rules are evaluated for each authenticated request, and if they are satisfied, the request is allowed to access the resource. In Figure 8.12, you can see how to add CORS rules to each of the storage services:

Navigating to the ‘CORS’ option under Settings and then adding CORS rules to each of the storage services.
Figure 8.12: Creating CORS rules for a storage account
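Conceptually, each CORS rule whitelists origins, methods, and headers, and the service matches the request's Origin header and HTTP method against the configured rules. A simplified sketch of that evaluation follows; the rule values are hypothetical:

```python
def is_origin_allowed(origin: str, method: str, rules: list[dict]) -> bool:
    """Return True if any CORS rule permits this origin and HTTP method."""
    for rule in rules:
        origins_ok = "*" in rule["allowed_origins"] or origin in rule["allowed_origins"]
        methods_ok = method.upper() in rule["allowed_methods"]
        if origins_ok and methods_ok:
            return True
    return False

rules = [{"allowed_origins": ["https://www.contoso.com"],
          "allowed_methods": ["GET", "HEAD"]}]

print(is_origin_allowed("https://www.contoso.com", "get", rules))  # → True
print(is_origin_allowed("https://evil.example", "GET", rules))     # → False
```

The real evaluation also considers allowed and exposed headers and a max-age for preflight caching, but the allow/deny decision follows this same pattern.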

Data must not only be protected in transit; it should also be protected at rest. If data at rest is not encrypted, anybody with access to the physical drives in the datacenter can read it. Although the possibility of such a data breach is very low, customers should still encrypt their data. Storage Service Encryption helps protect data at rest. This service works transparently, without users knowing about it: it encrypts data when the data is saved in a storage account and decrypts it automatically when it is read, with no additional activity on the user's part.

Storage account keys must be rotated periodically. This reduces the chance of an attacker gaining access to storage accounts through a leaked key.

It is also a good idea to regenerate the keys; however, this must be evaluated with regard to its usage in existing applications. If it breaks the existing application, these applications should be prioritized for change management, and changes should be applied gradually.

It is always recommended to have individual service–level SAS tokens with limited timeframes. This token should only be provided to users who should access the resources. Always follow the principle of least privilege and provide only the necessary permissions.

SAS keys and storage account keys should be stored in Azure Key Vault. This provides secure storage and access to them. These keys can be read at runtime by applications from the key vault, instead of storing them in configuration files.

Additionally, you can also use Azure AD to authorize the requests to the blob and queue storage. We'll be using RBAC to give necessary permissions to a service principal, and once we authenticate the service principal using Azure AD, an OAuth 2.0 token is generated. This token can be added to the authorization header of your API calls to authorize a request against blob or queue storage. Microsoft recommends the use of Azure AD authorization while working with blob and queue applications due to the superior security provided by Azure AD and its simplicity compared to SAS tokens.

In the next section, we are going to assess the security options available for Azure SQL Database.

Azure SQL

Azure SQL Database is a managed relational database service that stores relational data on Azure. It is also known as a Database as a Service (DBaaS) offering, providing a highly available, scalable, performance-centric, and secure platform for storing data. It is accessible from anywhere, with any programming language and platform. Clients need a connection string comprising the server, database, and security information to connect to it.

Azure SQL provides firewall settings that prevent access to anyone by default. IP addresses and ranges should be whitelisted to access the SQL server. Architects should only allow IP addresses that they are confident about and that belong to customers or partners. There are deployments in Azure for which there are either a lot of IP addresses or the IP addresses are not known, such as applications deployed in Azure Functions or Logic Apps. For such applications to access Azure SQL, Azure SQL allows access to be granted to all Azure services, across subscriptions.

It is to be noted that firewall configuration is at the server level and not the database level. This means that any changes here affect all databases within a server. In Figure 8.13, you can see how to add clients' IPs to the firewall to grant access to the server:

In the Firewall setting pane, adding clients’ IPs to the firewall to grant access to the server.
Figure 8.13: Configuring firewall rules
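Each firewall rule is simply an inclusive IP range, and a request's source address must fall inside at least one rule to be admitted. The check can be sketched with the standard ipaddress module; the rule values are hypothetical (203.0.113.0/24 is a documentation range):

```python
from ipaddress import ip_address

def is_ip_allowed(client_ip: str, rules: list[tuple[str, str]]) -> bool:
    """Each rule is an inclusive (start_ip, end_ip) range, as in the portal."""
    ip = ip_address(client_ip)
    return any(ip_address(start) <= ip <= ip_address(end) for start, end in rules)

rules = [("203.0.113.0", "203.0.113.255")]
print(is_ip_allowed("203.0.113.42", rules))  # → True
print(is_ip_allowed("198.51.100.7", rules))  # → False
```

A single-address rule is expressed with the same start and end IP, which is what the portal's "Add client IP" shortcut produces.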

Azure SQL also provides enhanced security by encrypting data at rest. This ensures that nobody, including the Azure datacenter administrators, can view the data stored in SQL Server. The technology used by SQL Server for encrypting data at rest is known as Transparent Data Encryption (TDE). There are no changes required at the application level to implement TDE. SQL Server encrypts and decrypts data transparently when the user saves and reads data. This feature is available at the database level. We can also integrate TDE with Azure Key Vault to have Bring Your Own Key (BYOK). Using BYOK, we can enable TDE using a customer-managed key in Azure Key Vault.

SQL Server also provides dynamic data masking (DDM), which is especially useful for masking certain types of data, such as credit card details or user PII. Masking is not the same as encryption: masking does not encrypt the data, but only obscures it, ensuring that it is not in a human-readable format. Users should mask and encrypt sensitive data in Azure SQL Database.
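DDM is configured on the server, but its effect is easy to picture: the built-in credit card mask, for example, exposes only the last four digits. A sketch of that transformation (this mimics the behavior, it is not the server-side implementation):

```python
def mask_credit_card(number: str) -> str:
    """Mimic credit card masking: only the last four digits survive."""
    digits = [c for c in number if c.isdigit()]
    return "xxxx-xxxx-xxxx-" + "".join(digits[-4:])

print(mask_credit_card("4111-1111-1111-1234"))  # → xxxx-xxxx-xxxx-1234
```

The key property is that masking happens at query time for non-privileged users, while the underlying stored value remains unchanged, which is exactly why masking is not a substitute for encryption.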

SQL Server also provides an auditing and threat detection service for all servers. There are advanced data collection and intelligence services running on top of these databases to discover threats and vulnerabilities and alert users to them. Audit logs are maintained by Azure in storage accounts and can be viewed by administrators to be actioned. Threats such as SQL injection and anonymous client logins can generate alerts that administrators can be informed about over email. In Figure 8.14, you can see how to enable Threat Detection:

Selecting the ‘Auditing & Threat Detection’ blade from the left-hand navigation to enable Threat Protection and selecting the types of threats to be detected.
Figure 8.14: Enabling Threat Protection and selecting the types of threat to detect

Data can be masked in Azure SQL. This helps us store data in a format that cannot be read by humans:

Selecting the ‘Add mask’ button at the top and configuring the settings to mask data.
Figure 8.15: Configuring the settings to mask data

Azure SQL also provides TDE to encrypt data at rest, as shown in Figure 8.16:

Moving to the ‘Transparent data encryption’ blade and enabling TDE.
Figure 8.16: Enabling TDE

To conduct a vulnerability assessment on SQL Server, you can leverage SQL Vulnerability Assessment, which is a part of the unified package for advanced SQL security capabilities known as Advanced Data Security. SQL Vulnerability Assessment can be used by customers proactively to improve the security of the database by discovering, tracking, and helping you to remediate potential database vulnerabilities.

We have mentioned Azure Key Vault a few times in the previous sections, when we discussed managed identities, SQL Database, and so on. You know the purpose of Azure Key Vault now, and in the next section, we will be exploring some methods that can help secure the contents of your key vault.

Azure Key Vault

Securing resources using passwords, keys, credentials, certificates, and unique identifiers is an important element of any environment and application from the security perspective. They need to be protected, and ensuring that these resources remain secure and do not get compromised is an important pillar of security architecture. Management and operations that keep the secrets and keys secure, while making them available when needed, are important aspects that cannot be ignored. Typically, these secrets are used all over the place—within the source code, inside configuration files, on pieces of paper, and in other digital formats. To overcome these challenges and store all secrets uniformly in a centralized secure storage, Azure Key Vault should be used.

Azure Key Vault is well integrated with other Azure services. For example, it would be easy to use a certificate stored in Azure Key Vault and deploy it to an Azure virtual machine's certificate store. All kinds of keys, including storage keys, IoT and event keys, and connection strings, can be stored as secrets in Azure Key Vault. They can be retrieved and used transparently without anyone viewing them or storing them temporarily anywhere. Credentials for SQL Server and other services can also be stored in Azure Key Vault.

Azure Key Vault works on a per-region basis. This means that an Azure Key Vault resource should be provisioned in the same region where the application and service are deployed. If a deployment spans more than one region and needs services from Azure Key Vault, multiple Azure Key Vault instances should be provisioned.

An important feature of Azure Key Vault is that secrets, keys, and certificates are not stored in general storage. This sensitive data is backed by an HSM, meaning it is stored in separate hardware on Azure that can only be unlocked by keys owned by users. To provide added security, you can also implement virtual network service endpoints for Azure Key Vault, which restrict access to the key vault to specific virtual networks. You can also restrict access to an IPv4 address range.

In the Azure Storage section, we discussed using Azure AD to authorize requests to blobs and queues. It was mentioned that we use an OAuth token, which is obtained from Azure AD, to authenticate API calls. In the next section, you will learn how authentication and authorization are done using OAuth. Once you have completed the next section, you will be able to relate it to what we discussed in the Azure Storage section.

Authentication and authorization using OAuth

Azure AD is an identity provider that can authenticate users based on the users and service principals available within the tenant. Azure AD implements the OAuth protocol and supports authorization on the internet. It implements an authorization server and services to enable OAuth flows, including the implicit and client credentials flows. These are well-documented OAuth interaction flows between client applications, authorization endpoints, users, and protected resources.
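For example, in the client credentials flow a daemon application exchanges its client ID and secret for an access token at the Azure AD token endpoint. The request body can be sketched as follows; the tenant ID, client ID, secret, and scope are placeholders, and the request is only constructed here, not sent:

```python
from urllib.parse import urlencode

def client_credentials_body(client_id: str, client_secret: str, scope: str) -> str:
    """Form-encoded body for the OAuth 2.0 client credentials grant."""
    return urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": scope,
    })

# The v2.0 token endpoint; {tenant-id} is the Azure AD tenant placeholder.
token_endpoint = "https://login.microsoftonline.com/{tenant-id}/oauth2/v2.0/token"
body = client_credentials_body("00000000-0000-0000-0000-000000000000",
                               "placeholder-secret",
                               "https://graph.microsoft.com/.default")
print("grant_type=client_credentials" in body)  # → True
```

The JSON response to this POST contains the access token, which the client then presents in the Authorization header of calls to the protected resource.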

Azure AD also supports single sign-on (SSO), which adds security and ease when signing in to applications that are registered with Azure AD. You can use OpenID Connect, OAuth, SAML, password-based, or the linked or disabled SSO method when developing new applications. If you are unsure of which to use, refer to the flowchart from Microsoft here: https://docs.microsoft.com/azure/active-directory/manage-apps/what-is-single-sign-on#choosing-a-single-sign-on-method.

Web applications, JavaScript-based applications, and native client applications (such as mobile and desktop applications) can use Azure AD for both authentication and authorization. There are social media platforms, such as Facebook, Twitter, and so on, that support the OAuth protocol for authorization.

One of the easiest ways to enable authentication for web applications using Facebook is shown next. There are other methods that use security binaries, but that is outside the scope of this book.

In this walkthrough, an Azure App Service will be provisioned along with an App Service Plan to host a custom web application. A valid Facebook account will be needed as a prerequisite in order to redirect users to it for authentication and authorization.

A new resource group can be created using the Azure portal, as shown in Figure 8.17:

Creating a new resource group with the Azure portal and filling the details in the Basic tab, such as subscription, resource group name, and region.
Figure 8.17: Creating a new resource group

After the resource group has been created, a new app service can be created using the portal, as shown in Figure 8.18:

Creating a new application by filling in details such as subscription, resource group name, instance details, and App Service plan in the Web App pane.
Figure 8.18: Creating a new application

It is important to note the URL of the web application because it will be needed later when configuring the Facebook application.

Once the web application is provisioned in Azure, the next step is to create a new application in Facebook. This is needed to represent your web application within Facebook and to generate appropriate client credentials for the web application. This is the way Facebook knows about the web application.

Navigate to developers.facebook.com and log in using the appropriate credentials. Create a new application by selecting the Create App option under My Apps in the top-right corner, as shown in Figure 8.19:

Selecting the Create App option from the Facebook Developer portal under My Apps in the top-right corner.
Figure 8.19: Creating a new application from the Facebook developer portal

The web page will prompt you to provide a name for the web application to create a new application within Facebook:

Adding details of the new application in the ‘Create a New App ID’ pane.
Figure 8.20: Adding a new application

Add a new Facebook Login product and click on Set Up to configure login for the custom web application to be hosted on Azure App Service:

Choosing the ‘Facebook Login’ product from the options displayed in the ‘Add a Product’ pane.
Figure 8.21: Adding Facebook login to the application

The Set Up button provides a few options, as shown in Figure 8.22; these options configure the OAuth flow, such as the authorization code flow, implicit flow, or client credential flow. Select the Web option, because it is the web application that needs Facebook authorization:

Choosing the Web option from the four options displayed—iOS, Android, Web, and Other.
Figure 8.22: Selecting the platform

Provide the URL of the web application that we noted earlier after provisioning the web application on Azure:

Entering the site URL for the application in the ‘Tell Us about Your Website’ pane.
Figure 8.23: Providing the site URL to the application

Click on the Settings item in the menu on the left and provide the OAuth redirect URL for the application. Azure already has well-defined callback URLs for each of the popular social media platforms, and the one used for Facebook is domain name/.auth/login/facebook/callback:

Navigating to settings in the ‘Facebook for developers’ window and adding the URI in the textbox under ‘Valid OAuth Redirect URIs’.
Figure 8.24: Adding OAuth redirect URIs
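The callback URL pattern described above is fixed per provider, so it can be assembled mechanically. This small sketch builds it for a given App Service name (the app name `mysecureapp` is a placeholder):

```python
# Sketch: App Service authentication exposes a well-known callback
# path per identity provider; this helper assembles it for an app.
def auth_callback(app_name: str, provider: str = "facebook") -> str:
    """Return the OAuth redirect URI for an App Service app."""
    return f"https://{app_name}.azurewebsites.net/.auth/login/{provider}/callback"

url = auth_callback("mysecureapp")
print(url)
```

The same pattern applies to the other providers, for example `.auth/login/google/callback` or `.auth/login/aad/callback`.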

Go to the Basic settings from the menu on the left and note the values for App ID and App Secret. These are needed to configure the Azure App Services authentication/authorization:

Noting down the App ID and the App Secret displayed at the top.
Figure 8.25: Finding the App ID and App Secret

In the Azure portal, navigate back to the Azure App Service created in the first few steps of this section and go to the Authentication/Authorization blade. Switch on App Service Authentication, select Log in with Facebook as the action to take when a request is not authenticated, and click on the Facebook item in the provider list:

Enabling Facebook authentication in App Service and choosing the ‘Action to take when request is not authenticated’ as the ‘Log in with Facebook’ option from the drop-down list.
Figure 8.26: Enabling Facebook authentication in App Service

On the resultant page, provide the previously noted App ID and App Secret, and also select the scope. The scope determines the information Facebook shares with the web application:

Adding the App ID and App secret in the resultant page from the previous step and then checking the boxes to configure the scope.
Figure 8.27: Selecting the scope

Click OK and click the Save button to save the authentication/authorization settings.
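For reference, the portal steps above correspond to the App Service authentication settings resource. The following dict sketches the relevant properties; the property names follow the `Microsoft.Web/sites/config/authsettings` ARM schema as I understand it, and the ID, secret, and scope values are placeholders for the values noted from Facebook:

```python
# Sketch: the settings configured through the portal, expressed as the
# App Service "authsettings" properties. Values are placeholders.
auth_settings = {
    "enabled": True,
    "defaultProvider": "Facebook",
    # Redirect unauthenticated requests to the provider's login page.
    "unauthenticatedClientAction": "RedirectToLoginPage",
    "facebookAppId": "<app-id>",
    "facebookAppSecret": "<app-secret>",
    # The scope controls what Facebook shares with the application.
    "facebookOAuthScopes": ["public_profile", "email"],
}
```

Capturing the configuration this way is useful if you later want to automate the deployment with ARM templates or scripts instead of clicking through the portal.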

Now, if a new incognito browser session is initiated and you go to the custom web application, the request should get redirected to Facebook. As you might have seen on other websites, when you use Log in with Facebook, you will be asked to give your credentials:

The browser displaying the Facebook prompt requesting the credentials to log in.
Figure 8.28: Logging in to the website using Facebook

Once you have entered your credentials, a user consent dialog box will ask for permission for data from Facebook to be shared with the web application:

A user consent dialog box appears on the screen, asking for permission for data from Facebook to be shared with the web application.
Figure 8.29: User consent to share your information with the application

If consent is provided, the web page from the web application should appear:

The web page from the web application appears on the screen and it shows that the app service is up and running.
Figure 8.30: Accessing the landing page

A similar approach can be used to protect your web application using Azure AD, Twitter, Microsoft, and Google. You can also integrate your own identity provider if required.

The approach shown here illustrates just one way to protect a website using credentials stored elsewhere and to authorize external applications to access protected resources. Azure also provides JavaScript libraries and .NET assemblies for consuming the OAuth endpoints offered by Azure AD and other social media platforms imperatively in code. This approach is recommended where you need greater control and flexibility over authentication and authorization within your applications.

So far, we have discussed security features and how they can be implemented. It is also relevant to have monitoring and auditing in place. Implementing an auditing solution will help your security team to audit the logs and take precautionary measures. In the next section, we will be discussing the security monitoring and auditing solutions in Azure.

Security monitoring and auditing

Every activity in your environment, from sending an email to changing a firewall rule, can be categorized as a security event. From a security standpoint, it is necessary to have a central logging system to monitor and track the changes made. If you find suspicious activity during an audit, you can discover the flaw in the architecture and how it can be remediated. Similarly, in the event of a data breach, the logs help security professionals understand the pattern of an attack and how it was executed, so that the necessary preventive measures can be taken to avoid similar incidents in the future. Azure provides the following two important security resources to manage all security aspects of the Azure subscription, resource groups, and resources:

  • Azure Monitor
  • Azure Security Center

Of these two security resources, we will first explore Azure Monitor.

Azure Monitor

Azure Monitor is a one-stop shop for monitoring Azure resources. It provides information about Azure resources and their state, and offers a rich query interface that lets this information be sliced and diced at the subscription, resource group, individual resource, and resource type levels. Azure Monitor collects data from numerous data sources, including metrics and logs from Azure, customer applications, and the agents running in virtual machines. Other services, such as Azure Security Center and Network Watcher, also ingest data into the Log Analytics workspace, which can then be analyzed from Azure Monitor. You can also use REST APIs to send custom data to Azure Monitor.
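As an illustration of sending custom data, requests to the Azure Monitor HTTP Data Collector API are authenticated with an HMAC-SHA256 signature built from the workspace's shared key. The sketch below builds the `Authorization` header; the workspace ID and shared key are placeholders, and in practice both come from the Log Analytics workspace settings:

```python
# Sketch: signing a request for the Azure Monitor HTTP Data Collector
# API, which ingests custom log data. All credentials are placeholders.
import base64
import hashlib
import hmac

def build_signature(workspace_id: str, shared_key: str,
                    date: str, content_length: int) -> str:
    """Return the SharedKey Authorization header value for a POST."""
    # The string to sign is defined by the Data Collector API contract.
    string_to_sign = (f"POST\n{content_length}\napplication/json\n"
                      f"x-ms-date:{date}\n/api/logs")
    decoded_key = base64.b64decode(shared_key)  # shared key is base64
    digest = hmac.new(decoded_key, string_to_sign.encode("utf-8"),
                      hashlib.sha256).digest()
    return f"SharedKey {workspace_id}:{base64.b64encode(digest).decode()}"

header = build_signature("<workspace-id>",
                         base64.b64encode(b"dummy-key").decode(),
                         "Mon, 04 May 2020 10:00:00 GMT", 128)
```

The resulting header accompanies a POST of the JSON payload to the workspace's `/api/logs` endpoint, after which the custom records become queryable alongside the built-in logs.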

Azure Monitor can be used through the Azure portal, PowerShell, the CLI, and REST APIs:

The dashboard in the Azure portal displaying the Activity log with details such as Operation name, Status, Time, Time stamp, and Subscription.
Figure 8.31: Exploring activity logs

Azure Monitor provides the following logs:

  • Activity log: This shows all management-level operations performed on resources. It provides details about the creation time, the creator, the resource type, and the status of resources.
  • Operation log (classic): This provides details of all operations performed on resources within a resource group and subscription.
  • Metrics: This gets performance information for individual resources and sets alerts on them.
  • Diagnostic settings: This helps us to configure diagnostic logs by setting up Azure Storage for storing logs, streaming logs in real time to Azure Event Hubs, and sending them to Log Analytics.
  • Log search: This helps integrate Log Analytics with Azure Monitor.
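When auditing the activity log, management-plane writes are usually the events of most interest, since they change resource state. The following sketch filters exported activity log records for such operations; the records and field names here are illustrative, modeled on the activity log event schema:

```python
# Sketch: sifting exported activity log records (e.g., JSON downloaded
# from the portal) for management-plane write operations. Sample data
# is illustrative only.
events = [
    {"operationName": "Microsoft.Network/networkSecurityGroups/write",
     "status": "Succeeded", "caller": "admin@contoso.com"},
    {"operationName": "Microsoft.Compute/virtualMachines/read",
     "status": "Succeeded", "caller": "reader@contoso.com"},
]

# Writes change resource state, so they matter most in a security audit.
writes = [e for e in events if e["operationName"].endswith("/write")]
for e in writes:
    print(e["caller"], e["operationName"])
```

In a real audit, the same filtering would typically be done with a query in the Log Analytics workspace rather than in client-side code.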

Azure Monitor can identify security-related incidents and take appropriate action. Only authorized individuals should be allowed to access Azure Monitor, since it might contain sensitive information.

Azure Security Center

Azure Security Center, as the name suggests, is a one-stop shop for all security needs. There are generally two activities related to security—implementing security and monitoring for any threats and breaches. Security Center has been built primarily to help with both these activities. Azure Security Center enables users to define their security policies and get them implemented on Azure resources. Based on the current state of Azure resources, Azure Security Center provides security recommendations to harden the solution and individual Azure resources. The recommendations include almost all Azure security best practices, including the encryption of data and disks, network protection, endpoint protection, access control lists, the whitelisting of incoming requests, and the blocking of unauthorized requests. The resources range from infrastructure components, such as load balancers, network security groups, and virtual networks, to PaaS resources, such as Azure SQL and Storage. Here is an excerpt from the Overview pane of Azure Security Center, which shows the overall secure score of the subscription, resource security hygiene, and more:

The Overview pane of Security Center displaying information about Policy and Compliance, and Resource security hygiene.
Figure 8.32: Azure Security Center overview

Azure Security Center is a rich platform that provides recommendations for multiple services, as shown in Figure 8.33. Also, these recommendations can be exported to CSV files for reference:

The ‘Recommendations’ pane of Security Center, displaying the security recommendations for identity and access.
Figure 8.33: Azure Security Center recommendations

As was mentioned at the beginning of this section, monitoring and auditing are crucial in an enterprise environment. Azure Monitor can have multiple data sources and can be used to audit logs from these sources. Azure Security Center gives continuous assessments and prioritized security recommendations along with the overall secure score.

Summary

Security is always an important aspect of any deployment or solution. It has become much more important and relevant because of deployment to the cloud. Moreover, there is an increasing threat of cyberattacks. In these circumstances, security has become a focal point for organizations. No matter the type of deployment or solution, whether it's IaaS, PaaS, or SaaS, security is needed across all of them. Azure datacenters are completely secure, and they have a dozen international security certifications. They are secure by default. They provide IaaS security resources, such as NSGs, network address translation, secure endpoints, certificates, key vaults, storage, virtual machine encryption, and PaaS security features for individual PaaS resources. Security has a complete life cycle of its own and it should be properly planned, designed, implemented, and tested, just like any other application functionality.

We discussed operating system firewalls and Azure Firewall and how they can be leveraged to increase the overall security landscape of your solution. We also explored new Azure services, such as Azure Bastion, Azure Front Door, and Azure Private Link.

Application security was another key area, and we discussed performing authentication and authorization using OAuth. We did a quick demo of how to create an app service and integrate Facebook login. Facebook was just an example; you could use Google, Twitter, Microsoft, Azure AD, or any custom identity provider.

We also explored the security options offered by Azure SQL, which is a managed database service provided by Azure. We discussed the implementation of security features, and in the final section, we concluded with monitoring and auditing with Azure Monitor and Azure Security Center. Security plays a vital role in your environment. An architect should always design and architect solutions with security as one of the main pillars of the architecture; Azure provides many options to achieve this.

Now that you know how to secure your data in Azure, in the next chapter, we will focus on big data solutions from Hadoop, followed by Data Lake Storage, Data Lake Analytics, and Data Factory.
