This appendix does not focus on the individual Azure chapter objectives. It does, however, serve as an overview that outlines all of the most important tools in Azure that you need to know for the exam. The exam focuses heavily on how to use these tools to accomplish different security objectives, so it's critical that you understand what these tools are, how they differ, and what services they provide from a security perspective.
Let's begin our overview with Chapter 2.
Chapter 2 focuses on identity and access management on the Azure platform. The tools discussed in that chapter are used to create and manage the identities that control access to Azure resources. By the end of this chapter overview, you should understand how these tools contribute to access control within the Azure environment.
Azure AD is a cloud-based identity and access management service. While it's similar to the traditional Windows AD, Azure AD is not simply a cloud version of that service; it is an independent service. Azure AD allows employees (or anyone on the on-premises network) to access external resources, including Microsoft 365, the Azure portal, and software-as-a-service (SaaS) applications. It also helps users in Azure's cloud environment access resources on your corporate network and intranet. In addition, you can integrate your on-premises Windows AD server with Azure AD, extending your on-premises directories to Azure. Doing so allows users to use the same login credentials to access both local and cloud-based resources.
According to Microsoft, there are three main groups that Azure AD is intended for:
The following are the most important features of Azure AD:
Windows Hello is set up on a user's device. During that setup, Windows asks the user to set a gesture, which is typically a biometric such as a fingerprint or facial scan, or alternatively a PIN. The user then provides that gesture to verify their identity, and Windows uses Windows Hello to authenticate the user.
The biggest reason that Windows Hello is such a reliable form of authentication is that it allows fully integrated biometric authentication based on facial recognition or fingerprint matching. Windows Hello uses a combination of infrared cameras and software, which results in highly accurate biometric authentication while guarding against spoofing. Most major hardware vendors build devices that have Windows Hello–compatible cameras, so compatibility is rarely an issue. Many devices already include fingerprint reader hardware, and it can be added fairly easily to those that don't.
Windows Hello and Windows Hello for Business differ as follows:
This application helps you sign in to your accounts when you're using two-factor verification, which helps you use your accounts more securely, since passwords can be forgotten, stolen, and so on. Two-factor verification that employs verification via your phone makes it much harder for your account to be compromised.
Authenticator is simple to use. In the standard verification method, one factor is a password: after you sign in with your username and password, you either approve a notification or enter the provided verification code, usually via your smartphone.
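The verification codes that authenticator apps display are typically time-based one-time passwords (TOTP, RFC 6238). As a rough sketch of the underlying idea (not Microsoft's actual implementation), a code is derived from a shared secret and the current time window:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password (RFC 6238) from a shared secret."""
    counter = timestamp // step                       # 30-second time window
    msg = struct.pack(">Q", counter)                  # counter as 8-byte big-endian
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890", T = 59 seconds
print(totp(b"12345678901234567890", 59, digits=8))  # → 94287082
```

Because both the phone and the server can compute the same code from the shared secret and clock, the code acts as the second factor without ever being transmitted in advance.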
Azure API Management is a management platform for all APIs across all your Azure environments. APIs are important for simplifying application integrations and making data and services reusable and universally accessible to users. Azure API Management is designed to make API usage easy for applications on the Azure platform.
Azure API Management consists of three elements: an API gateway, a management plane, and a developer portal. All these components are hosted in Azure and are fully managed by default.
Whenever a client application makes a request, it first reaches the API gateway, which then forwards the request to the proper backend services. The gateway acts as a proxy for the backend services and provides consistent configuration for routing, security, throttling, caching, and observability.
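One of the gateway responsibilities mentioned above, throttling, is commonly implemented with a rate-limiting scheme such as a token bucket. The sketch below is purely illustrative (the class name and limits are invented, not the API Management implementation):

```python
class TokenBucket:
    """Token-bucket rate limiter: allow bursts up to `capacity` requests,
    refilled at `rate` tokens per second."""

    def __init__(self, capacity: float, rate: float):
        self.capacity = capacity
        self.rate = rate
        self.tokens = capacity
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # Refill tokens for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=3, rate=1.0)   # 3-request burst, 1 request/sec
results = [bucket.allow(now=0.0) for _ in range(5)]
print(results)  # → [True, True, True, False, False]
```

A gateway applying a policy like this can reject excess requests before they ever reach the backend services, which is what protects those services from being overwhelmed.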
Using a self-hosted gateway, an Azure customer can deploy the API gateway to the same environments that host their APIs. Doing so allows them to optimize API traffic and ensures compliance with the local regulations and guidelines. The API gateway can perform the following actions:
The management plane is how you interact with the service; it provides you with full access to the API Management service. You can interact with it via the Azure portal, Azure PowerShell, the Azure command-line interface (CLI), a Visual Studio Code extension, or client software development kits (SDKs) in most programming languages. You can use the management plane to perform the following actions:
The developer portal is an automatically generated and fully customizable website that holds the documentation for your APIs. An API provider can customize the look and feel of their developer portal. Some common examples include adding custom content to the site, changing its styles, or adding your branding.
The developer portal allows developers to discover APIs, onboard them for use, and learn how to consume them in their applications. Here are some actions you can perform in the developer portal:
Chapter 3 focuses on how to implement platform protection in Azure. In this overview, we will review the security tools that you can use to secure your environment from outside attacks and ensure proper network segmentation.
Azure Firewall is a cloud-native, intelligent network firewall service designed to protect your cloud workloads against threats. It's a fully stateful firewall as a service, with built-in high availability and unrestricted cloud scalability. It is available in Standard and Premium editions.
The Standard edition of Azure Firewall provides filtering and threat intelligence feeds directly from Microsoft's cybersecurity team. The firewall's threat intelligence-based filtering alerts on and denies traffic to and from malicious Internet Protocol (IP) addresses and domains. The firewall's database of malicious IP addresses and domains is updated continuously in real time to protect against new threats.
The Premium edition of this firewall offers quite a few improvements over Standard. First, it adds a signature-based intrusion detection and prevention system (IDPS), which allows for the rapid detection of attacks by looking for specific patterns in network traffic, such as byte sequences or malicious instruction sequences used by known malware. The Premium edition has more than 58,000 signatures in over 50 categories that are constantly updated in real time to protect against new and emerging exploits.
Azure Firewall Manager is a security management service that allows you to create central security policies and enforce route management for cloud-based security. You can use it to provide security management for the following two types of network architectures:
Now that you understand what Azure Firewall Manager is, let's look at some of its key features:
TABLE A.1 Rule collection groups
Rule collection group name | Priority |
---|---|
Default DNAT (Destination Network Address Translation) rule collection group | 100 |
Default Network rule collection group | 200 |
Default Application rule collection group | 300 |
By default, you can't delete the default groups or change their priority values, but you can add new groups and assign them the priority values of your choice.
Each rule collection type must match its parent rule collection group category. For example, a DNAT rule collection must be part of the DNAT rule collection group.
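The priority values in Table A.1 determine evaluation order: lower numbers are processed first. A toy sketch of that ordering (the default group names mirror the table; the custom group is an invented example):

```python
# Rule collection groups as in Table A.1: lower priority value = evaluated first.
groups = [
    {"name": "Default Application rule collection group", "priority": 300},
    {"name": "Default DNAT rule collection group", "priority": 100},
    {"name": "Default Network rule collection group", "priority": 200},
    {"name": "Custom allow-list group", "priority": 150},  # user-defined group
]

# Sort by priority to obtain the order the firewall evaluates the groups in.
evaluation_order = [g["name"] for g in sorted(groups, key=lambda g: g["priority"])]
print(evaluation_order)
# DNAT (100) first, then the custom group (150), Network (200), Application (300)
```

Because a user-defined group can be given any free priority value, you can slot custom rules before or after the defaults without modifying them.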
Integration is done through automated route management, which doesn't require you to set up or manage user-defined routes (UDRs). You can deploy secure hubs configured with the security partner of your choice in multiple Azure regions to get connectivity and security for your users. See Figure A.2.
Azure Application Gateway is Azure's web-traffic load balancer; it enables you to manage the amount of traffic going to your application, preventing it from becoming overloaded. Azure Application Gateway is more advanced than a traditional load balancer. Traditional load balancers operate at the Transport layer of the OSI model (Layer 4, TCP and UDP) and can route traffic based only on a source IP address and port to a destination IP address and port. Azure Application Gateway, however, operates at Layer 7 of the OSI model and can make routing decisions based on additional attributes of an HTTP request, such as the URL (URL-based routing). From a security viewpoint, this is very important for protecting against DDoS attacks and ensuring high uptime for all of your network resources. In addition to load balancing, this tool comes with other useful features for security and scalability. Here are some of the most important features to remember:
The WAF also offers monitoring to detect any potentially malicious activity. It uses real-time WAF logs, which are integrated with Azure Monitor to track WAF alerts and monitor trends. It also integrates with Defender for Cloud, which gives you a central view of the security state of all your Azure, hybrid, and multicloud resources. See Figure A.3.
Azure Front Door is great for building, operating, and scaling out your web applications. It is a global, scalable entry point used to create fast, secure, and widely scalable web applications using Microsoft's global network. It's not limited to just new applications; you can use Azure Front Door with your existing enterprise applications to make them widely available on the web. Azure Front Door provides a lot of options for traffic routing, and it comes with backend health monitoring so that you can identify any backend instances that may not be working correctly. Here are some of the key features that come with Azure Front Door:
A web application firewall (WAF) is a specific type of application firewall that monitors, filters, and, if necessary, blocks HTTP traffic to and from a web application. Azure's WAF uses Open Web Application Security Project (OWASP) rules to protect applications against common web-based attacks, such as SQL injection, cross-site scripting, and hijacking attacks. In Azure, the WAF is part of your Application Gateway, which was discussed in the preceding section. All of a WAF's customizations are contained in a WAF policy that must be associated with your Application Gateway. There are three types of WAF policies in Azure: global, per-site, and per-URI. A global WAF policy applies to every site behind your Application Gateway with the same managed rules, custom rules, exclusions, and any other settings you define. A per-site policy allows you to protect multiple sites that have different security needs. Lastly, a path-based (per-URI) policy lets you set rules for specific website pages. It's also important to know that a more specific policy always overrides a more general one in Azure. That means you can have a global policy that applies to all sites and per-site policies for specific instances, and the per-site policy will override the global policy.
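That override behavior can be modeled as "the most specific policy wins." A hedged sketch of the resolution logic (the function and policy names are invented for illustration, not an Azure API):

```python
def effective_policy(global_policy, site_policies, uri_policies, site, uri):
    """Return the most specific WAF policy for a request:
    per-URI overrides per-site, which overrides global."""
    if (site, uri) in uri_policies:
        return uri_policies[(site, uri)]
    if site in site_policies:
        return site_policies[site]
    return global_policy

site_policies = {"shop.contoso.com": "site-policy-strict"}
uri_policies = {("shop.contoso.com", "/checkout"): "uri-policy-pci"}

print(effective_policy("global-default", site_policies, uri_policies,
                       "shop.contoso.com", "/checkout"))   # → uri-policy-pci
print(effective_policy("global-default", site_policies, uri_policies,
                       "blog.contoso.com", "/index"))      # → global-default
```

The checkout page gets its own hardened policy, the rest of the shop falls back to the per-site policy, and any site without a specific policy inherits the global one.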
VNet service endpoints provide secure and direct connectivity to Azure services over an optimized route via the Azure backbone network. This network connects hundreds of datacenters in 38 regions around the world, designed for near-perfect availability, high capacity, and the flexibility to respond to unpredictable demand spikes, giving you a more secure and efficient route for sending and receiving traffic. A service endpoint allows private IP addresses on a VNet to reach the endpoint of an Azure service without needing a public IP address on the VNet.
Azure Private Link allows you to access Azure PaaS services and Azure-hosted customer-owned/partner services over a private endpoint. A private endpoint is simply a network interface that uses a private IP address from your virtual network. Private Link simplifies network architecture and secures the connection between endpoints by keeping the traffic on the Microsoft global network, eliminating the potential for data exposure over the Internet. Here are the key benefits of using Azure Private Link:
Azure DDoS Protection is Azure's default DDoS protection service. Unlike the other tools in this list, it doesn't need to be configured by an administrator; every endpoint in Azure is protected by the Basic version of Azure DDoS Protection free of cost. The important thing to understand is the difference in features between the Basic version and the paid version of DDoS Protection. The Basic version comes with two main features:
Now, let's look at the features of the standard Azure DDoS protection tool, which is a paid-for resource:
Azure's endpoint protection feature has been integrated into a tool called Microsoft Defender for Cloud, which provides antimalware protection to Azure VMs in three primary ways:
Defender for Cloud also generates a secure score for all of your subscriptions, based on its assessment of your connected resources compared to the Azure security benchmark. This secure score helps you understand your security posture at a quick glance, and it provides a compliance dashboard that allows you to review your compliance using the built-in benchmark. Using the enhanced features, you can customize the standards used to assess compliance and add other regulations that your organization is subject to, such as NIST, the Center for Internet Security (CIS) benchmarks, or other organization-specific security requirements.
Defender for Cloud also gives you hardening recommendations based on the security misconfigurations and weaknesses that it has found. You can use these recommendations to improve your organization's overall security.
A container is a form of operating system virtualization. Think of a container as a package of software components: it houses all of the executables, code, libraries, and configuration files necessary to run an application. However, a container doesn't include an operating system image, which makes it more lightweight and portable, with less overhead. To support larger application deployments, multiple containers are combined and deployed as one or more container clusters.
The Azure Container Registry is a service for building, storing, and managing container images and their related artifacts, allowing for the quick and easy creation of containers. When it comes to security, the Azure Container Registry supports a set of built-in Azure roles. These roles enable you to assign various permission levels to an Azure Container Registry, which allows you to use RBAC to assign specific permissions to users, service principals, or other identities that need to interact with a container in that particular registry or with the service itself. You can also create custom roles with a unique set of permissions. Here are some specific features of Azure Container Registry:
For example, you can authenticate to a registry from the Azure CLI with the az acr login command. To ensure good security, Azure Container Registry transfers container images over HTTPS and supports TLS to provide secure client connections.
When it comes to access control for container registries, you can use an Azure identity, an Azure Active Directory–backed service principal, or a provided admin account. Use Azure role-based access control (RBAC) to assign users or systems fine-grained permissions to a given registry.
Azure App Service lets you quickly and easily create enterprise-grade web and mobile applications for any platform or device and deploy them on a reliable cloud infrastructure. Azure App Service Environment (ASE) allows you to have an isolated and dedicated hosting environment to run your functions and web applications. There are two ways to deploy an ASE: you can use an external IP address (External ASE), or you can use an internal IP address (ILB ASE). This allows you to host both public and private applications in the cloud. It's important that you understand the following security features that Azure App Service offers for securing your cloud-hosted applications:
This section focuses on the tools that automate and manage security operations in Azure. These tools help you to monitor and enforce your security standards in your organization.
Azure Policy is a tool that helps enforce your organization's standards and ensure the compliance of your Azure resources. Azure Policy gives you the ability to define a set of properties that your cloud resources should have; it then compares that defined list of properties to your resources' actual properties to identify those that are noncompliant. You describe these rules, known as policy definitions, using the JavaScript Object Notation (JSON) format. You can assign policy definitions to any set of resources that Azure supports. These rules may use functions, parameters, logical operators, conditions, and property aliases to match the exact standards you want for your organization.
You can also control the response to a noncompliant evaluation of a resource with those policy definitions. For example, if a user wants to make a change to a resource that will result in it being noncompliant, you have multiple options: you can deny the requested change, you can log the attempted/successful change to that resource, or you can alter the resource before/after the change occurs, among other options. All of these options are made possible by adding what's called an effect to the policies you create.
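As a concrete illustration, here is a minimal policy rule of the kind described above, written in the JSON format the text mentions; the `if` block defines the properties to match and the `then` block names the effect. The region list is just an example:

```json
{
  "if": {
    "field": "location",
    "notIn": ["eastus", "westus2"]
  },
  "then": {
    "effect": "deny"
  }
}
```

With this rule assigned, any attempt to create a resource outside the approved regions is evaluated as noncompliant and the deny effect blocks the request.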
You can create policies from scratch, or you can use some of Azure's prebuilt policies that are available by default:
The easiest way to create individual policies is through the Azure Policy service, which allows you to create, assign, and manage the policies that control or audit your cloud resources. You can use Azure Policy to create individual policies, or you can create initiatives, which are combinations of individual policies. There are three steps to implementing a policy in Azure Policy:
Every policy definition that you create in Azure Policy has an evaluation called an effect, which determines what will happen when a policy rule is evaluated for matching. The effect can be applied whether it's the creation of a new resource, an updated resource, or an existing resource. Here are the various effect types that you can create in Azure:
For example, you can use an effect to add a tag such as test to all VMs that are created for testing purposes so that they don't get confused with production VMs.
If you have multiple effects attached to a policy definition, there's a certain order in which the effects are evaluated. The order of evaluation is as follows:
Threat modeling is the process of identifying risks and threats that are likely to affect an organization. Microsoft has created its own threat modeling tool to allow for the easy creation of threat-modeling diagrams. This tool helps you plan your countermeasures and security controls to mitigate threats. When you are threat modeling, you need to consider multiple elements in order to obtain a good overview of your company's complete threat landscape, which consists of all threats pertaining to your organization.
There are three primary elements of threat modeling: threat actors, threat vectors, and the countermeasures you plan to use.
The first element you must identify as part of your threat modeling process is the threat actors who will be targeting your organization. A threat actor is a state, group, or individual who has malicious intent. In the cybersecurity field, malicious intent usually means a threat actor is seeking to target private corporations or governments with cyberattacks for financial, military, or political gain. Threat actors are most commonly categorized by their motivations, and to some extent, their level of sophistication. Here are some of the most common types of threat actors:
A threat vector is the path or means by which a threat actor gains access to a computer by exploiting a certain vulnerability. The total number of attack vectors that an attacker can use to compromise a network or computer system or to extract data is called your company's attack surface. When threat modeling, your goal is to identify as many of your threat vectors as possible, and then to implement security controls to prevent these attackers from being able to exploit those threat vectors. Here are some common examples of threat vectors:
Your cyberthreat surface consists of all the endpoints that can be exploited, which give an attacker access to your company's network. Any device that is connected to the Internet, such as smartphones, laptops, workstations, and even printers, is a potential entry point to your network and is part of your company's overall threat surface. It's important to map out your threat surface so that you understand what needs to be protected to prevent your business from being hacked. To map out this threat surface, it's extremely important that you have a complete inventory of all of your company's digital assets.
Now that you have identified your threat surface, the most relevant threat actors for your business, and the threat vectors they will likely use, you can start planning your appropriate attack countermeasures. Countermeasures consist of a wide range of redundant security controls you can use to ensure that you have defense-in-depth coverage. Defense-in-depth simply means that every important network resource is protected by multiple controls so that no single control failure leaves the resource exposed. The key here is not only to have multiple layers of controls, but to also ensure that you use all the appropriate categories and multiple types of security controls to defend your company against attacks.
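One way to sanity-check defense-in-depth coverage is to verify that every important asset is protected by more than one control, so no single control failure leaves it exposed. A small illustrative sketch (the asset and control names are invented examples):

```python
# Map each important asset to the layered security controls protecting it.
controls = {
    "web-server": ["firewall", "waf", "endpoint-protection"],
    "database": ["firewall", "encryption-at-rest"],
    "file-share": ["access-control"],  # one control = single point of failure
}

def exposed_assets(controls: dict, minimum: int = 2) -> list:
    """Return assets whose layered coverage falls below the required minimum."""
    return [asset for asset, layers in controls.items() if len(layers) < minimum]

print(exposed_assets(controls))  # → ['file-share']
```

Raising the `minimum` threshold models a stricter defense-in-depth requirement: with `minimum=3`, the database would be flagged as well.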
You must ensure that you have coverage for all of the following control categories so that your company is properly protected:
In addition to having coverage for all the control categories to protect your company, you must ensure that you have coverage for all the following control types:
Microsoft Sentinel is the cloud-native security information and event management (SIEM) and security orchestration, automation, and response (SOAR) technology that leverages AI to provide advanced threat detection and response based on the information collected across your company's environment. First, let's look at the SIEM aspect of it.
A SIEM is responsible for collecting and analyzing security data, which is gathered from the different systems within a network to discover abnormal behavior and potential cyberattacks. Some common technologies that feed data into a SIEM for analysis are firewalls, endpoint data, antivirus software, applications, and network infrastructure devices. The second aspect of Microsoft Sentinel is that it acts as a SOAR, which is designed to coordinate, execute, and automate tasks between different people and tools within a single platform. For example, using SOAR, you can define an automated playbook that tells the system what actions it should take when a certain condition is met. If it suspects a file is malicious, it may automatically quarantine or delete that file. In the following sections, we will look at the features of Microsoft Sentinel, broken down into its SIEM and SOAR features:
A SIEM works in the following ways:
The next aspect of Microsoft Sentinel is security orchestration, automation, and response (SOAR). SOAR is a combination of software that enables your organization to collect data about security threats and to respond to those security events without the need for human intervention.
Security orchestration focuses on connecting and integrating different security tools/systems with one another to form one cohesive security operation. Some of the common systems that might be integrated are vulnerability scanners, endpoint security solutions, end-user behaviors, firewalls, and IDS/IPS. This aspect can also connect external tools like an external threat intelligence feed. By collecting and analyzing all this information together, you can gain insights that might not have been found if you'd analyzed all that information separately. However, as the datasets grow larger, more and more alerts will be issued—and ultimately a lot more false positives and noise will be created that must be sorted through in order to get to the useful information.
The data and alerts collected from this security orchestration are then used to create automated processes that replace manual work. Traditional tasks, which would need to be performed by analysts—tasks such as vulnerability scanning, log analysis, and ticket checking—can be standardized and performed solely by a SOAR system. These automated processes are defined in playbooks, which contain the information required for the automated processes. The SOAR system can also be configured to escalate a security event to humans if needed. As you can imagine, this automated system will save your company a lot of money and time on human capital. Also, machines tend to be more reliable and consistent than humans, which leads to fewer mistakes in your security processes.
As the name suggests, security response is all about providing an efficient way for analysts to respond to a security event. It's where a SOAR creates a single view for analysts to provide planning, managing, monitoring, and the reporting of actions once a threat is detected. In addition to providing a single view of information for the analyst, a SOAR can respond to potential incidents on behalf of an analyst through automation.
Microsoft Sentinel also offers threat-hunting capabilities. Sentinel's threat-hunting search and query tools are based on the MITRE framework, and they enable you to proactively hunt for threats across your Azure environment. You can use Azure's prebuilt hunting queries, or you can create your own custom detection rules during threat hunting.
Microsoft Sentinel ingests data from services and applications by connecting to the service and forwarding the events and logs of interest to itself. To obtain data from physical and virtual machines, you can install a Log Analytics agent to collect the logs and forward them to Microsoft Sentinel. For firewalls and proxies, you will need to install the Log Analytics agent on a Linux syslog server, and from there the agent will collect the log files and forward them to Microsoft Sentinel.
Once you have connected all of the data sources you want to Microsoft Sentinel, you can begin using it to detect suspicious behavior. You can do this in two ways: you can either use Microsoft's prebuilt detection rules, or you can create custom detection rules to suit your needs. Microsoft recommends leveraging the prebuilt rules because they were created to allow for the easy detection of malicious behavior and are regularly updated by Microsoft's security teams. These rule templates were designed by Microsoft's in-house security experts and analysts and are based on known threats, common attack vectors, and patterns of suspicious activity. You also have the option of customizing them to your liking, which is usually easier than creating a new rule from scratch.
Automated responses in Microsoft Sentinel are facilitated by automation rules. An automation rule is a set of instructions that allow you to perform actions around incidents without the need for human intervention. For example, you can use these rules to automate processes like assigning incidents to certain people, closing noisy incidents/false positives, and changing an incident's severity or adding tags to incidents based on predetermined characteristics. Automation rules also allow you to run playbooks in response to incidents.
A playbook is a set of procedures that can be executed by Sentinel as an automated response to an alert or an incident. Playbooks are used to automate and orchestrate your response and can be configured to run automatically in response to specific alerts or incidents. This automated run is configured by attaching the playbook to an analytics rule or an automation rule. Playbooks can also be triggered manually if need be.
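The automation-rule idea, where conditions matched against an incident trigger actions without human intervention, can be sketched as follows (the rule contents and action names are invented examples, not Sentinel's actual API):

```python
# Each automation rule pairs a matching condition with a list of actions,
# mirroring the examples in the text: close noisy incidents, escalate and
# run a playbook for high-severity ones.
rules = [
    {"when": lambda inc: inc["title"].startswith("Noisy"),
     "actions": ["close"]},
    {"when": lambda inc: inc["severity"] == "High",
     "actions": ["assign:soc-team", "run-playbook:isolate-host"]},
]

def apply_rules(incident: dict) -> list:
    """Collect every action whose rule condition matches the incident."""
    actions = []
    for rule in rules:
        if rule["when"](incident):
            actions.extend(rule["actions"])
    return actions

print(apply_rules({"title": "Suspicious sign-in", "severity": "High"}))
# → ['assign:soc-team', 'run-playbook:isolate-host']
```

A known-noisy incident would simply be closed, while a high-severity incident is assigned to analysts and triggers a playbook, illustrating how automation rules route work before a human ever sees it.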
This section focuses on how you can secure your data and applications within the Azure platform. Primarily this refers to database security, using secure data storage, creating data backups, and ensuring proper encryption throughout your environment. Your goal should be to understand all of the different Azure tools that you can use to achieve each of these goals.
Azure's Storage platform is Microsoft's cloud storage solution for data storage. Azure Storage is designed to offer highly available, scalable, secure, and reliable storage of data objects in the cloud. In Azure, data storage is facilitated through an Azure Storage account. You can find the complete list of Azure Storage account types at https://docs.microsoft.com/en-us/azure/storage/common/storage-account-overview, but it's important to note that each account supports one or more types of Azure Storage data service. These services are as follows:
Table A.2 contains a breakdown of the different storage accounts that Azure supports.
TABLE A.2 Various Azure-supported storage accounts and their breakdown
Type of storage account | Supported storage services | Redundancy options | Usage |
---|---|---|---|
Standard general-purpose v2 | Blob (including Data Lake Storage), Queue, Table storage, Azure Files | LRS, GRS, RA-GRS, ZRS, GZRS, RA-GZRS | This is the standard storage account for blobs, file shares, queues, and tables. You will want to use this type for the majority of Azure Storage scenarios. |
Premium block blobs | Blob storage (including Data Lake Storage) | LRS, ZRS | This is the premium storage account for block blobs and append blobs. It should be used in scenarios with high transaction rates, with smaller objects, or in situations that require consistently low storage latency. |
Premium file shares | Azure Files | LRS, ZRS | This is a premium storage account for file shares. It should be used for enterprise or high-performance applications. It can support both SMB and NFS file shares. |
Premium page blobs | Page blobs only | LRS | This is a premium storage account for page blobs only. |
The benefits of Azure Storage are as follows:
Azure SQL Database is a platform as a service (PaaS) database engine that handles the majority of Azure's database management functions. Most of these functions can be performed without user involvement, including upgrading, patching, backups, and monitoring.
Azure SQL allows you to create data storage for applications and solutions in Azure while providing high availability and good performance. It allows applications to process both relational data and nonrelational structures, such as graphs, JSON, and XML.
When deploying an Azure SQL database, you have two options:
In Azure, you can define the amount of resources allocated to your databases as follows:
Databases can scale the resources being used in two ways: via dynamic scalability and autoscaling. Autoscaling is when a service automatically scales based on certain criteria, whereas dynamic scalability allows for the manual scaling of a resource with no downtime.
Azure SQL Database comes with built-in monitoring and troubleshooting features to help you determine how your databases are performing and to help you monitor and troubleshoot database instances. Query Store is a built-in SQL Server monitoring feature that records the performance of your queries in real time. It helps you find potential performance issues and identify your top resource consumers. Azure SQL Database also offers automatic tuning, an intelligent performance service that continuously monitors queries executed on a database and uses the information it gathers to automatically improve their performance. Automatic tuning in SQL Database gives you two options: you can manually apply the scripts required to fix an issue, or you can let SQL Database apply the fix automatically. In the latter case, SQL Database tests and verifies that the fix provides a benefit; based on its evaluation, it then either retains or reverts the change.
In addition to the performance monitoring and alerting tools, Azure SQL Database gives you performance ratings that let you monitor the status of thousands of databases as you scale up or down. This allows you to see the effect of making these changes based on your current and projected performance needs. You can also generate metrics and resource logs for further monitoring of your database environment as well.
Azure SQL Database contains several features to help business operations continue during disruptions. In traditional SQL Server environments, you have at least two machines set up locally with exact, synchronously maintained copies of the data to protect against the failure of a single machine. While this provides good availability, it doesn't protect against a natural disaster that destroys both physical machines.
Azure SQL therefore provides options for storing your data in locations far enough apart that no single catastrophic event can bring down your services. This is primarily done by spreading replicas of your database across different availability zones. Azure availability zones are physically separate locations within each Azure region, which allows them to tolerate local failures. Azure defines these failures as issues that can cause outages, such as software and hardware failures, earthquakes, floods, and fires. Azure ensures that a minimum of three separate availability zones are present in every availability zone-enabled region. These datacenter locations are selected using vulnerability risk assessment criteria created by Microsoft, which identify all significant datacenter-related risks, including risks that may be shared between availability zones. You can use availability zones to design and operate databases that automatically transition between zones as needed without interrupting any of your services.
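The idea of transitioning between zones without service interruption can be illustrated with a small routing sketch. The zone names and health map below are invented for the example; this is not how Azure implements failover, just the shape of the decision.

```python
# Illustrative sketch (names invented): routing requests to a healthy
# replica across availability zones, so a local failure in one zone
# does not interrupt service.

def pick_replica(zones, preferred):
    """Return the preferred zone if healthy, otherwise any healthy zone."""
    if zones.get(preferred):
        return preferred
    for zone, healthy in zones.items():
        if healthy:
            return zone                     # transparent failover
    raise RuntimeError("no healthy replica available")

zones = {"zone-1": False, "zone-2": True, "zone-3": True}  # zone-1 is down
print(pick_replica(zones, preferred="zone-1"))  # fails over to zone-2
```

With at least three zones per enabled region, the loop above always has a fallback as long as any single zone survives.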
Azure's service level agreement (SLA) helps maintain ongoing 24/7 service by committing Azure to certain actions on behalf of its customers. The Azure platform fully manages every database, and it guarantees no data loss with a high percentage of data availability. Azure also guarantees that it will handle patching; backups; replication; failure detection; potential underlying hardware, software, and network failures; bug fixes; failovers; database upgrades; and a few other maintenance tasks.
The last notable features of Azure SQL Database are its built-in business continuity and global scalability features. These include the following:
Azure Active Directory (Azure AD) authentication is a method for connecting to Azure SQL Database, Azure SQL Managed Instance, and Synapse SQL in Azure Synapse Analytics using identities in Azure AD. The benefit of Azure AD authentication is that you can centrally manage the identities of database users and Microsoft services in one central location. Here are some of the benefits of using Azure AD for database authentication:
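One practical consequence of Azure AD authentication is that the connection itself no longer carries a per-database password. The sketch below builds two ODBC-style connection strings to contrast the approaches; the server and database names are placeholders, and the helper function is invented for illustration.

```python
# Sketch contrasting Azure AD authentication with SQL authentication for a
# database connection. With Azure AD, no password is embedded in the
# connection string and identity is managed centrally; with SQL
# authentication, credentials must be managed per database.
# Server/database values are placeholders, not real endpoints.

def build_connection_string(server, database, use_azure_ad):
    base = ("Driver={ODBC Driver 18 for SQL Server};"
            f"Server={server};Database={database};")
    if use_azure_ad:
        # Identity is resolved by Azure AD; no secret stored in the string.
        return base + "Authentication=ActiveDirectoryInteractive;"
    # SQL authentication needs a UID/PWD pair managed in the database itself.
    return base + "UID=<user>;PWD=<password>;"

conn = build_connection_string("myserver.database.windows.net", "mydb", True)
print("PWD" not in conn)  # True: no embedded password with Azure AD
```

Centralizing identity this way is what enables features like conditional access and MFA to apply uniformly to database logins.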
Azure Cosmos DB is a fully managed NoSQL database designed for modern app development. It allows for single-digit millisecond response times, automatic and instant scalability, and guaranteed speed at scale. Cosmos DB provides business continuity through SLA-backed availability of 99.99 percent and a promise of enterprise-grade security. Table A.3 contains a summary of the benefits and features of Azure Cosmos DB.
TABLE A.3 Summary of Azure Cosmos DB's benefits and features
Benefit | Features |
---|---|
Guarantees speed at scale | |
Simplified application development | |
Guaranteed business continuity and availability | |
Fully managed and cost-effective | |
Azure Synapse Link for Azure Cosmos DB | |
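Part of how Cosmos DB achieves single-digit-millisecond reads is its partition-key model: documents are grouped by partition, so a point read consults one partition rather than scanning the whole container. The following toy store is invented to illustrate that idea; it is not the Cosmos DB SDK.

```python
# Minimal sketch (not the Cosmos DB SDK) of why a partition key enables
# fast point reads: documents are bucketed by partition key, so a lookup
# touches only the target partition instead of every document.

from collections import defaultdict

class TinyDocStore:
    def __init__(self):
        self.partitions = defaultdict(dict)  # partition key -> {id: doc}

    def upsert(self, partition_key, doc_id, doc):
        self.partitions[partition_key][doc_id] = doc

    def point_read(self, partition_key, doc_id):
        # Dictionary lookups: only the one partition is consulted.
        return self.partitions[partition_key].get(doc_id)

store = TinyDocStore()
store.upsert("customer-42", "order-1", {"total": 18.50})
print(store.point_read("customer-42", "order-1"))  # {'total': 18.5}
```

Choosing a partition key that distributes load evenly is what lets this lookup pattern keep working as the data scales.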
Azure Synapse Analytics is an analytics service that combines three services: data integration, enterprise data warehousing, and big data analytics. It's designed to give you the ability to freely query data to generate new business insights. See Figure A.5.
Table A.4 lists some of its most important features.
TABLE A.4 Azure Synapse Analytics Features
Feature Name | Description |
---|---|
Unified analytics platform | It provides a single unified environment for data integration, data exploration, data warehousing, big data analytics, and machine learning tasks. |
Serverless and dedicated options | It supports both data lake and data warehousing use cases. |
Enterprise data warehousing | It allows you to create data warehouses on the foundation of a SQL engine. |
Data lake exploration | It combines relational and nonrelational data to easily query files in the data lake. |
Choice of language | Synapse supports multiple languages, allowing you to use the programming language of your choice. |
To make the most of Azure Cosmos DB, you need to leverage a tool called Azure Synapse Link, which allows you to obtain real-time analytics on your operational data in Azure Cosmos DB. Synapse Link makes this possible by creating a seamless integration between Azure Cosmos DB and Azure Synapse Analytics. It utilizes the Azure Cosmos DB analytical store, a fully isolated column store that enables large-scale analytics against operational data in Azure Cosmos DB without any impact on your workloads. Table A.5 lists the key benefits of using Azure Synapse Link.
TABLE A.5 Key benefits of using Azure Synapse Link
Benefits | Description |
---|---|
Reduced complexity | With Synapse Link, you can directly access the Azure Cosmos DB analytical store using Azure Synapse Analytics without the need for complex data movements. Any changes made to the operational data will be visible in near real time without the need for extract, transform, and load (ETL) jobs. This allows you to run analytics against the analytical store without the need for additional data transformation. |
Near real-time insights into your operational data | You can get quality data insights in near real time. |
No impact on operational workloads | When you use Azure Synapse Link, you can run queries against an Azure Cosmos DB analytical store, which is a representation of your real data. The analytical workload is separate from your transactional workload traffic, and therefore doesn't have any negative impact on your operational data. |
Optimization for large-scale analytics | Azure Cosmos DB is optimized to provide scalability, elasticity, and performance for analytical workloads. Using Azure Synapse Analytics, you can access Azure Cosmos's storage layer with simplicity and high performance. |
Cost-effectiveness | Using Azure Synapse Link gives you a cost-optimized, fully managed solution for generating operational analytics. It eliminates extra storage and compute layers that are used in traditional ETL pipelines for analyzing operational data. It uses a consumption-based pricing model that is based on data storage, analytical read/write operations, and queries that are executed. |
Analytics for local, global, and multiregional data | Copies of your data will be created and distributed to nearby datacenters, allowing you to run analytical queries effectively against the nearest regional copy of your data in Cosmos. |
Allows for hybrid transaction/analytics processing (HTAP) scenarios for your operational data | HTAP is a solution that generates insights based on real-time updates to your operational data. It allows you to raise alerts based on live trends and create near-real-time dashboards and business experiences based on user behavior. |
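The HTAP split described in Table A.5 can be pictured as two synchronized copies of the same data in different shapes. The sketch below is a conceptual illustration with invented structures, not the Synapse Link implementation: writes land in a row-oriented transactional store and are mirrored into a column-oriented analytical copy, so analytical scans never contend with operational traffic.

```python
# Conceptual sketch of the HTAP split that Synapse Link provides: one
# row-oriented store for transactions, one column-oriented copy for
# analytics, kept in sync on every write without a separate ETL job.

rows = []       # transactional (row) store: one dict per record
columns = {}    # analytical (column) store: one list per field

def write(record):
    rows.append(record)                        # operational write path
    for field, value in record.items():        # near-real-time sync
        columns.setdefault(field, []).append(value)

write({"region": "east", "amount": 10})
write({"region": "west", "amount": 30})

# An analytical query scans only the column it needs, not whole rows.
print(sum(columns["amount"]))  # 40
```

Because the analytical copy is a separate structure, summing `amount` never locks or slows the row store that transactions use.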
Azure Key Vault is a cloud service for securely storing and accessing secrets. Microsoft defines a secret as anything to which you want to tightly control access. Some common examples of this are API keys, passwords, certificates, and cryptographic keys. The compromise of company secrets could provide an attacker with highly privileged access to your company's environment and cause major data breaches, which is why you want to ensure that they are properly protected. Here is a summary of the security features of Azure Key Vault:
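The core pattern behind Key Vault is simple: secrets live in one vault, and every read is gated by an access policy, so applications never hard-code API keys or passwords. The class below is an invented miniature used to illustrate that pattern; it is not the Azure Key Vault SDK, and real access control in Azure is enforced by the service, not by client code.

```python
# Illustrative sketch (invented names, not the Key Vault SDK) of the vault
# pattern: secrets are stored centrally and every read is checked against
# an access policy before the value is released.

class TinyVault:
    def __init__(self):
        self._secrets = {}
        self._policies = {}   # principal -> set of readable secret names

    def set_secret(self, name, value):
        self._secrets[name] = value

    def grant(self, principal, name):
        self._policies.setdefault(principal, set()).add(name)

    def get_secret(self, principal, name):
        # Access is denied unless a policy explicitly allows this read.
        if name not in self._policies.get(principal, set()):
            raise PermissionError(f"{principal} may not read {name}")
        return self._secrets[name]

vault = TinyVault()
vault.set_secret("db-password", "s3cret")
vault.grant("billing-app", "db-password")
print(vault.get_secret("billing-app", "db-password"))  # s3cret
```

The design choice to deny by default mirrors the exam-relevant point: compromising an application does not expose every secret, only those its policy grants.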