Chapter 1
Gaining the Azure Solutions Architect Expert Certification

The Azure Solutions Architect Expert certification is one of the more complicated and senior-level certificates to earn when compared to the other currently available Azure certifications. Table 1.1 describes their levels of complexity. Visualize an organization that creates a solution running on Azure. Ideally, a group of Azure Developer Associates will code and test the solution based on proven cloud design patterns as explained and designed by the Azure Solutions Architect Expert.

TABLE 1.1 Azure Certifications

Level Azure Certificate Description
100 Azure Administrator Associate Implement, monitor, and maintain Microsoft Azure solutions, focusing on security, network, compute, and storage
200 Azure Developer Associate Design, build/code, test, and support cloud applications and services
300 Azure Solutions Architect Expert An expert in security, network, compute, and storage for designing solutions to run on Azure
400 Azure DevOps Expert An expert in managing processes, people, and technologies for continually delivering solutions on Azure

An Azure Solutions Architect Expert will design and likely configure the security, network, compute, and storage on the Azure platform. Once the application is coded and tested and the platform is ready to run the application, the Azure DevOps Expert will collaborate with all concerned parties and deploy the application to the platform. Any further changes will be managed by the Azure DevOps Expert through the proactive engagement of key stakeholders and adherence to their processes, and the solution will be redeployed using any of several possible technologies. Finally, the Azure Administrator Associates will monitor and maintain the implemented Azure features, designed by the Azure Solutions Architect Expert, developed by the Azure Developer Associates, and deployed by the Azure DevOps Expert.

Every role plays a significant part in the overall success of the solution running on the Azure platform. The solution can be as simple as an Azure Function or as complex as a hybrid Azure VM Scale Set running across multiple virtual networks in multiple regions/data centers. To attain this higher level of Azure certification, senior IT professionals must recognize that although these are four separate certifications, each role plays a distinct part in the design, creation, deployment, and maintenance of the solution.

Let's now discuss getting on the path for Azure Solutions Architect Expert certification.

The Journey to Certification

As Ralph Waldo Emerson wrote, “Life is a journey, not a destination.” The same can be said about the approach to achieving the Azure Solutions Architect Expert certification. The experiences you gain while working with and learning Azure features are the true reward, not necessarily the certification itself. An IT professional can be an expert at designing Azure solutions without taking the exams and earning the certification. Simply having the certification is commendable, but without the knowledge and wisdom learned along the way, how much value does it really denote?

Unlike life, where a destination is reachable with potentially an infinite number of experiences and from multiple directions, the path to the Azure Solutions Architect Expert certification is simple. Previously, the exams required to become a certified Azure Solutions Architect Expert were AZ-300 and AZ-301. As you can see in Figure 1.1, those exams were retired in September of 2020. The replacement exams are AZ-303 and AZ-304.


FIGURE 1.1 Azure Solutions Architect Expert Certification path

The AZ-303 Azure Architect Technologies exam is focused on these components:

  • The implement and monitor an Azure infrastructure objective covers designing monitoring solutions that capture diagnostics, exceptions, and performance data. Azure Monitor, Log Analytics, and Application Insights provide places to store and analyze that data. Data can be captured from Azure Active Directory, networking, VMs, Azure App Service, and data storage products, to name a few. You need to know how to configure these sources and how to store and analyze their data.
  • The implement management and security solutions objective covers designing management solutions using tools like Update Management, Azure Backup, and Azure Migrate. Once your compute, data, and security products are provisioned, you need to know how to configure and support them. Additionally, you need to know proper security implementations with Key Vault and RBAC, as well as the use cases for network appliances like Azure Firewall, Azure Front Door, and Azure Traffic Manager.
  • The implement solutions for apps objective covers designing compute workloads using Azure App Service, Azure App Service Web App for Containers, Azure Functions, and Azure Kubernetes Service (AKS). You will need to know when to use these products and the benefits and constraints of choosing them.
  • The implement and manage data platforms objective covers designing data stores like Azure SQL, Azure Cosmos DB, and Azure SQL Managed Instance. Each of them stores data, and you need to know when to choose which one and how to configure them.

The AZ-304 Azure Architect Design exam is focused on these components:

  • The design monitoring objective covers designing monitoring with Azure Monitor and Azure Sentinel. Keep in mind that cost is always a factor, and you need to know how to implement such solutions in the most cost-effective manner. Designing and configuring a monitoring solution includes not only capturing and viewing data but also alerting and taking action when an identifiable event takes place.
  • The design identity and security objective covers designing security, which is the most important aspect of computing today. Tools like Azure Active Directory (AAD), Azure Policy, and Azure Blueprints are helpful for managing and enforcing authentication. Additionally, concepts like multifactor authentication (MFA), Conditional Access, single sign-on (SSO), and Privileged Identity Management (PIM) are must-know concepts; you need to know not only what they are and how they are used but also how to implement and monitor them.
  • The design data storage objective covers designing the data stores for your application or big data. Learning about relational vs. nonrelational data stores, Azure Data Factory, Azure Databricks, and Azure Synapse Analytics is necessary to clear this portion of the exam.
  • The design for business continuity objective covers designing redundancy and failover solutions. Azure Backup and Azure Site Recovery (ASR) are the tools primarily used in this area. Concepts like retention policy, snapshots, and archiving must be not only understood but also implemented and monitored.
  • The design infrastructure objective covers designing your compute, network, storage, and messaging requirements. Almost every company has something unique about its IT applications. Knowing the internals of Azure VMs and Azure App Service and choosing which one best fits a set of requirements is a key knowledge element. Event Hub and Service Bus are both messaging products; when would you use each? You need to know this, and you will learn it in this book. Candidates taking this exam are expected to know how to effectively implement and monitor Azure products and features and the use case for each.

The number of technologies, Azure features, and concepts that one must comprehend to pass these exams is relatively high. I recommend that you take the optional AZ-900 Azure Fundamentals exam prior to attempting the AZ-303 and AZ-304 exams. Doing so will provide a taste of what is to come, may help define areas needing more study, and can provide a more gradual descent, or, more eloquently stated, ascent into the clouds.

A Strategy to Pass the Azure Exams

Now that your head is spinning with all the knowledge required to take the exams, let me provide a few tips to help you pass them. Read through the requirements of AZ-303 and AZ-304, learn what is covered in this book, and you will be in good shape to pass.

  • Use Azure daily.
  • Read Azure articles, keeping yourself current.
  • Learn to recognize Azure product names, features, and functionality.
  • Gain a deep knowledge of a few, along with some knowledge of many, Azure products and features.

Before taking most Microsoft certification exams, candidates are prompted to accept certain terms and conditions and to commit to abide by a nondisclosure agreement (NDA). Therefore, the following sections contain activities and efforts that will most likely play a role in helping you achieve the Azure Solutions Architect Expert certification. No specifics about the exam are provided, per the NDA.

Use Azure Daily

It shouldn't be a stretch to imagine that using the product often will play a large role in gaining the required knowledge tested on the exam. In my specific case, I successfully completed the 70-533 Developing Microsoft Azure Solutions exam in October 2015 and had been fully engaged with Azure for a few years prior to that. I then completed the Azure Solutions Architect Expert certification in February 2019, meaning I have been working with Azure on a daily basis for about six years. According to Wikipedia, Microsoft Azure was announced in 2008 and went live in 2010, so I have worked on Azure almost since its inception.

My role has been primarily supporting customers who want to migrate existing solutions, or create new ones, to run on Azure App Service. Originally, Azure App Service was named Azure Web Sites (AWS), but over time that acronym caused some confusion with a solid competitor with the same acronym, so the name was changed. Even today when an Azure App Service solution is created, the default URL is *.azurewebsites.net.

Azure App Service will be discussed in more detail in later chapters, but it is a good entry point into Azure, both for customers and for those wanting to learn the platform. Azure App Service can expose you to the following additional Azure features:

  • Azure Monitor and Application Insights
  • Deployment techniques
  • Azure Active Directory, OAuth, and Managed Identity (MI)
  • Backup and recovery techniques and Traffic Manager
  • SSL/TLS certificates
  • Hybrid Connection Manager, Azure VNet, and Azure CDN
  • WebJobs
  • Autoscaling

The list could go on and on, with many more Azure features and products. The fact is that once the entry point is discovered, the path toward getting visibility into other Azure features becomes well-defined. The pace of learning quickly increases, and the visibility of how tightly these products and features are all integrated with each other also becomes obvious.

The point is that using Azure and its many features is necessary to prepare for and ultimately pass the exam. Daily interaction and usage builds depth into many products, which usually results in the horizontal consumption of others.

Read Azure Articles, Keeping Yourself Current

Once upon a time there was a saying in IT that “things change faster than the internet.” Having been part of that era, I confirm that the rate at which things changed, primarily for the better, was intense. Changes to hardware, changes to operating systems, and changes to products such as Internet Information Services (IIS), as well as the programming capabilities created to run on them, were all happening at once. Keeping up was a challenge, but there was the internet itself to search for ideas, tips, and solutions.

Not much has changed from an intensity perspective since then, but the saying mostly used today is that “we are moving at cloud speed” or “we are moving at the speed of the cloud.” Things are changing very fast still, and it is hard to keep up. The ability to stay current on all Azure features exceeds the capacity of a single human mind. Therefore, tools, teamwork, and logical groupings of information are necessary to have a chance of remaining current and relevant. It is possible to have Azure change notifications pushed directly to your email inbox. As shown in Figure 1.2, on GitHub it is possible to be notified based on specific scenarios.

Table 1.2 describes the scenarios.


FIGURE 1.2 Possible GitHub notifications

TABLE 1.2 GitHub Change Notification Types

Type Description
Not Watching Be notified only when participating or @mentioned.
Releases Only Be notified of new releases and when participating or @mentioned.
Watching Be notified of all conversations.
Ignoring Never be notified.

For example, the Azure Functions host is open source and hosted on GitHub here:

github.com/Azure/azure-functions-host

If you would like to be notified of changes or conversations, then select the type of notification to receive. Additionally, to get notifications when announcements or changes occur to the Azure App Service environment, sign up here:

github.com/Azure/app-service-announcements

Many Azure features are hosted on GitHub and not only support the ability to be notified of changes but also allow people to contribute to the enhancement of the product.

A list of changes to all Azure features is stored here:

azure.microsoft.com/en-us/updates

A graphical representation is accessible here:

aka.ms/azheatmap

Additionally, a list of all Azure products is available here:

docs.microsoft.com/en-us/azure/#pivot=products&panel=all

Click the product of interest to get a great set of documentation for that specific product. Finally, here is a link to the official Azure blog:

azure.microsoft.com/en-us/blog

Here you can access and/or integrate an RSS reader to read new posts in your email client, for example. Each of these information sources will help you get into the flow of information required to remain current on Azure.

A final suggestion on how to remain current (or even be a little ahead) is to engage in private and public previews of new features being created for Azure. To learn about a private preview, you generally need to attend a Microsoft conference where new features get released, have someone inform you, or watch videos posted on Channel 9 that contain this information. Not all private previews are open to the public, and sometimes joining is possible only through an invitation from a Microsoft product manager. If you are interested in working with private previews, attend a Microsoft conference to hear about them and ask for instructions on how to become engaged.

After a new feature leaves private preview, it enters the public preview phase, often known as beta testing. The feature becomes available on the portal, presented by its name followed by a “(Preview)” tag, similar to that shown in Figure 1.3.


FIGURE 1.3 Typical Azure feature preview

This is a manual way to find newly released features for a given product. At this point, the feature is fully open, and Microsoft is eager for customers and users to test it and provide feedback when nonintuitive scenarios are found or bugs are experienced. Finally, after testing is considered successful, the feature reaches "Generally Available" status and is fully supported by Microsoft Support. In many cases, products or features in preview are supported as well, because Microsoft wants you to test them and give feedback. However, a root-cause solution for a reported issue is commonly not provided as part of the support request; a fix will come later.

It is important that IT professionals not only remain current with new features, products, and technologies but also learn what already exists and go deeper into how the technology actually works, all the way down to the bits. IT professionals need to learn in both forward and backward directions; otherwise, they might miss why a feature was created in the first place.

Recognize Azure Product Names, Features, and Functionalities

The more products and features that pass in front of your eyes and get stored into your subconscious, the easier it will become to answer certain types of questions. For example, assume this question is presented to you in one of the exams:

  • Which of the following are considered an Azure Storage product or feature? (Choose all that apply.)
    A. Blob Storage
    B. Data Storage
    C. Queue Storage
    D. File Storage

Having some experience with Azure Storage accounts, you might remember that Azure Storage accounts support multiple service types: Blob, File, Table, and Queue. Based on that knowledge, you can be certain that options A, C, and D are correct answers. But are you certain that there is no Azure product or feature called Data Storage? During the exam you must recollect whether you have ever seen, heard, or read about an Azure Data Storage product or feature; you need to be confident in choosing the correct answer. In fact, there is no Azure product or feature called Data Storage in this context. You can check all the Azure products here to be sure:

docs.microsoft.com/en-us/azure/#pivot=products&panel=all

Continuing on that same thought process of name recognition, once you gain enough experience with the kinds of Azure products that exist, then learning what they do and what they are used for is the next step. Knowing the functional purpose of the product or feature will help to answer questions like the following:

  • Which of the following Azure products provide messaging management capabilities? (Choose three.)
    A. Service Bus
    B. Event Hub
    C. ExpressRoute
    D. Queue Storage

Service Bus, Event Hub, and Queue Storage are all products that receive messages from endpoints for later processing. ExpressRoute is not a messaging product; rather, it allows an on-premise network to connect over a private link into the Azure cloud using a connectivity provider. Therefore, the answers are A, B, and D.

In conclusion, knowledge of and experience with a large majority of all existing Azure products and features are needed. Without knowing the products that exist, there is no chance of knowing what they do. Knowing both what products exist and what they do is a necessity to achieve the Azure Solutions Architect Expert certification.

Strive for a Deep Knowledge of a Few, Some Knowledge of Many, and a Basic Knowledge of All

Up to now, the words product (aka service) and feature have been used numerous times. There is a subtle difference between them. When the word product is used, it refers to an object like Azure Virtual Network, Azure App Service, Azure Web Apps, or Azure SQL. The product is something that is bought; it is the item that is selected after clicking the + Create A Resource link in the portal, as shown in Figure 1.4.


FIGURE 1.4 The + Create A Resource link

The products are also in the Favorites section, also shown in Figure 1.4.

Features, on the other hand, are additional, usually optional, capabilities that can be added to a product. For example, it is recommended that the Application Insights feature be combined with an Azure App Service web app. Also, there is an Azure Search feature that works well with Azure SQL. Backup is not a product and cannot be used on its own; it is a feature that is bound to a specific product. Not all features are available on all products, and some combinations wouldn't make sense, such as the Backup feature for Event Hub. Other features, such as SSL/TLS or custom domains for an Azure Function, make good sense.

From an exam perspective, it is important to have a broad knowledge of all Azure products and features. The following list shows how many products fall into different Azure categories:

  • Compute (13)
  • Networking (13)
  • Storage (14)
  • Web (7)
  • Mobile (6)
  • Databases (12)
  • Security and identity (19)
  • Containers (8)

That is a large number of products/services to comprehend, and the number continues to grow. The recommended approach is to pick a few of the products that are of most interest to you (Azure VM, Azure App Service, Azure Functions, Azure Database, Azure Security, and so on). Then, based on personal preference, create and configure those products and study them deeply to become an expert in them. Experience how those products interact with each other, and learn all the features available to them. The knowledge gained using this approach, via creating and configuring Azure products and integrating them with other products and features, leads to mastering a few and knowing others rather well.

Do not think that it is required to know all products and features down to the bits to pass the Azure exams. However, deep knowledge in the security, compute, network, and database categories is expected. This is evident in the outlines of the exams on the Microsoft Certification website:

docs.microsoft.com/en-us/learn/certifications/exams/az-303

docs.microsoft.com/en-us/learn/certifications/exams/az-304

Therefore, learn a few products deeply and learn something about the many others. You will be expected to at least recognize when something presented as an Azure product is not one; you can then exclude it from the possible correct answers.

The next section introduces the products and features that are highly recommended to know in detail.

An Introduction to “Must-Know” Azure Features

Recognizing a feature name and its functions can help you pass the Azure Solutions Architect Expert exams. This section discusses some of the more well-known and utilized Azure products in a little more detail. Later in the book, you will get the opportunity to create and utilize these products as well as numerous other Azure products in a more complete scenario.

Azure Active Directory and Security

Security is one of the most important aspects of an IT solution. If you cannot control access and manage who has what rights on the system, then it is unlikely the solution will have much value. Azure provides many products and features to help create, enforce, maintain, and monitor your resources from a security perspective.

Whether you realize it or not, the first significant product created when you begin consuming Azure products and features is an Azure Active Directory (AAD) instance, sometimes referred to as a tenant. The first time you set up an Azure subscription, you are asked for an initial domain name. This name must be unique on Azure and takes the form *.onmicrosoft.com, where * is the unique name provided by the creator. The tenant must always be sent to AAD when requesting authentication.

Let's take a step back and touch quickly on how AAD came to be. The first authentication technique was a simple user ID and password data store. However, this broke down because each application typically had its own data store, and an account was required for each application. As the number of systems being used increased, so did the maintenance and complexity of gaining access to an application. Therefore, the Active Directory concept was created to store all corporate user credentials, user profiles, and permissions in a central global location. (We are discussing enterprise authentication at this point, not private user accounts.) Active Directory worked just fine; with Integrated Windows Authentication (IWA), users could log in to any system on an intranet without having to sign in multiple times as they moved from one application to the next.

Since the size of a typical corporation is relatively small when compared to all possible users accessing the internet, Active Directory worked well internally. There have been attempts to create a global authentication provider for the internet; however, those attempts have failed.

What Azure Active Directory does is provide those same authentication and authorization capabilities on the Azure platform. The Azure Active Directory tenant can be used to link and authenticate software as a service (SaaS) offerings like Bing, Office 365, or Dynamics CRM. Additionally, AAD can be synchronized with an on-premise Active Directory so that users inside an internal network can use the same credentials to authenticate on a compute resource running on the Azure platform. The tool used for this synchronization is called Azure AD Connect, which will be discussed in much more detail in the next chapter.

In addition to having detailed knowledge of AAD, to increase the probability of successful exam completion, it is necessary to know these Azure security features and technologies and their use cases:

  • Azure AD Connect
  • Custom domain names
  • RBAC
  • Key Vault
  • Azure Active Directory Domain Services
  • Managed Service Identity (MSI)
  • Shared access signature (SAS)
  • AAD conditional access policies
  • AAD domain services
  • Azure confidential computing
  • Multifactor authentication
  • Single sign-on (SSO)
  • Security Center
  • Encryption techniques, data-at-rest, data-in-transit
  • Certificate authentication, SSL/TLS, certificate authority (CA)
  • Forms, token, OAuth, JWT, and claims authentication techniques

This list contains not only products and features available on Azure but also technologies and models for securing application and resources. Knowing which type of authentication is best used in a given scenario is helpful when taking the exam. Additionally, having some practical experience with the tools improves your knowledge of the limits and constraints imposed by each, which leads to making good designs with specific products and features based on a given scenario. More detail, a few scenarios, and exercises are covered in later chapters.
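
To ground a couple of these tools, here is a minimal Azure CLI sketch that provisions a Key Vault, stores a secret, and grants a user the Reader role via RBAC. All resource names, the region, and the user are hypothetical placeholders, not values from this chapter:

# Create a resource group to hold the security resources
az group create --name rg-security-demo --location eastus

# Provision a Key Vault for secrets, keys, and certificates
az keyvault create --name kv-security-demo --resource-group rg-security-demo --location eastus

# Store a secret in the vault
az keyvault secret set --vault-name kv-security-demo --name DbPassword --value "<secret-value>"

# Assign the Reader role to a user at resource group scope (RBAC)
az role assignment create --assignee user@contoso.com --role "Reader" \
  --resource-group rg-security-demo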

Networking

With any of the Azure infrastructure as a service (IaaS) or platform as a service (PaaS) offerings, a network infrastructure is provided as part of the service. An Azure VM, App Service, or Azure Function is allocated an IP address (internal and external), and in most scenarios, the endpoint becomes immediately and globally accessible.

Keep these two topics in mind:

  • Cost   Although the network is provided as a service, that doesn't mean it is free. Network attributes such as public IP addresses, inbound and outbound global/regional peering, and inbound and outbound data transfers between availability zones all have a cost; it is nominal, but it is not free. Data transfers between resources within the same virtual network are free, however. The task of managing, planning, and calculating costs is such a significant part of managing an Azure subscription that Microsoft purchased a company named Cloudyn in 2017. Cloudyn is a tool that helps manage and monitor costs, and it can provide tips on how to optimize the Azure resources being consumed. Read more about Cloudyn here:

    docs.microsoft.com/en-us/azure/cost-management/overview

  • Configuration   There are some significant configuration activities to consider once you have deployed to Azure. Internet connectivity into an Azure data center, firewalls, DMZs, and other basic networking devices and configurations provide only the fundamental capabilities for the deployed application.

    The number of configurations depends a lot on the requirements of the solution being created or migrated to Azure. Typical default configuration includes the following: inbound and outbound port/firewall rules, network security groups (NSGs), enabling security via Azure Security Center, and any load-balancing rules. These are some other products and features that may require configuration (a brief provisioning sketch follows this list):

  • Azure Virtual Network, VNet peering
  • Azure DNS
  • Azure Availability Zones
  • Azure Load Balancer
  • Azure Application Gateway
  • Azure Content Delivery Network
  • Web Application Firewall
  • Azure DDoS Protection
  • Azure ExpressRoute
  • VPN capabilities
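
As referenced above, here is a minimal Azure CLI sketch of a typical baseline network configuration: a virtual network, a network security group with a single inbound rule, and the association between them. Names and address ranges are placeholders, and flag spellings may vary slightly between CLI versions:

# Create a virtual network with one subnet
az network vnet create --name vnet-demo --resource-group rg-network-demo \
  --address-prefix 10.0.0.0/16 --subnet-name snet-web --subnet-prefix 10.0.1.0/24

# Create a network security group and allow inbound HTTPS only
az network nsg create --name nsg-web --resource-group rg-network-demo
az network nsg rule create --nsg-name nsg-web --resource-group rg-network-demo \
  --name AllowHttpsInbound --priority 100 --direction Inbound --access Allow \
  --protocol Tcp --destination-port-ranges 443

# Associate the NSG with the subnet
az network vnet subnet update --vnet-name vnet-demo --name snet-web \
  --resource-group rg-network-demo --network-security-group nsg-web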

Other tools that are useful to know from the context of Azure networking are as follows:

  • Network Performance Monitor (NPM)
  • Azure Log Analytics
  • NPM for ExpressRoute
  • Azure Monitor
  • Azure Network Watcher

These costs, configuration requirements, network products and features, and monitoring capabilities are “must-know” topics for an aspiring Azure Solutions Architect Expert candidate.

Azure Virtual Machines

Azure Virtual Machines (VM) is Microsoft's IaaS product offering. Azure VM was one of the first cloud products Microsoft offered on the Azure platform and is a highly scalable and enterprise-ready compute platform. IaaS means that all the hardware necessary to run a server, for example, CPUs/compute, memory, and storage, is provided by the cloud service provider, in this case Microsoft. Additionally, the networking capabilities, summarized previously, are provided and supported. The customer is responsible for the operating system, configuration, updates, and applications that run on top of that.

When choosing Azure VM to handle your compute requirements, there are many important considerations that help ensure a successful deployment or migration. The top three are operating system, location, and size. Azure VM supports Windows and Linux operating systems and is available in each of the 54 Azure regions around the world. There are many sizes of virtual machines, ranging from one vCPU with 1GB of RAM up to 128 cores with 432GB of RAM. The offerings in Azure change rapidly; the point is that there is a great range of available virtual machines in Azure VM.
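
As a hands-on illustration of those three considerations, the following Azure CLI sketch creates a Linux VM; the names, region, image alias, and size are placeholder choices, not recommendations:

# Create a resource group in the chosen location
az group create --name rg-compute-demo --location eastus

# Create the VM: operating system (image), location (inherited), and size
az vm create \
  --resource-group rg-compute-demo \
  --name vm-demo \
  --image UbuntuLTS \
  --size Standard_D2s_v3 \
  --admin-username azureuser \
  --generate-ssh-keys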

In addition to having an expert grasp of Azure VM, to increase your chances of passing the exam it is necessary to know these features and their use cases:

  • OS disks, data disks, and managed disks
  • Availability set
  • Configuration
  • Identity
  • Disaster recovery and backup
  • Update management
  • Configuring alerts and monitoring

You should also know about the following tools/products for planning, configuring, deploying, securing, and maintaining virtual machines on Azure VM:

  • Azure Migrate
  • Azure Site Recovery
  • Azure Backup
  • Azure Automation

Azure App Service

Azure App Service is Microsoft's PaaS offering. This means that in addition to the network infrastructure being the responsibility of the cloud provider, the operating system and its patching are no longer a concern for the customer. Azure App Service comes in four flavors:

  • Azure Web Apps/WebJobs
  • Azure App Service Web App for Containers
  • Azure Mobile Apps
  • Azure API Apps

There is one additional flavor: an enterprise-grade offering known as the App Service Environment (ASE). An ASE is an isolated Azure App Service tenant that belongs solely to the customer who creates it. Some capabilities are available only in an ASE, for example, configuring cipher suite order. That specific capability isn't offered in the public App Service tenant, because the other customers running in that tenant may not want the cipher suites configured in that order.

Azure App Service offers both the Windows and Linux operating systems and supports a large majority of coding languages, both Microsoft and open source, such as .NET Core, Java, Ruby, Node.js, and so on. All flavors of Azure App Service support built-in autoscaling, load balancing, integration with AAD and OAuth providers, and great deployment capabilities, with integration with Git, Azure DevOps (formerly Visual Studio Team Services [VSTS]), and GitHub.
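
The following Azure CLI sketch, with placeholder names and a hypothetical repository URL, shows the typical provisioning flow: create an App Service plan, create a web app on it, and wire up deployment from a Git repository:

# The App Service plan is the compute tier; the web app runs on it
az appservice plan create --name plan-demo --resource-group rg-web-demo --sku S1

az webapp create --name webapp-demo-unique --resource-group rg-web-demo --plan plan-demo

# Configure deployment from a Git repository (URL is a placeholder)
az webapp deployment source config --name webapp-demo-unique --resource-group rg-web-demo \
  --repo-url https://github.com/contoso/sample-app --branch main --manual-integration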

From an Azure Web Apps perspective, use one of those source code repositories when hosting a custom web application or running a content management system (CMS) such as WordPress, Umbraco, Joomla!, or Drupal. The WebJobs service is used for longer-running workloads that don't require an HTTP connection to remain connected from start to finish. Consider a WebJob instance to be something like a batch job, a small program that runs offline in the background and is triggered by a cron scheduler; these are referred to as triggered WebJobs. WebJobs can also be continuous/AlwaysOn; in this case, they monitor a queue of some kind, and when a message enters the queue, the WebJob performs some action on it. A WebJob can also be configured as a singleton or set up to run on all the instances of the application.

Azure App Service Web App for Containers supports Windows- and Linux-based containerized applications. When using the noncontainerized web app, the application runs within a sandbox, which imposes certain limitations. For example, writing to the registry, accessing the event log, working with COM, and changing the order of the default cipher suites, as mentioned before, are not supported in the Azure Web Apps sandbox. You can, however, do all of those activities in an Azure App Service Web App for Containers instance, which makes it even more flexible than an ASE in many cases. As already stated, as a designer and an Azure architect, you must know the requirements of the application and the capabilities of the product to which you are planning your migration. It is possible to build your own container in Visual Studio or use an existing one hosted on Docker Hub, the Azure Container Registry, or GitHub.

Azure Mobile Apps was at one time called Zumo and is similar to Azure Web Apps but is optimized for a mobile scenario. Simple integration with a data source, easy integration with multiple authentication providers, and the ability to send push notifications to mobile devices are some of the specific features of an app on Azure Mobile Apps. As mobile apps typically operate on multiple operating systems such as iOS, Android, and Windows, there is a cross-platform SDK specifically created to work in this context. Autoscaling, monitoring, load balancing, and other basic features are the same between Azure Web Apps and Azure Mobile Apps.

Finally, let's take a look at Azure API Apps. Don't confuse Azure API Apps with API Management, which is discussed later in this book. Azure API Apps is the quickest way to create and deploy RESTful APIs; the apps require no infrastructure management, and Azure API Apps has built-in support for Swagger. Azure Web Apps instances typically have a GUI or an interface through which an individual interacts with the application. API Apps instances do not have a GUI or page to interact with; instead, they are endpoints that usually exchange and process JSON documents.

The following are some related products, features, and concepts to know in the Azure App Service context:

  • Cloning
  • Application Insights
  • Backups
  • Custom domains
  • Managed identity
  • SSL/TLS, App Service certificates
  • Scale-up versus scale-out
  • KUDU/SCM
  • Cross-origin resource sharing (CORS)

Azure App Service is one of the main points of entry onto the Azure platform. This product can be configured to utilize and interface with almost all other Azure products and features. It is a cost-effective product, but keep in mind the sandbox in which the application must operate; see the full list of sandbox restrictions here:

github.com/projectkudu/kudu/wiki/Azure-Web-App-sandbox

Azure Functions

Azure Functions is the serverless product offering from Microsoft. This doesn't mean that the code somehow executes without any supporting compute resource; rather, it means that when the application is not in use, it is not allocated to a server/compute resource. There are currently two plans in which you can run an Azure Function: a Consumption plan and an App Service plan.

Running in the Consumption plan means that after 20 minutes of nonuse, the resources are deallocated and placed back into a pool of compute resources, and the application is shut down. This is the biggest difference when compared to the App Service plan. Running in the Consumption plan is dynamic, which is why it is often referred to as dynamic mode. In addition to the shutdown difference, the following limitations apply when running in the Consumption plan:

  • The combined consumption is limited to 1.5GB of memory per Functions app.
  • Scaling is handled by a scale controller; there is no manual or autoscaling.

The 1.5GB memory limit applies to the Functions app, and a Functions app can contain many functions. Functions are always bound to a trigger, such as a Storage queue, Cosmos DB, or HTTP. The combined memory consumption of all functions within the Functions app constitutes the limit threshold. When running in Consumption mode, scaling is managed by a scale controller, which executes custom algorithms per trigger type and scales based on each trigger's unique attributes. As a consumer of an Azure Function, scaling is not something you need to be concerned about in dynamic mode.

Since Azure Functions runs on the same architecture as Azure App Service, when running in an App Service plan the function has access to the same compute resources as App Service would. This means that the Functions app can consume more than 1.5GB of memory, up to the maximum allowed by the selected App Service plan's SKU; scaling based on custom rules is supported, and enabling AlwaysOn is supported. The ability to enable AlwaysOn means that after 20 minutes of nonuse, the application/function will not be shut down; this helps avoid any latent cold-start or warmup issues. Finally, running Azure Functions on an existing Azure App Service plan (ASP) may be more cost effective if you already have an ASP with spare or unused compute capacity.
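
The following Azure CLI sketch contrasts the two hosting modes just described; all names are placeholders, and the runtime value is just one of several supported options:

# A Consumption-plan function app needs a storage account behind it
az storage account create --name stfuncdemo --resource-group rg-func-demo --sku Standard_LRS

# Create the function app in the Consumption (dynamic) plan
az functionapp create --name func-demo-unique --resource-group rg-func-demo \
  --storage-account stfuncdemo --consumption-plan-location eastus --runtime dotnet

# Alternatively, host it on an existing App Service plan to lift the 1.5GB limit
az functionapp create --name func-demo-asp --resource-group rg-func-demo \
  --storage-account stfuncdemo --plan plan-existing --runtime dotnet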

From an Azure Functions perspective, these additional topics should be of interest:

  • Azure Cosmos DB
  • Azure Event Hub
  • Azure Event Grid
  • Azure Notification Hubs
  • Azure Service Bus (queues and topics)
  • Azure Storage (blob, queues, and tables)

Azure Functions is most useful for smaller workloads that resemble a single method. Therefore, the approach toward their consumption should be based on what typically runs within a single function/method of a nonserverless application. The point is, do not overload a single Azure Function with a massive workload. Although it will scale, it may not work as expected. Instead, break any large workload into smaller work sets and implement them using the Durable Functions model.

API Management

Azure API Management (APIM) is a virtual gateway that is placed between consumers and actual APIs. You can think of APIM as a proxy or gateway in front of other APIs that reroutes the request to the proper API based on the parameters and signature of the URL. Why would this product be implemented? APIM provides the following benefits:

  • Can expose separate APIs to external and internal consumers
  • Hides the actual landscape from the consumer, which reduces complexity and confusion
  • Controls usage and limits consumption
  • Includes monitoring capabilities and error detection
  • Connects to any and multiple APIs on the internet and exposes them via a single endpoint
  • Allows group-based access control to APIs

The following are the scenarios in which APIM will be most beneficial:

  • Integration
  • Hybrid solutions
  • Migration

From an integration perspective, assume that two companies have merged; both companies have internal and external APIs that are fundamental to the success of their company. Assume also that each of those APIs exposes a different endpoint, so managing, documenting, and knowing which to use and where to access it becomes possibly more complex than the API itself. What APIM can do is to expose a single endpoint where all APIs are accessible and manageable. This greatly simplifies the consumption, management, and maintenance of all APIs.

Hybrid solutions are when a company has created a product that depends on components hosted on-premise and in the cloud. Some customers want or need to keep some content or processes in-house for compliance or legal reasons. APIM can be used as a gateway between both of these on-premise and cloud-hosted APIs. APIM can be configured to run within a virtual network; then the virtual network is configured using a VPN or ExpressRoute to access on-premise resources. As mentioned already, APIM has solid authentication capabilities that are relatively simple to implement, directly in the portal.

Finally, imagine a scenario where a company has an API app running on an Azure App Service plan. This API would have a unique endpoint that all consumers access. A change to the endpoint would require all consumers to update their code, which is a large ask, and sometimes an impossible one, so avoiding this from the beginning is optimal. Having APIM manage all the requests to the APIs in a company's solution helps with that scenario. If the backend API endpoint needs to change, the APIM endpoint remains the same; a change to the routing via the Azure Portal, PowerShell, or the CLI completes the migration to the new API endpoint.
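
As a rough illustration, the following Azure CLI sketch provisions an APIM instance and imports a backend API behind the gateway; the names and the OpenAPI URL are placeholders, and provisioning an APIM instance can take quite a long time:

# Provision the API Management gateway
az apim create --name apim-demo-unique --resource-group rg-apim-demo \
  --publisher-name Contoso --publisher-email admin@contoso.com --sku-name Developer

# Import a backend API from an OpenAPI definition; consumers call the APIM endpoint
az apim api import --resource-group rg-apim-demo --service-name apim-demo-unique \
  --path orders --specification-format OpenApi \
  --specification-url https://contoso.example/openapi.json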

APIM supports the configuration of many API types, as listed in Table 1.3.

TABLE 1.3 Azure API Management–Supported API Types

Type Description
OpenAPI Non-language-specific but standard interface to REST APIs
WADL An XML description of an HTTP-based web service, RESTful
WSDL An XML description of an HTTP-based web service, SOAP
Logic app Scalable and hybrid workflow management, visual
API app An API hosted using the Azure App Service product
Functions app Event-driven, serverless compute via the Azure App Service product

The following are other topics that are helpful to know in the context of API Management:

  • CA certificates
  • Client certificates
  • OAuth 2.0
  • Delegation
  • Custom domains
  • Managed Service Identity
  • Virtual networks

APIM is a useful product for managing migrations and integrations between companies and IT solutions.

Azure Monitor

Migrating an application, or a solution consisting of numerous applications, to a cloud provider is a big decision that requires many considerations. Properly estimating the compute requirements, planning the security configuration, and organizing resources into supportable groups are just a few items to keep in mind. One significant consideration is how to monitor the solution and make sure it remains available and performant after deployment. Azure provides a product called Azure Monitor for just that.

Azure Monitor includes both Log Analytics and Application Insights. Combined, these features support end-to-end monitoring of applications: they can identify how a customer consumes a product, locate bottlenecks, and even recommend actions to improve experiences using the recommendations analyzer. Azure Monitor combines the capabilities of both of these products, or Log Analytics and Application Insights can continue to be used on their own.

Log Analytics stores application and diagnostic logs in a queryable data source based on Azure Data Explorer, which is a highly scalable service for logging and storing telemetry data. The Log Analytics portion is the interface used to create the workspaces that capture the data and to query the data source to gather performance, usage, and application behavior information. The query language used to extract the data is the Kusto Query Language (KQL), which is relatively simple and well documented.

Here is an example that scans a table called Event, which stores content typically found in the event logs on a Windows server. The query retrieves only those entries with an EventLevelName value of Error from the past day.

Event
| where EventLevelName == "Error"
| where TimeGenerated > ago(1d)
| summarize count() by Source
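
The same query can also be run outside the portal. The following Azure CLI sketch, with placeholder names, creates a Log Analytics workspace and executes the query; note that the query command expects the workspace's customer ID (a GUID), not its name:

# Create a Log Analytics workspace
az monitor log-analytics workspace create --resource-group rg-monitor-demo \
  --workspace-name law-demo

# Run the KQL query from the command line
az monitor log-analytics query --workspace <workspace-guid> \
  --analytics-query 'Event | where EventLevelName == "Error" | where TimeGenerated > ago(1d) | summarize count() by Source'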

Application Insights, on the other hand, is more metrics driven. It presents a set of data as a graphical calculation over a given time frame; the outcome of that calculation is a chart showing a value. For example, if a chart is created to show the availability of a specific product per day, the chart would show the availability from 0% to 100%. Application Insights offers great charting capabilities through a feature called Metrics Explorer, which monitors the availability, performance, and usage of web applications. Application Insights can monitor applications hosted on Azure and on-premise.

In addition to charting metrics data, Application Insights can trigger alerts, can find bottlenecks in code execution using a profiler based on PerfView, and can build an application map that identifies dependencies between different Azure products.

In conclusion, Azure Monitor is the go-forward name for the previous products Log Analytics and Application Insights. Knowing the capabilities included in Azure Monitor is a great asset. Pulling data from different products, querying that data using KQL, graphing that data to define performance metrics, sending alerts based on captured data, and finding dependencies between the different products that make up the solution are all impactful features.

Azure SQL

Azure SQL is Microsoft's database as a service (DaaS) offering. It provides data storage, allows you to configure and optimize data without needing to be a database administrator (DBA), and removes the requirement for managing the server on which the database runs.

Four fundamental activities are required for placing your data into a relational database on the Azure platform:

  • Creating the database
  • Configuring security
  • Migrating the data
  • Monitoring the database

Two scenarios allow the consumption of a SQL Server database on the Azure platform:

  • Azure SQL Database
  • SQL Server on an Azure virtual machine

Azure SQL Database provides the following options:

  • Single Database   The single database option is a database that runs inside an Azure SQL Database instance but has its own resources and is manageable using a SQL Database server. A single database is the typical entry point for new applications that start small and then scale up as consumption increases.
  • Elastic Pool   With an elastic pool, a customer has multiple databases that all reside on the same Azure SQL Database server. Elastic pools are helpful when the capacity required per database is unknown or overly complicated to predict, because the platform provides the performance capacity (elasticity) per database within a defined budget. Elastic pools, like single databases, are priced using the database transaction unit (DTU) pricing model, or eDTU for elastic. DTU is a term created by the Azure SQL team that measures reads, writes, CPU, and memory consumption. The number of DTUs allocated to a single database or elastic pool is determined by the selected service tier, for example, Basic, Standard, or Premium. The more you pay, the more DTUs you get; it's that simple.
  • Managed Instance   The price of a managed instance is calculated not using the DTU model but rather the vCore-based purchasing model. This model is straightforward and is similar to the way prices are calculated for on-premise SQL Server databases. This offering is as close as you can get to a replica of an on-premise installation of SQL Server while still getting the benefits of DaaS. (A minimal provisioning sketch follows this list.)
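
As referenced above, the following Azure CLI sketch shows the basic provisioning flow for a single database, with placeholder names, credentials, and IP address; the firewall rule relates to the security discussion later in this section:

# Create a logical SQL server and a single database
az sql server create --name sqlsrv-demo-unique --resource-group rg-sql-demo \
  --location eastus --admin-user sqladmin --admin-password '<strong-password>'

az sql db create --name sqldb-demo --server sqlsrv-demo-unique \
  --resource-group rg-sql-demo --service-objective S0

# Allow a single client IP through the server firewall
az sql server firewall-rule create --name AllowMyIp --server sqlsrv-demo-unique \
  --resource-group rg-sql-demo --start-ip-address 203.0.113.5 --end-ip-address 203.0.113.5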

In the unfortunate event that an on-premise SQL Server installation has dependencies or requirements that prevent using any of the DaaS offerings, Azure provides the option to run SQL Server on a virtual machine. In this scenario, you simply choose the configuration (the SQL Server and Windows versions), the license agreement, and the size of the virtual machine. From that point, the procedure is the same as for your own on-premise SQL Server instance, only running on IaaS.

For security, Azure SQL provides a firewall feature. By default, connections from Azure resources are allowed through the firewall, and all other resources external to Azure (determined by IP) are blocked. The Azure SQL database is not inside a virtual network and has a globally accessible endpoint. Azure SQL does currently support connectivity to a virtual network, but it doesn't exist in one. A product called VNet Service Endpoints changes this, but as of this writing, it is in preview. The globally accessible endpoint is a point of interest/concern for some customers, and VNet Service Endpoints is expected to resolve this issue. If having a global endpoint is a showstopper, then the options are strong firewall settings or SQL Server on an Azure virtual machine, which can be contained in a VNet.

Once the required type of database has been provisioned, the next step is to migrate the data to it. The available tools depend on the type of database chosen. For both a single database and an elastic pool, use the Data Migration Assistant (DMA) to confirm the source database is compatible with the destination database. Once confirmed, export the source database, which creates a BACPAC file, and then import that file into the destination database using the Azure Portal or SqlPackage. This procedure is not intended for migrating production databases, as there could be extended downtime. To move a production database and reduce downtime, use the Azure Database Migration Service (DMS).

The process for migrating data to a managed instance or to SQL Server on an Azure virtual machine is the same as for on-premise installations. First use DMA to identify and resolve any conflicts; then use the native SQL Server restore capabilities, via a BAK file, to migrate the data to Azure.

Once the database is provisioned, secured, and populated with data, monitoring and optimizing it is the next and final step (excluding maintaining it, of course). The following products and tools are capable of monitoring an Azure SQL database. Note that monitoring and tuning SQL Server on an Azure virtual machine is the same as for on-premise SQL Server and IaaS (via Log Analytics).

  • Query Performance Insights
  • Query Store
  • XEvents

Setting up an Azure SQL database is quick and easy. The connection string is found with ease, and because the endpoint is globally accessible, making a connection to it from code is just as quick.

Azure Cosmos DB

Unlike Azure SQL, which is a relational DBMS, Azure Cosmos DB is classified as a NoSQL database. Cosmos DB consists of databases, containers, and items exposed through numerous APIs. When choosing a data store for an application, if the following scenarios match the data storage requirements, then Azure Cosmos DB would be an optimal choice:

  • Utilizes key/value pairs
  • Needs document storage capability
  • Stores graph data
  • Globally replicates data
  • Provides results in JSON format
  • Exposes data using a RESTful API

When Cosmos DB is provisioned from Azure, one required attribute to select is the API, which also dictates which database type to implement. The currently supported Cosmos DB types are as follows:

  • Core (SQL)
  • MongoDB
  • Cassandra
  • Azure Table
  • Gremlin (graph)

The database/API dictates the entity type, referred to as a container; see Table 1.4.

TABLE 1.4 Azure Cosmos DB Container Entities

Database/API Azure Cosmos DB Container
Core (SQL) Collection
MongoDB Collection
Cassandra Table
Azure Table Table
Gremlin (graph) Graph

Table 1.5 describes how Azure Cosmos DB refers to items.

TABLE 1.5 Azure Cosmos DB Item Entities

Database/API Azure Cosmos DB Item
Core (SQL) Document
MongoDB Document
Cassandra Row
Azure Table Item
Gremlin (graph) Node or Edge

There is a slight terminology barrier when implementing Cosmos DB, but this is common for all types of technologies and their corresponding features. Figure 1.5 shows the relationship between the database account, database, container, and item entities listed in Table 1.5.

The important takeaway from this section is that you should understand under what scenarios one would choose to use Cosmos DB and the differences it has compared to a relational DBMS. Primarily, Cosmos DB exposes its data via a RESTful API, communicates in the format of JSON, and is, by default, platform independent.
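
As a brief illustration of how the API is selected at provisioning time, the following Azure CLI sketch creates two accounts with placeholder names, one using the Core (SQL) API and one using the Gremlin (graph) API:

# Core (SQL) API account
az cosmosdb create --name cosmos-sql-demo --resource-group rg-cosmos-demo \
  --kind GlobalDocumentDB

# Gremlin (graph) API account; --capabilities selects the non-SQL APIs
az cosmosdb create --name cosmos-graph-demo --resource-group rg-cosmos-demo \
  --capabilities EnableGremlin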

Azure Storage

There is not much you can accomplish without storage, and this remains true when running workloads in the cloud. Although it is possible to store large amounts of content in memory, that content must come from some physical location. Azure Storage provides some useful capabilities for storing content that is globally accessible, secure, scalable, and durable.


FIGURE 1.5 Database account relationships

Azure Storage provides four data services, which are described in Table 1.6.

TABLE 1.6 Azure Storage Services

Type Description
Azure Blob storage Scalable object store for binary and text data
Azure File storage Managed file shares
Azure Queue storage Messaging store and queuing
Azure Table storage NoSQL data store, now part of Cosmos DB

To utilize Azure Storage, you must first create a storage account, which can be done from within the Azure Portal, via a REST API, via the CLI, or by using an ARM template. Once an account is created, from within the portal you would see something like what is shown in Figure 1.6.


FIGURE 1.6 Creating a storage account

It is possible to have multiple instances of each service type within a single storage account.
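
The following Azure CLI sketch, using placeholder names, creates a general-purpose v2 storage account and one instance of each of the four services; it assumes you are logged in with rights to the account:

# Create a general-purpose v2 storage account (supports all four services)
az storage account create --name stdemo0001 --resource-group rg-storage-demo \
  --sku Standard_LRS --kind StorageV2

# One blob container, file share, queue, and table in the same account
az storage container create --name images --account-name stdemo0001
az storage share create --name reports --account-name stdemo0001
az storage queue create --name thumbnails --account-name stdemo0001
az storage table create --name telemetry --account-name stdemo0001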

A blob is typically a file, for example, an image, a video, or an audio file. These and any other file type can be stored in Azure Blob storage. The following are some common uses of Azure blobs:

  • Supporting the upload of large files from a website
  • Backup and restore, archival, or disaster recovery
  • Writing logs

There are numerous reasons to support the uploading of files from a form on a website. Consider YouTube, which supports the upload of very large files; files so large that an HTTP connection made for the upload would likely time out before the transmission of the data completes. Using the System.Web.UI.WebControls.FileUpload class isn't an option in this scenario. Instead, one would use Microsoft.WindowsAzure.Storage.Blob.CloudBlobClient and upload the large file to an Azure Storage blob container designed specifically for this purpose, rather than storing it directly on the hard drive of the machine running the app.

Performing backups of a database or website and storing a VHD image of an IaaS instance as a block blob for quick recovery are also common scenarios for blobs. In case of a disaster situation, pulling data, content, or an image from a blob is a feasible recovery strategy. Writing logs to a blob file makes a lot of sense if there is a need to expose them to some other entity. For example, accessing IIS logs typically requires access to the server and then performing a manual effort to consolidate them. It is possible to configure Azure App Service to write IIS logs to a blob's storage container. Accessing/analyzing them is then much more efficient, because the logs are in a single location and globally accessible.
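
A minimal Azure CLI alternative to the programmatic upload just described might look like the following, assuming the placeholder storage account and container from the earlier sketch:

# Upload a local file into the blob container
az storage blob upload --account-name stdemo0001 --container-name images \
  --name video.mp4 --file ./video.mp4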

The following are important topics to know in this context:

  • Locally redundant storage (LRS)
  • Zone-redundant storage (ZRS)
  • Geo-redundant storage (GRS)
  • Read-access geo-redundant storage (RA-GRS)
  • Encryption at rest
  • Immutable blobs
  • Page and block blobs

Azure File storage is somewhat synonymous with what we commonly refer to as a file share that uses the Server Message Block (SMB) protocol. Historically in Windows Explorer, when you right-click a folder and select Share, all those who are given access can read, write, and/or delete the contents of that directory. That scenario, however, is restricted to those within an intranet. True, there are some configurations using WebDAV via IIS that could provide external access to internal shares; however, that setup can be complicated to configure and maintain. Azure File storage simplifies that by providing the same mapping capabilities just described, but globally.

Mapping a drive to an Azure Files share is supported on Windows, Linux, and macOS, using net use, sudo mount, or mount_smbfs, respectively. An Azure file share is globally accessible via *.file.core.windows.net, where * is the name of your storage account. Finally, Azure Files is utilized by Microsoft for its own products; one specifically is Azure Functions.
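
For illustration, here is what mounting a share might look like on Linux and Windows, assuming the placeholder storage account stdemo0001 and the share named reports from the earlier sketch:

# Linux: mount the share over SMB 3.0 (the storage account key is redacted)
sudo mount -t cifs //stdemo0001.file.core.windows.net/reports /mnt/reports \
  -o vers=3.0,username=stdemo0001,password=<storage-account-key>,serverino

# Windows: map a drive letter to the same share
net use Z: \\stdemo0001.file.core.windows.net\reports /u:AZURE\stdemo0001 <storage-account-key>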

Azure Queue storage is a big deal and is discussed in more detail later in the book. There is a lot to consider when deciding on a messaging and queuing solution, and the decision is not made simpler by the fact that Microsoft offers numerous messaging capabilities:

  • Azure Queue storage
  • Azure Event Grid
  • Azure Event Hub
  • Azure Service Bus

Each one provides similar offerings, specifically messaging/queuing capabilities, but each has a specific use case that warrants its existence. An Azure queue is specifically designed for storing a message: a short string, up to 64KB in size, that is accessible via authenticated HTTP or HTTPS calls. The message may be the location (blob URL) of a file that needs to be processed offline by WebJobs, or it may contain details such as the size of an image to be generated. Thus, you would use an Azure blob when you need to store an entire file and an Azure queue to store a message.
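A minimal sketch of that message flow, assuming the Azure.Storage.Queues package; the queue name thumbnails and the JSON payload are illustrative.

using System;
using System.Threading.Tasks;
using Azure.Storage.Queues;
using Azure.Storage.Queues.Models;

class QueueSketch
{
    static async Task Main()
    {
        string connectionString =
            Environment.GetEnvironmentVariable("AZURE_STORAGE_CONNECTION_STRING");
        var queue = new QueueClient(connectionString, "thumbnails");
        await queue.CreateIfNotExistsAsync();

        // The message stays well under the 64KB limit: a blob URL plus
        // instructions, never the file itself.
        await queue.SendMessageAsync(
            "{\"blobUrl\":\"https://acct.blob.core.windows.net/uploads/img.png\",\"width\":128}");

        // An offline consumer (a WebJob, for example) dequeues, processes,
        // and then deletes the message.
        QueueMessage[] messages = await queue.ReceiveMessagesAsync(maxMessages: 1);
        foreach (QueueMessage message in messages)
        {
            Console.WriteLine($"Processing: {message.MessageText}");
            await queue.DeleteMessageAsync(message.MessageId, message.PopReceipt);
        }
    }
}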

Azure Table storage, as previously mentioned, is now part of Cosmos DB. Regardless of the reclassification, an Azure table remains a NoSQL datastore. The datastore accepts authenticated access from anywhere and is most useful for storing massive amounts of data (terabytes) where the data is not bound by complicated stored procedures, joins, or foreign keys. Again, Microsoft has numerous products that provide similar capabilities, in this case Azure SQL and Cosmos DB, and each of those products targets a specific use case.
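A minimal sketch of storing and querying such entities, assuming the Azure.Data.Tables package; the table name devices and the entity values are illustrative.

using System;
using System.Threading.Tasks;
using Azure.Data.Tables;

class TableSketch
{
    static async Task Main()
    {
        string connectionString =
            Environment.GetEnvironmentVariable("AZURE_STORAGE_CONNECTION_STRING");
        var table = new TableClient(connectionString, "devices");
        await table.CreateIfNotExistsAsync();

        // No schema, joins, or foreign keys: a partition key, a row key,
        // and whatever properties the entity needs.
        await table.AddEntityAsync(new TableEntity("sensors", "device-001")
        {
            { "Temperature", 21.5 },
            { "LastSeen", DateTimeOffset.UtcNow }
        });

        await foreach (TableEntity entity in table.QueryAsync<TableEntity>(
            e => e.PartitionKey == "sensors"))
        {
            Console.WriteLine($"{entity.RowKey}: {entity.GetDouble("Temperature")}");
        }
    }
}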

The product you choose for storing the object that your commissioned compute power processes, whether it be a file (Blob or Files), a message, or a NoSQL entity, involves numerous design considerations. Having a thorough understanding of your business need/use case is fundamental to the decision driving the consumption of the storage solution.

Service Bus

Service Bus is one of the messaging capabilities mentioned in the previous section, which focused on Azure Storage. As you certainly recollect, one of the four Azure Storage services is Azure Queue storage, which naturally leads to the question of why both, and indeed why four, messaging features exist. Service Bus has a specific and defined set of unique capabilities. If the following fall within the requirements of your solution, then Service Bus is the correct choice, as these capabilities are currently unique to Service Bus:

  • Consumes messages in batches
  • Requires AMQP 1.0 support
  • Guarantees FIFO
  • Requires messages larger than 64KB (up to 256KB)
  • Requires RBAC and greater control over senders and receivers
  • Supports high throughput and parallel processing
  • Can operate within a queue size limit of 80GB

Both Service Bus and Event Hub utilize a protocol beginning with sb:// that maps to a hostname of *.servicebus.windows.net; this reflects the shared messaging infrastructure they are built upon. Table 1.7 summarizes the messaging services and should provide clarity when choosing Service Bus over other Azure messaging services.

TABLE 1.7 Azure Messaging Services

Type When to Use Purpose
Service Bus Financial processing and order processing Highly valued messaging
Event Grid Status change reactions Reactive programming
Event Hubs Data streaming and telemetry Big data pipeline

Service Bus can be categorized into two distinct entities:

  • Queues (explained earlier in this chapter)
  • Topics and subscriptions

After the creation of a Service Bus namespace, the option to create either a Service Bus queue or a topic exists, as shown in Figure 1.7.


FIGURE 1.7 Creating a Service Bus queue or topic

Messages within a Service Bus queue are processed by a single consumer, where a consumer may be an Azure Functions app or perhaps a WebJobs app; that's a one-to-one relationship. Topics and subscriptions, as you may have guessed, provide a one-to-many form of communication. We're talking high volumes here, big data and machine learning scale, where the message is made available to each subscription registered to a specific topic.

The mapping of a message to a subscription is useful for scaling compute resources so that multiple instances/services/consumers can process a message as quickly as possible upon its arrival. In a manner reminiscent of Stream Analytics, a Service Bus subscription can apply a filter so that a message reaches only the subscriber that executes the anticipated, specialized code sequence. The creators of the Azure messaging capabilities have given great thought to many use cases and are eager to hear about any you encounter that have no provision. Choose the one that meets the needs of your business requirements.
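The following minimal sketch shows the one-to-many pattern, assuming the Azure.Messaging.ServiceBus package and a namespace that already contains a topic named orders with a subscription named audit (both names are illustrative).

using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

class TopicSketch
{
    static async Task Main()
    {
        string connectionString =
            Environment.GetEnvironmentVariable("SERVICE_BUS_CONNECTION_STRING");
        await using var client = new ServiceBusClient(connectionString);

        // Publish once; every subscription on the "orders" topic gets a copy.
        ServiceBusSender sender = client.CreateSender("orders");
        var message = new ServiceBusMessage("{\"orderId\":42,\"amount\":99.95}");
        message.ApplicationProperties["region"] = "EU"; // usable by subscription filters
        await sender.SendMessageAsync(message);

        // One consumer reads from the "audit" subscription, independent of
        // any other subscriptions on the same topic.
        ServiceBusReceiver receiver = client.CreateReceiver("orders", "audit");
        ServiceBusReceivedMessage received = await receiver.ReceiveMessageAsync();
        if (received != null)
        {
            Console.WriteLine(received.Body.ToString());
            await receiver.CompleteMessageAsync(received);
        }
    }
}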

Site Recovery

Even when a solution has been migrated to the cloud, a disaster recovery solution remains a necessity. Often, and only after something unexpected occurs, does one recognize the need for a recovery plan. This plan is commonly referred to as a business continuity and disaster recovery (BCDR) strategy. Some failures are simple transient issues that correct themselves, while others can result in major downtime. In the worst case, data or source code is lost, and sometimes it is not recoverable.

One should not expect that simply because a solution exists in the cloud there are built-in redundancies and backups. There are some built-in redundancies on the Azure platform, be certain of that, but those exist for the stabilization of the platform and infrastructure, not for your app. Ask yourself this question: "How could Microsoft know what parts of a solution should or should not be backed up?" The answer is that Microsoft cannot know this; customers have many varied requirements, and the storage of backups and the configuration of redundancies have a cost. Backup is not something provided by default or for free.

It is up to the individual subscribing entity to design the BCDR strategy. Remember, the strategy should specifically match the explicit requirements of the subscribing entity and operate within its given cost constraints. To help with not only the recovery of IaaS-based solutions but also with the migration and movement of them to and within Azure, Microsoft has a product named Site Recovery. Note that Site Recovery is focused on IaaS. Other non-IaaS components of a BCDR strategy are discussed later in the book. You will need to know this product in depth for the Azure Solutions Architect Expert exams.

From a recovery perspective, Site Recovery protects IaaS solutions in the following scenarios:

  • Azure VM
  • Physical servers
  • VMware virtual machines
  • Hyper-V virtual machines

From a BCDR perspective, the use of Site Recovery has two main configurations:

  • To act as a BCDR solution for your on-premise solution and servers
  • To act as a BCDR solution for your Azure-hosted IaaS compute workloads

Setting up a BCDR strategy for a company's servers running in its own privately funded and maintained data center is commonplace, but it is expensive. Although you can get some failover support when servers are in the same data center, what happens if the entire data center suffers an outage? The point is, to implement a real, enterprise BCDR strategy, you need two data centers in different regions, with mirrored replication of the hardware, software, and data. Instead of setting up a second data center, a company can utilize Azure. Connecting, creating, replicating, and maintaining an Azure BCDR environment can occur over any secure means, such as ExpressRoute or a VPN, similar to that shown in Figure 1.8.


FIGURE 1.8 Secure means for connecting, creating, replicating, and maintaining an Azure BCDR environment

Additionally, running in a single Azure data center leaves even a solution that requires massively high availability vulnerable to a natural disaster or data-center-wide outage. Therefore, by using Site Recovery, you can build the same BCDR strategy between two regionally dispersed Azure data centers as the one just described between on-premise and Azure.

A tool called the Site Recovery Deployment Planner (SRDP) is helpful in the planning portion of building a BCDR instance or when migrating an IaaS virtual machine from one region to another. SRDP provides a compatibility assessment report covering attributes such as the following:

  • Number of disks, disk size, IOPS, OS version
  • Storage type, number of cores, virtual machine size recommendation
  • Estimated cost

No IT solution is complete without a BCDR plan, whether the solution is hosted completely on-premise, is a hybrid on-premise to Azure solution, or is hosted completely on Azure. In the latter two scenarios, Site Recovery can help create a BCDR plan and can assist with the migration of IaaS workloads to Azure as well.

Azure Bastion

Making a remote connection to one of your Azure IaaS compute resources is typically performed using either RDP over port 3389 or SSH over port 22. By default, when you connect to those machines from your company or from home, the connection traverses the internet. If you do not want the connection to travel across the internet, you can instead access the console using Azure Bastion from within the Azure portal; the connection then takes place over port 443, from the portal to the machine. This Azure product makes connecting to your provisioned Azure compute resources more secure.

Summary

This chapter provided the starting point for the path toward earning the Azure Solutions Architect Expert certification. Feel confident that if you are already comfortable with the Azure products covered in this chapter, what you need now is some additional hands-on practice and some test questions to get your brain neurons connected to this Azure knowledge.

The next chapters cover the approach to take if you are planning to migrate an organization to the cloud, starting with creating the subscriptions, setting security and access, and then deciding which compute and storage resources to use. Finally, you will implement some management capabilities to help the transition, such as monitoring, and don't forget the BCDR strategy. You will create, configure, and monitor in each of these scenarios, so get ready to get your hands and brain moving.

Exam Essentials

  • Make sure you are on the right Azure path.   There are numerous Azure certification paths. Make sure you are on the path that best fits your career objectives. The Azure Solutions Architect Expert certification is for system administrators or system architects, whereas someone who is more development-focused might consider the Azure Developer or Azure DevOps Engineer certification.
  • Gain experience with Azure.   To pass this exam, you will need experience with the platform. Do not expect any book or training to get you through this without having worked on the platform for some amount of time. If you do that, then this certification will have greater meaning to you and others.
  • Keep up-to-date by subscribing to online resources.   The Azure platform is constantly changing and requires daily actions to keep your skills up-to-date. Here are some of the most popular resources to read on a regular basis:

    azure.microsoft.com/en-us/blog

    blogs.technet.microsoft.com/blog/tag/azure

    docs.microsoft.com/en-us/azure

  • Know what products exist in Azure.   Sometimes answers to questions contain products that may not be Azure products. They read like they are, but they are not. Knowing what really is available will help you remove at least one possible answer to a few questions.
  • You need to know at least one Azure product deeply, numerous products well, and a little about them all.   No one knows everything, but you can be an expert in one or two Azure products and features. Those products and features usually have a connection or dependency to other Azure features, and you could then learn some internals about them. There are some products and features that are not related to any other directly. For example, there is no direct relationship between Azure Cognitive Services and Azure VNet, but it would be useful to have at least a basic knowledge of the benefits each provides.
  • Focus on certain products.   The Azure Solutions Architect exam is heavy on Azure Active Directory, networking, compute (specifically IaaS), and migration from on-premise to Azure. Make sure you have a good understanding of those products and features and their limits. That understanding will give you the best chance of passing the exam.

Key Terms

Azure AD Connect
Azure Administrator Associates
Azure Data Explorer
Azure Developer Associates
Azure DevOps Expert
Azure Solutions Architect Expert
Building and Deploying Applications
Business continuity and disaster recovery (BCDR) strategy
Configuring and Deploying Infrastructure
Data Migration Assistant (DMA)
Data Migration Service (DMS)
Designing a Data Solution
Designing an Infrastructure Strategy
Designing for Continuity and Recovery
Designing for Migration, Deployment and Integration
Designing for Security and Identity
Determining Workload Requirements
Developing for the Cloud
Error
Event
Implementing Security and Authentication
Implementing Security and Workloads
Metrics Explorer
PerfView
Platform as a service (PaaS)
Product
Site Recovery
Site Recovery Deployment Planner (SRDP)

Review Questions

Many questions can have more than a single answer. Please select all correct answer choices.

  1. Which security feature can you implement to protect Azure App Service from unauthorized internet access? (Choose two.)
    1. Azure Active Directory
    2. Role-based access control (RBAC)
    3. Managed identity
    4. Single sign-on (SSO)
  2. When choosing a compute resource to execute your workload, what are the Azure options? (Choose two.)
    1. Azure VM
    2. Azure ExpressRoute
    3. Azure Functions
    4. API Management
  3. A custom domain can be bound to which of the following Azure products? (Choose all that apply.)
    1. Azure App Service
    2. Azure Storage Account
    3. Azure VM
    4. Azure SQL
  4. Azure Storage consists of which of the following service types?
    1. Blobs
    2. Queues
    3. Files
    4. Only A and B
  5. You need to decide on the most cost-efficient Azure Storage redundancy solution, and you must store the data in multiple data centers. Which of the following would you choose?
    1. LRS
    2. ZRS
    3. GRS
    4. RA-GRS
  6. You need to grant access to products running on Azure to another product or feature. Which Azure product or feature would you use?
    1. Key Vault
    2. AAD conditional access policies
    3. Managed identity
    4. Role-based access control (RBAC)
  7. Which of the following is a valid business continuity and disaster recovery (BCDR) Azure product for an IaaS workload solution?
    1. Azure Availability Sets
    2. Azure Automation
    3. Azure Availability Zones
    4. Azure DNS
  8. What is the most current supported SSL/TLS version for Azure App Service?
    1. SSL 3.0
    2. TLS 1.0
    3. TLS 1.2
    4. TLS 1.3
  9. Which of the following is not a supported Azure VPN Gateway configuration?
    1. Point-to-point
    2. Site-to-site
    3. VNet-to-VNet
    4. Point-to-site
  10. An Azure Active Directory tenant has which of the following?
    1. *.atmicrosoft.com
    2. *.onmicrosoft.net
    3. *.onmicrosoft.com
    4. *.contoso.com
  11. You need to run an offline process that typically consumes 2GB of memory, takes 80 percent of two CPUs, and runs four times per day. Which Azure product is the most cost efficient?
    1. Azure Functions
    2. Azure VM
    3. Azure App Service WebJobs
    4. Azure Batch
  12. What is multifactor authentication?
    1. User ID and password
    2. Something you know and something you have
    3. Client certificate and PIN
    4. Windows Hello for Business
  13. Which product is most useful for monitoring the health of an Azure solution?
    1. Azure Monitor
    2. Application Insights
    3. Log Analytics
    4. Security Center
  14. What Azure features are available for Azure VM? (Choose all that apply.)
    1. Azure Automation
    2. Disaster Recovery and Backup
    3. Azure Migrate
    4. Azure Site Recovery
  15. True or false: Cross-origin resource sharing (CORS) restricts access to the rendering of a web page based on IP address.
    1. True
    2. False
  16. In what scenario would you use the Azure API Management product? (Choose two.)
    1. An API that needs to scale based on consumption, regardless of limits
    2. Integration
    3. Managing access permission
    4. A REST API endpoint that will not change
  17. Senior management requires you to run your workload on Azure App Service. You need to implement autoscaling. Under load, which of the following would you do?
    1. Scale out
    2. Scale up
    3. Both A and B
    4. Neither A nor B
  18. A solution needing migration to Azure requires the registration of an assembly into the Global Assembly Cache (GAC). Which Azure product would you choose to run the workload?
    1. Azure Functions
    2. Azure App Service
    3. Azure VM
    4. None of the above
  19. Which of the following options describe a RESTful API? (Choose all that apply.)
    1. Supports cross-platform software
    2. Is the fundamental concept underlying the Windows Communication Foundation (WCF) library
    3. Typically converses using JavaScript Object Notation (JSON)
    4. Is supported by Cosmos DB
  20. Which of the following describes an Azure Web App for Containers instance? (Choose two.)
    1. Can have slow startup times when compared to Azure Functions
    2. Is compatible with Azure files
    3. Currently supports Linux only
    4. Can consume GPU resources