Mock Exam Answers

Implementing IaaS solutions

  1. C is the correct answer.

Although placing VMs within a single availability zone brings them closer together than spreading them across different availability zones or regions, proximity placement groups ensure that the VMs are physically located close to each other, which is what you need when you have low-latency requirements.

Feel free to revisit Chapter 2, Implementing IaaS Solutions, to review the availability options for VMs, including the useful links in the Further reading section of that chapter.

  2. B is the correct answer.

The default deployment mode is incremental, which only changes the resources defined in the template (and only if they need to be changed) and won’t delete any resources. The complete deployment mode will delete any resources that exist within the deployment scope (the resource group, in this case) but are not defined within the template.

This topic was covered in Chapter 2, Implementing IaaS Solutions.

  3. C is the correct answer.

Commands starting with az acr refer to Azure Container Registry and are not executed on the local machine. The docker run command requires a container image to have already been built, which happens via the docker build command.

This topic was covered in Chapter 2, Implementing IaaS Solutions.

  4. A is the correct answer.

ACI container groups allow you to host multiple containers on the same host machine, sharing the same life cycle, resources, network, and storage volumes. As the ACI infrastructure already exists, this option is more suitable than a Kubernetes Pod, which would require new infrastructure and therefore an additional cost.

Azure Container Registry is used for storing but not running container images, and a Log Analytics workspace won’t help with obtaining the logging information or writing it to long-term storage.

Container groups were discussed in Chapter 2, Implementing IaaS Solutions.

  5. C is the correct answer.

To have a name specified at deployment time without hardcoding the value in the template, you should create a parameters section in the template with a parameter for the name. You should also change the name value of the resource to get the value from that parameter. During deployment, the staName parameter (in this example) can be specified to create a new resource with that name, providing the name is available, as in this example:

az deployment group create -f .sta.json -g AZ-204 -p staName="mysta2"

The following is an example of how the template could look:

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "staName": {
            "type": "string"
        }
    },
    "resources": [
        {
            "type": "Microsoft.Storage/storageAccounts",
            "apiVersion": "2021-09-01",
            "name": "[parameters('staName')]",
            "location": "West Europe",
            "kind": "StorageV2",
            "sku": {
                "name": "Standard_LRS"
            },
            "properties": {
                "accessTier": "Hot"
            }
        }
    ]
}

This topic was covered in Chapter 2, Implementing IaaS Solutions. We could also have made the parameter only part of the name and used the uniqueString() template function to help generate a unique name.
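
For example, the name property could combine the parameter with uniqueString() as follows (a sketch only – the chapter may have composed the name differently):

"name": "[concat(parameters('staName'), uniqueString(resourceGroup().id))]"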

Creating Azure App Service web apps

  1. B and C are correct.

Filesystem storage is intended for short-term logging and disables itself after 12 hours, so it won’t be useful in this scenario, leaving Blob storage as the storage solution of choice. Windows apps offer application logging to both Blob and filesystem storage, whereas Linux only offers application logging to filesystem storage, making a Windows App Service plan a requirement to meet the needs of this scenario.

This topic was covered in Chapter 3, Creating Azure App Service Web Apps, although filesystem storage only being designed for short-term logging wasn’t specifically mentioned. This is an example where your own exploration would be useful because we can’t cover every possible question in a single book.

  2. C is the correct answer.

The error indicates that cross-origin resource sharing (CORS) is blocking requests from https://az204.com. The command should be run against myapi rather than myapp because CORS is configured on the destination to specify which origins requests are accepted from.

The CLI command could look as follows:

az webapp cors add -g "AZ-204" -n "myapi" --allowed-origins "https://az204.com"

This topic was covered in Chapter 3, Creating Azure App Service Web Apps.

  3. B is the correct answer.

Hybrid Connections can be used to provide Azure App Service web apps with access to resources in any network, including on-premises networks. This requires a relay agent (the Hybrid Connection Manager) within the network, which relays requests from the web app to the on-premises TCP endpoint. The relay agent communicates with Azure over an outbound connection on port 443, so no inbound ports need to be opened on the on-premises network.

Private endpoints relate to inbound connections to the web app, not outbound, and they use Azure Private Link. As the on-premises network isn’t an Azure VNet, simply enabling VNet integration won’t help here either.

This topic was covered in Chapter 3, Creating Azure App Service Web Apps.

  4. A is the correct answer.

When multiple scale-out rules are triggered, autoscale evaluates the new capacity of each rule triggered and takes the scale action that will result in the greatest capacity of those triggered rules. In this scenario, that would be 7. Autoscale doesn’t combine the instance counts of multiple rules – it only selects the single action that provides the greatest capacity.

This topic was covered in Chapter 3, Creating Azure App Service Web Apps.

Implementing Azure Functions

  1. C is the correct answer.

On the Consumption plan, functions that have been idle for a period will be scaled down to zero instances. After this happens, the first request may experience some latency because a cold startup is required to scale back up from zero.

This topic was covered in Chapter 4, Implementing Azure Functions.

  2. C is the correct answer.
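
The exact expression from the question isn’t reproduced here, but as a reminder, Azure Functions timer triggers use six-field NCrontab expressions that include a seconds field. A hypothetical C# timer-triggered function that runs every five minutes could look like this:

using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class CleanupTimer
{
    // Hypothetical example: six fields (seconds first), firing at second 0 of every fifth minute.
    [FunctionName("CleanupTimer")]
    public static void Run([TimerTrigger("0 */5 * * * *")] TimerInfo timer, ILogger log)
    {
        log.LogInformation($"Timer trigger executed at: {DateTime.Now}");
    }
}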

This topic was covered in Chapter 4, Implementing Azure Functions, with a link to information on NCrontab syntax in the Further reading section of the chapter.

  3. A is the correct answer.

With C# script, JavaScript, PowerShell, Python, and TypeScript functions, the function.json file needs to be updated to configure triggers and bindings.

In C# and Java, you decorate methods and parameters in code.
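
As a sketch, the function.json for a queue-triggered function could look as follows (the queue name and connection setting here are hypothetical):

{
    "bindings": [
        {
            "name": "queueItem",
            "type": "queueTrigger",
            "direction": "in",
            "queueName": "orders",
            "connection": "AzureWebJobsStorage"
        }
    ]
}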

This topic was covered in Chapter 4, Implementing Azure Functions.

Developing solutions that use Cosmos DB storage

  1. D is the correct answer.

The Cosmos DB change feed cannot be queried because it is a FIFO queue. Each modification is registered as a message in the queue and can be pulled by consumers.

The change feed is discussed in Chapter 5, Developing Solutions That Use Cosmos DB Storage, in the Leveraging a change feed for app integration section.

  2. B is the correct answer.

Cosmos DB has both physical and logical partitions, and their structure is controlled internally by Cosmos DB. The number of physical partitions depends on the throughput and the storage consumed by the documents stored in them.

Partitions are discussed in Chapter 5, Developing Solutions That Use Cosmos DB Storage, in the Partitioning in Cosmos DB section.

  3. E is the correct answer.

Client-side encryption (Always Encrypted) meets the requirements because it protects data at rest and in transit, and data is decrypted only on the client side using customer-managed keys from Azure Key Vault. When you apply the encryption settings, you need to provide the path of the card number field; full documents cannot be encrypted.

The encryption topic is discussed in Chapter 5, Developing Solutions That Use Cosmos DB Storage, in the Encryption settings section.

  4. B is the correct answer.

If the update needs to be applied to both regional instances at the same time to minimize possible data loss, the operation requires the strong consistency level. The consistency level of an operation must be the same as or more relaxed than the default consistency, so increasing the default level to strong is the only solution.

Consistency levels are discussed in Chapter 5, Developing Solutions That Use Cosmos DB Storage, in the Consistency levels section.

  5. D is the correct answer.

A solution using a single database (and a consistency setting for a single database) makes no sense here. Meanwhile, the indexing process affects insert performance. One possible solution is to set the indexing mode to none from code before submitting the bulk insert and then return it to its normal state afterwards.
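
A minimal sketch of that approach with the .NET SDK could look like this (the database and container names are hypothetical, and cosmosClient is assumed to be an existing CosmosClient instance):

using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

static async Task BulkInsertWithoutIndexingAsync(CosmosClient cosmosClient)
{
    // Hypothetical database and container names.
    Container container = cosmosClient.GetContainer("salesdb", "orders");
    ContainerProperties properties = (await container.ReadContainerAsync()).Resource;

    // Disable indexing before submitting the bulk insert.
    properties.IndexingPolicy.IndexingMode = IndexingMode.None;
    properties.IndexingPolicy.Automatic = false;
    await container.ReplaceContainerAsync(properties);

    // ... perform the bulk insert here ...

    // Restore automatic, consistent indexing afterwards.
    properties.IndexingPolicy.IndexingMode = IndexingMode.Consistent;
    properties.IndexingPolicy.Automatic = true;
    await container.ReplaceContainerAsync(properties);
}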

Disabling indexes is discussed in Chapter 5, Developing Solutions That Use Cosmos DB Storage, in the Optimizing database performance and costs section.

Developing solutions that use Azure Blob storage

  1. C is the correct answer.

A static website hosted on an Azure storage account provides the following:

  • Availability in two regions if the GRS option is chosen.
  • Versioning for all files in blob storage, including the pages of the website.
  • The option to register custom domains.
  • Minimal cost in comparison with VMs and App Service.

Websites hosted on a storage account are discussed in Chapter 6, Developing Solutions That Use Azure Blob Storage, in the Static websites section.

  2. D is the correct answer.

A premium storage account does not have access tiers. The Archive tier would be expensive because of the write transactions required to append to the log files. The Hot tier is the more economically sound choice.

The storage account access tiers are discussed in Chapter 6, Developing Solutions That Use Azure Blob Storage, in the Life cycle management and optimizing cost section.

  3. A is the correct answer.

A SAS is generated using one of the account keys. Regenerating the account keys will break all SASs generated with the previous keys. This is an acceptable solution because the application is still in the testing phase and other devices are available for refreshing their SASs. Only SASs generated with a stored access policy can be safely revoked.

SASs for storage accounts are discussed in Chapter 6, Developing Solutions That Use Azure Blob Storage, in the Managing metadata and security settings for storage accounts section.

  4. A is the correct answer.

The code is taken from one of the demo projects for setting metadata. The code works without errors if the container exists. Before performing any operation on the container, the code should ensure that the container exists by calling the CreateIfNotExists method.
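
A minimal sketch of that check with the Azure.Storage.Blobs SDK could look like this (the connection string setting and container name are hypothetical):

using System;
using System.Collections.Generic;
using Azure.Storage.Blobs;

// Hypothetical connection string setting and container name.
string connectionString = Environment.GetEnvironmentVariable("STORAGE_CONNECTION_STRING");
var containerClient = new BlobContainerClient(connectionString, "images");

// Make sure the container exists before doing anything else with it.
await containerClient.CreateIfNotExistsAsync();

// It is now safe to work with the container, for example setting metadata.
var metadata = new Dictionary<string, string> { ["owner"] = "az204-demo" };
await containerClient.SetMetadataAsync(metadata);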

The code example is provided in Chapter 6, Developing Solutions That Use Azure Blob Storage, in the Retrieving metadata by using C# code section.

Implementing user authentication and authorization

  1. C is the correct answer.

For applications to integrate with Azure Active Directory (AAD), an app registration must be created with at least the User.Read delegated permission assigned.

This topic was covered in Chapter 7, Implementing User Authentication and Authorization.

  2. A is the correct answer.

Because this is an app that runs on user devices, it is not trusted with application secrets and can only request access to resources on behalf of a signed-in user – this is a public client, not a confidential client.

This topic was covered in Chapter 7, Implementing User Authentication and Authorization.

  3. A, C, and D are correct.

Use the az storage container policy create command to create a new stored access policy. Then, run the az storage container generate-sas command to generate a new SAS token that uses the new policy. Finally, run the az webapp config appsettings set command to update the relevant application setting of the App Service with the new SAS token.
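
As a sketch of the three commands (the storage account, container, policy, web app, and setting names here are hypothetical):

# Create a stored access policy on the container.
az storage container policy create --account-name mystorageacct -c mycontainer -n myaccesspolicy --permissions r --expiry 2025-12-31T00:00:00Z

# Generate a SAS token that references the stored access policy.
az storage container generate-sas --account-name mystorageacct -n mycontainer --policy-name myaccesspolicy -o tsv

# Update the App Service application setting with the new SAS token.
az webapp config appsettings set -g AZ-204 -n mywebapp --settings ContainerSasToken="<token from the previous command>"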

This topic was covered in Chapter 7, Implementing User Authentication and Authorization, although not each of these CLI commands was explicitly covered. There will be examples in the exam you won’t have seen before where you must use your judgment based on what you do know.

Implementing secure cloud solutions

  1. A and B are correct.

The az webapp identity assign command will assign a system-assigned managed identity to the web app in question, which will share the same life cycle as the web app. The az identity create command will create a user-assigned managed identity, which is a standalone resource and won’t share the same life cycle as the web app and therefore doesn’t meet the requirements.

The az keyvault set-policy command will set an access policy on the relevant Key Vault, so you can provide the system-assigned managed identity with the permissions required to access the data plane of the vault. The az policy assignment create command relates to Azure Policy, not a Key Vault access policy, so it won’t help in this scenario.
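
As a sketch of how the two correct commands could be combined (the resource names here are hypothetical):

# Assign a system-assigned managed identity to the web app and capture its principal ID.
principalId=$(az webapp identity assign -g AZ-204 -n mywebapp --query principalId -o tsv)

# Grant that identity permission to read secrets from the vault's data plane.
az keyvault set-policy -n mykeyvault --object-id $principalId --secret-permissions get list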

This topic was covered in Chapter 8, Implementing Secure Cloud Solutions.

  2. D is the correct answer.

An App Configuration resource allows you to centrally manage both the application configuration settings and feature flags. App Configuration isn’t a replacement for Key Vault for storing secrets, but you can also create a new App Configuration key that pulls the value from Key Vault (although it’s not a requirement in this scenario).

This topic was covered in Chapter 8, Implementing Secure Cloud Solutions.

  3. B is the correct answer.

You can import key-value pairs into App Configuration using the az appconfig kv import command. You can also export them from App Configuration into a JSON file using the az appconfig kv export command.
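
For example (the store name and file paths here are hypothetical):

# Import key-value pairs from a JSON file into App Configuration.
az appconfig kv import -n MyAppConfigStore -s file --path ./settings.json --format json

# Export key-value pairs from App Configuration to a JSON file.
az appconfig kv export -n MyAppConfigStore -d file --path ./settings-export.json --format json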

The az appconfig kv command was covered in Chapter 8, Implementing Secure Cloud Solutions, although the import command wasn’t specifically mentioned.

Integrating caching and content delivery within solutions

  1. A is the correct answer.

The TCP protocol is used for communication between the cache instance and the client.

The details about communication are provided in Chapter 9, Integrating Caching and Content Delivery within Solutions, in the Firewall and virtual network integration section.

  2. E is the correct answer.

From the list of tools, only the Azure CLI cannot be used for observing cached values, although it can be used for managing the cache instance and for importing and exporting data.

The relevant details are provided in Chapter 9, Integrating Caching and Content Delivery within Solutions, in the Provisioning Azure Cache for Redis from the Azure CLI section.

  3. C is the correct answer.

Azure Cache for Redis suits server-side caching of reusable content better than the other services in the list. It also provides the flexibility to set a custom TTL for the cached response. Moreover, Azure Cache for Redis will provide better performance than a storage account because of its TCP-based communication.

The cache-aside pattern and TTL were discussed in Chapter 9, Integrating Caching and Content Delivery within Solutions, in the Introducing caching patterns section.

  4. D is the correct answer.

The update file is static content and should be cached for clients using an appropriate service. Server-side caching (Azure Cache for Redis) does not help the client here. A 50 MB update does not suit general web hosting (App Service) well because any interruption to connectivity will cause downloads to restart. If download speed is a requirement, the file should be located as close as possible to the clients, so CDN point-of-presence servers are the best choice. Downloading from an Azure storage account in another region would be slow and expensive, as outgoing traffic from the data center adds an extra charge to the subscription.

Details about caching static content can be found in Chapter 9, Integrating Caching and Content Delivery within Solutions, in the Exploring Azure Content Delivery Network section.

  5. D is the correct answer.

The CDN endpoint has still cached the old version of the JS file. You need to wait until the cached copy expires or purge the content.
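
For example, the cached file could be purged with a command like this (the resource group, profile, endpoint, and path are hypothetical):

az cdn endpoint purge -g AZ-204 --profile-name mycdnprofile -n myendpoint --content-paths "/scripts/app.js"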

The same scenario with different messages (a web page where you press a button) is demonstrated in Chapter 9, Integrating Caching and Content Delivery within Solutions, in the Configuring a website to leverage the CDN section.

Instrumenting solutions to support monitoring and logging

  1. B is the correct answer.

Application Insights dependency tracking can collect SQL queries and their performance. You can access the collected information from the Application Map and Performance sections of the Application Insights page. Azure SQL Insights can only track query performance across all application requests, and Profiler only helps for functions running in code.

SQL dependency is explained in Chapter 10, Troubleshooting Solutions by Using Metrics and Log Data, in the Application Map section.

  2. E is the correct answer.

The code snippet represents working code from the controller that loads the client list from the database (Entity Framework). The code tracks a custom metric with the elapsed time of the dependency, collected using the Stopwatch class.
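
The snippet from the question isn’t reproduced here, but a minimal sketch of the pattern could look like this (the metric name, the Client entity, and the injected _dbContext and _telemetryClient fields are hypothetical):

using System.Collections.Generic;
using System.Diagnostics;
using System.Threading.Tasks;
using Microsoft.ApplicationInsights;
using Microsoft.EntityFrameworkCore;

public async Task<List<Client>> GetClientsAsync()
{
    // Measure how long the Entity Framework dependency call takes.
    var stopwatch = Stopwatch.StartNew();
    List<Client> clients = await _dbContext.Clients.ToListAsync();
    stopwatch.Stop();

    // Track the elapsed time as a custom metric in Application Insights.
    _telemetryClient.GetMetric("ClientListLoadMilliseconds").TrackValue(stopwatch.ElapsedMilliseconds);

    return clients;
}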

The custom event was introduced in Chapter 10, Troubleshooting Solutions by Using Metrics and Log Data, in the C# – server section.

  3. D is the correct answer.

To collect logs from an Azure VM, you need to connect the VM to the Log Analytics workspace by installing the agent. From the data collection settings, you then need to configure IIS log collection.

The Log Analytics workspace and its structure were introduced in Chapter 10, Troubleshooting Solutions by Using Metrics and Log Data, in the Using KQL for Log Analytics queries section.

  4. D is the correct answer.

Debug snapshots will help you investigate the crash in a less invasive way. You can inspect a snapshot from the Application Insights page or download and open it in Visual Studio. Live debugging with breakpoints can also help investigate the issue, but it freezes the application when a breakpoint is reached, which interrupts request processing.

Debug snapshots were explained in Chapter 10, Troubleshooting Solutions by Using Metrics and Log Data, in the Exception troubleshooting section.

Implementing API Management

  1. B is the correct answer.

The policy provided in the snippet will limit the rate of requests and consists of the following parameters:

<rate-limit-by-key calls="number" renewal-period="seconds" increment-condition="condition" counter-key="key value" remaining-calls-variable-name="policy expression variable name"/>

APIM policies were explained in Chapter 11, Implementing API Management, in the Using advanced policies section.

  2. D is the correct answer.

APIM supports backend authentication with Basic credentials, client certificates, and managed identities. The following policy can be configured for Basic authentication:

<authentication-basic username="username" password="password" />

The following code snippet can be configured for authentication with client certificates:

<authentication-certificate thumbprint="thumbprint" certificate-id="resource name"/>

Windows authentication is not supported by APIM.

The authentication basics were covered in Chapter 11, Implementing API Management, in the Exploring APIM configuration options section.

  3. D is the correct answer.

The backup of usage data is not supported by APIM.

The endpoint definitions that APIM supports importing were introduced in Chapter 11, Implementing API Management, in the Connecting existing web APIs to APIM section.

APIM supports a portal for developers and lets them test published APIs. The portal was explained in Chapter 11, Implementing API Management, in the Dev portal section.

APIM can expose publicly accessible APIs. Authentication settings were introduced in Chapter 11, Implementing API Management, in the Exploring APIM configuration options section.

APIM can generate and mock responses without requesting backend services. The techniques were discussed in Chapter 11, Implementing API Management, in the Mocking API responses section.

APIM supports integration with source control to track changes in the policies and settings. Integration was introduced in Chapter 11, Implementing API Management, in the Repository integration section.

Developing event-based solutions

  1. C is the correct answer.

To minimize cost and admin effort, you should leverage out-of-the-box functionality such as event capturing. Event capturing is only available with the Standard and Premium tiers. An additional service, Azure Data Lake, should be deployed for storing the captured events. The charges for Azure Data Lake are less than the per-event consumption charges from Event Grid plus the cost of storing their content in Cosmos DB. Data Lake is a consumption-based service deployed as an extension of an Azure storage account. Capturing event content is not possible with Azure Monitor, Application Insights, or an Azure Log Analytics workspace.

The capturing feature was introduced in Chapter 12, Developing Event-Based Solutions, in the Capturing events section.

  2. D is the correct answer.

There is no out-of-the-box functionality that meets the requirements. The IoT Hub should be upgraded to the Standard SKU to support dynamic settings management (device twin settings). The response time will be reduced if the IoT device starts cooling as soon as the temperature hits the threshold.

IoT Edge would suit the requirements but would also require you to upgrade the IoT Hub and operate Docker containers, which increases the maintenance effort. The same is true for rebuilding containers.

B is a workable option but requires you to build expensive extra services.

The twin settings were mentioned in Chapter 12, Developing Event-Based Solutions, in the Developing applications for Azure IoT Hub section.

  3. C is the correct answer.

The az eventhubs eventhub authorization-rule create command should be used with, at most, the Send permission to follow the principle of least privilege.
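
For example (the resource names here are hypothetical):

az eventhubs eventhub authorization-rule create -g AZ-204 --namespace-name myehnamespace --eventhub-name myhub -n SendOnlyRule --rights Send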

The connection string for Event Hubs is discussed in Chapter 12, Developing Event-Based Solutions, in the Provisioning Azure Event Hubs section.

Developing message-based solutions

  1. A is the correct answer.

The correct name of the metric for a storage queue is the approximate message count, because an exact message count is not available.

The approximate message count metric was discussed in Chapter 13, Developing Message-Based Solutions, in the Exploring Azure Storage Queue section.

  2. D is the correct answer.

An Azure Service Bus queue guarantees the sequence of messages. The SDK automatically implements a retry pattern when the application experiences transient connection errors.

Retrying attempts were mentioned in Chapter 13, Developing Message-Based Solutions, in the Competing consumers section.

  3. C is the correct answer.

ServiceBusProcessor provides a callback-based architecture that allows received messages to be processed in real time by registering a handler on the ProcessMessageAsync event.
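
A minimal sketch of that pattern with the Azure.Messaging.ServiceBus SDK could look like this (the connection string setting, topic, and subscription names are hypothetical):

using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

// Hypothetical connection string setting, topic, and subscription names.
string connectionString = Environment.GetEnvironmentVariable("SERVICEBUS_CONNECTION_STRING");
await using var client = new ServiceBusClient(connectionString);
ServiceBusProcessor processor = client.CreateProcessor("orders-topic", "audit-subscription");

// Handler invoked for each received message.
processor.ProcessMessageAsync += async args =>
{
    Console.WriteLine($"Received: {args.Message.Body}");
    await args.CompleteMessageAsync(args.Message);
};

// Handler invoked when message processing fails.
processor.ProcessErrorAsync += args =>
{
    Console.WriteLine(args.Exception.Message);
    return Task.CompletedTask;
};

await processor.StartProcessingAsync();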

The ServiceBusProcessor class was demonstrated in Chapter 13, Developing Message-Based Solutions, in the Developing for Service Bus topics section.
