© The Author(s), under exclusive license to APress Media, LLC, part of Springer Nature 2022
P. Udayakumar, Design and Deploy Azure VMware Solution, https://doi.org/10.1007/978-1-4842-8312-7_3

3. Design Essentials of AVS

Puthiyavan Udayakumar1  
(1)
Abu Dhabi, United Arab Emirates
 

Azure VMware Solution is a first-party Microsoft Azure service that provides single-tenant, vSphere-based private clouds on Azure. It is built on four VMware technologies: vSphere, NSX-T, vSAN, and HCX. With Azure VMware Solution running in Azure datacenters, VMware environments can continue to run as they do on-premises. An environment can be provisioned in a few hours, and VM resources can then be migrated. Azure VMware Solution provides management, networking, and storage services.

Organizations that are migrating can choose among the options Azure VMware Solution provides. Moving VMware resources to a dedicated Azure cloud environment reduces complexity, minimizes the impact on business continuity, and accelerates the migration process. Businesses can adopt cloud technology at their own pace with Azure VMware Solution, and as their business evolves, cloud services can be added incrementally.

By the end of this chapter, you should understand the following:
  • Azure’s well-architected framework

  • The AVS Solution building block

  • AVS network topology and connectivity

  • AVS identity and access management

  • AVS security, governance, and compliance

  • AVS management and monitoring

  • AVS business continuity and disaster recovery

  • AVS platform automation

Azure’s Well-Architected Framework

Cloud computing has revolutionized the way businesses solve their business challenges and how workloads and security are designed. A solution architect is not solely responsible for delivering business value through the application’s functional requirements. The design must ensure that the solution is scalable, resilient, efficient, and secure.

A well-architected AVS solution provides the services, availability, security, flexibility, recoverability, and performance that AVS cloud consumers require. The following key design principles can be followed in your AVS design.

The architecture must be able to create IT adoption, implementation, and design frameworks that support delivering business processes. The decision frameworks that the architecture develops should be followed when planning, designing, implementing, and improving a technology system. A system architecture balances and aligns business requirements with the technical capabilities needed to implement those requirements. The system and its components are designed to balance risk, cost, and capability.

Developing high-quality solutions on Azure is made more accessible by Azure’s well-architected framework. When it comes to designing architecture, there is no one-size-fits-all solution. However, some universal concepts are applicable regardless of the cloud provider, the architecture, or the technology.

These concepts are not all-inclusive, but focusing on them will help AVS solution architects build a solid, reliable, and flexible foundation.

These design characteristics enable AVS architects and engineers to provide an AVS solution with the reliability, availability, flexibility, recoverability, and performance the solution requires. Figure 3-1 depicts the key design principles followed at each level of an AVS solution.

A diagram depicts the architectural framework of the Azure VMware Solution. The roof is a triangle labeled Azure VMware Solution. It has 7 pillars labeled Multitenancy, High Availability, Run Efficiently, Security, Reliability, Performance Effectiveness, and Cost Optimization. The base is a thin rectangle labeled Azure Well-Architected Framework.

Figure 3-1

Key design principles for simplicity

Multitenancy: The design allows isolation of resources and networks to deliver applications with quality. It includes
  • Complete isolation of computing, storage, and network resources, managed through dedicated clusters and resource pools. Logical isolation is obtained at the virtual machine or container level, using a distributed cluster or shared resource pool.

  • Application and data segregation, which is required in a multi-tenant environment to ensure that one tenant cannot access another tenant's data. Achieving this segregation requires deploying modern protections to isolate data.

High availability: The design should avoid any single point of failure across the design.
  • An IT function is highly available when it can withstand the failure of an individual element or of multiple elements. Deploying automated recovery and minimizing disruption at every layer of the IT function's architecture is key.

  • In the design, the AVS administrator/architect needs to introduce redundancy by having multiple resources for the same task. Redundancy can be deployed in standby mode or active mode.

  • The design philosophy is that when one resource fails, the rest can absorb a larger share of the IT workload.

  • It offers automated detection of and response to events as much as possible via an SDDC stack, physical storage, and the physical network.

  • It has durable data storage to protect data availability and integrity. Redundant copies of data are implemented via synchronous, asynchronous, or quorum-based replication, depending on data requirements.
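
The payoff of redundancy can be quantified. As a rough sketch (assuming independent failures, which real components only approximate), the availability of N redundant copies and the downtime implied by an availability figure can be computed as follows:

```python
def downtime_hours_per_year(availability: float) -> float:
    """Convert an availability fraction (e.g., 0.99) to hours of downtime per year."""
    return (1.0 - availability) * 365 * 24

def parallel_availability(component_availability: float, copies: int) -> float:
    """Availability of N independent redundant copies: the IT function is down
    only when every copy has failed at the same time."""
    return 1.0 - (1.0 - component_availability) ** copies

# A single 99% component implies roughly 87.6 hours of downtime per year;
# adding one redundant copy lifts availability to about 99.99%.
single_node_downtime = downtime_hours_per_year(0.99)
redundant_pair = parallel_availability(0.99, 2)
```

This is why the design philosophy above favors several smaller resources over one large one: each extra copy multiplies the unavailability by the single-copy failure probability.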

Run efficiently: The design should focus on running and monitoring systems to deliver business value and on continually improving the supporting processes and procedures.

Running efficiently covers the operations and processes needed to keep workloads in production. Deployments must be reliable, predictable, and automated to reduce human error. Routine processes must not slow down the deployment of new features and bug fixes. Cloud users must be able to roll back or roll forward quickly if an update has problems. This means the following:
  • AVS administrators need a shared understanding of the SRE workload, their role in it, and the business goals involved so they can set the priorities that will enable business success. Clear priorities will increase the benefits of SRE efforts.

  • AVS administrators need to determine internal and external end users' needs, involving key decision-makers including the business, DevOps, DevSecOps, and AVS teams, to decide where to concentrate effort. Evaluating end users' needs ensures a clear understanding of the support required to achieve business outcomes.

  • AVS administrators need to assess the implications to the business and manage this information in a risk register.

  • AVS administrators need to evaluate risks and tradeoffs among competing interests or alternative approaches.

  • AVS administrators need to promote innovation to accelerate learning and keep the AVS team engaged.

  • AVS administrators need to adopt approaches that move feature enhancements into production smoothly, enabling rehosting, refactoring, agile feedback on quality, and bug fixes.

Security: The design must focus on protecting client information and systems. Security is the ability to protect the information, systems, and assets while delivering business value through risk assessments and mitigation strategies.

Security must be designed in throughout the entire lifecycle, from design and implementation to deployment and operations. The Azure platform protects against various threats, such as network intrusion and DDoS attacks. But cloud consumers must still build security into the AVS infrastructure, data, and applications. This involves the following:
  • Confidentiality, integrity, and availability of apps, data, and infrastructure

  • Protecting identities using roles (for example, IAM roles) and implementing fine-grained authorization

  • Restricting traffic using ports/IP ranges (such as security groups) and separating internal and external traffic using CIDR ranges

  • Detecting anomalous trends using network flow log analysis, such as VPC flow log analysis with Cloud Workload Protection

  • Safeguarding endpoints using anti-malware/antivirus agents, such as an endpoint protection agent or a Cloud Workload Protection agent

  • Centralized log analysis to detect anomalies or DDoS attacks by examining traffic trends or unusual sources of traffic

  • An IDS/IPS approach in which all traffic is forwarded through the gateway to the IDS/IPS, which then blocks or allows it

  • Protecting data in transit and at rest, for example by deploying SSL/TLS and encrypting disks at the storage layer

  • An orderly process for handling any security incident, with the information needed to detect such incidents and mount a suitable response
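
As a toy illustration of the centralized log analysis bullet, the sketch below flags source IPs whose request volume dwarfs the typical per-source volume. The `src_ip` field name and the threshold are illustrative choices, not part of any specific product:

```python
from collections import Counter
from statistics import median

def flag_anomalous_sources(flow_log, threshold_ratio=10.0):
    """Return source IPs whose request count exceeds threshold_ratio times the
    median per-source count -- a crude stand-in for DDoS trend detection."""
    counts = Counter(entry["src_ip"] for entry in flow_log)
    typical = median(counts.values())
    return sorted(ip for ip, n in counts.items() if n > threshold_ratio * typical)

# Two ordinary sources plus one source sending 400 requests.
log = (
    [{"src_ip": "10.0.0.1"}] * 5
    + [{"src_ip": "10.0.0.2"}] * 4
    + [{"src_ip": "203.0.113.9"}] * 400
)
suspects = flag_anomalous_sources(log)
```

A real deployment would run this kind of aggregation continuously over flow logs shipped to a central store, with thresholds tuned to the observed baseline.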

Reliability: The design should focus on preventing failures and recovering from them quickly to meet business and client demand. A reliable system can recover from disruptions to an IT function, dynamically acquire computing resources to meet demand, and mitigate interruptions such as misconfigurations or transient network issues.

A resilient system can recover from failures and continue to function. When a loss occurs, resiliency aims to restore the application to its previous state. Availability refers to whether cloud consumers can access their workload at any time. Reliability includes
  • Design for availability, which is measured as a percentage of uptime and describes the proportion of time an IT function works as expected

  • Queue-based load leveling, a well-known design pattern that uses a queue as a buffer between a task and the service it invokes in order to smooth out increasing workloads

  • Availability zones to defend against data center failures and provide heightened high availability to end users

  • The circuit breaker pattern, which handles faults that can take a variable amount of time to recover from when connecting to a remote IT function

  • An efficient mechanism for detecting failures and recovering quickly, which is essential to maintaining resiliency

  • Leader election, a well-known design pattern that coordinates the actions performed by a group of collaborating task instances in a distributed application by electing one instance as the leader, which assumes responsibility for managing the other instances
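
To make the circuit breaker bullet concrete, here is a minimal sketch (not a production implementation) of the pattern: after a run of consecutive failures, the breaker opens and rejects calls immediately until a reset timeout elapses, protecting callers from waiting on a remote IT function that is known to be down:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: after max_failures consecutive errors the
    circuit opens, and calls fail fast until reset_timeout seconds elapse."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call through
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # any success closes the circuit again
        return result
```

Wrapping remote calls this way keeps a failing dependency from tying up threads and lets the system probe for recovery at a controlled rate.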

Performance effectiveness: The design should focus on using computing resources efficiently. The capability to use computing resources efficiently meets system requirements and maintains that efficiency as demand changes and technologies evolve.

The performance efficiency for cloud consumers is their ability to scale their workload to meet their users’ demands efficiently. Scaling appropriately and deploying PaaS offerings with scaling capabilities are the two main ways to improve performance and efficiency. This involves
  • Provisioning sufficient computing resources based on capacity requirements

  • Deploying solutions in a hybrid cloud in multiple availability zones

  • Adopting serverless architectures to remove the need to manage and maintain IT resources

  • Carrying out repeatable testing with automated tooling across a wide variety of computing resources

  • Integrating functional and non-functional tests into the agile development process to catch bottlenecks before they become an issue in production

  • Integrating a content delivery network to distribute cached files across servers worldwide and improve performance for international end users

  • Deploying server-side caching to reduce the number of calls hitting the database and dramatically speed up search queries

  • Using prefetching techniques to predict which actions users are about to take and begin loading data before they initiate the event
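
The server-side caching bullet can be sketched with a small TTL (time-to-live) memoization decorator; the 300-second TTL and the `search` function are illustrative stand-ins for a real database query:

```python
import time

def ttl_cache(ttl_seconds):
    """Cache a function's results for ttl_seconds, cutting repeat calls
    (e.g., identical search queries) that would otherwise hit the database."""
    def decorator(func):
        store = {}
        def wrapper(*args):
            now = time.monotonic()
            if args in store:
                value, stored_at = store[args]
                if now - stored_at < ttl_seconds:
                    return value  # fresh enough: skip the expensive call
            value = func(*args)
            store[args] = (value, now)
            return value
        return wrapper
    return decorator

calls = {"count": 0}

@ttl_cache(ttl_seconds=300)
def search(term):
    calls["count"] += 1        # stands in for an expensive database query
    return f"results for {term}"
```

Repeated calls with the same term within the TTL return the cached value, so the backing store is touched once per term per window.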

Cost optimization: The design focuses on avoiding unnecessary costs. It's about the capacity to run IT functions that deliver business value at the lowest price point. It involves
  • Knowing and measuring where the money is being spent

  • Selecting the most appropriate type and amount of resources

  • Analyzing spend over time

  • Scaling to meet business demand without overspending

  • Benefiting from economies of scale

  • Analyzing expenditure by attribute

  • Evaluating managed services to lower the cost of ownership
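
A back-of-the-envelope sketch of "knowing where the money is being spent": the hourly rate and reserved-capacity discount below are illustrative placeholders, not published Azure VMware Solution prices:

```python
def monthly_node_cost(node_count, hourly_rate, reserved_discount=0.0, hours_per_month=730):
    """Estimate monthly spend for a node pool; reserved_discount models the
    saving from a reserved-capacity commitment (e.g., 0.3 for 30% off)."""
    return node_count * hourly_rate * hours_per_month * (1.0 - reserved_discount)

# Hypothetical figures for a three-node private cloud.
pay_as_you_go = monthly_node_cost(3, hourly_rate=9.0)
reserved = monthly_node_cost(3, hourly_rate=9.0, reserved_discount=0.3)
monthly_saving = pay_as_you_go - reserved
```

Even this crude model makes the economies-of-scale bullet tangible: a commitment discount applies to every node-hour, so the absolute saving grows with the cluster.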

Solution Building Block for Azure VMware Solutions

The solution building block for Azure VMware Solution provides a reference framework for the Azure VMware Solution. It consists of three foundational elements: sizing considerations for Azure VMware Solution, the Azure landing zone for Azure VMware Solution, and Azure VMware Solution networking. Figure 3-2 depicts the solution building block for AVS.

A diagram of a Solution Building Block of A V S depicts the sizing consideration, azure landing zone, and networking.

Figure 3-2

Solution Building Block of AVS

Let's take a deep dive into each building block.

Sizing Consideration for Azure VMware Solution

Azure VMware Solution is a VMware offering running on Azure infrastructure. By using Azure VMware Solution, on-premises VMware environments can be extended to the cloud, or an on-premises VMware environment can be migrated to Azure VMware Solution entirely. Azure VMware Solution can be connected to an on-premises environment via various options.

Cloud consumers need to understand how the capacity of the overall cloud solution will assist them in making both technical and business decisions. This section focuses on critical design considerations in capacity planning. The following areas demand focus during sizing.
  • Assess the existing VMware environment: On-premises VMware environments typically grow organically over time. Tools such as RVTools and DICE help customers determine how large their environment has actually become. Assess the situation objectively to eliminate guesswork from the decision-making process.

  • Determine the relationships between application components: Customers may need Azure VMware Solution for only some workloads. When customers plan for a subset of workloads, they must ensure that all dependencies are considered.

  • Account for configuration differences: A VMware on-premises environment may have different configuration requirements, including a different set of software requirements, than the Azure VMware Solution environment. To make the right decisions ahead of time, cloud consumers should verify whether Azure VMware Solution can meet those requirements.

  • Understanding monthly and annual costs: Customers want to know how much they will pay annually. A capacity planning exercise can help provide them with potential costs.

Figure 3-3 depicts an overview of the planning and sizing approach for Azure VMware Solution.

A diagram displays the sizing approach of V Mware, which consist of 2 boxes connected by double arrows. The left box consists of V M ware V sphere, V M ware V san, V M ware N S X and V M ware H C X. Below is a box labeled Physical, Rack Servers, and Storage and Network. The right box consists of 4 elements, Discovery, Grouping, Assess, and Reporting.

Figure 3-3

Planning and sizing approach

The following are the critical phases of Azure VMware Solution sizing:
  • Discovery: This phase collects an inventory from an on-premises VMware installation.

  • Grouping: Cloud consumers use this phase for grouping logically related VMs (like an app and a database).

  • Assessment: During this phase, groups of virtual machines are assessed for suitability and potential remediation.

  • Reporting: In this phase, the assessment score is consolidated with the estimates of costs.

Discovery: Azure Migrate can be used in two ways. In the first mode, Azure Migrate generates an OVA (Open Virtualization Appliance) template, from which an Azure Migrate VM can be bootstrapped on the on-premises VMware site. Once configured, the Azure Migrate instance sends inventory data from on-premises to Azure. In the second mode, on-premises inventory data is uploaded using a CSV file in a predefined format. The CSV file expects four mandatory fields: the name of the VM/server, the number of cores, the memory, and the operating system. Optional fields such as disk count, disk IOPS, and throughput can improve sizing accuracy. The CSV file can be generated using VMware tools such as RVTools.
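
The CSV path can be sanity-checked with a few lines of Python. The column names below are illustrative (the real import template defines its own headers); the sketch simply enforces the four mandatory fields described above:

```python
import csv
import io

# Illustrative column names standing in for the template's mandatory fields.
MANDATORY_FIELDS = ["server_name", "cores", "memory_mb", "os_name"]

def validate_inventory_csv(text):
    """Parse an inventory CSV and fail loudly if any mandatory column is missing."""
    reader = csv.DictReader(io.StringIO(text))
    missing = [f for f in MANDATORY_FIELDS if f not in (reader.fieldnames or [])]
    if missing:
        raise ValueError(f"missing mandatory columns: {missing}")
    return list(reader)

sample = """server_name,cores,memory_mb,os_name,disk_count
hr-app-01,4,16384,Windows Server 2019,2
hr-db-01,8,32768,RHEL 8,4
"""
rows = validate_inventory_csv(sample)
```

Validating exports like this before upload avoids a failed import after the fact.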

Grouping: Once the VMware inventory details have been gathered, they can be grouped. Grouping discovered VMs lets cloud consumers organize and manage them quickly. Possible groupings include workloads (HR, eCommerce, and so on), licenses (RedHat, SQL, SAP, Oracle, etc.), environments (production vs. non-production), locations (US, EU, and so on), and criticality (mission-critical, small-scale, etc.). Using Azure Migrate, VMware environments can be analyzed for dependencies, and VMs can also be grouped based on the information obtained via dependency analysis.
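
Grouping logically related VMs is ordinary dictionary work; the sketch below groups a hypothetical discovered inventory by any chosen attribute:

```python
from collections import defaultdict

def group_vms(inventory, key):
    """Group discovered VMs by a chosen attribute (workload, environment,
    location, criticality, ...)."""
    groups = defaultdict(list)
    for vm in inventory:
        groups[vm[key]].append(vm["name"])
    return dict(groups)

# Made-up inventory entries for illustration.
inventory = [
    {"name": "hr-app-01", "workload": "HR", "environment": "production"},
    {"name": "hr-db-01", "workload": "HR", "environment": "production"},
    {"name": "shop-web-01", "workload": "eCommerce", "environment": "non-production"},
]
by_workload = group_vms(inventory, "workload")
```

The same helper regrouped by "environment" yields the production vs. non-production split the text mentions.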

Assessment: Grouped VMs can then be assessed. An assessment can be configured with parameters that help determine the appropriate size and capacity. The parameters can include details about the target Azure VMware Solution site, such as location and node type, as well as Azure VMware Solution-specific settings such as FTT (failures to tolerate), RAID level, and CPU oversubscription. An assessment can be performed in one of two ways.

The first is a performance-based assessment of on-premises VMware VMs based on their performance profiles. Performance history can be selected as far back as one month for capturing a performance profile. An assessment can be fine-tuned even further by selecting a specific percentile within an evaluation (such as 50th, 90th, 99th, etc.). The capacity margin can be increased by multiplying the capacity with a comfort factor.

The second is an as-on-premises assessment. This criterion uses the existing specifications of the VMs (CPU, memory, etc.) for its evaluation. The capacity can be increased if necessary.
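
The performance-based sizing arithmetic described above (a percentile of the observed profile multiplied by a comfort factor) looks like this; the 95th percentile and the 1.3 comfort factor are illustrative choices:

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile of a performance history."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

def required_capacity(cpu_samples, pct=95, comfort_factor=1.3):
    """Performance-based sizing: take a percentile of the observed CPU
    profile, then multiply by a comfort factor for extra headroom."""
    return percentile(cpu_samples, pct) * comfort_factor
```

Choosing a high percentile ignores brief idle troughs, while the comfort factor leaves margin for growth beyond the captured history.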

Reporting: Reporting provides final results after an assessment is complete, including cost and readiness. There is a summary of the number of VMware virtual machines assessed, the average estimated cost per VM, and the total estimated costs for all VMs.

The reporting also contains a clear breakdown of Azure VMware Solution readiness across multiple readiness states (Ready, Not Ready, Ready with Conditions, etc.). There are specific reasons why some VMs may require remediation before migration.

This makes managing and orchestrating the migration plan straightforward. The reporting also provides the number of Azure VMware Solution nodes required, along with the anticipated CPU, memory, and storage utilization.
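
Consolidating an assessment into the report figures above is a simple roll-up; the VM names, readiness labels, and costs below are made up for illustration:

```python
from collections import Counter

def summarize_assessment(assessed_vms):
    """Roll assessed VMs up into the report figures the text lists:
    VM count, readiness breakdown, and total and average estimated cost."""
    total_cost = sum(vm["monthly_cost"] for vm in assessed_vms)
    return {
        "vm_count": len(assessed_vms),
        "readiness": dict(Counter(vm["readiness"] for vm in assessed_vms)),
        "total_monthly_cost": total_cost,
        "average_cost_per_vm": total_cost / len(assessed_vms),
    }

report = summarize_assessment([
    {"name": "hr-app-01", "readiness": "Ready", "monthly_cost": 120.0},
    {"name": "hr-db-01", "readiness": "Ready", "monthly_cost": 300.0},
    {"name": "legacy-01", "readiness": "Ready with Conditions", "monthly_cost": 180.0},
])
```

The readiness breakdown immediately surfaces which VMs need remediation before migration.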

Azure Landing Zone for Azure VMware Solution

In this section, you’ll explore Azure landing zones. Cloud computing is the basis and backbone of digital transformation in all its forms. Landing zones are the foundation for a successful shift to the cloud.

Any cloud project can be compared to laying a foundation. In designing and constructing any building, architects must make many common decisions. Foundations have many things in common, like concrete, rebar, and conduits for bringing in utilities such as plumbing or electricity. But other factors can make them unique and wildly different. Unlike a house, a stadium has a larger and more complex foundation. A bridge's foundation may require stricter governance and performance standards. A solid foundation begins with an understanding of what it will support.

Landing zones serve as a fundamental component of Microsoft cloud adoption strategies. A landing zone builds an Azure environment that accounts for and underpins scalability, security, governance, networking, and identity. Azure landing zones allow enterprises to migrate applications, modernize, and innovate.

Cloud landing zones ensure that cloud consumers' target environments are well designed, deployed, secured, and governed; that they achieve consistency across domains; and that they fulfill demanding, dynamic business requirements in an agile manner, letting cloud consumers focus on staying compliant and controlling costs so they can invest where it matters.

A Microsoft Azure landing zone is the result of a multi-subscription Azure environment that addresses scale, security, governance, networking, and identity. With Azure landing zones, enterprises can migrate applications, modernize their infrastructure, and innovate. These zones consider all platform resources necessary to support the customer's application portfolio, without distinguishing between IaaS and PaaS.

One solution does not fit all technical environments. Cloud consumers have several options for implementing Azure landing zones to meet their deployment and operational needs as their cloud portfolios grow.

Scalable: Regardless of which workloads or resources are deployed to each landing zone, all landing zones provide repeatable environments with consistent configurations and controls.

Modular: All Azure landing zones offer a modular approach to creating cloud consumer environments based on a standard set of design areas. Azure SQL Database, Azure Kubernetes Service, and Azure Virtual Desktop are just a few examples of how each design area can support the distinct requirements of individual technology platforms.

Figure 3-4 depicts the cloud architecture portfolios.

A block diagram depicts cloud architecture portfolio flows to scalable and modular.

Figure 3-4

Cloud architecture portfolios

A modular architecture is designed for enterprise-scale deployments. By leveraging it, cloud consumers can start with a foundational landing zone control plane that supports their application portfolios, whether they are migrating applications or developing and deploying new applications to Azure. No matter what scale point cloud consumers are at, the architecture can scale alongside them.

The enterprise-scale architecture represents the design paths and target states of the Azure environments. Cloud consumer organizations must map their Azure journey by making various design decisions as the Azure platform evolves.

Not all enterprises adopt Azure in the same way, and customer-specific Cloud Adoption Framework for Azure enterprise-scale landing zones exist. Depending on the cloud consumer organization's situation, the technical considerations and design recommendations for enterprise-scale architecture may require trade-offs. As long as cloud consumers follow the core recommendations, they will be on a path to effectively scale to their organization's demands.

Landing zones are coupled to Microsoft’s Cloud Adoption Framework, which helps organizations make the proper governance, strategy, and security decisions when migrating to Azure. Figure 3-5 depicts the environment design elements.

A block diagram of cloud environment design elements depicts Resource Organization, Azure billing and Active Directory tenant, Network Connectivity, and Identity and Access Management.

Figure 3-5

Cloud environment design elements

  • Resource organization: The method by which cloud users organize their resources to allow for business growth, considering needs around management groups, subscriptions, business areas, and different teams.
    • Cloud subscription: The method by which cloud consumers adopt public clouds like Azure, considering they need to create three different landing zones.

  • Azure billing offers and Active Directory tenants: Azure billing and Azure Active Directory (Azure AD) are the two highest alignment levels across all cloud Azure deployments within this critical area.
    • Multi-tenancy: Ensures tagging policies are enforced across multiple cloud tenants and provides standardized tenants for different security profiles (dev/staging/prod).

  • Network connectivity: Network implementation provides high availability, resiliency, and scalability. By combining networking patterns with external data centers, cloud consumers can create hybrid systems and multi-cloud adoption models. Before designing network connectivity, consider the following questions: What will the topology of the network be? Where will the resources be located?

  • Identity and access management: The foundation of any fully compliant cloud architecture is IAM, the primary security boundary. Roles and access controls are defined to implement the principle of least privilege. All hosted production applications and cloud consoles are integrated with MFA SSO platforms.

Figure 3-6 depicts the compliance design elements.

A block diagram of compliance design elements depicts Security and Compliance, Management, Governance and Operations, and Platform Automation and Dev Ops.

Figure 3-6

Cloud compliance design elements

  • Security and compliance: Cloud users can enforce global and account-level security controls, both preventive and detective, by using landing zones. Landing zones can also be used to implement data residency and compliance policies across the enterprise.

  • Management: A foundation is established here for managing operations across Azure, hybrid, or multi-cloud environments.
    • Business continuity and disaster recovery: The smooth functioning of applications depends on resilience, and BCDR is a critical element of it. By using BCDR, cloud consumers can protect their data via backups and recover their applications in the event of an outage.

  • Governance and operation: What are the cloud consumers’ plans for managing, monitoring, and optimizing their environment? How will cloud consumers maintain visibility within the environment and ensure it operates as required?

  • Platform automation and DevOps: Consumers of cloud services can increase productivity, scalability, and reliability by using automation. Cloud landing zones automate CI/CD pipelines based on Terraform, ARM, or CloudFormation templates to deploy multi-account subscription structures in minutes.

The Azure VMware Solution can be deployed within a new or existing landing zone environment. By promoting a segregated Azure environment and shared services, landing zones help consumers avoid operational overhead and reduce costs.

In a landing zone environment, running the Azure VMware Solution addresses the following use cases:
  • Using an existing Azure tenancy infrastructure: VMware solutions can be integrated into existing Azure tenants. Customers can handle billing and accounting through their existing ownership chain.

  • Using existing shared landing zones: Customers can reuse their existing shared landing zones, which run services such as network connectivity, monitoring, and so on, with Azure VMware Solution environments. In addition to reducing costs, reuse increases operational efficiency.

  • Separation of governance rules: Different governance requirements may exist in dev/test and production environments. It is possible to provide the desired level of control for VMware Azure Solution environments by setting up separate landing zones.

Azure’s enterprise-scale landing zones provide prescriptive deployment guidance for configuring Azure platform components (like identities, networks, and management) and application and workload components, such as Azure VMware Solution. It is easy to manage and scale Azure VMware Solution workloads due to a well-defined correlation between Azure platform components and Azure VMware Solution. Figure 3-7 depicts a landing zone overview and the Azure VMware Solution.

An illustration of zone components necessary for Azure V Mware. Below are those components labeled as Enterprise enrollment, Identity and access management, Management group and subscription management, Management subscription, Connectivity subscription, Azure V M ware Solution subscription, and Azure V M ware Solution sandbox plot subscription.

Figure 3-7

Azure Landing Zone for AVS

The Azure enterprise-scale landing zone components for Azure VMware Solution deployment are discussed below.

Enterprise enrollment: Microsoft Azure VMware Solution subscriptions are enabled through enterprise enrollment's hierarchical structure, which can reflect hierarchies within the organization (geographical, divisional, functional, and so on). Accounts that hold Azure VMware Solution subscriptions can be set up with a cost budget and associated alerts, and customers can also specify who owns the subscriptions. Figure 3-8 depicts Azure landing zone enterprise enrollment.

A block diagram depicts the steps of Microsoft Azure VMware Solution subscription enterprise enrollment: Enrollment, Department, Accounts, and Subscription, then connected to Azure Active Directory.

Figure 3-8

Azure landing zone - enterprise enrollment

As per Microsoft’s landing zone design best practices, enterprise enrollment is one of the two highest levels of alignment across all cloud consumer Azure deployments, offering Azure billing to consumers and connecting that offer to their Azure AD tenant.

This design area should evaluate design options to determine which Azure Active Directory tenant association is best suited for the cloud consumer's overall environment.

Identity and access management: Multiple operations are available through the Azure VMware Solution Resource Provider (RP). Partners and customers want access to these operations controlled by roles. Cloud consumers can create such parts via identity and access management. Furthermore, these roles can be configured with additional functions such as just-in-time (JIT) access and access reviews. As part of identity and access management, Azure AD Domain Services (AAD DS) or Active Directory Domain Services (AD DS) can be configured for workloads requiring Windows authentication. Figure 3-9 depicts the Azure landing zone identity and access management.

A block diagram of Identity and access management depicts Access Reviews, M F A, Azure Domain Service conditional-access, P I M roles, Role Based Access Control, and Audit Reports. The entire functions are connected via double headed arrow to Azure Active Directory.

Figure 3-9

Azure landing zone- identity and access management

As per Microsoft's landing zone design best practices, identity and access management is the next highest level of alignment across all cloud consumers' Azure deployments. This design area sets a foundation for managing identity and access.

This design area should evaluate design options for cloud consumers’ identities and access foundations. The following are minimum requirements for cloud consumers to consider when synchronizing identities with Azure Active Directory:
  • User authentication

  • Granting access to resources

  • Determining any separation of duties requirements

Azure VMware Solution’s identity requirements vary depending on its implementation in Azure.

As soon as cloud consumers deploy Azure VMware Solution, the new environment’s vCenter contains a local user called CloudAdmin. The user CloudAdmin has several permissions in vCenter. Using the principle of least privilege, cloud consumers can also create custom roles in the cloud consumer Azure VMware Solution environment. Here are Microsoft’s recommendations for AVS design:
  • Deploy an Active Directory Domain Services (AD DS) domain controller as part of the enterprise-scale landing zone for identity and access management in the identity subscription.

  • Limit the number of users who can be assigned the CloudAdmin role in each subscription. For Azure VMware Solution users, use custom roles and least privilege.

  • Ensure that Azure VMware Solution role-based access control (RBAC) permissions are limited to users who need to manage Azure VMware Solution and the resource group where it’s deployed.

  • Configure vSphere permissions only at the hierarchy level where they are needed. Apply permissions to VM folders or resource pools at the appropriate level; do not apply them at the datacenter level or higher.

  • Update Active Directory Sites and Services so that AD DS traffic from Azure and Azure VMware Solution is directed to the appropriate domain controllers.

  • vCenter and NSX-T can be managed using Active Directory groups and RBAC. Custom roles can be created and assigned to Active Directory groups by cloud consumers.
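The least-privilege guidance above can be sketched concretely as a custom Azure RBAC role definition. The following Python snippet builds the JSON payload for such a role; the role name, action strings, and subscription ID are illustrative assumptions (verify the `Microsoft.AVS` operation names against the current resource provider operations list before use):

```python
import json

def build_custom_role(name, description, actions, subscription_id):
    """Build an Azure custom-role definition payload. Least privilege:
    grant only the listed actions, scoped to a single subscription."""
    return {
        "Name": name,
        "IsCustom": True,
        "Description": description,
        "Actions": actions,
        "NotActions": [],
        "AssignableScopes": [f"/subscriptions/{subscription_id}"],
    }

# Hypothetical operator role limited to reading AVS private clouds and clusters.
role = build_custom_role(
    "AVS Operator (example)",
    "Read-only access to Azure VMware Solution private clouds.",
    [
        "Microsoft.AVS/privateClouds/read",            # assumed action string
        "Microsoft.AVS/privateClouds/clusters/read",   # assumed action string
    ],
    "00000000-0000-0000-0000-000000000000",
)
print(json.dumps(role, indent=2))
```

A payload like this could then be registered with `az role definition create` and assigned to an Active Directory group rather than to individual users, matching the recommendation above.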

Management group and subscription management: Azure VMware Solution resources are deployed into an Azure subscription, which is in turn placed under a management group. The Azure enterprise-scale landing zone defines operational governance requirements and applies them to management groups. A management group allows Azure VMware Solution subscriptions to be subjected to Azure Policy for any operational governance requirement; an example during Azure VMware Solution migration is preventing the deployment of VPN connectivity. By distributing Azure VMware Solution workloads across multiple subscriptions, customers and partners can avoid the limits associated with a single Azure VMware Solution subscription. Figure 3-10 depicts Azure landing zone management group and subscription management.

A block diagram of management group and subscription management showing three levels (tenant root group, management group, and subscription) connected to Azure Active Directory.

Figure 3-10

Azure landing zone - management group and subscription management

As per Microsoft’s landing zone design best practices, management group and subscription management is the next component of alignment across all cloud consumers’ Azure deployments. The design area sets a foundation for organizing resources in the cloud according to consistent design patterns.

This design area should base all compliance-related design decisions on resource organization decisions. Planning resource organization involves establishing consistent patterns in the following areas:
  • Naming standards

  • Tagging standards

  • Subscription design

  • Management group design

Focus on utilizing a subscription design that aligns with the Azure landing zone concept as a starting point. Assigning subscriptions or landing zones based on workloads or applications supports separation of duties and subscription democratization.

Management subscription: The platform management group includes the management subscription, which consolidates shared management and monitoring services. Shared services, such as a Log Analytics workspace, allow Azure VMware Solution workloads to send diagnostic information that can be correlated with logs from other Azure services, such as Azure Application Gateway. Debugging and log correlation become much easier through the centralization and consolidation of diagnostic data across multiple Azure services. Azure VMware Solution workloads can also use Azure Automation Update Management for various purposes, such as patch management, change tracking, and configuration management. Figure 3-11 depicts the Azure landing zone management subscription.

A block diagram of management subscription depicts Log Analytics Workspace, Automation Account, and Network Watcher.

Figure 3-11

Azure landing zone - management subscription

As per Microsoft’s landing zone design best practice, management subscription is the next alignment component across all cloud consumers’ Azure deployments. The design area sets a foundation for operations management across cloud consumer Azure, hybrid, or multi-cloud environments.

Throughout the cloud consumers’ cloud platforms, this design area focuses on operational management requirements and implements those requirements consistently across all workloads. This design area should primarily address operations tooling, so that cloud consumers can manage the collective portfolio of workloads with a set of standard tools and processes. This initial set of operations tools is referred to as the operational baseline.

A management baseline is required to provide visibility, compliance, platform and workload management, and protect-and-recover capabilities in the cloud.

Connectivity subscription: Azure VMware Solution includes a connectivity subscription that centralizes network requirements across all Azure workloads. Azure VMware Solution can run workloads using shared services like Azure Virtual WAN, Application Gateway, etc. Reusing these services can help customers reduce costs (instead of creating new services exclusively for Azure VMware Solution). As part of a connectivity subscription, any network resources are available to limited roles, such as NetOps (Network Operations), for holistic network management. Debugging and troubleshooting networking issues become manageable and accountable when access to network resources is controlled. Figure 3-12 depicts the Azure landing zone connectivity subscription.

A block diagram of the connectivity subscription depicting DDoS Protection, DNS Zone, and Virtual WAN.

Figure 3-12

Azure landing zone - connectivity subscription

As per Microsoft’s landing zone design best practices, the connectivity subscription is the next alignment component across all cloud consumers’ Azure deployments. The design area sets a foundation for network connectivity across cloud consumers’ Azure environments.

Azure VMware Solution subscription: Azure VMware Solution subscriptions are placed under the landing zone management group, which makes it possible to reap the benefits of Azure policies applied at that management group. For example, Azure VMware Solution can be restricted to specific Azure regions, or budgets can be required before deployment. Multiple subscriptions for Azure VMware Solution allow cloud consumers to manage and scale workloads without being constrained by Azure subscription limits. Figure 3-13 depicts the Azure landing zone subscription.

A block diagram of the Azure VMware Solution subscription depicting a resource group, Azure VMware Solution, and ExpressRoute.

Figure 3-13

Azure landing zone - Azure VMware Solution subscription
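The region restriction mentioned above is typically enforced with Azure Policy. As a hedged sketch, the following Python builds an assignment payload for the built-in "Allowed locations" policy at a management group scope; the management group name and regions are hypothetical, and the definition GUID shown is the commonly documented built-in ID, which you should verify in your own tenant:

```python
import json

# Commonly documented built-in "Allowed locations" policy definition;
# verify this GUID in your tenant before relying on it.
ALLOWED_LOCATIONS_DEFINITION = (
    "/providers/Microsoft.Authorization/policyDefinitions/"
    "e56962a6-4747-49cd-b67b-bf8b01975c4c"
)

def build_policy_assignment(management_group, regions):
    """Build an Azure Policy assignment payload restricting deployments
    (including AVS) to the given regions at a management group scope."""
    return {
        "properties": {
            "displayName": "Restrict AVS regions (example)",
            "policyDefinitionId": ALLOWED_LOCATIONS_DEFINITION,
            "scope": (
                "/providers/Microsoft.Management/managementGroups/"
                f"{management_group}"
            ),
            "parameters": {"listOfAllowedLocations": {"value": regions}},
        }
    }

assignment = build_policy_assignment("avs-landing-zone", ["westeurope", "northeurope"])
print(json.dumps(assignment, indent=2))
```

Because the assignment lands at the management group, every Azure VMware Solution subscription placed under it inherits the restriction automatically.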

Azure VMware Solution sandbox pilot subscription: Under the sandbox management group, sandbox subscriptions are deployed as a playground for experimenting with Azure services. A sandbox subscription also prevents production workloads from being impacted. Sandbox subscriptions for Azure VMware Solution have less restrictive policies, allowing cloud consumers greater control over the service. Creating a separate sandbox subscription ensures that it cannot be used for production deployment. Figure 3-14 depicts the Azure landing zone pilot subscription.

A block diagram of the pilot subscription depicting a resource group, Azure VMware Solution, and ExpressRoute.

Figure 3-14

Azure landing zone - pilot subscription

This completes the core design essentials required for the AVS landing zone design. In the next section, you’ll explore AVS network topology and connectivity.

Azure VMware Solution Networking

Terminology is crucial to understanding contexts and Azure’s specialized networking components. Communication is more effective when you understand the terminology used in technical and Azure contexts. Listed below are the key terms used most frequently throughout this section:
  • Azure virtual networks (VNets) are logically isolated sections of the Azure cloud from which you can launch Azure resources.

  • Private networks in Azure are built using Azure virtual networks. Azure virtual networks enable Azure resources, such as VMs, to connect securely to the Internet and to internal data centers. A virtual network is like a traditional network that consumers would operate in their own data center, but it offers the benefits of Azure, such as scalability, availability, and isolation.

  • Azure VNet routing describes how traffic is routed between subnets and between virtual networks.

  • Network virtual appliances (NVAs) are network devices that perform functions such as connectivity, application delivery, WAN optimization, and security. They include Microsoft Azure Firewall and Microsoft Azure Load Balancer.

  • Microsoft Azure Virtual WAN is a network service that combines routing, security, and many networking functions under one operational interface.

  • In hub-spoke networks, a hub virtual network connects many spoke virtual networks. Datacenters on-premises can also be connected through the hub. Workloads can be isolated using the spoke virtual networks that peer with the hub.

  • Cloud networks can be scaled with the help of VXLAN (virtual extension LAN). Utilizing Layer 3 (L3) technology, VXLAN extends a local area network into a virtual network.

  • A virtual LAN or broadcast domain can be extended by stretching the Layer 2 domain across two sites. L2 extensions go by many names, including data center interconnect (DCI), data center extension (DCE), stretched Layer 2 network, stretched VLAN, extended VLAN, stretched deployment, and Layer 2 VPN.

  • In the open systems interconnection model (OSI), layer 4 (L4) represents the fourth layer. In L4, data is transmitted or transferred transparently between end systems, and error recovery end-to-end along with flow control is L4’s responsibility.

  • In the OSI model, layer 7 (L7) is the topmost and seventh layer known as the application layer. In Layer 7, the parties communicating are identified, and the quality of the service between them is evaluated. L7 handles privacy and user authentication, and L7 identifies any data format constraints. This layer handles application-specific data. It is responsible for API calls and responses. HTTP, HTTPS, and SMTP are the most common L7 protocols.

Azure VNet Concept and Microsoft Azure Best Practices

Azure VNets are private networks that you create in the cloud. Each Azure subscription has its own logically isolated and dedicated VNets. Virtual private networks (VPNs) can be configured and managed in Azure VNets. Hybrid, cross-premises solutions can be created by linking Azure VNets to your on-premises IT infrastructure. Provided the CIDR blocks don’t overlap, you can link a VNet with another VNet and with an on-premises network. In addition, administrators can control VNet settings and segment subnets.

Azure connects resources securely to the Internet and on-premises networks. Azure Virtual Networking allows for communication between Azure resources, communication between Azure resources and on-premises resources, filtering network traffic, routing network traffic, and integration of Azure services.

By understanding the concepts and best practices, you can easily deploy virtual networks and connect cloud resources. The following section explains the key concepts.

Address space: The IP address space for a VNet must be unique and can be public or private (RFC 1918). Azure assigns resources in a virtual network private IP addresses from the address space you specify. There can be multiple virtual networks in the same subscription, and each virtual network has its own subnets.

Take into account this list of non-routable addresses when designing and deploying:
  • 10.0.0.0 - 10.255.255.255 (10/8 prefix)
  • 172.16.0.0 - 172.31.255.255 (172.16/12 prefix)

  • 192.168.0.0 - 192.168.255.255 (192.168/16 prefix)

One of the most crucial configurations for a virtual network is the address space. The entire IP range is divided into the subnets that make up the network. The following address spaces cannot be added to your virtual network:
  • 224.0.0.0/4 is used for Azure multicast.

  • 255.255.255.255/32 is used for Azure broadcast.

  • 127.0.0.0/8 is used for Azure loopback.

  • 169.254.0.0/16 is used for Azure link-local.

  • 168.63.129.16/32 is used for Azure internal DNS.

Also, having overlapping address spaces will prevent you from connecting virtual networks.
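The two rules above (avoid Azure's reserved ranges, and avoid overlap with other address spaces you plan to connect) can be checked mechanically. This is a minimal sketch using only Python's standard `ipaddress` module; the helper name and example CIDRs are illustrative:

```python
import ipaddress

# Ranges that cannot be added to a VNet, per the list above.
AZURE_RESERVED = [
    ipaddress.ip_network(n)
    for n in ("224.0.0.0/4", "255.255.255.255/32", "127.0.0.0/8",
              "169.254.0.0/16", "168.63.129.16/32")
]

def vnet_address_space_ok(cidr, existing_spaces=()):
    """Return True if `cidr` avoids Azure's reserved ranges and does not
    overlap any address space already in use (required for peering)."""
    net = ipaddress.ip_network(cidr)
    if any(net.overlaps(r) for r in AZURE_RESERVED):
        return False
    return not any(net.overlaps(ipaddress.ip_network(e)) for e in existing_spaces)

print(vnet_address_space_ok("10.1.0.0/16"))                     # private, no overlap
print(vnet_address_space_ok("10.1.0.0/16", ["10.1.128.0/17"]))  # overlaps an existing VNet
print(vnet_address_space_ok("169.254.0.0/24"))                  # falls inside Azure link-local
```

Running a check like this before provisioning avoids the peering failures that overlapping address spaces cause later.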

Subnets: A subnet allows you to segment a virtual network into one or more subnetworks, each receiving a portion of the virtual network’s address space. Once a subnet has been created, Azure resources can be deployed within that subnet. As in a traditional network, you can segment your VNet address space using subnets. In this way, address allocation is also made more efficient.

Azure reserves five IP addresses in each subnet: x.x.x.0 through x.x.x.3 and the last address of the subnet.
  • x.x.x.0 is the network address used by Azure.

  • x.x.x.1 is reserved by Azure for the default gateway.

  • x.x.x.2 and x.x.x.3 are reserved by Azure to map the Azure DNS IPs into the VNet space.

  • x.x.x.255 is the network broadcast address for subnets of size /25 and larger. This will be a different address in smaller subnets.

CIDR supports a maximum IPv4 subnet size of /2 and a minimum of /29 (CIDR IPv4 subnet definitions). For IPv6, subnets must be /64 in size.
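The reserved-address rule above can be made concrete with a small calculation. This sketch, using only Python's standard `ipaddress` module, lists the five addresses Azure reserves in a given subnet and computes the usable host count (helper names are illustrative):

```python
import ipaddress

def azure_reserved_ips(cidr):
    """Return the five addresses Azure reserves in a subnet: the network
    address, x.x.x.1 through x.x.x.3, and the last (broadcast) address."""
    net = ipaddress.ip_network(cidr)
    first_four = [net.network_address + i for i in range(4)]
    return [str(ip) for ip in first_four] + [str(net.broadcast_address)]

def usable_host_count(cidr):
    """Usable IPs in an Azure subnet = total addresses minus the 5 reserved."""
    return ipaddress.ip_network(cidr).num_addresses - 5

print(azure_reserved_ips("10.0.1.0/24"))
# ['10.0.1.0', '10.0.1.1', '10.0.1.2', '10.0.1.3', '10.0.1.255']
print(usable_host_count("10.0.1.0/24"))  # 251
print(usable_host_count("10.0.0.0/29"))  # 3, the smallest supported Azure subnet
```

The /29 case shows why Azure sets /29 as the minimum: anything smaller would leave no usable addresses once the five reservations are taken out.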

Take into consideration in your design and deployment that
  • VNets are Layer-3 overlays. The Azure platform does not support Layer-2 semantics.

  • Classless inter-domain routing (CIDR) must be used to specify each subnet’s address range.

  • Azure network engineers can build many subnets and enable service endpoints on selected subnets.

  • Azure VNet does not support multicast or broadcast.

  • A route table can be deployed in Azure and associated with a subnet.

  • Subnets can be used for traffic management.

  • Virtual network service endpoints permit you to regulate access to Azure resources by subnet.

  • Azure network engineers can use Network Security Groups to segment your network further based on IP address classification.

  • The default limit per virtual network is 3,000 subnets, but that can be scaled up to 10,000 with Microsoft support.

Regions: A region is required for all Azure resources. Resources in a virtual network must be created in the same subscription and region as the virtual network, but it is possible to connect virtual networks across regions and subscriptions. When determining where to deploy your resources, consider the location of your customers (consumers). While Azure engineers can peer VNets with virtual networks in different regions, each VNet can only be created in one Azure region.

Subscriptions: Subscriptions are scoped for VNets. There is support for multiple virtual networks within Azure subscriptions and Azure regions.

Consider your naming conventions when designing your Azure network during the design and deployment process. Resource names should incorporate information about each resource: the resource type, workload, deployment environment, and the Azure region that hosts the resource can all be identified from a well-chosen name. It is not necessary to assign the entire VNet address space to your subnets; it is best to plan and reserve address space in advance. Microsoft recommends a few large VNets instead of multiple small VNets. Network Security Groups (NSGs) can also be applied to subnets within your VNets.
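The naming guidance above can be encoded as a small helper so names stay consistent across deployments. The pattern below (type-workload-environment-region-instance) is an illustrative convention, not a Microsoft-mandated format:

```python
def resource_name(resource_type, workload, environment, region, instance=1):
    """Compose a resource name encoding type, workload, environment, and
    region, per the naming-convention guidance (example pattern only)."""
    return f"{resource_type}-{workload}-{environment}-{region}-{instance:03d}"

print(resource_name("vnet", "avs", "prod", "westeurope"))  # vnet-avs-prod-westeurope-001
print(resource_name("nsg", "hcx", "dev", "eastus", 2))     # nsg-hcx-dev-eastus-002
```

Generating names this way (rather than typing them ad hoc) makes it trivial to identify a resource's purpose and home region at a glance, as recommended above.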

Now that you have explored VNets, let’s further understand how to use Azure native services (such as Azure ExpressRoute, Azure Traffic Manager, and Azure Application Gateway) as critical components of connecting Azure VMware Solution workloads to an on-premises environment as well as to external users.

These critical use cases can be enabled by network connectivity from a design perspective:
  • VMware on-premises environments can be extended to Azure.

  • You can move VMware workloads from on-premises to Azure.

  • You can securely connect Azure VMware Solution workloads to the public internet.

  • You can create disaster recovery (DR) processes for an on-premises and an Azure VMware environment, or between two Azure solutions.

Figure 3-15 depicts the network architecture for better understanding.

An image of the Azure landing zone networking divided into three categories: consumer, connectivity, and Azure.

Figure 3-15

Azure landing zone - networking overview

In Figure 3-15, the architecture acts as a building block for the following requirements:
  • Network connectivity across cloud consumer on-premises and Azure VMware Solution

  • Network connectivity across the public internet and Microsoft Azure VMware Solution

  • Network connectivity across branch/VPN sites and Microsoft Azure VMware Solution

  • Network connectivity across Microsoft Azure and Microsoft Azure VMware Solution

Azure ExpressRoute Global Reach

Route A illustrates the connection between the Azure VMware Solution environment and the on-premises site. ExpressRoute Global Reach is used to establish the connection. There are two routers involved. The first is the Microsoft Enterprise Edge (MSEE) router, which connects an on-premises site to Azure. The second is the Dedicated Microsoft Enterprise Edge (D-MSEE) router, which establishes connectivity between Azure and the Azure VMware Solution private cloud. Microsoft manages both routers, and Azure VMware Solution pricing includes the D-MSEE router. The maximum network throughput over ExpressRoute Global Reach is determined by the smaller of the two circuits. VMware migration via HCX can only be done using this connectivity option.

Azure Private Endpoint

Azure VMware Solution instances are connected via Route B through a private endpoint. Services such as Azure SQL DB and Azure Cosmos DB can project a private endpoint into an Azure VNet. From the VNet’s IP address space, this private endpoint gets a private IP address. Azure VMware Solution instances are connected to VNets using ExpressRoute, and these instances can access private endpoints in those VNets if there is connectivity to Azure. Azure VMware Solution represents a first step towards the gradual modernization of VMs. For example, this connectivity will enable a Web Server VM in Azure VMware Solution to connect with Azure SQL DB, a managed SQL database service.

Azure VNet Peering

Route C shows connectivity from other Azure regions to an Azure VMware Solution instance. Azure VNet peering enables this connectivity. An Azure VMware Solution instance can only be connected to one VNet at provisioning time, but this VNet can be peered with other VNets that run workloads. An Azure VMware Solution VM can then exchange data with workloads in multiple peered VNets in both directions. Azure VMware Solution and other workloads on Azure run smoothly with VNet peering, which provides low-latency, high-throughput connectivity over the Microsoft backbone network.

Azure Application Gateway

Route D shows that Azure services can be integrated with Azure VMware Solution. This route directs external user requests to Azure Application Gateway, commonly referred to as App Gateway, a Layer 7 load balancer that exposes a public IP address that can be mapped to a DNS entry. App Gateway can use Azure VMware Solution VMs as its backend. Combining Azure VMware Solution VMs with App Gateway as a front end ensures that no public IP address is exposed from the Azure VMware Solution environment. App Gateway provides Web Application Firewall (WAF) services that mitigate common vulnerabilities (SQL injection, cross-site request forgery, XSS, etc.) even before requests reach Azure VMware Solution. An Azure VMware Solution environment using App Gateway is excellent for any web-facing workload.

NVA from Azure Marketplace

Network virtual appliances (NVAs) function as network devices that provide services such as connectivity, application delivery, WAN optimization, and security. For example, Azure firewalls and Azure Load Balancers act as NVAs.

Route E shows the use of Azure Marketplace solutions. Many partner solutions are available on the Azure Marketplace, including firewalls, NVAs, and load balancers. In this flow, customers accept external requests through their preferred vendor's solution. The request is then forwarded to Azure VMware Solution VMs based on the configured routes and the security checks evaluated by the vendor solution. Customers can also bring existing licenses from an on-premises environment to Azure VMware Solution through license mobility.

Azure Virtual WAN

An Azure Virtual WAN is a service that combines routing, security, and networking functions into one operational interface. Among these capabilities are the following:
  • Virtual private networks based on customer premises equipment (CPE) can automate branch connectivity.

  • VPN connectivity between sites is possible.

  • Remote users can connect to sites using VPNs.

  • Private Azure ExpressRoute connections are available.

  • Connectivity within the Azure cloud, such as transitive connectivity

  • VPN ExpressRoute interconnectivity

  • Routing

  • Azure Firewall

  • Encryption for private connectivity

Route F demonstrates using a public IP address with Azure Virtual WAN (vWAN). In this route, Azure vWAN is configured with a public IP address associated with its hub, and firewall rules are configured with DNAT rules. Using Azure Firewall Manager, additional firewall rules can be configured to route requests that arrive at the public IP address to the private IP associated with an Azure VMware Solution VM. Azure vWAN gives cloud consumers any-to-any connectivity, so multiple on-premises sites and branch locations can gain access to Azure VMware Solution VMs.
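The DNAT step in route F can be sketched as a rule payload. The field names below follow the classic Azure Firewall NAT rule shape, but treat them as assumptions to verify against the API version you deploy with; the IP addresses and rule name are hypothetical examples:

```python
import json

def build_dnat_rule(public_ip, public_port, avs_vm_ip, vm_port):
    """Sketch of an Azure Firewall DNAT rule forwarding requests arriving
    at the vWAN hub's public IP to a private IP of an AVS VM.
    Field names are assumed from the classic NAT rule collection shape."""
    return {
        "name": "avs-web-inbound",               # example rule name
        "protocols": ["TCP"],
        "sourceAddresses": ["*"],                # accept from any source
        "destinationAddresses": [public_ip],     # the hub's public IP
        "destinationPorts": [str(public_port)],
        "translatedAddress": avs_vm_ip,          # private IP of the AVS VM
        "translatedPort": str(vm_port),
    }

rule = build_dnat_rule("203.0.113.10", 443, "192.168.10.4", 443)
print(json.dumps(rule, indent=2))
```

A rule like this, applied through Azure Firewall Manager on the secured hub, is what turns the hub's public IP into an entry point for the Azure VMware Solution workload.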

Azure PaaS Endpoint

AVS is connected to Azure PaaS services using a public endpoint in route G. Route G is similar to route B; what differs between the two is the Azure PaaS service endpoint: unlike route B, route G uses a public endpoint. Since more and more Azure services offer connectivity over a private endpoint, Microsoft recommends that cloud consumers consume these services over a private endpoint. If a service does not yet have a private endpoint, AVS VMs can connect to its public endpoint to consume it.

Azure ExpressRoute Gateway

Connectivity from branch offices to AVS is shown in Route H. This connection uses a VPN gateway provided by Azure vWAN. Cloud consumers can, however, also switch to ExpressRoute gateway-based connectivity. When multiple branch offices access AVS workloads, this type of connectivity is recommended. As part of this setup, ER or VPN gateway and Azure vWAN establish transitive connectivity between the sites.

Azure VPN Gateway

Route I shows the connectivity between point-to-point VPN sites and site-to-site VPN sites. A VPN gateway is used to provide this connectivity. The VPN topology can be used to make AVS workloads available to VPN sites. VPN gateways in Azure vWAN are built with greater scalability and throughput than VPN gateways in conventional hub networks. AVS workloads can be reached more quickly from multiple VPN sites with this topology.

Azure Virtual Network

Private networks in Azure are built on top of Azure virtual networks. Many Azure resources, including Azure VMs, can communicate securely with each other, the Internet, and on-premises datacenters via Azure VNets. A VNet behaves like a network that cloud consumers would operate in their own datacenter, but with the scalability, availability, and isolation of Azure.

Route J shows connectivity to AVS workloads from other workloads running in Azure VNets. In contrast to hub-and-spoke topologies, which rely on either Azure Firewall or third-party NVAs to establish transitive connectivity, Azure vWAN's VNet-to-VNet connectivity is transitive and needs neither Azure Firewall nor a third-party NVA.

AVS Network Topology and Connectivity

Cloud-native and hybrid scenarios present unique challenges when integrating VMware software-defined datacenters with the Microsoft Azure cloud ecosystem. This section examines key considerations and best practices for networking and connectivity to the Azure cloud and Azure VMware Solution deployments.

Let’s further explore the fundamental elements prior to designing:
  • In a hub-spoke topology, a hub virtual network connects many spoke virtual networks. On-premises datacenters can also be connected to the hub. The spoke virtual networks isolate workloads and peer with the hub.

  • A virtual LAN or broadcast domain can be created by stretching OSI model Layer 2 across two sites. Datacenter interconnect (DCI), datacenter extension (DCE), and Layer 2 VPN are terms used to describe the L2 extension.

  • VXLANs (virtual extension LANs) enable cloud networks to scale. A VXLAN extends a local area network over Layer 3 technology to create a virtual network.

  • In the open systems interconnection (OSI) model, Layer 4 is the fourth layer. L4 facilitates transparent data transfer from one system to another and handles end-to-end error recovery as well as flow control. In addition to UDP (User Datagram Protocol), L4 protocols include UDP-Lite, CUDP (Cyclic UDP), RUDP (Reliable UDP), ATP (AppleTalk Transaction Protocol), MPTCP (Multipath TCP), and SPX (Sequenced Packet Exchange).

  • Layer 7 is the seventh layer of the OSI model, referred to as the application layer. The communication parties are identified at Layer 7 and the quality of service between them. L7 handles the privacy and authentication of users and identifies any data syntax constraints. This layer pertains only to applications. It handles API calls and responses. HTTP, HTTPS, and SMTP are some of the most important L7 protocols.

Design thinking around network topology on Azure VMware Solution platforms can use this enterprise-scale design guidance. Foundational design elements include
  • Connectivity between on-premises, multi-cloud, edge, and global users via hybrid integration

  • Consistent, low-latency performance and scalability for workloads at scale

  • Security based on zero-trust for network perimeters and traffic flows

  • Extendibility without reworking the design

Best Practices and General Considerations for AVS Network Design

  • Consider using an Azure Bastion host in an Azure VNet to access the Azure VMware Solution environment during deployment.

  • In the Azure VMware Solution management network, once the routing to the on-premises environment is established, the 0.0.0.0/0 routes from the on-premises networks are not honored, so cloud consumers must advertise more specific routes for the on-premises networks.

  • The default gateway remains on-premises with VMware HCX migrations; VMware HCX migrations can use the HCX L2 extension. Migrations requiring Layer 2 extension require ExpressRoute, and VPN isn’t supported. To accommodate the overhead of HCX, a maximum transmission unit size of 1350 should be used.

  • Create and configure a firewall on your own premises to ensure that all components of the Azure VMware Solution private cloud are accessible.

  • Port-mirrored traffic and traffic inspection are both used for network security. NSX-T or NVAs are used to inspect traffic between segments within the SDDC, and north-south traffic inspection is used for bidirectional traffic flows between Azure VMware Solution and datacenters.

  • To optimize bandwidth between Azure VMware Solution and an Azure VNet, select an appropriate virtual network gateway SKU (Standard, HighPerformance, UltraPerformance, ErGw1Az, ErGw2Az, or ErGw3Az). Azure VMware Solution supports a maximum of four ExpressRoute circuits per ExpressRoute gateway per region.

  • For Microsoft Azure VMware Solution, Global Reach is a required ExpressRoute add-on to communicate with on-premises data centers, Azure VNets, and virtual WANs. Azure Route Server can also be used to design cloud users’ network connectivity.

  • Global Reach provides free peering between the Azure VMware Solution ExpressRoute circuit and other ExpressRoute circuits.

  • Cloud consumers can peer ExpressRoute circuits through an ISP and ExpressRoute Direct circuits using Global Reach.

  • The Global Reach feature is not available for ExpressRoute Local circuits. With ExpressRoute Local, third-party NVAs in Azure virtual networks enable transit from Azure VMware Solution to on-premises data centers, and Global Reach is not available in all regions.

  • Out-of-the-box, Azure VMware Solution provides one free ExpressRoute circuit. It connects Azure VMware Solution and D-MSEE.

  • All clusters can communicate in an Azure VMware Solution private cloud since they all share the same /22 address space.

  • Connectivity settings for all clusters are the same, including Internet, ExpressRoute, HCX, public IP, and ExpressRoute Global Reach. Basic networking settings can also be shared among application workloads, such as network segments, Dynamic Host Configuration Protocol (DHCP), and domain name systems (DNS).

  • VPN and ExpressRoute can be connected using a virtual WAN. However, hub-spoke topologies are not supported.

  • There are several options for enabling Internet outbound traffic, filtering it, and inspecting it, including
    • Using Azure Internet Access and Azure Virtual Network, NVA, and Azure Route Server

    • With on-premises Internet access

    • With Azure Firewall or NVA, and Azure Internet Access, a virtual WAN hub can be secured.

  • Content and applications can be delivered via inbound Internet methods, including
    • Azure Application Gateway with SSL termination and L7 load balancing, and Azure Web Application Firewall

    • Via DNAT and load balancing

    • Through Azure VNets, a network virtual appliance (NVA), and Azure Route Server

    • Azure Firewall and L4 and DNAT on a virtual WAN hub

    • Multiple scenarios in which a virtual WAN hub is connected to NVA

Networking Deployment Scenarios

Design and deployment of networking capabilities for Azure VMware Solution is vital, and various Azure networking options are available. The organization's workloads, governance, and requirements determine the architecture and structure of AVS networking services.

An Azure VMware Solution private cloud has no external connectivity when deployed. To establish a connection to an on-premises environment, Azure native services are required. Azure ExpressRoute enables a private, secure, low-latency, and high-bandwidth connection between an on-premises environment and Azure VMware Solution. To establish bidirectional connectivity between an Azure VMware Solution private cloud and an on-premises environment, Azure ExpressRoute Global Reach should be used. Figure 3-16 depicts the Azure networking logical architecture.

A network flow diagram of on-premises connected to connectivity and then Azure, which consists of MSEE, Global Reach, D-MSEE, ER edge, a subscription, and Azure VMware Solution. A box labeled management tools is connected to Azure from the top right.

Figure 3-16

Azure Networking logical architecture

Connecting Azure VMware Solution to a customer's data center over Azure ExpressRoute provides a highly available, secure private connection (not over the public Internet) with high bandwidth and low latency. This connectivity provides access to Azure native services across all regions within a geographic area; to access Azure native services globally, the premium Azure ExpressRoute add-on is required. ExpressRoute has built-in Layer 3 redundancy and uses the Border Gateway Protocol (BGP) for route exchange between Azure VMware Solution and a customer's on-premises data center. An Azure ExpressRoute circuit consists of two connections from the peering provider's edge to two MSEEs; each connection goes to one MSEE for highly available, resilient connections to Azure VMware Solution.

Microsoft ExpressRoute Global Reach connects Azure VMware Solution to on-premises environments by linking ExpressRoute circuits.

Key prerequisites and references influence Azure VMware Solution design and deployment decisions:
  • For Microsoft Azure ExpressRoute, a valid and active Azure account is required. An Azure Enterprise Agreement (EA) or a cloud solution provider (CSP) subscription is required to deploy Azure VMware Solution.

  • There are several ways to connect to Azure VMware Solution:
    • An any-to-any (IP VPN) network, in which any device can communicate with any other device

    • A point-to-point Ethernet connection

    • A connection through a cloud exchange colocation provider

  • The MSEE routers and each peering router on an ExpressRoute circuit must have redundant BGP sessions.

  • If network address translation (NAT) is used in a customer's on-premises environment, the customer or provider must implement source network address translation (SNAT), because Azure ExpressRoute peering accepts only public IP addresses.
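Because ExpressRoute peering accepts only public addresses, a quick way to flag which on-premises source addresses need SNAT is Python's standard `ipaddress` module. This is an illustrative sketch, not part of any Azure tooling:

```python
import ipaddress

def needs_snat(addr: str) -> bool:
    """True if the address is not globally routable (e.g., RFC 1918 space)
    and must therefore be source-NATed to a public IP before it can
    traverse ExpressRoute peering."""
    return not ipaddress.ip_address(addr).is_global

print(needs_snat("10.20.30.40"))  # True: private RFC 1918 address, SNAT required
print(needs_snat("8.8.8.8"))      # False: already a public address
```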

Key non-functional requirements and references that influence Azure VMware Solution design and deployment decisions are the following:
  • Microsoft guarantees that Azure ExpressRoute will be available at least 99.95% of the time.

  • ExpressRoute circuits support a range of speeds from 50 Mbps to 10 Gbps.

  • Cloud consumers can choose from three billing models for Azure ExpressRoute: Metered Data, Unlimited Data, and the Global Reach add-on. The ExpressRoute circuit is billed from the moment a service key is provisioned.
    • With Metered Data, all inbound (ingress) data transfers are free. Outbound (egress) data transfers are charged at a predetermined per-GB rate, in addition to a fixed monthly port fee.

    • With Unlimited Data, all inbound and outbound data transfers are free and are covered by a higher fixed monthly port fee.

    • The Global Reach add-on links ExpressRoute circuits to create a private network between an on-premises environment and Azure VMware Solution.
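Choosing between the two data plans comes down to a break-even calculation on expected monthly egress. The rates below are hypothetical placeholders, not Azure prices; consult the Azure pricing page for real figures:

```python
# Hypothetical monthly rates for illustration only (not Azure's actual prices).
METERED_PORT_FEE = 300.0   # fixed monthly port fee on the metered plan
EGRESS_PER_GB = 0.025      # per-GB egress charge on the metered plan
UNLIMITED_FEE = 1000.0     # flat monthly fee on the unlimited plan

def monthly_cost_metered(egress_gb: float) -> float:
    # Ingress is free under both models; only egress is metered.
    return METERED_PORT_FEE + egress_gb * EGRESS_PER_GB

def cheaper_plan(egress_gb: float) -> str:
    return "metered" if monthly_cost_metered(egress_gb) < UNLIMITED_FEE else "unlimited"

print(cheaper_plan(1_000))   # light egress favors the metered plan
print(cheaper_plan(50_000))  # heavy egress favors the unlimited plan
```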

Cloud consumers and Azure VMware administrators can set up Azure ExpressRoute circuits using Azure PowerShell, the Azure CLI, ARM templates, or the Azure portal.
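As a hedged sketch of what programmatic setup involves, the snippet below only assembles the circuit definition as a plain dictionary; the region, provider, peering location, and bandwidth are hypothetical. In practice, a body of this shape would be passed to the azure-mgmt-network SDK (for example, `express_route_circuits.begin_create_or_update`) or expressed as an ARM template:

```python
def express_route_circuit_params(provider: str, peering_location: str,
                                 bandwidth_mbps: int, tier: str = "Standard",
                                 family: str = "MeteredData") -> dict:
    """Assemble the body of an ExpressRoute circuit definition.

    All values are illustrative; real deployments take them from the chosen
    connectivity provider and the selected billing model."""
    return {
        "location": "eastus",  # hypothetical Azure region
        "sku": {
            "name": f"{tier}_{family}",  # e.g., Standard_MeteredData
            "tier": tier,
            "family": family,
        },
        "service_provider_properties": {
            "service_provider_name": provider,
            "peering_location": peering_location,
            "bandwidth_in_mbps": bandwidth_mbps,
        },
    }

params = express_route_circuit_params("Equinix", "Silicon Valley", 200)
print(params["sku"]["name"])  # Standard_MeteredData
```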

The following essential requirements and references influence Azure VMware Solution design and deployment decisions:
  • Application requirements for HTTP/S or non-HTTP/S Internet ingress into Azure VMware Solution

  • Considerations for the egress path to the Internet

  • L2 extension for migrations

  • Using NVA in the current architecture

  • Virtual WAN or standard hub virtual network connectivity for Azure VMware Solution

  • Connectivity to Azure VMware Solution via a private ExpressRoute from on-premises datacenters, as well as whether Global Reach is enabled

  • Requirements for traffic inspection of:
    • Internet ingress into Azure VMware Solution applications

    • Internet egress from Azure VMware Solution

    • Traffic between Azure VMware Solution and on-premises datacenters

    • Traffic between Azure VMware Solution and an Azure virtual network

    • Traffic within Azure VMware Solution, between private clouds

Based on Azure VMware Solution traffic inspection requirements, Table 3-1 provides recommendations and considerations for four of the most common networking scenarios.
Table 3-1

Recommendations and Considerations for Common Networking Scenarios

Scenario 1: Internet ingress and egress

Solution: A secured Virtual WAN hub with default gateway propagation. Use Azure Application Gateway for HTTP/S traffic and Azure Firewall for non-HTTP/S traffic.

Design considerations: Deploy the secured Virtual WAN hub and enable public IP addresses in Azure VMware Solution. On-premises filtering is not possible with this solution, and Global Reach bypasses the Virtual WAN hub.

Scenario 2: Internet ingress and egress, to an on-premises datacenter, or to an Azure VNet

Solution: Third-party firewall NVAs in the hub virtual network with Azure Route Server. Global Reach must be disabled. Use Application Gateway for HTTP/S traffic and the third-party firewalls for non-HTTP/S traffic.

Design considerations: This is the most popular option for customers who need to centralize all traffic inspection in a hub virtual network while using their existing NVA vendor.

Scenario 3: Internet ingress and egress, to an on-premises datacenter, to an Azure VNet, or within Azure VMware Solution

Solution: NSX-T or a third-party NVA firewall in Azure VMware Solution. Use Application Gateway for HTTP/S traffic and Azure Firewall for non-HTTP/S traffic. Deploy the secured Virtual WAN hub and enable public IP in Azure VMware Solution.

Design considerations: Use this option when you must inspect traffic between two or more Azure VMware Solution private clouds. It can be implemented with NSX-T native features or with NVAs running in Azure VMware Solution between the tier-1 and tier-0 gateways.

Scenario 4: Internet ingress or to an Azure VNet

Solution: A secured Virtual WAN hub. Use Azure Application Gateway for HTTP/S traffic and Azure Firewall for non-HTTP/S traffic. Deploy the Virtual WAN hub and enable public IP in Azure VMware Solution.

Design considerations: 0.0.0.0/0 can be advertised from the on-premises datacenter with this option.

Now let’s take a deep dive into the design elements for each scenario.

Scenario 1: Default Route Propagation for a Secured Virtual WAN Hub

Requirements:

  • Cloud consumers of Azure VMware Solution workloads need to inspect traffic between those workloads and the Internet.

Assumption:
  • Cloud consumers of Azure VMware Solution and Azure Virtual Network do not require traffic inspection.

  • Cloud consumers of the Azure VMware Solution do not need traffic inspection between the cloud and on-premises datacenters.

  • Azure VMware Solution is consumed as a PaaS offering.

  • Cloud consumers of the Azure VMware Solution do not own public IP addresses. If needed, the cloud consumer is expected to add public-facing L4 and L7 inbound services.

  • Network connectivity between on-premises data centers and Azure might or might not be provided by ExpressRoute for cloud users.

Design overview:

Figure 3-17 illustrates how the solution could be implemented.

The diagram shows on-premises connectivity, the Internet, and management tools connected to Azure, which contains the MSEE, ExpressRoute Global Reach, a hub virtual network, Azure VMware Solution, a secured Virtual WAN hub advertising 0.0.0.0/0, and the D-MSEE.

Figure 3-17

Azure networking - default route propagation for a secured virtual WAN hub

Azure components to be considered for deployment:
  • Azure Firewall in the secured Virtual WAN hub for firewalling

  • Application Gateway for load balancing

  • L4 DNAT (destination network address translation) with Azure Firewall to translate and filter inbound network traffic

  • Outbound Internet traffic through Azure Firewall

  • ExpressRoute, VPN, or SD-WAN connectivity between Azure VMware Solution and on-premises datacenters

Design consideration:

Suppose Azure network engineers don’t want to receive the default route 0.0.0.0/0 advertisement from Azure VMware Solution because it conflicts with the cloud consumer’s existing environment. In that case, they need to take further action.

Azure Firewall in the secured Virtual WAN hub advertises 0.0.0.0/0, and Global Reach propagates that advertisement on-premises. Implement an on-premises route filter to prevent learning the 0.0.0.0/0 route. Cloud consumers who use SD-WAN or VPN will not experience this issue.

By default, the 0.0.0.0/0 route from a Virtual WAN hub propagates to a virtual network’s ExpressRoute gateway rather than directly into the virtual network, so the Internet system route built into the virtual network takes priority. Implement a 0.0.0.0/0 user-defined route in the virtual network to override the learned default route.
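The precedence rules above can be sketched in a few lines. The model below is a simplification of Azure route selection (longest prefix first; on a tie, user-defined routes beat BGP-learned routes, which beat system routes), and the next-hop labels are illustrative:

```python
import ipaddress

# Simplified Azure route selection: longest prefix wins; on a prefix tie,
# user-defined routes beat BGP-learned routes, which beat system routes.
SOURCE_PRIORITY = {"user": 0, "bgp": 1, "system": 2}

def pick_route(dest: str, routes: list) -> dict:
    ip = ipaddress.ip_address(dest)
    candidates = [r for r in routes if ip in ipaddress.ip_network(r["prefix"])]
    return min(candidates,
               key=lambda r: (-ipaddress.ip_network(r["prefix"]).prefixlen,
                              SOURCE_PRIORITY[r["source"]]))

routes = [
    {"prefix": "0.0.0.0/0", "source": "bgp", "next_hop": "vWAN hub firewall"},
    {"prefix": "0.0.0.0/0", "source": "user", "next_hop": "Internet"},  # the UDR
]
print(pick_route("8.8.8.8", routes)["next_hop"])  # the UDR wins the tie: Internet
```

Without the user-defined route, the BGP-learned default would be the only candidate and traffic would flow to the hub firewall; adding the UDR flips the tie-break.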

Additionally, VPN, ExpressRoute, and virtual network connections to the secured Virtual WAN hub will receive the 0.0.0.0/0 advertisement even when they don’t require it. To prevent this, Azure network engineers can implement either of these strategies:
  • With an on-premises edge device, filter out 0.0.0.0/0 traffic.

Alternatively,
  • Disconnect the VPN or ExpressRoute.

  • Turn on 0.0.0.0/0 propagation.

  • Block its propagation on those connections.

  • Then reconnect.

Cloud consumers can host Application Gateway either in a spoke virtual network connected to the hub or in the hub virtual network itself.

Scenario 2: With Global Reach Disabled, Third-Party NVA in Azure Virtual Network with Azure Route Server

Requirements:
  • Azure VMware Solution private cloud customers need fine-grained control over firewalls outside the private cloud.

  • Consumers will need to use an appliance from their current vendor in Azure and other on-premises environments.

  • Cloud consumers usually require public-facing L4 and L7 inbound services for Azure VMware Solution and outbound Internet connectivity.

Assumption:
  • Azure VMware Solution customers want to inspect traffic between on-premises datacenters and Azure VMware Solution, but Global Reach isn’t available for geopolitical reasons.

  • In Azure, customers require a block of predefined IP addresses for inbound service and a set of public IP addresses for outbound service. Customers do not own public IP addresses in this scenario.

Design overview:

Figure 3-18 illustrates how the solution could be implemented.

The diagram shows on-premises connectivity, the Internet, and management tools connected to Azure. Inside Azure, two hub virtual networks are peered and also joined by an eBGP VXLAN tunnel; hub virtual network 1 peers with spoke VNets 1 and 2, and Azure VMware Solution connects to hub virtual network 2.

Figure 3-18

Azure networking with Global Reach disabled, third-party NVA in an Azure virtual network with Azure Route Server

Azure components to be considered for deployment:
  • Third-party NVAs hosted in a virtual network for firewalling and other network functions

  • Azure Route Server to route traffic between Azure VMware Solution, on-premises data centers, and virtual networks

  • Application Gateway for L7 HTTP/S load balancing

  • ExpressRoute Global Reach disabled; the NVAs provide Azure VMware Solution’s outbound Internet access

Design consideration:

To accomplish this, an Azure network engineer must disable ExpressRoute Global Reach. Otherwise, the Microsoft Enterprise Edge ExpressRoute routers exchange Azure VMware Solution traffic directly between themselves, bypassing the hub virtual network that hosts the NVAs.

Make sure traffic is routed through the hub by implementing Azure Route Server. Implementing and managing an NVA solution or utilizing an existing one is the cloud consumer’s responsibility.

As a precaution, deploy the NVAs in an active-standby configuration if the cloud consumers need high availability for the NVAs.

The above architectural diagram depicts an NVA with VXLAN support.

Scenario 3: Egress from Azure VMware Solution With or Without NSX-T or an NVA

Requirements:
  • Because NSX-T is native to Azure VMware Solution, cloud consumers consume it as a PaaS deployment.

  • NSX tier-0/tier-1 routers or NVAs handle all traffic between Azure VMware Solution and an Azure virtual network, between Azure VMware Solution and the Internet, and between Azure VMware Solution and on-premises datacenters.

  • Cloud consumers require inbound HTTP/S or L4 services.

Assumption:
  • Cloud consumers already have ExpressRoute connectivity between their on-premises data centers and Azure.

  • Cloud consumers bring their own licensed (BYOL) NVA for traffic inspection.

Design overview:

Figure 3-19 illustrates how the solution could be implemented.

The diagram shows on-premises connectivity, the Internet, and management tools connected to Azure. Inside Azure are a hub virtual network peering with spoke VNets 1 and 2, Azure Firewall as the public endpoint (PIP) for the Azure VMware Solution NVA, a secured Virtual WAN hub, Azure VMware Solution, the MSEE, ExpressRoute Global Reach, and the D-MSEE.

Figure 3-19

Azure networking - egress from Azure VMware Solution with or without NSX-T or an NVA

Azure components to be considered for deployment:
  • An NSX Distributed Firewall (DFW) or an NVA tier-1 firewall in Azure VMware Solution

  • Application Gateway for L7 load balancing

  • L4 DNAT using Azure Firewall with Azure VMware Solution

Design consideration:

Cloud consumers need to enable Internet access in the Azure portal. The outbound IP address is not deterministic and can change. The public IP addresses sit outside the NVA; the NVA in Azure VMware Solution keeps private IP addresses and does not determine the outbound public IP address.

Cloud consumers are responsible for bringing the NVA license (BYOL) and for implementing high availability for the NVAs.

Scenario 4: Egress from Azure VMware Solution via 0.0.0.0/0 Advertisement from On-Premises

Requirements:
  • Inbound services should be available over HTTP/S or L4.

  • On-premises traffic inspection is required for outbound traffic. Traffic between Azure VMware Solution and Azure Virtual Network is inspected as it moves through the Azure Virtual WAN hub.

Assumption:
  • The on-premises environment advertises 0.0.0.0/0, and outbound traffic is inspected by the on-premises NVA.

  • With Global Reach, cloud consumers already have ExpressRoute between Azure and their on-premises datacenters.

Design overview:

Figure 3-20 illustrates how the solution could be implemented.

The diagram shows on-premises connectivity, the Internet, and management tools connected to Azure. Inside Azure are a hub virtual network peering with spoke VNets 1 and 2, Azure Firewall as the public endpoint (PIP) for the Azure VMware Solution NVA, a secured Virtual WAN hub, Azure VMware Solution, the MSEE, ExpressRoute Global Reach, and the D-MSEE.

Figure 3-20

Azure networking - egress from Azure VMware Solution via 0.0.0.0/0 advertisement from on-premises

Azure components to be considered for deployment:
  • Application Gateway for L7 load balancing

  • L4 DNAT using Azure Firewall

  • On-premises breakout for Internet traffic

  • ExpressRoute connectivity between Azure VMware Solution and on-premises datacenters

Design consideration:

In this design, public IP addresses are allocated to the on-premises NVA. If cloud consumers connect to the hub through an ExpressRoute gateway rather than directly in a hub-and-spoke topology, the default 0.0.0.0/0 route from the Virtual WAN hub propagates to the ExpressRoute gateway, but the Internet system route built into the virtual network takes precedence over it. Azure network engineers can overcome this by implementing a 0.0.0.0/0 user-defined route in the virtual network to override the learned default route.

AVS Identity and Access Management

Azure VMware Solution private clouds are equipped with a vCenter Server and NSX-T Manager. VM workloads are managed with vCenter Server, and the private cloud is managed and extended with NSX-T Manager. As CloudAdmin, cloud consumers have limited administrator rights for NSX-T Manager and vCenter.

Figure 3-21 depicts AVS identity and access management.

The diagram shows the two halves of AVS identity and access management: VMware vSphere and VMware NSX-T.

Figure 3-21

Azure identity and access management

VMware vSphere and VMware vSAN (via vCenter):

In Azure VMware Solution, the CloudAdmin role in vCenter is assigned to a local user named CloudAdmin. Using the CloudAdmin role, cloud consumers can configure users and groups from Active Directory for their private cloud. CloudAdmins manage the workloads of private cloud consumers. In Azure VMware Solution, the CloudAdmin role has a different set of vCenter privileges than it does in other VMware cloud solutions and on-premises deployments.

In on-premises vCenter and ESXi deployments, the administrator uses the vCenter administrator@vsphere.local account and can also assign additional Active Directory users and groups. In an Azure VMware Solution deployment, the administrator does not have access to this administrator account; instead, CloudAdmin can assign Active Directory users and groups to roles within vCenter. The CloudAdmin role does not have permission to add an identity source, such as an on-premises LDAP or LDAPS server, to vCenter directly. However, cloud consumers can use the Run command to add an identity source and assign the CloudAdmin role to users and groups.

Specific management components in the private cloud are supported and managed by Microsoft, and cloud consumers cannot configure them. Among them are clusters, hosts, datastores, and virtual switches.

The vsphere.local SSO domain in Azure VMware Solution is provided as a managed resource to support platform operations. Cloud consumers can’t create or manage local groups and users other than those provided by default with their private cloud.

Custom roles are available on vCenter but not on the Azure VMware Solution portal.

VMware NSX-T:

NSX-T Manager can be accessed using the admin account. Cloud consumers have full privileges and can create and manage Tier-1 (T1) gateways, network segments (logical switches), and all related services. Cloud consumers can also access the NSX-T Tier-0 (T0) gateway, but changes to the T0 gateway may degrade network performance or cut off access to the private cloud, so request any changes to the T0 gateway through the Azure portal. If external identity sources are used, access to NSX-T Manager can be managed via RBAC.

Key design considerations:

After deploying Azure VMware Solution, cloud consumers receive a local vCenter user called CloudAdmin, which is assigned the CloudAdmin role with several permissions in vCenter. Azure VMware Solution users can also create custom roles based on the principle of least privilege.

Key design best practices:
  • Deploy an AD DS domain controller in the identity subscription as part of the identity and access management enterprise-scale landing zone.

  • Limit the number of users who are assigned the CloudAdmin role. Assign Azure VMware Solution users custom roles with least privilege.

  • Microsoft recommends rotating the CloudAdmin and NSX-T admin passwords regularly.

  • Azure VMware Solution role-based access control (RBAC) permissions should only be granted to users who need to manage Azure VMware Solution and to the resource group where it’s deployed.

  • Configure custom roles at the appropriate level of the vSphere hierarchy when needed. Grant permissions at the appropriate VM folder or resource pool, and never apply permissions at the vSphere datacenter level.

  • Update Active Directory Sites and Services so that AD DS traffic from Azure and Azure VMware Solution is directed to the appropriate domain controllers.

  • Use the Run command in cloud consumers’ private clouds to:
    • Add an AD DS domain controller as an identity source for NSX-T and vCenter.

    • Manage the lifecycle of the vsphere.local/CloudAdmin group.

  • Manage vCenter and NSX-T by creating groups in Active Directory and using RBAC. Cloud consumers can create custom roles and assign them to Active Directory groups.

AVS Security, Governance, and Compliance

A virtualized IT environment uses software-based security solutions, unlike traditional network security, which is hardware-based and runs on devices such as firewalls, routers, and switches.

Virtualized security’s flexibility and dynamic nature make it different from hardware-based security. The software can be deployed anywhere in the network and is often cloud-based rather than tied to a specific device. As operators create workloads and applications dynamically in virtualized networks, virtualized security allows security services and functions to move with those dynamically created workloads.

For virtualized security, it is crucial to isolate multitenant environments in public cloud environments. Hybrid and multi-cloud environments, where data and workloads move between multiple vendors, can be made more secure by virtualized security.

The functions of traditional hardware security appliances (such as firewalls and antivirus protection) can be virtualized and delivered via software, and additional security functions can be performed through virtualization as well. These virtualized features are designed to address the unique security needs of virtualized environments.

This can be accomplished by implementing a virtualized security application directly on the bare-metal hypervisor (a position that allows practical monitoring of applications) or by hosting it as a service on a virtual machine. Physical security, tied to a specific device, cannot be deployed as quickly where it is most effective. Figure 3-22 depicts AVS security.

A block diagram shows the advantages of virtualized security: cost-effectiveness, flexibility, and operational efficiency.

Figure 3-22

Azure security

In addition to being more flexible and efficient than traditional physical security, virtualized security is essential to meet the security demands of a virtualized network. Its specific advantages include the following:
  • Cost-effectiveness: By implementing virtualized security, companies can maintain a secure network without spending money on expensive proprietary hardware. Organizations that use resources efficiently can save more money by purchasing cloud-based virtualized security services. Pricing for virtualized security services is often determined by usage.

  • Flexibility: Workloads can be followed anywhere with virtualized security, which is very important in a virtualized environment. Multiple data centers and hybrid cloud environments can be protected, enabling companies to benefit from virtualization while keeping their data safe.

  • Operational efficiency: Virtualized security does not require the setup and configuration of several hardware appliances, so it is much faster and easier to deploy than hardware-based security. Instead, organizations can scale security through centralized software, enabling rapid deployment. Security tasks can also be automated in software, freeing IT teams to spend more time on other work.

You can securely implement and holistically govern Azure VMware Solution throughout its lifecycle. This section covers specific design elements and provides targeted recommendations for Azure VMware Solution security, governance, and compliance.

The following factors should be considered when determining which systems, users, or devices can access Azure VMware Solution and how to secure the platform in general:
  1. AVS security

  2. AVS governance

  3. AVS compliance

AVS Security

When an Azure architect designs the AVS solution from a security perspective, three central components need to be considered: identity security; environment and network security; and guest application and VM security. Figure 3-23 depicts the overall view of AVS security.

A diagram shows the AVS security components: 1, identity security; 2, environment and network security; 3, guest application and VM security.

Figure 3-23

Azure security - logical view

Let’s start with identity security.

Limits on permanent access: Azure VMware Solution uses the Contributor role in the Azure resource group that hosts the private cloud. Limit permanent access to prevent accidental or intentional misuse of Contributor rights. Use a privileged account management solution to audit and limit the time cloud consumers spend on highly privileged accounts.

Set up a privileged access group for Azure AD within Azure Privileged Identity Management (PIM) to manage Azure AD user and service principal accounts. The Azure VMware Solution Group can be used for creating and managing time-bound, justification-based access to clusters.

Audit history reports from Azure AD PIM can be used to track Azure VMware Solution administrative activities, operations, and assignments. Cloud users can archive the reports in Azure Storage for long-term audit retention.

Centralized identity management: Azure VMware Solution grants cloud consumers cloud administrator and network administrator credentials for the VMware environment. These administrative accounts are visible to all Azure VMware Solution Contributors with role-based access control (RBAC) access.

Use the VMware control plane’s RBAC capabilities to manage role and account access properly and prevent overuse or abuse of the CloudAdmin and network administrator users. Create multiple targeted identity objects, such as users and groups, using least-privilege principles. Limit access to the provided administrator accounts and configure them as break-glass accounts, used only when all other administrative accounts are unavailable.

With the provided CloudAdmin account, integrate AD DS or Azure AD DS with vCenter and NSX-T control applications and their administrative identities. Manage and operate Azure VMware Solution with domain-services-sourced users and groups, and don’t allow account sharing. Create custom vCenter roles and associate them with AD DS groups for fine-grained, privileged access control over the VMware control surfaces.

With Azure VMware Solution, cloud consumers can rotate and reset the NSX-T and vCenter administrative account passwords. Rotate these accounts regularly, and whenever the break-glass configuration is used.

Guest virtual machine identity management: Manage and protect business processes and data using centralized authentication and authorization for Azure VMware Solution guests. Manage applications and guests within the Azure VMware Solution framework. Set up guest VMs to use a centralized identity management solution for authentication and authorization.

AD DS or Lightweight Directory Access Protocol (LDAP) should handle identity management for Azure VMware Solution guest VMs and applications. Ensure that the domain services architecture accounts for outage scenarios so authentication and authorization continue to function during them. Connect AD DS with Azure AD to provide a seamless guest authentication and authorization experience.

Next, let’s look at environment and network security.

Native network security capabilities: Implement network security capabilities such as traffic filtering, Open Web Application Security Project (OWASP) rule compliance, unified firewall management, and distributed denial of service (DDoS) protection.
  • Traffic filtering controls traffic between segments. Use NSX or Azure NVA capabilities to implement guest network traffic filtering devices and limit access between guest network segments.

  • Cloud VMware Solution guest web applications are protected from generic web attacks by compliance with the OWASP Core Rule Set. Protect web applications hosted by Azure VMware Solution guests with the OWASP capabilities of Azure Application Gateway Web Application Firewall (WAF). Integrate WAF logs into your logging strategy and enable prevention mode using the latest policy.

  • Managing firewall rules reduces the risk of unauthorized access. For Azure VMware Solution, the firewall architecture contributes to a more comprehensive approach to network management and environmental security. Use stateful firewalls, which allow traffic-flow inspection, centralized rule management, and event collection.

  • Azure VMware Solution workloads are protected from DDoS attacks that could cause financial losses or poor user experiences. Apply DDoS protection on the Azure virtual network that hosts the ExpressRoute termination gateway for the Azure VMware Solution connection. Consider using Azure Policy for automatic enforcement of DDoS protection.

Controlled vCenter access: Uncontrolled access to vCenter increases the attack surface. Access NSX-T Manager and Azure VMware Solution vCenter securely from a dedicated privileged access workstation (PAW). Create a user group for this access and add individual user accounts to it.

Inbound internet request logging for guest workloads: Azure Firewall or an NVA that maintains an audit log of requests made to guest VMs is recommended. Integrate those logs into cloud consumers’ security incident and event management (SIEM) solution for appropriate monitoring and alerting. Azure event information and logging can be processed using Microsoft Sentinel before integrating existing SIEM solutions.

Session monitoring for outbound internet connection security: Detect unusual or suspicious outbound Internet activity by using rule control and session auditing from Azure VMware Solution. To ensure maximum security, decide when and where to perform outbound network inspections.

Rather than relying on the default Internet connectivity Azure VMware Solution provides, utilize specialized firewall, NVA, and virtual WAN services for outbound Internet connectivity.

When filtering egress traffic with Azure Firewall, use service tags and fully qualified domain names (FQDNs). Similar capabilities are available in other NVAs.
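The FQDN-based egress control described above boils down to matching destinations against an allow list. This standalone sketch mimics the idea; the rule patterns are hypothetical, not a real Azure Firewall policy:

```python
import fnmatch

# Hypothetical allow list in the spirit of Azure Firewall application rules.
ALLOWED_FQDNS = ["*.windowsupdate.com", "login.microsoftonline.com"]

def egress_allowed(fqdn: str) -> bool:
    """True if the destination FQDN matches any allow-list pattern."""
    return any(fnmatch.fnmatch(fqdn, pattern) for pattern in ALLOWED_FQDNS)

print(egress_allowed("v10.windowsupdate.com"))  # True: matches the wildcard rule
print(egress_allowed("malware.example.org"))    # False: no rule matches, deny
```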

Secure backups managed centrally: Use RBAC and delayed (soft) delete to prevent accidental or intentional deletion of backup data. Use Azure Key Vault to manage encryption keys, and restrict access to backup storage locations.

Ensure data is encrypted both in transit and at rest using Azure Backup or another backup technology validated for VMware solutions. Use resource locks and the soft-delete feature to protect against accidental or intentional deletion of backups using Azure Recovery Services vaults.

Finally, let’s look at the last component: guest application and VM security.

Detection of advanced threats: To prevent various security risks and data breaches, use endpoint security protection, security alert configuration, change control processes, and vulnerability assessments. Cloud consumers can use Microsoft Defender for Cloud to manage threats, protect endpoints, monitor security alerts, patch operating systems, and enforce compliance regulations.

Onboard cloud consumers’ guest VMs using Azure Arc for servers. Create dashboards and alerts using Azure Log Analytics, Azure Monitor, and Microsoft Defender for Cloud once cloud consumers are onboarded. To protect and alert on threats associated with VM guests, use Microsoft Defender Security Center.

Before cloud consumers migrate or create new guest VMs, install the Log Analytics agent on VMware VMs. Configure the MMA agent to send metrics and logs to the Azure Log Analytics workspace. Check that the Azure VMware Solution VM reports alerts to Azure Monitor and Microsoft Defender for Cloud after migration.

In addition, an Azure VMware Solution certified partner can assist cloud consumers in assessing VM security postures and providing regulatory compliance against Center for Internet Security (CIS) requirements.

Security analytics: Use security analytics to detect cyber-attacks by collecting, correlating, and analyzing security event data from Azure VMware Solution VMs and other sources. Microsoft Sentinel can access data from Microsoft Defender for Cloud. Connect data sources such as Azure Activity, Azure Resource Manager, Domain Name System (DNS), and other Azure services related to the Azure VMware Solution deployment. Consider using a partner data connector.

Guest VM encryption: Azure VMware Solution provides data-at-rest encryption for the underlying vSAN storage platform. Depending on the workload and environment, additional encryption may be necessary to protect the data. In these situations, consider encrypting the guest VM operating system (OS) and data with native guest OS tools, and protect the encryption keys in Azure Key Vault.

Database encryption and activity monitoring: To prevent unauthorized access, encrypt SQL and other databases running in Azure VMware Solution. Database workloads should be encrypted at rest using transparent data encryption (TDE) or an equivalent native database feature. Ensure that workloads use encrypted disks and that sensitive secrets are stored in a key vault dedicated to the resource group.

Cloud consumers can use Azure Key Vault for their keys in bring-your-own-key scenarios, such as BYOK for Azure SQL Database’s transparent data encryption. Maintain key and data management separately.

Monitoring for unusual database activity can reduce the risk of insider attacks. Microsoft recommends monitoring native databases with Activity Monitor or an Azure VMware Solution certified partner solution. Cloud consumers should use Azure database services if they want enhanced auditing controls.

Keys for extended security updates: Provide and configure keys for pushing and installing security updates to Azure VMware Solution VMs. Configure ESU keys using the Volume Activation Management Tool.

Code security: Ensure security measures are implemented in DevOps workflows to avoid security vulnerabilities in Azure VMware Solution workloads. Use modern authorization and authentication workflows like Open Authorization (OAuth) and OpenID Connect.

For a versioned repository that ensures the integrity of your codebase, use GitHub Enterprise Server on Azure VMware Solution. Build and run agents can be hosted in Azure VMware Solution or in a secure Azure environment.

AVS Governance

Plan for the AVS environment and workloads governance by designing and deploying the following recommendations. Figure 3-24 depicts the overall view of AVS governance.

A block diagram of AVS governance depicting environment governance and guest application and VM security.

Figure 3-24

Azure governance - logical view

Let’s start with environmental governance.

vSAN storage space: If vSAN storage space is insufficient, SLA guarantees may be affected. The SLA for Azure VMware Solution outlines customer and partner responsibilities. Assign proper priorities and owners to Percentage Datastore Disk Used alerts, and treat usage above 80% as critical.
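To make the alert tiers concrete, here is a minimal sketch of how the datastore-usage thresholds discussed in this chapter (70%, 75%, and 80%) might be mapped to severities and owners. The function name, tier labels, and owner mapping are illustrative only, not part of Azure VMware Solution:

```python
def datastore_alert_severity(used_pct: float) -> str:
    """Classify vSAN 'Percentage Datastore Disk Used' into an alert tier.

    The thresholds follow the figures used in this chapter; tune them
    to your own SLA obligations and growth rate.
    """
    if not 0 <= used_pct <= 100:
        raise ValueError("used_pct must be between 0 and 100")
    if used_pct > 80:   # critical: SLA guarantees may be at risk
        return "critical"
    if used_pct > 75:   # warning: plan expansion now
        return "warning"
    if used_pct > 70:   # informational: start a capacity review
        return "info"
    return "ok"

# Example: route each tier to an owner with the right priority
# (hypothetical team names).
OWNERS = {
    "critical": "on-call SRE",
    "warning": "platform team",
    "info": "capacity planning",
}
```

In practice these tiers would back Azure Monitor metric alerts rather than run as standalone code; the sketch only shows the classification logic.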

VM template storage policy: Storage policies that reserve too much vSAN capacity can be problematic. VM templates should have thin-provisioned storage policies, which do not require space reservations. When VMs do not reserve the entire amount of storage up front, storage resources can be used more efficiently.

Host quota governance: When there is insufficient host capacity for growth or disaster recovery (DR) needs, there can be a delay of 5-7 days in getting more host capacity. Consider growth and disaster recovery in the solution design when requesting host quotas and periodically review maximums and growth rates to ensure appropriate lead times for expansion requests. For example, if a cluster of three nodes needs three more for DR, request a six-node host quota. There is no additional charge for host quotas.

Failures-to-tolerate (FTT) governance: Maintain the SLA of Azure VMware Solution by establishing FTT settings proportional to the cluster size. When changing the size of the cluster, make sure the vSAN storage policy is set to the appropriate FTT setting. Be aware that changing FTT can impact storage availability as data is replicated.
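A sketch of this sizing rule, based on the default vSAN storage policies described later in this chapter (one host failure tolerated at three to five hosts, two at six to 16 hosts). The function is illustrative only; verify the policy against the current Azure VMware Solution documentation:

```python
def recommended_ftt(host_count: int) -> int:
    """Return a failures-to-tolerate (FTT) value proportional to cluster size.

    Mirrors the default vSAN storage policies described in this chapter:
    clusters of 3-5 hosts tolerate one host failure, 6-16 hosts tolerate two.
    """
    if host_count < 3:
        raise ValueError("an Azure VMware Solution cluster needs at least 3 hosts")
    if host_count <= 5:
        return 1
    if host_count <= 16:
        return 2
    raise ValueError("clusters larger than 16 hosts are out of scope for this sketch")
```

A governance check like this can run whenever a cluster is resized, flagging clusters whose storage policy no longer matches their host count.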

ESXi access: Access to ESXi hosts in Azure VMware Solution is restricted, but some third-party software requires it. Identify any third-party software that needs ESXi host access and verify that Azure VMware Solution supports it. If cloud consumers need ESXi host access, they must open a support request in the Azure portal.

ESXi host density and efficiency: Understand ESXi host utilization to achieve a good return on investment (ROI). Monitor overall node utilization against a healthy density of guest VMs to maximize Azure VMware Solution investments. Provide sufficient lead time for node additions when monitoring indicates that the Azure VMware Solution environment needs to be resized.

Network monitoring: Make sure internal network traffic is monitored for malicious or unknown traffic. For detailed insight into Azure VMware Solution networking operations, deploy VMware vRealize Network Insight (vRNI) and VMware vRealize Operations Manager.

Security planned maintenance and service health alerts: Prepare and respond appropriately to issues and outages by understanding and viewing service health. Set up alerts for Azure VMware Solution service issues, planned maintenance, security advisories, and health advisories. Microsoft’s recommended maintenance windows do not apply to workload activities in Azure VMware Solution.

Cost governance: Keep track of costs to ensure budget allocation and financial accountability. Implement a cost management solution to track costs, allocate costs, create budgets, monitor costs, and maintain good financial governance. Azure Cost Management and Billing tools make it easy to create a budget, create alerts, allocate costs, and produce reports for financial stakeholders.

Azure services integration: Avoid using the public endpoints of Azure PaaS services, since this can cause traffic to leave the cloud consumer’s network boundary. Use a private endpoint to access Azure services such as Azure SQL Database and Azure Blob Storage so that traffic stays within a defined virtual network boundary.

Next, let’s move forward with guest application and VM governance. Cloud consumers need a better understanding of cybersecurity readiness and response, with security posture awareness for Azure VMware Solution guest VMs and applications. The following components are essential.

Microsoft Defender enablement: For Azure services and Azure VMware Solution guest VM workloads, enable Microsoft Defender for Cloud.

Use Azure Arc-enabled servers: Azure Arc-enabled servers can be used to manage Azure VMware Solution guest VMs with tooling that replicates Azure native resource tooling, such as
  • Manage, report, and audit guest configurations and settings with Azure Policy

  • Configurations and supported extensions that simplify the deployment process

  • Using Update Management to manage updates for Azure VMware Solution guest virtual machines

  • Managing guest virtual machines using tags

Guest VM logging and monitoring:
  • Debug guest VMs more effectively by enabling diagnostic metrics and logging.

  • Enhance debugging and troubleshooting capabilities by implementing logging and querying capabilities.

  • Get VM insights on guest VMs to detect performance bottlenecks and operational issues.

  • Configure alerts to capture VM boundary events.

Cloud consumers need to deploy Log Analytics agents (MMA) on guest VMs before migrating or deploying new guest VMs into the Azure VMware Solution environment. Integrate Azure Log Analytics with the MMA and link the workspace with Azure Automation. Verify the status of any MMA agents deployed on guest VMs before migration with Azure Monitor.

Guest VM domain governance: To enable Azure VMware Solution guest VMs to auto-join an Active Directory domain without error-prone manual processes, use extensions such as the JsonADDomainExtension or equivalent automation options.
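As an illustration of the JsonADDomainExtension approach, the sketch below builds the extension’s public settings payload. The domain, OU path, and account names are hypothetical, and the join password belongs in the extension’s protected settings, which are deliberately not shown here:

```python
def domain_join_settings(domain: str, ou_path: str, join_user: str,
                         join_options: int = 3) -> dict:
    """Build the public settings payload for the JsonADDomainExtension.

    join_options=3 combines the join-domain and create-account flags.
    The join password must go in protectedSettings, never in this payload.
    """
    return {
        "Name": domain,          # AD domain to join
        "OUPath": ou_path,       # target organizational unit
        "User": f"{domain}\\{join_user}",
        "Restart": "true",       # reboot after joining
        "Options": join_options,
    }

# Hypothetical domain, OU, and service account for illustration.
payload = domain_join_settings("corp.contoso.com",
                               "OU=AVS,DC=corp,DC=contoso,DC=com",
                               "svc-join")
```

The payload would then be attached to the VM through ARM, Bicep, or an SDK; automating it this way avoids the error-prone manual join the paragraph above warns about.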

Guest VM update governance: The top attack vectors that can expose or compromise Azure VMware Solution guest VMs and applications are delayed or incomplete updates or patches. Make sure updates are promptly installed.

Guest VM backup governance: Plan regular backups to avoid missing backups or relying on old backups that could cause data loss. Schedule backups and monitor backup success with a backup solution. Monitoring and alerting backup events ensure that scheduled backups are successful.

Guest VM DR governance: During business continuity and disaster recovery (BCDR) events, poorly documented recovery point objectives (RPO) and recovery time objectives (RTO) can leave customers unhappy and operational goals unmet. Business continuity can be improved by implementing disaster recovery orchestration.

DR orchestration for Azure VMware Solution detects and reports any issues with continuous replication to disaster recovery sites. Determine the RPOs and RTOs for all Azure VMware Solution workloads, and design disaster recovery and business continuity solutions that meet the orchestration’s verified RPO and RTO requirements.
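As a sketch of how verified objectives might be checked after a DR drill (the helper and its names are hypothetical; real orchestration tooling reports this automatically):

```python
from dataclasses import dataclass

@dataclass
class RecoveryObjectives:
    """Documented objectives for one workload, in minutes."""
    rpo_minutes: int   # maximum tolerable data loss
    rto_minutes: int   # maximum tolerable downtime

def meets_objectives(obj: RecoveryObjectives,
                     measured_rpo: float,
                     measured_rto: float) -> list:
    """Return a list of violations from a DR drill; an empty list means compliant."""
    violations = []
    if measured_rpo > obj.rpo_minutes:
        violations.append(
            f"RPO exceeded: {measured_rpo} > {obj.rpo_minutes} min")
    if measured_rto > obj.rto_minutes:
        violations.append(
            f"RTO exceeded: {measured_rto} > {obj.rto_minutes} min")
    return violations
```

Recording drill results against documented objectives this way keeps RPO/RTO commitments testable rather than aspirational.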

AVS Compliance

Plan for the AVS environment and workload compliance by designing and deploying the following recommendations. Figure 3-25 depicts the overall view of AVS Compliance.

An image depicting the logical view of Azure compliance, with seven blocks labeled Microsoft Defender for Cloud monitoring, country- or industry-specific regulatory compliance, data retention and residency requirements, guest VM DR compliance, corporate policy compliance, data processing, and guest VM backup compliance.

Figure 3-25

Azure compliance - logical view

Let’s start with Microsoft Defender for Cloud Monitoring.

Microsoft Defender for Cloud monitoring: Monitor security and regulatory compliance with Defender for Cloud’s regulatory compliance view. To track deviations from the expected compliance posture, configure Defender for Cloud workflow automation.

Guest VM DR compliance: Track the compliance of Azure VMware Solution guest VMs with DR configurations to ensure their mission-critical applications and workloads remain available during a disaster. Provide at-scale replication provisioning, noncompliance status monitoring, and automatic remediation by using Azure Site Recovery or an Azure VMware Solution-certified BCDR solution.

Guest VM backup compliance: Track and monitor compliance with Azure VMware Solution guest VM backups to ensure that the VMs are backed up. Azure VMware Solution certified partners can provide an at-scale perspective, drill-down analysis, and an actionable interface for tracking and monitoring VM backups.

Country- or industry-specific regulatory compliance: Achieve compliance with country- and industry-specific regulations by ensuring Azure VMware Solution guest workload compliance. Recognize how regulatory compliance is handled in the cloud. Cloud consumers can view or download the Azure VMware Solution and Azure Audit reports from the Service Trust Portal to support the compliance story.

Report firewall audits on HTTP/S and non-HTTP/S endpoints in order to comply with regulatory requirements.

Corporate policy compliance: To prevent company rules and regulations breaches, monitor Azure VMware Solution guest workload compliance. Microsoft recommends using Azure Arc-enabled servers and Azure Policy, or an equivalent third-party solution. Regularly assess and manage compliance with applicable internal and external regulations for Azure VMware Solution guest VMs and applications.

Data retention and residency requirements: Data stored in clusters cannot be retained or extracted by Azure VMware Solution. In addition to terminating all running workloads and components, deleting a cluster also obliterates all cluster data and configuration settings, including public IP addresses. There’s no way to recover data from a deleted cluster.

Azure VMware Solution does not guarantee that all metadata and configuration data for running the service exists only within the deployed geographical region. For assistance with data residency requirements, contact Azure VMware Solution support.

Data processing: Before signing up for a cloud service, cloud consumers should read the terms and conditions. Azure VMware Solution customers whose support issues are transferred to VMware for L3 support should pay attention to the VMware data processing agreement. When VMware support is needed for a support issue, Microsoft shares professional service data and associated personal data with VMware. After that, Microsoft and VMware operate as two independent data processors.

AVS Management and Monitoring

To achieve operational excellence in the design and deployment of Azure VMware Solution, you need a highly well-designed management and monitoring solution. Here are the key considerations for Azure VMware Solution platform management and monitoring:
  • Create alerts and dashboards on the most critical metrics for the cloud consumer’s operations team.

  • VMware vRealize Operations Manager and vRealize Network Insight are licensed VMware solutions. Together, they offer a detailed view of the Azure VMware Solution environment. The NSX-T distributed firewall can be monitored through vCenter events and flow logs. vRealize Log Insight for Azure VMware Solution currently supports pull logging only: tasks, events, and alarms are captured, but unstructured data from hosts cannot be pushed to vRealize via Syslog.

  • VMware vRealize Operations doesn’t support in-guest memory collection via VMware Tools, although active and consumed memory utilization metrics are still supported.

  • Azure VMware Solution uses a local identity provider. Use a single administrator account for the initial configuration of Azure VMware Solution. Integrating Azure VMware Solution with Active Directory facilitates traceability of actions.

For platform management and monitoring of Azure VMware Solution, review the following Microsoft recommendations for your design and deployment:

  • Through the Azure portal, monitor the baseline performance of the Azure VMware Solution infrastructure.

  • vSAN storage has a finite capacity, so managing it is imperative. VM workloads should be stored on vSAN only. Reduce unnecessary storage on vSAN by considering the following design considerations:
    1. VM template storage can be moved off vSAN using Azure Blob Storage content libraries.

    2. Back up to an Azure virtual machine using Microsoft tooling or a partner vendor.
  • Azure VMware Solution requires that 25% of vSAN slack space be kept available on the cluster to ensure SLAs.

  • Integrate an existing identity provider with the Azure VMware Solution vCenter.

  • Monitoring an ExpressRoute circuit from on-premises into Azure is required in a hybrid environment.

  • In Azure Network Watcher, configure two connection monitors to monitor connectivity.
    1. Establish a Connection Monitor between an Azure resource and an Azure VMware Solution-based virtual machine. This monitor offers information about the network connection between Azure and Azure VMware Solution over ExpressRoute.

    2. Configure a second Connection Monitor between an on-premises VM and an Azure VMware Solution-based VM. This monitor displays the availability and performance of network connections between on-premises and Azure VMware Solution over ExpressRoute Global Reach.
  • The following KPIs need to be considered:
    1. Monitor and alert on vSAN datastore disk usage >70%.

    2. Monitor and alert on vSAN datastore disk usage >75%.

    3. Monitor and alert on CPU usage >80%.

    4. Monitor and alert on average memory usage >80%.
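The 25% vSAN slack-space requirement noted above translates into simple capacity arithmetic. The following sketch (illustrative function names, capacity assumed in TiB) shows how much of a raw vSAN datastore is actually available for workloads and when usage starts eating into the required slack:

```python
def usable_vsan_capacity_tib(raw_capacity_tib: float,
                             slack_fraction: float = 0.25) -> float:
    """Capacity left for workloads after reserving vSAN slack space.

    Azure VMware Solution requires 25% slack space on the cluster to
    maintain SLAs; the default slack_fraction reflects that.
    """
    if not 0 <= slack_fraction < 1:
        raise ValueError("slack_fraction must be in [0, 1)")
    return raw_capacity_tib * (1 - slack_fraction)

def breaches_slack(used_tib: float, raw_capacity_tib: float) -> bool:
    """True when current usage eats into the required 25% slack space."""
    return used_tib > usable_vsan_capacity_tib(raw_capacity_tib)
```

For example, a cluster with 100 TiB of raw vSAN capacity should plan for at most 75 TiB of workload data, which is why the KPI thresholds above trigger well before the slack boundary.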
The following recommendations will help you manage and monitor Azure VMware Solution workloads for your design and deployment:
  • Managing and monitoring Azure VMware Solution workloads is worthwhile during workload migration. Consider using Azure Arc-enabled servers to manage and monitor workloads hosted on Azure VMware Solution with Azure native tooling.

  • With Azure VMware Solution, thick provisioning is enabled by default. VMs can be provisioned more efficiently using thin provisioning on vSAN. Monitoring vSAN datastore capacity with the alert thresholds described earlier reduces the risk of running out of space.

  • Choose between thick- and thin-provisioned disks based on workload requirements; a VM can mix thick and thin disks. Evaluate the storage configuration regularly to ensure the vSAN datastore never runs over capacity.

  • If you plan to use a network virtual appliance, consider monitoring trace logs between on-premises and Azure. Monitor the connection between Azure and the VMware Solution.

  • Following the hybrid guidance for Windows and Linux, configure guest monitoring for VMs running on Azure VMware Solution. The following Azure integrations apply to both Windows and Linux:
    1. Log Analytics: This tool aggregates, queries, and analyzes logs generated by Azure resources.

    2. Microsoft Defender for Cloud: An infrastructure security management system that strengthens your security posture by providing advanced threat protection across hybrid and Azure resources. Azure VMware Solution VMs are continually assessed, and vulnerabilities are reported.

    3. Microsoft Sentinel: A cloud-native security information and event management (SIEM) solution. Across cloud and on-premises environments, it provides security analytics, alert detection, and automated threat response.

    4. Azure Update Management: Manages operating system updates for Windows and Linux machines on-premises and in the cloud.

    5. Azure Monitor: This monitoring solution collects, analyzes, and acts on telemetry from cloud and on-premises environments.

AVS Business Continuity and Disaster Recovery

Business continuity and disaster recovery are primary factors when planning an SDDC in a datacenter or on a cloud hosting business-critical application. Thanks to the widespread use of virtualization, the availability of datacenters has moved up the priority list. Whenever a new datacenter is planned, or a datacenter is migrated to the cloud, this is an important consideration.

Disaster recovery is the fastest and safest way, with minimal data loss, to get a business up and running after a disaster. A properly planned solution allows for disaster recovery and provides for planned failovers and catastrophe prevention.

Cloud consumers who run VMware on-premises incorporate Azure VMware Solution into their hybrid cloud strategy due to the immense benefits of using the Azure Global Infrastructure.

As organizations plan hybrid cloud strategies, disaster recovery is vital to ensure business continuity in a disaster.

This section discusses the architectural considerations and best practices for implementing disaster recovery with Azure VMware Solution, focusing mainly on the Azure VMware Solution private cloud and the VMware Site Recovery Manager (SRM) add-on.

When disaster recovery is not designed correctly, it can result in loss of service and failure to meet SLAs. Customers can suffer financial and reputational damage as a result.

Disaster recovery solutions require planning for business continuity. These solutions include
  • Risk analysis of current critical applications

  • Business impact in the event of a disaster

  • Preparedness for disasters

  • RTO and RPO

  • Virtualization’s role in recovery

When recovering a production site, cloud consumers should consider their computing, storage, and networking requirements. In a virtualized recovery site, resources do not have to sit idle all the time; the recovery site can operate in a distributed fashion, and consumers can run critical or non-critical workloads on it based on the available resources.

Cloud consumers will need additional storage resources to store the replicated data if they run the recovery site in distributed mode.

Here are the key design considerations and Microsoft recommendations for Azure VMware Solution business continuity:
  • Consider a validated backup solution for VMware VMs, such as Microsoft Azure Backup Server (MABS) or from one of Microsoft’s backup partners.

  • Azure VMware Solution is deployed with VMware vSAN storage policies. With three to five hosts, clusters can tolerate one host failure without data loss. With six to 16 hosts, two host failures are tolerable. Each virtual machine can have its own VMware vSAN storage policy. Although these policies are the default, cloud consumers can change the policies used by VMware VMs to meet their needs.

  • Azure VMware Solution comes with VMware high availability (HA) enabled by default. A node’s memory and compute resources are reserved through the HA admission control policy. This reservation ensures sufficient reserve capacity to restart workloads on another node in an Azure VMware Solution cluster.

  • The Azure VMware Solution private cloud can be backed up using Microsoft Azure Backup Server.

  • MABS does not currently support backing up to a secondary Azure VMware Solution private cloud.

  • Install Azure Backup Server in the same Azure region as the Azure VMware Solution private cloud. This deployment method reduces traffic costs, simplifies administration, and maintains the primary/secondary topology.

  • MABS can be deployed in Azure IaaS or in Azure VMware Solution. Microsoft highly recommends deploying it in an Azure virtual network outside the Azure VMware Solution private cloud, connected to the same ExpressRoute circuit, which avoids consuming vSAN’s limited capacity within the private cloud.

  • NSX Manager, HCX Manager, or vCenter can be reinstalled from a backup for Azure VMware Solution platform components.

Here are the key design considerations and Microsoft recommendations for Azure VMware solution disaster recovery:

  • For applications and VM tiers, align recovery time, capacity, and point objectives with business requirements. Design replication technologies appropriately to achieve these objectives.

  • Replication can be application-native, such as SQL Server Always On availability groups, or non-native, such as VMware Site Recovery Manager (SRM) or Azure Site Recovery. Select the disaster recovery site for the Azure VMware Solution private cloud; choosing the right disaster recovery tool depends on that site.

  • Scalable Site Recovery Manager supports migration from third-party locations to Azure VMware Solution.

  • VMware Site Recovery Manager offers cloud consumers the ability to back up their Microsoft Azure VMware Solution private cloud to a second Microsoft Azure VMware Solution private cloud.

  • Cloud consumers can use Azure Site Recovery to recover their Azure VMware Solution private cloud to Azure IaaS.

  • The Azure Site Recovery Deployment Planner can plan disaster recovery to Azure native services.

  • Workloads should be started in the correct order in the recovery plan after Azure Site Recovery failover.

  • Azure VMware Solution partner solutions such as JetStream Software, as well as VMware HCX, also support disaster recovery scenarios.

  • Determine which Azure VMware Solution workloads should be protected during a disaster recovery event. To reduce the costs associated with the disaster recovery implementation, consider protecting only those workloads critical to business operations.

  • Configure functional domain roles in the secondary environment, such as Active Directory domain controllers.

  • Cloud consumers need to enable ExpressRoute Global Reach between both back-end ExpressRoute circuits to support disaster recovery between Azure VMware Solution private clouds in distinct Azure regions. When required, these circuits can be used by disaster recovery solutions like VMware SRM and VMware HCX.

  • For disaster recovery, cloud consumers can use the same IP address space in the secondary Azure region as in the primary Azure region, but this adds engineering overhead to the solution foundation. Two approaches are possible:
    1. Cloud consumers can keep the same IP addresses for the recovered VMs that they used for the Azure VMware Solution VMs. In this case, create isolated VLANs or segments in the secondary site and ensure none of them are interconnected with the environment. When a failover occurs, the subnet moves to the secondary site, and disaster recovery routes must be adapted accordingly. This method minimizes reconfiguration of the VMs, but it adds engineering overhead.

    2. Alternatively, cloud consumers can use different IP addresses for the recovered VMs. If a VM is moved to the secondary site, the VMware Site Recovery Manager recovery plan details the custom IP map; select this map to change the IP address. With Azure Site Recovery, the new IP address is allocated from a defined virtual network.
  • Full and partial disaster recovery:
    1. VMware SRM supports both partial and full disaster recovery. Cloud consumers running Azure VMware Solution in regions one and two can fail over some or all of their VMs from the primary site to the secondary site.

    2. The requirements for VM recovery and IP address retention determine whether partial or full disaster recovery is appropriate.

    3. To maintain IP addresses during a partial disaster recovery with SRM, move the subnet’s gateway to the secondary Azure VMware Solution private cloud.
  • VMware Site Recovery Manager can be used when working with Azure VMware Solution in primary and secondary sites. Protected and recovery sites are also called primary and secondary sites.

  • Microsoft recommends Azure Site Recovery if Azure IaaS is the disaster recovery target.

  • Utilize automated recovery plans within each of the solutions to minimize manual input. Disaster recovery plans for the Azure VMware Solution private cloud can be implemented with VMware Site Recovery Manager or Azure Site Recovery. In recovery plans, machines are grouped into recovery groups for failover, and the recovery process is defined by creating small independent units that can fail over.

  • Microsoft recommends using the geopolitical region pair as a secondary disaster recovery environment considering the proximity of regions and reducing costs.

  • Don’t use the same address space twice. For example, use 192.168.0.0/16 in region 1 and 10.0.0.0/16 in region 2. This reduces the chance of overlapping IP addresses.

  • ExpressRoute Global Reach can be used to connect the Azure VMware Solution primary and secondary clouds.
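The address-space rule above is easy to enforce with a pre-flight check. This sketch (hypothetical site names) uses Python’s ipaddress module to flag overlapping CIDR ranges before a secondary region is deployed:

```python
import ipaddress

def overlapping_pairs(cidrs: dict) -> list:
    """Return pairs of sites whose planned address spaces overlap.

    cidrs maps a site name to its CIDR string, e.g. {"region1-avs": "10.0.0.0/16"}.
    """
    nets = {site: ipaddress.ip_network(cidr) for site, cidr in cidrs.items()}
    sites = sorted(nets)
    return [(a, b)
            for i, a in enumerate(sites)
            for b in sites[i + 1:]
            if nets[a].overlaps(nets[b])]

# Disjoint plan, as recommended in this section (hypothetical site names).
plan = {"region1-avs": "192.168.0.0/16", "region2-avs": "10.0.0.0/16"}

# A bad plan: the second range sits inside the first.
bad = {"region1-avs": "10.0.0.0/16", "region2-avs": "10.0.128.0/17"}
```

Running such a check in the deployment pipeline catches overlaps early; as noted later in this chapter, address overlap is not validated during deployment itself.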

Azure Platform Automation and DevOps

A series of best practices for automation and DevOps are implemented in the enterprise-scale landing zone. They can help deploy the VMware Solution Azure private cloud. This section discusses factors to consider when deploying Azure VMware Solution and provides guidelines for operational automation.

The Azure Cloud Adoption Framework architecture and best practices are used here, focusing on designing for scalability. The solution consists of two essential components. The first is guidance about Azure VMware Solution deployment and automation. The second is a set of open-source components that can be adapted to help consumers build their private clouds. The goal of this solution is to begin an end-to-end automation journey; still, organizations can decide which components to deploy manually, based on the considerations in this section.

The enterprise-scale for Azure VMware Solution automation repository provides templates and scripts that help cloud consumers deploy Azure VMware Solution. Before you deploy to the cloud, Microsoft recommends reviewing the templates to understand the deployed resources and the associated costs. Figure 3-26 depicts Azure deployment methods.

A block diagram of the Azure deployment methods depicting manual deployment, automated deployment, automated connectivity, and auto-scale.

Figure 3-26

Azure deployment methods

Manual deployment: Cloud consumers can configure and deploy Azure VMware Solution private clouds through the Azure portal. Smaller-scale deployments are best suited to this option. Whenever users repeatedly want to deploy large-scale Azure VMware Solution topologies, they should consider an automated deployment. In addition to configuring connectivity to the private cloud, cloud consumers can use the Azure portal to scale it manually.

Here are the key design considerations and Microsoft recommendations for an Azure VMware Solution manual deployment strategy:
  • Manual deployments can be used for initial pilots and small-scale environments in the cloud. They can also be used by cloud consumers who do not have existing automation or infrastructure-as-code practices.

  • During the deployment of Azure VMware Solution through the Azure portal, Azure CLI, or Azure PowerShell modules, cloud consumers are shown terms and conditions about data protection in the solution. If they use ARM APIs directly or deploy via ARM or Bicep templates, they need to review these terms and conditions before deploying automation.

  • Automate the Azure VMware Solution private cloud creation process to limit the manual intervention required for on-demand environments.

  • Within the Azure portal, cloud consumers can monitor the private cloud creation process by using the Deployments blade of the target resource group. Before proceeding, cloud consumers should confirm that the private cloud has been deployed successfully. If the deployment shows a Failed state, the private cloud might not be able to connect to vCenter, which might require removing and redeploying the private cloud.

  • It’s essential for cloud consumers who opt for manual deployment to document the configurations used to set up the private cloud. Download the deployment template to document the deployment; this template artifact includes a parameters file with the selected configurations and the ARM template used to deploy the private cloud.

  • Microsoft recommends placing a resource lock to restrict resource deletion if cloud consumers interact with the private cloud regularly through the Azure portal. Note that a read-only lock also blocks scale operations, so a delete lock is usually the better choice.

Automated deployment: Azure VMware Solution environments can be deployed repeatedly using automated deployments, allowing cloud users to design and deploy the environments on-demand. In this way, multiple domains and regions can be deployed efficiently and at scale. Furthermore, they offer an on-demand, repeatable, and low-risk deployment process.

Here are the key design considerations and Microsoft recommendations for an Azure VMware Solution automated deployment strategy:
  • Private cloud deployment of Azure VMware Solution may take several hours. Cloud consumers can monitor this process by viewing the deployment status in ARM. Ensure that appropriate timeout values are selected to accommodate the private cloud provisioning process. Cloud consumers may use a deployment pipeline or deploy with PowerShell or the Azure CLI.

  • As recommended in the network topology and connectivity recommendations, preallocate address ranges for private clouds and workload networks ahead of time, and add these address ranges to the environment configuration. Address overlap isn’t validated during deployment, so two private clouds with the same address range can create issues, as can overlapping networks within Azure or on-premises.

  • Cloud consumers can use service principals for deployment to provide least-privileged access. Additionally, cloud consumers can restrict access to the deployment process using Azure RBAC.

  • Private cloud users can deploy their applications using a DevOps strategy, using pipelines for automated and repeatable deployments without using local tools.

  • Build a minimal private cloud and then scale as needed.

  • Request quotas or capacity ahead of time to ensure a successful deployment.

  • Monitor the private cloud’s deployment process and status before deploying subresources, and make configuration updates only once the private cloud reaches a Succeeded state. Microsoft recommends that cloud consumers with failed private cloud deployments stop further operations and open a support ticket.

  • Include relevant resource locks in the automated deployment or ensure their application through policy.
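
Because address overlap isn't validated during deployment, a consumer-side pre-check is easy to add to a pipeline. Below is a minimal sketch using only Python's standard `ipaddress` module; the address plan shown is hypothetical.

```python
import ipaddress

def find_overlaps(cidrs):
    """Return pairs of CIDR blocks that overlap.

    Azure VMware Solution does not validate address overlap at
    deployment time, so checking planned ranges up front avoids
    hard-to-diagnose routing issues later.
    """
    networks = [ipaddress.ip_network(c) for c in cidrs]
    overlaps = []
    for i in range(len(networks)):
        for j in range(i + 1, len(networks)):
            if networks[i].overlaps(networks[j]):
                overlaps.append((str(networks[i]), str(networks[j])))
    return overlaps

# Hypothetical plan: a /22 per private cloud plus a workload segment.
planned = ["10.10.0.0/22", "10.20.0.0/22", "10.10.2.0/24"]
print(find_overlaps(planned))  # the /24 falls inside the first /22
```

A check like this can gate the pipeline stage that provisions the private cloud, failing fast before any ARM deployment starts.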

Automated networking: Connectivity can be set up via ExpressRoute after cloud consumers deploy the Azure VMware Solution private cloud. The following paths are vital for network connectivity:
  • An ExpressRoute connection to a virtual network gateway in an Azure virtual network or an Azure Virtual WAN.

  • Using Global Reach, connect Azure VMware Solution and an existing ExpressRoute.

Here are the key design considerations and Microsoft recommendations for Azure VMware Solution automated network connectivity:

  • Azure VMware Solution private clouds can be connected to Azure virtual networks or existing ExpressRoute circuits. This connection automatically advertises routes from the management and workload networks within the private cloud. Because there are no overlap checks, consider validating the advertised networks before connecting.

  • ExpressRoute authorization keys can be aligned with existing naming schemes for resources consumers connect to. In doing so, associated resources can be easily identified.

  • ExpressRoute virtual network gateways and ExpressRoute circuits may not reside in the same subscription as the Azure VMware Solution private cloud. Decide whether cloud consumers wish to access all of these resources through a single service principal.

  • Deploying NSX-T workload networking services through the Azure portal makes it easier to manage NSX-T components via NSX-T Manager. Cloud consumers should assess how much control they need over the network segments:

    1. Configure DNS zones for private DNS integration using NSX-T workload networking in the Azure portal.

    2. Use NSX-T workload networking within the Azure portal for network topologies that only need a single tier-1 gateway.

    3. Use NSX-T Manager directly for advanced configurations.
  • Use Azure Key Vault or a similar secret store to pass authorization keys between deployments if cloud consumers use separate service principals for Azure VMware Solution deployment and ExpressRoute configuration.

  • Azure VMware Solution private clouds can only carry out a limited number of parallel operations at a given time. In templates that define many private cloud subresources, Microsoft recommends using dependencies to deploy those subresources serially.
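
The serial-deployment recommendation above applies equally to script-based deployments. The following is a sketch of a poll-until-Succeeded loop that serializes subresource operations; the provider callables are simulated so the sketch is self-contained, and the state names mirror the ARM provisioning states discussed in this section.

```python
import time

def wait_for_succeeded(get_state, timeout_s=4 * 3600, poll_s=60):
    """Poll a provisioning-state callable until it reports 'Succeeded'.

    Private cloud provisioning can take hours, so the timeout must be
    generous; a 'Failed' state stops further operations (open a support
    ticket rather than retrying blindly).
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        state = get_state()
        if state == "Succeeded":
            return True
        if state == "Failed":
            raise RuntimeError("deployment failed; stop and open a support ticket")
        time.sleep(poll_s)
    raise TimeoutError("provisioning did not complete in time")

def deploy_serially(operations, get_state):
    """Run subresource operations one at a time, waiting between each."""
    for op in operations:
        op()
        wait_for_succeeded(get_state)

# Simulated provider: succeeds immediately so the sketch runs standalone.
log = []
ops = [lambda: log.append("segment"), lambda: log.append("dhcp")]
deploy_serially(ops, get_state=lambda: "Succeeded")
print(log)
```

In a real pipeline, `get_state` would query the private cloud's provisioning state through the ARM API rather than a simulated callable.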

Auto scaling: An Azure VMware Solution cluster scales by changing its number of hosts. Users can automate scaling a cluster in and out by modifying the per-cluster host count programmatically. This automation can run on demand or on a schedule, or be triggered by an Azure Monitor alert.

Here are the key design considerations and Microsoft recommendations for Azure VMware Solution auto scaling:
  • Automatic scale-out can provide more capacity on demand, but the cost of adding hosts must be considered. The subscription quota caps this cost implicitly, but manual limits should also be in place.

  • Before automating scale-in, consider its impact on running workloads and on the storage policies applied within the cluster. For example, RAID-5 workloads cannot be scaled in to a three-node cluster. Also account for memory and storage utilization, since high utilization can block scaling in.

  • Only one scale operation can run at a time, so scale operations across multiple clusters must be orchestrated.

  • Adding a new node to an existing cluster via the Azure VMware Solution scale operation is not instantaneous. Integrations and third-party solutions may not handle continuous host removal and addition, so validate that all third-party solutions behave as expected; this ensures that removing or adding hosts won't require additional steps when a product is refreshed or reconfigured.

  • Set hard limits for both scale-in and scale-out operations, independent of the subscription quota.

  • Request quota ahead of time so that it won't affect scaling. Rather than guaranteeing capacity, quotas allow deployment up to a specific limit. Monitor quota limits regularly to ensure there's always headroom.

  • If cloud consumers use an automated scaling system, ensure it is monitored and raises alerts when scale operations complete. This prevents unexpected scale operations from going unnoticed.

  • Before scale-in operations, use Azure Monitor metrics to confirm there is adequate headroom. Watch CPU, memory, and storage during and after scaling operations; this attention to capacity ensures that scaling does not affect SLAs.
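
Several of the scale-in considerations above (storage-policy minimum host counts and post-scale utilization headroom) can be combined into a single pre-check. The sketch below is illustrative only: the policy-to-minimum-hosts table and the 75 percent utilization threshold are assumptions to adjust for a real cluster.

```python
# Minimum host counts required by common vSAN storage policies
# (assumed values for illustration; verify against the policies in use).
POLICY_MIN_HOSTS = {"RAID-1 FTT=1": 3, "RAID-5 FTT=1": 4, "RAID-6 FTT=2": 6}

def can_scale_in(current_hosts, policies_in_use, cpu_pct, mem_pct, storage_pct,
                 max_util_pct=75.0):
    """Return (allowed, reason) for removing one host from a cluster."""
    target = current_hosts - 1
    for policy in policies_in_use:
        if target < POLICY_MIN_HOSTS[policy]:
            return False, f"{policy} requires at least {POLICY_MIN_HOSTS[policy]} hosts"
    # Estimate utilization after losing one host's share of capacity.
    factor = current_hosts / target
    for name, pct in (("cpu", cpu_pct), ("memory", mem_pct), ("storage", storage_pct)):
        if pct * factor > max_util_pct:
            return False, f"{name} would reach {pct * factor:.0f}% after scale-in"
    return True, "ok"

print(can_scale_in(4, ["RAID-5 FTT=1"], 40, 45, 50))  # blocked: RAID-5 needs 4 hosts
print(can_scale_in(6, ["RAID-1 FTT=1"], 40, 45, 50))  # allowed
```

A check like this would run before each automated scale-in, fed by current Azure Monitor metrics rather than the literal values shown here.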

Azure VMware Solution private clouds exist as resources within Azure Resource Manager (ARM) and are accessible through several automation tools. First-party Microsoft tooling usually supports new ARM specifications within a few weeks of release. The considerations in this section are presented so they can be applied to different toolkits from an automation perspective.

Here are the key design considerations and Microsoft recommendations for Azure VMware Solution automated tooling:
  • To define configuration as a single artifact, use declarative tools such as ARM and Bicep templates. Command-line and script-based tools such as the Azure CLI and PowerShell take a more imperative, step-by-step approach.

  • Third-party automation tools, such as Terraform, can be used to deploy Azure VMware Solution and Azure native services. Ensure that the tooling currently supports the Azure VMware Solution features you wish to use.

  • Be mindful of failure-to-deploy implications when using script-based deployment, and monitor appropriately. Plan to monitor both the deployment and the Azure VMware Solution private cloud status.

  • Automate the deployment of Azure VMware Solution using the Azure CLI, PowerShell, or declarative ARM or Bicep templates.

  • When possible, run what-if analysis before executing changes, and pause for verification before any resource deletion.

  • Cloud consumers can use Azure Blueprints to deploy infrastructure as code in a single deployment operation. Blueprint deployments are stamped and repeatable, which can remove the need for separate automation pipelines.
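
As a concrete illustration of the declarative approach, here is a minimal Bicep sketch of a private cloud resource. The API version, SKU name, cluster size, and network block are assumptions; verify them against the current `Microsoft.AVS/privateClouds` ARM reference before use.

```bicep
// Minimal sketch of a declarative AVS private cloud deployment.
// API version, SKU name, and address block are assumptions; verify
// them against the current Microsoft.AVS ARM reference before use.
param location string = resourceGroup().location
param networkBlock string = '10.10.0.0/22'

resource privateCloud 'Microsoft.AVS/privateClouds@2022-05-01' = {
  name: 'avs-sddc-01'
  location: location
  sku: {
    name: 'av36'
  }
  properties: {
    managementCluster: {
      clusterSize: 3 // start minimal, then scale as needed
    }
    networkBlock: networkBlock
  }
}
```

Subresources such as NSX-T segments or DHCP configuration would be declared with `dependsOn` relationships (or implicit parent references) so that ARM deploys them serially, in line with the parallel-operation limits noted earlier.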

Creating resources within vCenter and NSX-T Manager can also be automated as part of an Azure VMware Solution private cloud. Here are some of the tooling options for automation at the VMware level:
  • PowerCLI for VMware vSphere automation

  • PowerCLI for VMware NSX-T automation

  • Providers such as Terraform for VMware NSX-T and VMware vSphere

  • vRealize Automation and vRealize Operations

Here are the key design considerations and Microsoft recommendations for Azure VMware Solution PowerCLI/VMware vSphere automation:
  • VMware vCenter can be completely controlled programmatically using PowerCLI to create VMs, resource pools, and templates.

  • Since VMware vCenter is only accessible through private connectivity or a private IP address, cloud consumers must install PowerCLI on a machine connected to the Azure VMware Solution management network. If possible, use a self-hosted pipeline execution agent. A virtual machine within a virtual network or NSX-T segment can run PowerCLI with this agent.

  • Cloud consumers might not be able to perform certain operations due to the CloudAdmin role's limitations. Validate that the CloudAdmin role grants the permissions required for the automation you plan to implement.

  • Consider using a service account for VMware vCenter level automation via Active Directory integration for least privilege access.

Here are the key design considerations and Microsoft recommendations for Azure VMware Solution PowerCLI/VMware NSX-T automation:
  • In an Azure VMware Solution private cloud, NSX-T is accessible to the admin user by default. Changes made via PowerCLI or the NSX-T APIs can impact this default access. Modifying Microsoft-managed components such as transport zones and tier-0 gateways is not permitted.

  • To communicate with NSX-T, the VM running PowerCLI needs to be connected to the Azure VMware Solution private cloud.

  • The workload network can be controlled through ARM. This allows Azure CLI and PowerShell operations against the ARM API to use Azure RBAC rather than NSX-T identities.

Here are the key design considerations and Microsoft recommendations for Azure VMware Solution Terraform for VMware NSX-T and VMware vSphere:
  • Resources can be deployed to the cloud using vSphere and NSX-T providers for Terraform. A declarative approach is used to deploy these resources in the private cloud.

  • Terraform needs a private connection to the private cloud management network to communicate with vCenter and NSX-T Manager API endpoints. Think about deploying from a virtual machine on Azure that can route traffic to the private cloud.

Here are the key design considerations and Microsoft recommendations for Azure VMware Solution vRealize Automation and vRealize Operations:
  • Cloud consumers can deploy virtual machines within Azure VMware Solution using vRealize Automation as in an on-premises environment.

  • Azure VMware Solution supports a limited number of deployment models. It is possible to host the vRealize Automation appliances on-premises or utilize vRealize Cloud Management.

  • vRealize Automation and vRealize Operations appliances require private connectivity to Azure VMware Solution, just as PowerCLI does.

Cloud consumers can set up automation at the VM level within Azure VMware Solution workloads using tools such as Ansible, Chef, and Puppet. Azure Automation also provides agents for VM-level configuration of workloads in the cloud.

Summary

In this chapter, you read about Azure’s well-architected framework; the AVS Solution building blocks; AVS network topology and connectivity; AVS identity and access management; AVS security, governance, and compliance; AVS management and monitoring; AVS business continuity and disaster recovery; and AVS platform automation.

In the next chapter of the book, you will read about the planning methodology of AVS Solution and the assessment and deployment of Azure VMware Solution.
