Chapter 16
Scenarios and Reference Architectures

THE AWS CERTIFIED ADVANCED NETWORKING – SPECIALTY EXAM OBJECTIVES COVERED IN THIS CHAPTER MAY INCLUDE, BUT ARE NOT LIMITED TO, THE FOLLOWING:

  • Domain 1.0: Design and Implement Hybrid IT Network Architectures at Scale
  • 1.2 Given a scenario, derive an appropriate hybrid IT architecture connectivity solution
  • Domain 2.0: Design and Implement AWS Networks
  • 2.2 Given customer requirements, define network architectures on AWS
  • 2.3 Propose optimized designs based on the evaluation of an existing implementation


Introduction to Scenarios and Reference Architectures

As you have seen throughout this guide, AWS provides many network services and features to help you build highly available, robust, scalable, and secure networks in the cloud. This chapter covers scenarios and reference architectures for combining many of these network components to meet common customer requirements. These scenarios include implementing network patterns that create hybrid networks and span multiple regions and locations. The exercises at the end of this chapter will help you design appropriate network architectures on AWS. Understanding how to architect networks to meet customer requirements is required to pass the exam, and we highly recommend that you complete the exercises in this chapter.

Hybrid Networking Scenario

Imagine that you work for a company that is looking to expand a flagship application from a company data center onto AWS. The application has been successfully serving your customers in Europe, and you have been asked to extend application functionality quickly into the eu-central-1 region. Your application’s current design is depicted in Figure 16.1.

Diagram shows network architecture that includes users on top, followed by web server, application server, primary database, and backup database tiers.

FIGURE 16.1 Current application network design

As you can see, the application implements a traditional “N-tier” architecture with web, application, and database tiers. All user data is stored in a relational database. Your initial task is to scale the web and application tiers to support increased demand for web and application server resources. As a result, you propose the network architecture depicted in Figure 16.2.

Diagram shows network architecture that includes users, followed by Amazon Route 53, and two branches. One branch contains web server, app server, and database tiers; the other branch is AWS, containing VPC web and app servers. The VPC app server is linked to the database tier.

FIGURE 16.2 Web and application server network design

This design adds Amazon Route 53 to provide Domain Name System (DNS)-based routing between AWS and your existing on-premises resources. On the AWS side, it hosts the web and application tiers behind Elastic Load Balancing. Lastly, it provides back-end connectivity for the application to access data from its on-premises relational database.

For this network design, use of Amazon Route 53 Weighted Round Robin (WRR) routing and health checks is recommended so that traffic can be dialed up and down based on what percentage of traffic you would like to send to AWS versus your on-premises resources. The use of other Amazon Route 53 routing options (for example, latency-based routing) is not recommended because they do not provide as much control over how much traffic will be sent to AWS versus your on-premises resources. This lack of control could lead to several undesirable scenarios, such as the following:

  • Excess traffic still getting directed to and overloading on-premises resources.
  • Too much traffic getting directed to AWS and overloading your AWS resources. Ideally, your application teams will have properly implemented AWS scaling features like Auto Scaling to take advantage of elasticity in the cloud. Hybrid networking scenarios such as this, however, often require higher degrees of network traffic control to ensure that network and application teams are aligned on scaling expectations.
  • Too much traffic getting directed to AWS and overloading your provisioned back-end network connectivity between AWS and the on-premises network.
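As a minimal sketch of the WRR approach described above, the following builds the change batch that would be submitted to the Route 53 `change_resource_record_sets` API. The domain name and endpoint hostnames are hypothetical, and the weights shown (10/90) are only an example starting point:

```python
def weighted_change_batch(domain, aws_target, onprem_target,
                          aws_weight, onprem_weight):
    """Build a Route 53 change batch with two weighted CNAME records:
    one pointing at AWS, one at the on-premises endpoint."""
    def record(set_id, target, weight):
        return {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": domain,
                "Type": "CNAME",
                "SetIdentifier": set_id,   # distinguishes the weighted records
                "Weight": weight,          # relative share of DNS responses
                "TTL": 60,                 # short TTL so weight changes apply quickly
                "ResourceRecords": [{"Value": target}],
            },
        }
    return {"Changes": [record("aws", aws_target, aws_weight),
                        record("on-premises", onprem_target, onprem_weight)]}

def aws_share(aws_weight, onprem_weight):
    """Fraction of DNS responses Route 53 answers with the AWS record."""
    return aws_weight / (aws_weight + onprem_weight)

# Start by sending roughly 10 percent of traffic to AWS (hypothetical names).
batch = weighted_change_batch("app.example.com",
                              "elb.eu-central-1.example-aws.com",
                              "origin.example-dc.com", 10, 90)
```

Dialing traffic up is then a matter of resubmitting the batch with new weights; attaching a `HealthCheckId` to each record set additionally lets Route 53 route around a failed side.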

The design shown in Figure 16.2 retains application data on-premises, and therefore it requires careful back-end connectivity consideration. Many customers start with a Virtual Private Network (VPN) connection because VPN connections can often be set up more quickly than AWS Direct Connect connections. VPN connections can be useful for experimenting with cloud bursting, as a bridge while establishing AWS Direct Connect connections, or when back-end connectivity bandwidth is relatively low and can tolerate Internet-influenced variable latency and jitter. AWS Direct Connect connections should be leveraged for high bandwidth needs, such as when multiple 10 Gbps connections are required, or when your applications need consistent network latency with minimal network jitter.

This design could also be augmented in a number of different ways, depending on application requirements, including:

  • For simplicity, the diagram in Figure 16.2 does not depict the use of multiple Availability Zones and their associated subnets. It is an AWS best practice to leverage multiple Availability Zones for each application tier.
  • The database tier could be moved to AWS. This is especially useful if you would like to move to a managed database service such as Amazon Relational Database Service (Amazon RDS) or when an application is running into on-premises scaling challenges that could be addressed by migrating to a more scalable database such as Amazon Aurora.
  • A read replica of the database tier could be replicated between AWS and the on-premises network. Replicating the database could potentially reduce back-end network traffic or latency for application database read operations. Similarly, a database caching layer such as Amazon ElastiCache could be leveraged to improve application read performance.
  • Amazon CloudFront could be included to reduce latency for serving content to your users and offload requests to your application resources either in AWS or on-premises.
  • AWS WAF could be included to provide an additional layer of security to your application both in AWS and on-premises.

Multi-Location Resiliency

For this next scenario, consider a company that is looking to implement multi-location resiliency for a flagship application. The application must be able to scale up and down gracefully based on user demand, and it must be capable of surviving the failure of multiple data centers, including the loss of an entire region. In the event of a multi-region disaster, the company still wants to be able to serve a static version of the website to users. To accomplish this goal, we will break down the requirements by regional, multi-regional, and disaster recovery components.

Figure 16.3 depicts a highly available regional design. Users are directed by Amazon Route 53 to an Application Load Balancer configured with web application firewall rules, cross-zone load balancing, connection draining, and instance health checks. This load balancer is responsible for applying security rules to user traffic while also distributing valid request load evenly across all healthy instances in multiple Availability Zones. It also integrates with a Multi-AZ Auto Scaling group to ensure that in-flight requests are handled gracefully before an Amazon Elastic Compute Cloud (Amazon EC2) instance is removed from the load balancer. This combination protects the application from Availability Zone outages, ensures that a minimal number of Amazon EC2 instances are running, and can respond to load changes by scaling each group’s Amazon EC2 instances up or down as needed.
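As a rough sketch of the instance health check portion of this design, the parameters below show how an Application Load Balancer target group might be configured; in practice the dictionary would be passed to the `elbv2` `create_target_group` API. The name, path, and threshold values are illustrative assumptions, not values from the scenario:

```python
def target_group_params(name, vpc_id):
    """Illustrative target group settings with HTTP instance health checks."""
    return {
        "Name": name,
        "Protocol": "HTTP",
        "Port": 80,
        "VpcId": vpc_id,
        "HealthCheckProtocol": "HTTP",
        "HealthCheckPath": "/health",        # hypothetical app health endpoint
        "HealthCheckIntervalSeconds": 30,    # probe each instance every 30 seconds
        "HealthyThresholdCount": 2,          # successes before marking healthy
        "UnhealthyThresholdCount": 3,        # failures before marking unhealthy
    }

params = target_group_params("flagship-web", "vpc-12345678")  # hypothetical IDs
```

With settings like these, an instance that fails three consecutive probes stops receiving traffic, while connection draining lets its in-flight requests complete before the Auto Scaling group replaces it.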

Diagram shows network architecture that includes users on top, followed by Amazon Route 53, internet gateway, and VPC. VPC includes auto scaling group and two availability zones.

FIGURE 16.3 Regional availability

Lastly, the Amazon EC2 instances are configured to connect to a Multi-AZ Amazon RDS database. Amazon RDS creates a master database and synchronously replicates all data to a slave database instance in another Availability Zone. Amazon RDS monitors the health of the master instance and will automatically fail over to the slave instance in the event of a failure.

Figure 16.4 expands this application’s network architecture to another region. In this example, the first region’s network infrastructure is replicated into a second region, including the application’s Virtual Private Cloud (VPC), subnets, Application Load Balancer and web application firewall rules, Amazon EC2 instances, and Auto Scaling configuration. Additionally, the Amazon Route 53-managed alias record for this domain is updated to include both load balancers with a health check and failover routing policy to reroute traffic from the primary region to a secondary region in the event of a regional failure. Additionally, the Amazon RDS configuration is updated to create an asynchronous read replica of the application’s database in the new region. In the event of a regional failure, the Amazon RDS read replica could be promoted to become the master database instance.
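The cross-region replication step above can be sketched as the parameters for the RDS `create_db_instance_read_replica` API call (the instance identifier, account number, and ARN are hypothetical):

```python
def cross_region_replica_params(replica_id, source_arn, source_region):
    """Illustrative parameters for creating a cross-region Amazon RDS
    read replica via the rds create_db_instance_read_replica API."""
    return {
        "DBInstanceIdentifier": replica_id,
        "SourceDBInstanceIdentifier": source_arn,  # full ARN for cross-region sources
        "SourceRegion": source_region,
    }

replica = cross_region_replica_params(
    "flagship-replica-eu",
    "arn:aws:rds:us-east-1:111122223333:db:flagship-db",  # hypothetical ARN
    "us-east-1",
)
# During a regional failover, the replica would be promoted with the
# rds promote_read_replica API, after which it accepts writes.
```

Note that this replication is asynchronous, so a promotion after a sudden regional failure may lose the most recent writes; the application team should agree on that recovery point objective up front.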

Diagram shows network architecture that includes users on top, followed by Amazon Route 53, internet gateway, and two VPC branches. Each VPC includes an auto scaling group and two availability zones.

FIGURE 16.4 Multi-regional resiliency

A variation of this design could include adding Amazon CloudFront and AWS WAF to centrally manage web application firewall rules for the application. Another variation could include creating an Amazon Route 53 latency-based routing policy instead of a failover policy. This approach would create an active-active environment that routes requests to the closest healthy load balancer based on minimizing network latency. This scenario requires tight coordination with the application team to ensure that additional database network connectivity requirements are met. Approaches for managing database connectivity include the following:

  • Configuring the application in the second region to leverage its local database replica for read operations.
  • Implementing cross-region network connectivity between your VPCs to allow Amazon EC2 instances in the second region to connect to the master database to perform writes. Refer to the following chapters for additional information.
    • VPC Peering in Chapter 2, “Amazon Virtual Private Cloud (Amazon VPC) and Networking Fundamentals”
    • VPN connections with Amazon EC2 instances in Chapter 4, “Virtual Private Networks (VPN)”
    • Transit VPC in Chapter 12, “Hybrid Architectures”
  • Implementing a write Application Programming Interface (API) that leverages an Amazon Route 53 failover routing policy to direct user writes to the region hosting the master database instance.
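The latency-based, active-active variation described above can be sketched as a pair of Route 53 alias record sets, one per region. The domain, load balancer DNS names, and hosted zone IDs are hypothetical; in practice the change batch would be submitted via `change_resource_record_sets`:

```python
def latency_record(domain, region, elb_dns, elb_zone_id):
    """One latency-based alias record pointing at a regional load balancer."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": domain,
            "Type": "A",
            "SetIdentifier": region,        # one record set per region
            "Region": region,               # Route 53 answers with the lowest-latency region
            "AliasTarget": {
                "HostedZoneId": elb_zone_id,   # the load balancer's hosted zone
                "DNSName": elb_dns,
                "EvaluateTargetHealth": True,  # skip regions whose load balancer is unhealthy
            },
        },
    }

changes = {"Changes": [
    latency_record("prod.domain.com", "us-east-1",
                   "primary-elb.example.com", "Z0000000000001"),   # hypothetical values
    latency_record("prod.domain.com", "eu-central-1",
                   "secondary-elb.example.com", "Z0000000000002"),
]}
```

Setting `EvaluateTargetHealth` to true is what keeps this active-active design resilient: a region whose load balancer reports unhealthy is removed from latency-based answers automatically.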

Figure 16.5 expands this architecture to include a final multi-region disaster recovery failover environment. In this example, two additional Amazon Route 53 aliases are created for the application. Users are directed to the application’s user-friendly domain name (such as www.domain.com), which is configured by Amazon Route 53 with a failover alias record pointing to the application’s production domain name (for example, prod.domain.com) as primary and the application’s static application domain name (such as static-app.domain.com) for failover.

Diagram shows architecture in which users are directed to www.domain.com, followed by Amazon Route 53, which connects to the VPCs through prod.domain.com and to Amazon S3 through static-app.domain.com and Amazon CloudFront.

FIGURE 16.5 Multi-region disaster planning

The production domain name maintains the previous configuration, which includes records pointing to each regional Application Load Balancer and health checks. The static domain name is configured with a CNAME record pointing to an Amazon CloudFront distribution with an Amazon Simple Storage Service (Amazon S3) bucket origin hosting a static version of the application. In this scenario, the application’s user-friendly domain name will direct traffic to the application’s production load balancers as long as at least one of them is healthy. In the event that all resources across multiple Availability Zones and regions are unhealthy, Amazon Route 53 will direct users to an Amazon CloudFront distribution and Amazon S3 bucket in yet another region.
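The disaster recovery failover pair for the user-friendly domain can be sketched as follows. The scenario itself uses alias records; plain CNAME records are used here as a simplification to keep the sketch self-contained, and the TTL is an illustrative assumption:

```python
def failover_record(domain, role, target):
    """One CNAME in a Route 53 failover pair (role is PRIMARY or SECONDARY)."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": domain,
            "Type": "CNAME",
            "SetIdentifier": role.lower(),
            "Failover": role,   # PRIMARY answers while healthy; SECONDARY otherwise
            "TTL": 60,          # short TTL so failover takes effect quickly
            "ResourceRecords": [{"Value": target}],
        },
    }

dr_changes = {"Changes": [
    # www.domain.com normally resolves to the production name...
    failover_record("www.domain.com", "PRIMARY", "prod.domain.com"),
    # ...and falls back to the static site if every production region fails.
    failover_record("www.domain.com", "SECONDARY", "static-app.domain.com"),
]}
```

The layering matters: prod.domain.com carries its own multi-region records and health checks, so the SECONDARY record here is only answered once every production endpoint is unhealthy.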

Additionally, Amazon CloudFront could be used to serve both static and dynamic content to your customers. Using Amazon CloudFront allows your content to be delivered to users from edge locations distributed across the world, can reduce the load on your back-end resources, and provides many additional benefits. More details are available in Chapter 7, “Amazon CloudFront.”

Summary

In this chapter, you learned about some additional scenarios where multiple AWS network services and features can be combined to build highly available, robust, scalable, and secure networks in the cloud to meet common customer requirements. These scenarios included creating hybrid networks to support application scaling to AWS and implementing highly robust applications that span multiple regions and locations.

Resources to Review

For further learning, review the following URLs:

Exam Essentials

Understand the different types of Amazon Route 53 routing and know when you would use each one. Amazon Route 53 provides a number of different routing policies. These routing policies affect how network traffic is sent to your applications. Make sure that you understand the implications of each option so that you are able to map the most appropriate routing feature to different application requirements. Review Chapter 6, “Domain Name System and Load Balancing” for more information about Amazon Route 53 features.

Understand the different types of on-premises network connectivity requirements and know when you would use each one. AWS provides both VPN and AWS Direct Connect for connecting on-premises networks with AWS. Make sure that you are familiar with the implications of each option and can apply the appropriate solution to meet application connectivity requirements. Review Chapter 4, and Chapter 5, “AWS Direct Connect,” for details about each of these options.

Understand the health check capabilities for services such as Amazon Route 53 and Elastic Load Balancing. AWS provides many features for monitoring the health of your application. Make sure that you are familiar with not only these features, but also how they can be used together to provide end-to-end application health monitoring and dynamic routing around failed application components. Review Chapter 6 for more information about Amazon Route 53 features.

Exercises

You should have performed the exercises in previous chapters for all of the services covered in this chapter. Take the time to go back and review previous chapters and their associated exercises to make sure that you are familiar with the implications of using each individual service or feature. The following exercises are designed to help you think about additional scenarios and determine how you would architect network connectivity solutions.


Review Questions

  1. Which Amazon Route 53 routing policy would be the most appropriate for gradually migrating an application to AWS?

    1. Weighted
    2. Latency-based
    3. Failover
    4. Geolocation
  2. When connecting an on-premises network to AWS, which option reuses existing network equipment and Internet connections?

    1. VPN connection
    2. AWS Direct Connect
    3. VPC Private Endpoints
    4. Network Load Balancer
  3. Which Amazon Route 53 routing policy would be the most appropriate for directing users to application resources that offer payment in their local currency?

    1. Weighted
    2. Latency-based
    3. Failover
    4. Geolocation
  4. Your current web application’s network security architecture includes an Application Load Balancer, locked down Security Groups, and restrictive VPC route tables. You have been asked to implement additional controls for temporarily blocking hundreds of noncontiguous, malicious IP addresses. Which AWS service or features should you add to this architecture?

    1. AWS WAF
    2. Network ACLs
    3. AWS Shield
    4. AWS PrivateLink
  5. A previous network administrator implemented a transit VPC architecture using Amazon EC2 instances with 10 Gbps networking to facilitate communication between multiple AWS VPCs in various regions and on-premises resources. Over time, the transit VPC Amazon EC2 instance network bandwidth has become saturated with on-premises traffic, causing application requests to fail. What design recommendations can you make to reduce application failures?

    1. Implement AWS Direct Connect and migrate to an AWS Direct Connect gateway.
    2. Enable SR-IOV on your transit VPC instance ENIs.
    3. Offload network traffic to AWS PrivateLink to facilitate connectivity with on-premises resources.
    4. Upgrade from 10 Gbps Amazon EC2 instances to 25 Gbps instances with ENA.
  6. A previous network administrator implemented a transit VPC architecture to facilitate communication between multiple AWS networks and on-premises resources. Over time, the transit VPC Amazon EC2 instance network bandwidth has become saturated with cross-region traffic. What highly available design change should you recommend for this network?

    1. Migrate cross-region traffic to a point-to-point VPN connection between an Amazon EC2 instance in each VPC.
    2. Disable route propagation on your VPC route tables to disable cross-region traffic.
    3. Leverage VPC Peering connections between VPCs across regions.
    4. Implement network ACLs to rate limit cross-region traffic.
  7. You support an application that is hosted in ap-northeast-1 and eu-central-1. Users from around the world sometimes complain about long page-load times. Which Amazon Route 53 routing policy would provide the best user experience?

    1. Weighted
    2. Latency-based
    3. Failover
    4. Geolocation
  8. When connecting an on-premises network to AWS APIs, which option provides the least amount of network jitter and latency?

    1. VPN connection
    2. AWS Direct Connect private VIF
    3. AWS Direct Connect public VIF
    4. VPC Endpoints
  9. Which combination of Amazon Route 53 policies provides location-specific services with redundant, backup connections? (Choose two.)

    1. Weighted
    2. Latency-based
    3. Failover
    4. Geolocation
    5. Simple
  10. What is a scalable way to provide Amazon EC2 instances in a private subnet with IPv4 egress access to the Internet with no need for network administration?

    1. Create a transit VPC with network address translation for all your VPCs.
    2. Create an egress-only Internet Gateway.
    3. Create multiple Amazon EC2 NAT instances in each Availability Zone.
    4. Create NAT Gateways.
  11. Your users have started to complain about poor application performance. You determine that your on-premises VPN connection is saturated with authentication and authorization traffic to the on-premises Microsoft Active Directory (AD) environment. Which option will reduce on-premises network traffic?

    1. Replicate Microsoft AD to Amazon EC2 instances in a shared service network and migrate to VPC Peering connections.
    2. Migrate from a VPN connection to multiple AWS Direct Connect connections.
    3. Create a trust relationship between AWS Directory Service and your on-premises Microsoft AD and migrate to VPC Peering connections.
    4. Offload network traffic to AWS PrivateLink to facilitate connectivity with Microsoft AD on-premises.