Appendix
Answers to Review Questions

Chapter 1: Introduction to Cloud Computing and AWS

  1. B. Elastic Beanstalk takes care of the ongoing underlying deployment details for you, allowing you to focus exclusively on your code. Lambda will respond to trigger events by running code a single time, Auto Scaling will ramp up existing infrastructure in response to demand, and Route 53 manages DNS and network routing.
  2. A. CloudFront maintains a network of endpoints where cached versions of your application data are stored to provide quicker responses to user requests. Route 53 manages DNS and network routing, Elastic Load Balancing routes incoming user requests among a cluster of available servers, and Glacier provides high‐latency, low‐cost file storage.
  3. D. Elastic Block Store provides virtual block devices (think: storage drives) on which you can install and run filesystems and data operations. It is not normally a cost‐effective option for long‐term data storage.
  4. A, C. AWS IAM lets you create user accounts, groups, and roles and assign them rights and permissions over specific services and resources within your AWS account. Directory Service allows you to integrate your resources with external users and resources through third‐party authentication services. KMS is a tool for generating and managing encryption keys, and SWF is a tool for coordinating application tasks. Amazon Cognito can be used to manage authentication for your application users, but not your internal admin teams.
  5. C. DynamoDB provides a NoSQL (nonrelational) database service. Nonrelational databases are a good fit for workloads that can be more efficiently run without the relational schema of SQL database engines (like those, including Aurora, that are offered by RDS). KMS is a tool for generating and managing encryption keys.
  6. D. EC2 endpoints will always start with an ec2 prefix followed by the region designation (eu‐west‐1 in the case of Ireland).
  7. A. An availability zone is an isolated physical data center within an AWS region. Regions are geographic areas that contain multiple availability zones, subnets are IP address blocks that can be used within a zone to organize your networked resources, and there can be multiple data centers within an availability zone.
  8. B. VPCs are virtualized network environments where you can control the connectivity of your EC2 (and RDS, etc.) infrastructure. Load Balancing routes incoming user requests among a cluster of available servers, CloudFront maintains a network of endpoints where cached versions of your application data are stored to provide quicker responses to user requests, and AWS endpoints are URIs that point to AWS resources within your account.
  9. C. The AWS service level agreement tells you the level of service availability you can realistically expect from a particular AWS service. You can use this information when assessing your compliance with external standards. Log records, though they can offer important historical performance metrics, probably won't be enough to prove compliance. The AWS Compliance Programs page will show you only which regulatory programs can be satisfied with AWS resources, not whether a particular configuration will meet their demands. The AWS Shared Responsibility Model outlines who is responsible for various elements of your AWS infrastructure. There is no AWS Program Compliance tool.
  10. B. The AWS Command Line Interface (CLI) is a tool for accessing AWS APIs from the command‐line shell of your local computer. The AWS SDK is for accessing resources programmatically, the AWS Console works graphically through your browser, and AWS Config is a service for editing and auditing your AWS account resources.
  11. A. Unlike the Basic and Developer plans (which give no users or only a single user, respectively, access to a support associate), the Business plan allows multiple team members.

Chapter 2: Amazon Elastic Compute Cloud and Amazon Elastic Block Store

  1. A, C. Many third‐party companies maintain official and supported AMIs running their software on the AWS Marketplace. AMIs hosted among the community AMIs are not always official and supported versions. Since your company will need multiple such instances, you'll be better off automating the process by bootstrapping rather than having to configure the software manually each time. The Site‐to‐Site VPN tool doesn't use OpenVPN.
  2. B, C. The VM Import/Export tool handles the secure and reliable transfer for a virtual machine between your AWS account and local data center. A successfully imported VM will appear among the private AMIs in the region you selected. Direct S3 uploads and SSH tunnels are not associated with VM Import/Export.
  3. D. AMIs are specific to a single AWS region and cannot be deployed into any other region. If your AWS CLI or its key pair was not configured properly, your connection would have failed completely. A public AMI being unavailable because it's “updating” is theoretically possible but unlikely.
  4. A. Only Dedicated Host tenancy offers full isolation. Shared tenancy instances will often share hardware with operations belonging to other organizations. Dedicated instance tenancy instances may be hosted on the same physical server as other instances within your account.
  5. A, E. Reserved instances will give you the best price for instances you know will be running 24/7, whereas on‐demand makes the most sense for workloads that will run at unpredictable times but can't be shut down until they're no longer needed. Load balancing controls traffic routing and, on its own, has no impact on your ability to meet changing demand. Since the m5.large instance type is all you need to meet normal workloads, you'll be wasting money by running a larger type 24/7.
  6. B. Spot market instances can be shut down with only a minimal (two‐minute) warning, so they're not recommended for workloads that require reliably predictable service. Even if your AMI can be relaunched, the interrupted workload will still be lost. Static S3 websites don't run on EC2 infrastructure in the first place.
  7. A. You can edit or even add or remove security groups from running instances and the changes will take effect instantly. Similarly, you can associate or release an elastic IP address to/from a running instance. You can change an instance type as long as you shut down the instance first. But the AMI can't be changed; you'll need to create an entirely new instance.
  8. B. The first of two (and not three) strings in a resource tag is the key—the group to which the specific resource belongs. The second string is the value, which identifies the resource itself. If the key looks too much like the value, it can cause confusion.
  9. D. Provisioned‐IOPS SSD volumes are currently the only type that comes close to 20,000 IOPS. In fact, they can deliver up to 64,000 IOPS.
  10. B, C, E. Options B, C, and E are steps necessary for creating and sharing such an image. When an image is created, a snapshot is automatically created from which an AMI is built. You do not, however, create a snapshot from an image. The AWS Marketplace contains only public images: hopefully, no one will have uploaded your organization's private image there!
  11. A, C. The fact that instance volumes are physically attached to the host server and add nothing to an instance's cost is a benefit. The data on instance volumes is ephemeral and will be lost as soon as the instance is shut down. There is no way to set termination protection for instance volumes because they're dependent on the lifecycle of their host instances.
  12. C, D. By default, EC2 uses the standard address blocks for private subnets, so all private addresses will fall within these ranges: 10.0.0.0 to 10.255.255.255, 172.16.0.0 to 172.31.255.255, and 192.168.0.0 to 192.168.255.255.
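The three standard blocks can be checked programmatically. Here's a minimal sketch using Python's standard ipaddress module (not an AWS tool) to test whether an address falls inside one of the RFC 1918 private ranges:

```python
import ipaddress

# The three RFC 1918 private address blocks that EC2 private subnets draw from
PRIVATE_BLOCKS = [
    ipaddress.ip_network("10.0.0.0/8"),        # 10.0.0.0-10.255.255.255
    ipaddress.ip_network("172.16.0.0/12"),     # 172.16.0.0-172.31.255.255
    ipaddress.ip_network("192.168.0.0/16"),    # 192.168.0.0-192.168.255.255
]

def is_rfc1918(address: str) -> bool:
    """Return True if the address falls inside one of the private blocks."""
    ip = ipaddress.ip_address(address)
    return any(ip in block for block in PRIVATE_BLOCKS)

print(is_rfc1918("172.31.255.255"))  # True: top of the 172.16.0.0/12 block
print(is_rfc1918("172.32.0.1"))      # False: just outside it
```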
  13. A, B, D. Ports and source and destinations addresses are considered by security group rules. Security group rules do not take packet size into consideration. Since a security group is directly associated with specific objects, there's no need to reference the target address.
  14. A, D. IAM roles define how resources access other resources. Users cannot authenticate as an instance role, nor can a role be associated with an instance's internal system process.
  15. B, D. NAT instances and NAT gateways are AWS tools for safely routing traffic between private and public subnets and from there, out to the Internet. An Internet gateway connects a VPC with the Internet, and a virtual private gateway connects a VPC with a remote site over a secure VPN. A stand‐alone VPN wouldn't normally be helpful for this purpose.
  16. D. The client computer in an encrypted operation must always use the private key to authenticate. For EC2 instances running Windows, you retrieve the password you'll use for the GUI login using your private key.
  17. B. Placement groups allow you to specify where your EC2 instances will live. Load balancing directs external user requests between multiple EC2 instances, Systems Manager provides tools for monitoring and managing your resources, and Fargate is an interface for administering Docker containers on Amazon ECS.
  18. A. Lambda can be used as such a trigger. Beanstalk launches and manages infrastructure for your application that will remain running until you manually stop it, ECS manages Docker containers but doesn't necessarily stop them when a task is done, and Auto Scaling can add instances to an already running deployment to meet demand.
  19. C. VM Import/Export will do this. S3 buckets are used to store an image, but they're not directly involved in the import operation. Snowball is a physical high‐capacity storage device that Amazon ships to your office for you to load data and ship back. Direct Connect uses Amazon partner providers to build a high‐speed connection between your servers and your AWS VPC.
  20. B. You can modify a launch template by creating a new version of it; however, the question indicates that the Auto Scaling group was created using a launch configuration. You can't modify a launch configuration. Auto Scaling doesn't use CloudFormation templates.
  21. A. Auto Scaling strives to maintain the number of instances specified in the desired capacity setting. If the desired capacity setting isn't set, Auto Scaling will attempt to maintain the number of instances specified by the minimum group size. Given a desired capacity value of 5, there should be five healthy instances. If you manually terminate two of them, Auto Scaling will create two new ones to replace them. Auto Scaling will not adjust the desired capacity or minimum group size.
  22. B, C. Scheduled actions can adjust the minimum and maximum group sizes and the desired capacity on a schedule, which is useful when your application has a predictable load pattern. To add more instances in proportion to the aggregate CPU utilization of the group, implement step scaling policies. Target tracking policies adjust the desired capacity of a group to keep the threshold of a given metric near a predefined value. Simple scaling policies simply add more instances when a defined CloudWatch alarm triggers, but the number of instances added is not proportional to the value of the metric.
  23. B. Automation documents let you perform actions against your AWS resources, including taking EBS snapshots. Although called automation documents, you can still manually execute them. A command document performs actions within a Linux or Windows instance. A policy document works only with State Manager and can't take an EBS snapshot. There's no manual document type.

Chapter 3: AWS Storage

  1. A, C. Storage Gateway and EFS provide the required read/write access. S3 can be used to share files, but it doesn't offer low‐latency access—and its eventual consistency won't work well with filesystems. EBS volumes can be used only for a single instance at a time.
  2. D. In theory, at least, there's no limit to the data you can upload to a single bucket or to all the buckets in your account or to the number of times you upload (using the PUT command). By default, however, you are allowed only 100 S3 buckets per account.
  3. A. HTTP (web) requests must address the s3.amazonaws.com domain along with the bucket and filenames.
  4. C. A prefix is the name common to the objects you want to group, and a slash character (/) can be used as a delimiter. The bar character (|) would be treated as part of the name rather than as a delimiter. Although DNS names can have prefixes, they're not the same as prefixes in S3.
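The delimiter behavior can be illustrated outside of S3. This sketch, using made‐up object keys, mimics how a delimiter‐based listing groups flat keys into common prefixes:

```python
# Hypothetical object keys in one bucket; '/' acts as the delimiter
keys = [
    "reports/2023/q1.csv",
    "reports/2023/q2.csv",
    "images/logo.png",
    "readme.txt",
]

def common_prefixes(keys, delimiter="/"):
    """Return the folder-like groupings a delimiter-based listing would show."""
    prefixes = set()
    for key in keys:
        if delimiter in key:
            prefixes.add(key.split(delimiter)[0] + delimiter)
    return sorted(prefixes)

print(common_prefixes(keys))  # ['images/', 'reports/']
```

A bar character would simply be part of each key name here, which is why it can't serve as a delimiter.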
  5. A, C. Client‐side encryption occurs before an object reaches the bucket (i.e., before it comes to rest in the bucket). Only AWS KMS‐Managed Keys provide an audit trail. AWS End‐to‐End managed keys doesn't exist as an AWS service.
  6. A, B, E. S3 server access logs don't report the source bucket's current size. They don't track API calls—that's something covered by AWS CloudTrail.
  7. C, E. The S3 guarantee only covers the physical infrastructure owned by AWS. Temporary service outages are related to “availability” and not “durability.”
  8. A. One Zone‐IA data is heavily replicated but only within a single availability zone, whereas Reduced Redundancy data is only lightly replicated.
  9. B. The S3 Standard‐IA (Infrequent Access) class is guaranteed to be available 99.9 percent of the time.
  10. D. S3 can't guarantee instant consistency across its infrastructure for changes to existing objects, but there aren't such concerns for newly created objects.
  11. C. Object versioning must be manually enabled for each bucket to prevent older versions of objects from being deleted.
  12. A. S3 lifecycle rules can incorporate specifying objects by prefix. There's no such thing as a lifecycle template.
  13. A. Glacier offers the least expensive and most highly resilient storage within the AWS ecosystem. Reduced Redundancy is not resilient and, in any case, is no longer recommended. S3 One Zone and S3 Standard are relatively expensive.
  14. B, C. ACLs are a legacy feature that isn't as flexible as IAM or S3 bucket policies. Security groups are not used with S3 buckets. KMS is an encryption key management tool and isn't used for authentication.
  15. D. In this context, a principal is an identity to which bucket access is assigned.
  16. B. The default expiry value for a presigned URL is 3,600 seconds (one hour).
  17. A, D. The AWS Certificate Manager can (when used as part of a CloudFront distribution) apply an SSL/TLS encryption certificate to your website. You can use Route 53 to associate a DNS domain name to your site. EC2 instances and RDS database instances would never be used for static websites. You would normally not use KMS for a static website—websites are usually meant to be public and encrypting the website assets with a KMS key would make it impossible for clients to download them.
  18. B. As of this writing, a single Glacier archive can be no larger than 40 TB.
  19. C. Direct Connect can provide fast network connections to AWS, but it's very expensive and can take up to 90 days to install. Server Migration Service and Storage Gateway aren't meant for moving data at such scale.
  20. A. FSx for Lustre and Elastic File System are primarily designed for access from Linux file systems. EBS volumes can't be accessed by more than a single instance at a time.

Chapter 4: Amazon Virtual Private Cloud

  1. A. The allowed range of prefix lengths for a VPC CIDR is between /16 and /28 inclusive. The maximum possible prefix length for an IP subnet is /32, so /56 is not a valid length.
  2. C. A secondary CIDR may come from the same RFC 1918 address range as the primary, but it may not overlap with the primary CIDR. 192.168.0.0/24 comes from the same address range (192.168.0.0–192.168.255.255) as the primary and does not overlap with 192.168.16.0/24; 192.168.0.0/16 and 192.168.16.0/23 both overlap with 192.168.16.0/24; and 172.31.0.0/16 is not in the same range as the primary CIDR.
  3. A, D. Options A and D (10.0.0.0/24 and 10.0.0.0/23) are within the VPC CIDR and leave room for a second subnet; 10.0.0.0/8 is wrong because prefix lengths less than /16 aren't allowed; and 10.0.0.0/16 doesn't leave room for another subnet.
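Both the prefix‐length rule and the subnet‐fit reasoning can be verified with Python's standard ipaddress module; the CIDRs below are the ones from these answers, and the validity check simply encodes the /16‐to‐/28 rule:

```python
import ipaddress

def valid_vpc_cidr(cidr: str) -> bool:
    """A VPC CIDR's prefix length must be between /16 and /28 inclusive."""
    return 16 <= ipaddress.ip_network(cidr).prefixlen <= 28

vpc = ipaddress.ip_network("10.0.0.0/16")
subnet_a = ipaddress.ip_network("10.0.0.0/24")
subnet_b = ipaddress.ip_network("10.0.1.0/24")

print(valid_vpc_cidr("10.0.0.0/16"))  # True
print(valid_vpc_cidr("10.0.0.0/8"))   # False: shorter than /16
print(subnet_a.subnet_of(vpc))        # True: the subnet fits inside the VPC
print(subnet_a.overlaps(subnet_b))    # False: room remains for a second subnet
```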
  4. B. Multiple subnets may exist in a single availability zone. A subnet cannot span availability zones.
  5. A. Every ENI must have a primary private IP address. It can have secondary IP addresses, but all addresses must come from the subnet the ENI resides in. Once created, the ENI cannot be moved to a different subnet. An ENI can be created independently of an instance and later attached to an instance.
  6. D. Each VPC contains a default security group that can't be deleted. You can create a security group by itself without attaching it to anything. But if you want to use it, you must attach it to an ENI. You can also attach multiple security groups to the same ENI.
  7. A. An NACL is stateless, meaning it doesn't track connection state. Every inbound rule must have a corresponding outbound rule to permit traffic, and vice versa. An NACL is attached to a subnet, whereas a security group is attached to an ENI. An NACL can be associated with multiple subnets, but a subnet can have only one NACL.
  8. D. An Internet gateway has no management IP address. It can be associated with only one VPC at a time and so cannot grant Internet access to instances in multiple VPCs. It is a logical VPC resource and not a virtual or physical router.
  9. A. The destination 0.0.0.0/0 matches all IP prefixes and hence covers all publicly accessible hosts on the Internet. ::0/0 is an IPv6 prefix, not an IPv4 prefix. An Internet gateway is the target of the default route, not the destination.
  10. A. Every subnet is associated with the main route table by default. You can explicitly associate a subnet with another route table. There is no such thing as a default route table, but you can create a default route within a route table.
  11. A. An instance must have a public IP address to be directly reachable from the Internet. The instance may be able to reach the Internet via a NAT device. The instance won't necessarily receive the same private IP address because it was automatically assigned. The instance will be able to reach other instances in the subnet because a public IP is not required.
  12. B. Assigning an EIP to an instance is a two‐step process. First you must allocate an EIP, and then you must associate it with an ENI. You can't allocate an ENI, and there's no such thing as an instance's primary EIP. Configuring the instance to use an automatically assigned public IP must occur at instance creation. Changing an ENI's private IP to match an EIP doesn't actually assign a public IP to the instance, because the ENI's private address is still private.
  13. A. Internet‐bound traffic from an instance with an automatically assigned public IP will traverse an Internet gateway that will perform NAT. The source address will be the instance's public IP. An instance with an automatically assigned public IP cannot also have an EIP. The NAT process will replace the private IP source address with the public IP. Option D, 0.0.0.0, is not a valid source address.
  14. A. The NAT device's default route must point to an Internet gateway, and the instance's default route must point to the NAT device. No differing NACL configurations between subnets are required to use a NAT device. Security groups are applied at the ENI level. A NAT device doesn't require multiple interfaces.
  15. D. A NAT gateway is a VPC resource that scales automatically to accommodate increased bandwidth requirements. A NAT instance can't do this. A NAT gateway exists in only one availability zone. There are not multiple NAT gateway types. A NAT instance is a regular EC2 instance that comes in different types.
  16. A. An Internet gateway performs NAT for instances that have a public IP address. A route table defines how traffic from instances is forwarded. An EIP is a public IP address and can't perform NAT. An ENI is a network interface and doesn't perform NAT.
  17. A. The source/destination check on the NAT instance's ENI must be disabled to allow the instance to receive traffic not destined for its IP and to send traffic using a source address that it doesn't own. The NAT instance's default route must point to an Internet gateway as the target. You can't assign a primary private IP address after the instance is created.
  18. A. You cannot route through a VPC using transitive routing. Instead, you must directly peer the VPCs containing the instances that need to communicate. A VPC peering connection uses the AWS internal network and requires no public IP address. Because a peering connection is a point‐to‐point connection, it can connect only two VPCs. A peering connection can be used only for instance‐to‐instance communication. You can't use it to share other VPC resources.
  19. A, D. Each peered VPC needs a route to the CIDR of its peer; therefore, you must create two routes with the peering connection as the target. Creating only one route is not sufficient to enable bidirectional communication. Additionally, the instances' security groups must allow for bidirectional communication. You can't create more than one peering connection between a pair of VPCs.
  20. C. Interregion VPC peering connections aren't available in all regions and support a maximum MTU of 1,500 bytes. You can use IPv4 across an inter‐region peering connection but not IPv6.
  21. B. VPN connections are always encrypted.
  22. A, C, D. VPC peering, transit gateways, and VPNs all allow EC2 instances in different regions to communicate using private IP addresses. Direct Connect is for connecting VPCs to on‐premises networks, not for connecting VPCs together.
  23. B. A transit gateway route table can hold a blackhole route. If the transit gateway receives traffic that matches the route, it will drop the traffic.
  24. D. Tightly coupled workloads include simulations such as weather forecasting. They can't be broken down into smaller, independent pieces, and so require the entire cluster to function as a single supercomputer.

Chapter 5: Database Services

  1. A, C. Different relational databases use different terminology. A row, record, and tuple all describe an ordered set of columns. An attribute is another term for column. A table contains rows and columns.
  2. C. A table must contain at least one attribute or column. Primary and foreign keys are used for relating data in different tables, but they're not required. A row can exist within a table, but a table doesn't need a row in order to exist.
  3. D. The SELECT statement retrieves data from a table. INSERT is used for adding data to a table. QUERY and SCAN are commands used by DynamoDB, which is a nonrelational database.
  4. B. Online transaction processing databases are designed to handle multiple transactions per second. Online analytics processing databases are for complex queries against large data sets. A key/value store such as DynamoDB can handle multiple transactions per second, but it's not a relational database. There's no such thing as an offline transaction processing database.
  5. B. Although there are six database engines to choose from, a single database instance can run only one database engine. If you want to run more than one database engine, you will need a separate database instance for each engine.
  6. B, C. MariaDB and Aurora are designed as binary drop‐in replacements for MySQL. PostgreSQL is designed for compatibility with Oracle databases. Microsoft SQL Server does not support MySQL databases.
  7. C. InnoDB is the only storage engine Amazon recommends for MySQL and MariaDB deployments in RDS and the only engine Aurora supports. MyISAM is another storage engine that works with MySQL but is not compatible with automated backups. XtraDB is another storage engine for MariaDB, but Amazon no longer recommends it. The PostgreSQL database engine uses its own storage engine by the same name and is not compatible with other database engines.
  8. A, C. All editions of the Oracle database engine support the bring‐your‐own‐license model in RDS. Microsoft SQL Server and PostgreSQL only support the license‐included model.
  9. B. Memory‐optimized instances are EBS optimized, providing dedicated bandwidth for EBS storage. Standard instances are not EBS optimized and top out at 10,000 Mbps disk throughput. Burstable performance instances are designed for development and test workloads and provide the lowest disk throughput of any instance class. There is no instance class called storage optimized.
  10. A. MariaDB has a page size of 16 KB. To write 200 MB (204,800 KB) of data every second, it would need 12,800 IOPS. Oracle, PostgreSQL, or Microsoft SQL Server, which all use an 8 KB page size, would need 25,600 IOPS to achieve the same throughput. When provisioning IOPS, you must specify IOPS in increments of 1,000, so 200 and 16 IOPS—which would be woefully insufficient anyway—are not valid answers.
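The page‐size arithmetic works out as follows; this is a plain calculation sketch (the function name is illustrative), not an AWS API:

```python
import math

def iops_needed(throughput_mb_per_s: float, page_size_kb: int) -> int:
    """Each I/O operation moves one page, so IOPS = throughput / page size."""
    return math.ceil(throughput_mb_per_s * 1024 / page_size_kb)

print(iops_needed(200, 16))  # MariaDB/MySQL, 16 KB pages -> 12800
print(iops_needed(200, 8))   # Oracle/PostgreSQL/SQL Server, 8 KB pages -> 25600
```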
  11. A. General‐purpose SSD storage allocates three IOPS per gigabyte, up to 10,000 IOPS. Therefore, to get 600 IOPS, you'd need to allocate 200 GB. Allocating 100 GB would give you only 300 IOPS. The maximum storage size for gp2 storage is 16 TB, so 200 TB is not a valid value. The minimum amount of storage you can allocate depends on the database engine, but it's no less than 20 GB, so 200 MB is not valid.
  12. C. When you provision IOPS using io1 storage, you must do so in a ratio no greater than 50 IOPS for 1 GB. Allocating 240 GB of storage would give you 12,000 IOPS. Allocating 200 GB of storage would fall short, yielding just 10,000 IOPS. Allocating 12 TB would be overkill for the amount of storage required.
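The gp2 and io1 sizing rules behind the last two answers reduce to simple ratios; the helper names below are illustrative:

```python
import math

def gp2_size_for_iops(iops: int) -> int:
    """gp2 allocates 3 IOPS per GB, so the size needed is IOPS / 3."""
    return math.ceil(iops / 3)

def io1_min_size_for_iops(iops: int) -> int:
    """io1 permits at most 50 provisioned IOPS per GB of storage."""
    return math.ceil(iops / 50)

print(gp2_size_for_iops(600))        # 200 GB yields 600 IOPS
print(io1_min_size_for_iops(12000))  # 240 GB supports 12,000 IOPS
```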
  13. A. A read replica only services queries and cannot write to a database. A standby database instance in a multi‐AZ deployment does not accept queries. Both a primary and a master database instance can service queries and writes.
  14. D. Multi‐AZ deployments using Oracle, PostgreSQL, MariaDB, MySQL, or Microsoft SQL Server replicate data synchronously from the primary to a standby instance. Only a multi‐AZ deployment using Aurora uses a cluster volume and replicates data to a specific type of read replica called an Aurora replica.
  15. A. When you restore from a snapshot, RDS creates a new instance and doesn't make any changes to the failed instance. A snapshot is a copy of the entire instance, not just a copy of the individual databases. RDS does not delete a snapshot after restoring from it.
  16. B. The ALL distribution style ensures every compute node has a complete copy of every table. The EVEN distribution style splits tables up evenly across all compute nodes. The KEY distribution style distributes data according to the value in a specified column. There is no distribution style called ODD.
  17. D. The dense compute type can store up to 326 TB of data on solid state drives. The dense storage type can store up to 2 PB of data on magnetic storage. A leader node coordinates communication among compute nodes but doesn't store any databases. There is no such thing as a dense memory node type.
  18. A, B. In a nonrelational database, a primary key is required to uniquely identify an item and hence must be unique within a table. All primary key values within a table must have the same data type. Only relational databases use primary keys to correlate data across different tables.
  19. B. An order date would not be unique within a table, so it would be inappropriate for a partition (hash) key or a simple primary key. It would be appropriate as a sort key, as DynamoDB would order items according to the order date, which would make it possible to query items with a specific date or within a date range.
  20. A. A single strongly consistent read of an item up to 4 KB consumes one read capacity unit. Hence, reading 11 KB of data per second using strongly consistent reads would consume three read capacity units. Were you to use eventually consistent reads, you would need only two read capacity units, as one eventually consistent read gives you up to 8 KB of data per second. Regardless, you must specify a read capacity of at least 1, so 0 is not a valid answer.
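The read‐capacity arithmetic can be sketched as a one‐line calculation; the function name is illustrative:

```python
import math

def read_capacity_units(item_kb: float, strongly_consistent: bool = True) -> int:
    """One RCU covers a 4 KB strongly consistent read or an 8 KB
    eventually consistent read per second; the minimum is 1."""
    unit_kb = 4 if strongly_consistent else 8
    return max(1, math.ceil(item_kb / unit_kb))

print(read_capacity_units(11, strongly_consistent=True))   # 3
print(read_capacity_units(11, strongly_consistent=False))  # 2
```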
  21. B. The dense compute node type uses fast SSDs, whereas the dense storage node type uses slower magnetic storage. The leader node doesn't access the database but coordinates communication among compute nodes. KEY is a data distribution strategy Redshift uses, but there is no such thing as a key node.
  22. D. When you create a table, you can choose to create a global secondary index with a different partition and sort key. A local secondary index must be created at the same time as the table, and its partition key must be the same as the base table's, although its sort key can be different. There is no such thing as a global primary index or eventually consistent index.
  23. B. NoSQL databases are optimized for queries against a primary key. If you need to query data based only on one attribute, you'd make that attribute the primary key. NoSQL databases are not designed for complex queries. Both NoSQL and relational databases can store JSON documents, and both database types can be used by different applications.
  24. D. A graph database is a type of nonrelational database that discovers relationships among items. A document‐oriented store is a nonrelational database that analyzes and extracts data from documents. Relational databases can enforce relationships between records but don't discover them. A SQL database is a type of relational database.

Chapter 6: Authentication and Authorization—AWS Identity and Access Management

  1. C. Although each of the other options represents possible concerns, none of them carries consequences as disastrous as the complete loss of control over your account.
  2. B. The * character does, indeed, represent global application. The Action element refers to the kind of action requested (list, create, etc.), the Resource element refers to the particular AWS account resource that's the target of the policy, and the Effect element refers to the way IAM should react to a request.
  3. A, B, C. Unless there's a policy that explicitly allows an action, it will be denied. Therefore, a user with no policies, or with a policy permitting only S3 actions, won't have permission to work with EC2 instances. Similarly, when two policies conflict, the more restrictive one is honored. The AdministratorAccess policy opens up nearly all AWS resources, including EC2. There's no such thing as an IAM action statement.
  4. B, C. If you don't perform any administration operations with regular IAM users, then there really is no point for them to exist. Similarly, without access keys, there's a limit to what a user will be able to accomplish. Ideally, all users should use MFA and strong passwords. The AWS CLI is an important tool, but it isn't necessarily the most secure.
  5. D. The top‐level command is iam, and the correct subcommand is get‐access‐key‐last‐used. The parameter is identified by ‐‐access‐key‐id. Parameters (not subcommands) are always prefixed with ‐‐ characters.
  6. B. IAM groups are primarily about simplifying administration. They have no direct impact on resource usage or response times and only an indirect impact on locking down the root user.
  7. C. X.509 certificates are used for encrypting SOAP requests, not authentication. The other choices are all valid identities within the context of an IAM role.
  8. A. AWS CloudHSM provides encryption that's FIPS 140‐2 compliant. Key Management Service manages encryption infrastructure but isn't FIPS 140‐2 compliant. Security Token Service is used to issue tokens for valid IAM roles, and Secrets Manager handles secrets for third‐party services or databases.
  9. B. AWS Directory Service for Microsoft Active Directory provides Active Directory authentication within a VPC environment. Amazon Cognito provides user administration for your applications. AWS Secrets Manager handles secrets for third‐party services or databases. AWS Key Management Service manages encryption infrastructure.
  10. A. Identity pools provide temporary access to defined AWS services to your application users. Sign‐up and sign‐in is managed through Cognito user pools. KMS and/or CloudHSM provide encryption infrastructure. Credential delivery to databases or third‐party applications is provided by AWS Secrets Manager.
  11. A, D, E. Options A, D, and E are appropriate steps. Your IAM policies will be as effective as ever, even if outsiders know their contents. Since even an account's root user would never have known other users' passwords, there's no reason to change them.
  12. B. IAM policies are global—they're not restricted to any one region. Policies do, however, require an action (like create buckets), an effect (allow), and a resource (S3).
  13. B, C. IAM roles require a defined trusted entity and at least one policy. However, the relevant actions are defined by the policies you choose, and roles themselves are uninterested in which applications use them.
  14. D. STS issues tokens as temporary credentials that let external identities access resources through IAM roles. Users and groups would not use tokens to authenticate, and policies are used to define the access a token will provide, not the recipient of the access.
  15. C. Policies must be written in JSON format.
  16. B, D. The correct Resource line would read "Resource": "*". And the correct Action line would read "Action": "*". There is no "Target" line in an IAM policy. "Permit" is not a valid value for "Effect".
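The corrected statement can be sketched as a complete policy document; here it's assembled and serialized in Python (the structure follows the standard IAM policy grammar, and the element values are exactly the ones named in the answer):

```python
import json

# Full-access policy as described in the answer: "*" wildcards in both
# Action and Resource, with Effect set to Allow (not "Permit").
admin_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "*",      # there is no "Target" element in IAM policies
            "Resource": "*",
        }
    ],
}

print(json.dumps(admin_policy, indent=2))
```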
  17. B. User pools provide sign‐up and sign‐in for your application's users. Temporary access to defined AWS services to your application users is provided by identity pools. KMS and/or CloudHSM provide encryption infrastructure. Credential delivery to databases or third‐party applications is provided by AWS Secrets Manager.
  18. C, D. An AWS managed service takes care of all underlying infrastructure management for you. In this case, that will include data replication and software updates. On‐premises integration and multi‐AZ deployment are important infrastructure features, but they're not unique to “managed” services.
  19. B, C, D. Options B, C, and D are all parts of the key rotation process. In this context, key usage monitoring is only useful to ensure that none of your applications is still using an old key that's set to be retired. X.509 certificates aren't used for access keys.
  20. A. You attach IAM roles to services in order to give them permissions over resources in other services within your account.

Chapter 7: CloudTrail, CloudWatch, and AWS Config

  1. B, D. Creating a bucket and subnet are API actions, regardless of whether they're performed from the web console or AWS CLI. Uploading an object to an S3 bucket is a data event, not a management event. Logging into the AWS console is a non‐API management event.
  2. C. Data events include S3 object‐level activity and Lambda function executions. Downloading an object from S3 is a read‐only event. Uploading a file to an S3 bucket is a write‐only event and hence would not be logged by the trail. Viewing an S3 bucket and creating a Lambda function are management events, not data events.
  3. C. CloudTrail stores 90 days of event history for each region, regardless of whether a trail is configured. Event history is specific to the events occurring in that region. Because the trail was configured to log read‐only management events, the trail logs would not contain a record of the trail's deletion. They might contain a record of who viewed the trail, but that would be insufficient to establish who deleted it. There is no such thing as an IAM user log.
  4. B. CloudWatch uses dimensions to uniquely identify metrics with the same name and namespace. Metrics in the same namespace will necessarily be in the same region. The data point of a metric and the timestamp that it contains are not unique and can't be used to uniquely identify a metric.
  5. C. Basic monitoring sends metrics every five minutes, whereas detailed monitoring sends them every minute. CloudWatch can store metrics at regular or high resolution, but this affects how the metric is timestamped, rather than the frequency with which it's delivered to CloudWatch.
  6. A. CloudWatch can store high‐resolution metrics at subminute resolution. Therefore, updating a metric at 15:57:08 and again at 15:57:37 will result in CloudWatch storing two separate data points. Only if the metric were regular resolution would CloudWatch overwrite an earlier data point with a later one. Under no circumstances would CloudWatch ignore a metric update.
  7. D. Metrics stored at one‐hour resolution age out after 15 months. Five‐minute resolutions are stored for 63 days. One‐minute resolution metrics are stored for 15 days. High‐resolution metrics are kept for 3 hours.
  8. A. To graph a metric's data points, specify the Sum statistic and set the period equal to the metric's resolution, which in this case is five minutes. Graphing the Sum or Average statistic over a one‐hour period will not graph the metric's data points but rather the Sum or Average of those data points over a one‐hour period. Using the Sample count statistic over a five‐minute period will yield a value of 1 for each period, since there's only one data point per period.
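The effect of matching the statistic's period to the metric's resolution can be sketched with hypothetical values in Python (the data points are invented for illustration):

```python
# Hypothetical metric stored at 5-minute resolution: 12 data points = 1 hour.
data_points = [12, 7, 9, 14, 11, 8, 10, 13, 9, 12, 7, 11]

# Sum with the period equal to the resolution: each period holds exactly
# one data point, so the graphed values are the data points themselves.
sum_5min = [sum(data_points[i:i + 1]) for i in range(len(data_points))]

# Sample count over the same period is 1 everywhere, a flat line that
# says nothing about the metric's values.
count_5min = [len(data_points[i:i + 1]) for i in range(len(data_points))]

# Sum or Average over a 1-hour period collapses all 12 points into one value.
sum_1h = sum(data_points)
avg_1h = sum_1h / len(data_points)

print(sum_5min == data_points, count_5min, sum_1h, avg_1h)
```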
  9. B. CloudWatch uses a log stream to store log events from a single source. Log groups store and organize log streams but do not directly store log events. A metric filter extracts metrics from logs but doesn't store anything. The CloudWatch agent can deliver logs to CloudWatch from a server but doesn't store logs.
  10. A, D. Every log stream must be in a log group. The retention period setting of a log group controls how long CloudWatch retains log events within those streams. You can't manually delete log events individually, but you can delete all events in a log stream by deleting the stream. You can't set a retention period on a log stream directly.
  11. A, C. CloudTrail will not stream events greater than 256 KB in size. There's also a normal delay, typically up to 15 minutes, before an event appears in a CloudWatch log stream. Metric filters have no bearing on what log events get put into a log stream. Although a misconfigured or missing IAM role would prevent CloudTrail from streaming logs to CloudWatch, the question indicates that some events are present. Hence, the IAM role is correctly configured.
  12. B, D. If an EBS volume isn't attached to a running instance, EBS won't generate any metrics to send to CloudWatch. Hence, the alarm won't be able to collect enough data points to alarm. The evaluation period can be no more than 24 hours, and the alarm was created two days ago, so the evaluation period has elapsed. The data points to monitor don’t have to cross the threshold for CloudWatch to determine the alarm state.
  13. B. To have CloudWatch treat missing data as exceeding the threshold, set the Treat Missing Data As option to Breaching. Setting it to Not Breaching will have the opposite effect. Setting it to As Missing will cause CloudWatch to ignore the missing data and behave as if those evaluation periods didn't occur. The Ignore option causes the alarm not to change state in response to missing data. There's no option to treat missing data as Not Missing.
  14. C, D. CloudWatch can use the Simple Notification Service to send a text message. CloudWatch refers to this as a Notification action. To reboot an instance, you must use an EC2 action. The Auto Scaling action will not reboot an instance. SMS is not a valid CloudWatch alarm action.
  15. A. The recover action is useful when there's a problem with an instance that requires AWS involvement to repair, such as a hardware failure. The recover action migrates the same instance to a new host. Rebooting an instance assumes the instance is running and entails the instance remaining on the same host. Recovering an instance does not involve restoring any data from a snapshot, as the instance retains the same EBS volume(s).
  16. B. If CloudTrail were logging write‐only management events in the same region as the instance, it would have generated trail logs containing the deletion event. Deleting a log stream containing CloudTrail events does not delete those events from the trail logs stored in S3. Deleting an EC2 instance is not an IAM event. If AWS Config were tracking changes to EC2 instances in the region, it would have recorded a timestamped configuration item for the deletion, but it would not include the principal that deleted the instance.
  17. B, C, D. The delivery channel must include an S3 bucket name and may specify an SNS topic and the delivery frequency of configuration snapshots. You can't specify a CloudWatch log stream.
  18. D. You can't delete configuration items manually, but you can have AWS Config delete them after no less than 30 days. Pausing or deleting the configuration recorder will stop AWS Config from recording new changes but will not delete configuration items. Deleting configuration snapshots, which are objects stored in S3, will not delete the configuration items.
  19. C, D. CloudWatch can graph only a time series. METRICS()/AVG(m1) and m1/m2 both return a time series. AVG(m1)‐m1 and AVG(m1) return scalar values and can't be graphed directly.
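The series-versus-scalar rule can be mimicked with a toy model in Python (real CloudWatch metric math runs server-side; the values here are invented):

```python
# A metric is a time series (a list of values); statistics such as AVG()
# reduce it to a scalar. Only a time series can be graphed.
m1 = [10, 20, 30]
m2 = [2, 4, 5]

def AVG(series):
    return sum(series) / len(series)   # scalar result

series_expr = [a / b for a, b in zip(m1, m2)]   # m1/m2: still a time series
scalar_expr = AVG(m1)                           # a single number

print(series_expr, scalar_expr)
```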
  20. B. Deleting the rule will prevent AWS Config from evaluating resource configurations against it. Turning off the configuration recorder won't prevent AWS Config from evaluating the rule. It's not possible to delete the configuration history for a resource from AWS Config. When you specify a frequency for periodic checks, you must specify a valid frequency, or else AWS Config will not accept the configuration.
  21. B. EventBridge can take an action in response to an event, such as an EC2 instance launch. CloudWatch Alarms can take an action based only on a metric. CloudTrail logs events but doesn't generate any alerts by itself. CloudWatch Metrics is used for graphing metrics.

Chapter 8: The Domain Name System and Network Routing: Amazon Route 53 and Amazon CloudFront

  1. A. Name servers resolve IP addresses from domain names, allowing clients to connect to resources. Domain registration is performed by domain name registrars. Routing policies are applied through record sets within hosted zones.
  2. C. A domain is a set of resources identified by a single domain name. FQDN stands for fully qualified domain name. Policies for resolving requests are called routing policies.
  3. D. The rightmost section of an FQDN address is the TLD. aws. would be a subdomain or host, amazon. is the SLD, and amazon.com/documentation/ points to a resource stored at the web root of the domain server.
  4. A. CNAME is a record type. TTL, record type, and record data are all configuration elements, not record types.
  5. C. An A record maps a hostname to an IPv4 address. NS records identify name servers. SOA records document start of authority data. CNAME records define one hostname as an alias for another.
  6. A, C, D. Route 53 provides domain registration, health checks, and DNS management. Content delivery network services are provided by CloudFront. Secure and fast network connections to a VPC can be created using AWS Direct Connect.
  7. C. Geolocation can control routing by the geographic origin of the request. The simple policy sends traffic to a single resource. Latency sends content using the fastest origin resource. Multivalue can be used to make a deployment more highly available.
  8. A. Latency selects the available resource with the lowest latency. Weighted policies route among multiple resources by percentage. Geolocation tailors request responses to the end user's location but isn't concerned with response speed. Failover incorporates backup resources for higher availability.
  9. B. Weighted policies route among multiple resources by percentage. Failover incorporates backup resources for higher availability. Latency selects the available resource with the lowest latency. Geolocation tailors request responses to the end user's location.
  10. D. Failover incorporates backup resources for higher availability. Latency selects the available resource with the lowest latency. Weighted policies route among multiple resources by percentage. Geolocation tailors request responses to the end user's location.
  11. A, D. Public and private hosting zones are real options. Regional, hybrid, and VPC zones don't exist (although private zones do map to VPCs).
  12. A, B. To transfer a domain, you'll need to make sure the domain isn't set to locked. You'll also need an authorization code that you'll provide to Route 53. Copying name server addresses is necessary only for managing domains that are hosted on but not registered with Route 53. CNAME record sets are used to define one hostname as an alias for another.
  13. B. You can enable remotely registered domains on Route 53 by copying name server addresses into the remote registrar‐provided interface (not the other way around). Making sure the domain isn't set to locked and requesting authorization codes are used to transfer a domain to Route 53, not just to manage the routing. CNAME record sets are used to define one hostname as an alias for another.
  14. C. You specify the web page that you want used for testing when you configure your health check. There is no default page. Remote SSH sessions would be impossible for a number of reasons and wouldn't definitively confirm a running resource in any case.
  15. A. Geoproximity is about precisely pinpointing users, whereas geolocation uses geopolitical boundaries.
  16. A, D. CloudFront is optimized for handling heavy download traffic and for caching website content. Users on a single corporate campus or accessing resources through a VPN will not benefit from the distributed delivery provided by CloudFront.
  17. C. API Gateway is used to generate custom client SDKs for your APIs to connect your backend systems to mobile, web, and server applications or services.
  18. A. Choosing a price class offering limited distribution is the best way to reduce costs. Non‐HTTPS traffic can be excluded (thereby saving some money) but not through the configuration of an SSL certificate (you'd need further configuration). Disabling Alternate Domain Names or enabling Compress Objects Automatically won't reduce costs.
  19. C. Not every CloudFront distribution is optimized for low‐latency service. Requests of an edge location will only achieve lower latency after copies of your origin files are already cached. Therefore, a response to the first request might not be fast because CloudFront still has to copy the file from the origin server.
  20. B. RTMP distributions can manage content only from S3 buckets. RTMP is intended for the distribution of video content.

Chapter 9: Simple Queue Service and Kinesis

  1. C, D. After a consumer grabs a message, the message is not deleted. Instead, the message becomes invisible to other consumers for the duration of the visibility timeout. The message is automatically deleted from the queue after it's been in there for the duration of the retention period.
  2. B. The default visibility timeout for a queue is 30 seconds. It can be configured to between 0 seconds and 12 hours.
  3. D. The default retention period is 4 days but can be set to between 1 minute and 14 days.
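The configurable ranges from the last two answers can be captured as a small validation sketch (seconds-based, using the limits quoted above; the function names are illustrative):

```python
# SQS configurable ranges, in seconds (defaults: 30 s visibility, 4 days retention).
VISIBILITY_RANGE = (0, 12 * 3600)          # 0 seconds to 12 hours
RETENTION_RANGE = (60, 14 * 24 * 3600)     # 1 minute to 14 days

def valid_visibility(seconds):
    return VISIBILITY_RANGE[0] <= seconds <= VISIBILITY_RANGE[1]

def valid_retention(seconds):
    return RETENTION_RANGE[0] <= seconds <= RETENTION_RANGE[1]

print(valid_visibility(30), valid_retention(4 * 24 * 3600))   # the defaults
```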
  4. B. You can use a message timer to hide a message for up to 15 minutes. Per‐queue delay settings apply to all messages in the queue unless you specifically override the setting using a message timer.
  5. B. A standard queue can handle up to 120,000 in‐flight messages. A FIFO queue can handle up to about 20,000. Delay and short are not valid queue types.
  6. A. FIFO queues always deliver messages in the order they were received. Standard queues usually do as well, but they're not guaranteed to. LIFO, FILO, and basic aren't valid queue types.
  7. C. Standard queues may occasionally deliver a message more than once. FIFO queues will not. Using long polling alone doesn't result in duplicate messages.
  8. B. Short polling, which is the default, may occasionally fail to deliver messages. To ensure delivery of these messages, use long polling.
  9. D. Dead‐letter queues are for messages that a consumer is unable to process. To use a dead‐letter queue, you create a queue of the same type as the source queue, and set the maxReceiveCount to the maximum number of times a message can be received before it's moved to the dead‐letter queue.
  10. C. If the retention period for the dead‐letter queue is 10 days, and a message is already 6 days old when it's moved to the dead‐letter queue, it will spend at most 4 days in the dead‐letter queue before being deleted.
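The arithmetic works because SQS measures a message's age from when it entered the source queue, and moving it to a dead-letter queue does not reset that clock; a minimal check:

```python
# Retention counts from the original enqueue time, not from the move.
dlq_retention_days = 10
age_when_moved_days = 6

time_left_in_dlq = dlq_retention_days - age_when_moved_days
print(time_left_in_dlq)   # at most 4 more days before deletion
```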
  11. B. Kinesis Video Streams is designed to work with time‐indexed data such as RADAR images. Kinesis ML doesn't exist.
  12. A, C. You can't specify a retention period over 7 days, so your only option is to create a Kinesis Data Firehose delivery stream that receives data from the Kinesis Data Stream and sends the data to an S3 bucket.
  13. C. Kinesis Data Firehose requires you to specify a destination for a delivery stream. Kinesis Video Streams and Kinesis Data Streams use a producer‐consumer model that allows consumers to subscribe to a stream. There is no such thing as Kinesis Data Warehouse.
  14. B. The Amazon Kinesis Agent can automatically stream the contents of a file to Kinesis. There's no need to write any custom code or move the application to EC2. The CloudWatch Logs Agent can't send logs to a Kinesis Data Stream.
  15. C. SQS and Kinesis Data Streams are similar. But SQS is designed to temporarily hold a small message until a single consumer processes it, whereas Kinesis Data Streams is designed to provide durable storage and playback of large data streams to multiple consumers.
  16. B, C. You should stream the log data to Kinesis Data Streams and then have Kinesis Data Firehose consume the data and stream it to Redshift.
  17. C. Kinesis is for streaming data such as stock feeds and video. Static websites are not streaming data.
  18. B. Shards determine the capacity of a Kinesis Data Stream. A single shard gives you writes of up to 1 MB per second, so you'd need two shards to get 2 MB of throughput.
  19. A. Shards determine the capacity of a Kinesis Data Stream. Each shard supports 2 MB of reads per second. Because consumers are already receiving a total of 3 MB per second, it implies you have at least two shards already configured, supporting a total of 4 MB per second. Therefore, to support 5 MB per second you need to add just one more shard.
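The shard arithmetic in the last two answers can be sketched as a helper (assuming the standard per-shard limits of 1 MB/s for writes and 2 MB/s for reads; the function name is illustrative):

```python
import math

# Standard per-shard Kinesis Data Streams limits.
WRITE_MB_PER_SHARD = 1   # 1 MB/s ingest per shard
READ_MB_PER_SHARD = 2    # 2 MB/s egress per shard

def shards_needed(write_mb_s=0.0, read_mb_s=0.0):
    """Minimum shard count satisfying both write and read throughput."""
    return max(math.ceil(write_mb_s / WRITE_MB_PER_SHARD),
               math.ceil(read_mb_s / READ_MB_PER_SHARD), 1)

print(shards_needed(write_mb_s=2))   # question 18: 2 shards for 2 MB/s of writes
print(shards_needed(read_mb_s=5))    # question 19: 3 shards; 3 MB/s of reads
                                     # already implies at least 2, so add 1 more
```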
  20. A. Kinesis Data Firehose is designed to funnel streaming data to big data applications, such as Redshift or Hadoop. It's not designed for videoconferencing.

Chapter 10: The Reliability Pillar

  1. C. Availability of 99.95 percent translates to about 22 minutes of downtime per month, or 4 hours and 23 minutes per year. Availability of 99.999 percent is less than 30 seconds of downtime per month, but the question calls for the minimum level of availability. Availability of 99 percent yields more than 7 hours of downtime per month, whereas 99.9 percent is more than 43 minutes of downtime per month.
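The downtime figures quoted above follow directly from the availability percentages; a quick conversion sketch:

```python
MIN_PER_YEAR = 365.25 * 24 * 60      # ~525,960 minutes
MIN_PER_MONTH = MIN_PER_YEAR / 12    # ~43,830 minutes

def downtime_minutes(availability_pct, window_minutes):
    # Unavailable fraction of the window, expressed in minutes.
    return (1 - availability_pct / 100) * window_minutes

for pct in (99, 99.9, 99.95, 99.999):
    monthly = downtime_minutes(pct, MIN_PER_MONTH)
    print(f"{pct}%: {monthly:.1f} min/month")
# 99.95% also works out to about 263 minutes (4 h 23 min) per year.
```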
  2. A. The EC2 instances are redundant components, so to calculate their availability, you multiply the component failure rates and subtract the product from 100 percent. In this case, 100% – (10% × 10%) = 99%. Because the database represents a hard dependency, you multiply the availability of the EC2 instances by the availability of the RDS instance, which is 95 percent. In this case, 99% × 95% = 94.05%. A total availability of 99 percent may seem intuitive, but because the redundant EC2 instances have a hard dependency on the RDS instance, you must multiply the availabilities together. A total availability of 99.99 percent is unachievable since it's well above the availability of any of the components.
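The two composition rules used here (multiply failure rates for redundant components, multiply availabilities for hard dependencies) can be checked numerically:

```python
def redundant(*availabilities):
    """Parallel components: subtract the product of failure rates from 1."""
    failure = 1.0
    for a in availabilities:
        failure *= (1 - a)
    return 1 - failure

def chained(*availabilities):
    """Hard (serial) dependencies: multiply the availabilities."""
    total = 1.0
    for a in availabilities:
        total *= a
    return total

ec2_pair = redundant(0.90, 0.90)   # 1 - (0.10 * 0.10) = 0.99
overall = chained(ec2_pair, 0.95)  # 0.99 * 0.95 = 0.9405
print(round(ec2_pair, 4), round(overall, 4))
```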
  3. B. DynamoDB offers 99.99 percent availability and low latency. Because it's distributed, data is stored across multiple availability zones. You can also use DynamoDB global tables to achieve even higher availability: 99.999 percent. Multi‐AZ RDS offerings can provide low latency performance, particularly when using Aurora, but the guaranteed availability is capped at 99.95 percent. Hosting your own SQL database isn't a good option because, although you could theoretically achieve high availability, it would come at the cost of significant time and effort.
  4. B, D. One cause of application failures is resource exhaustion. By scoping out large enough instances and scaling out to make sure you have enough of them, you can prevent failure and thus increase availability. Scaling instances in may help with cost savings but won't help availability. Storing web assets in S3 instead of hosting them from an instance can help with performance but won't have an impact on availability.
  5. B. You can modify a launch template by creating a new version of it; however, the question indicates that the Auto Scaling group was created using a launch configuration. You can't modify a launch configuration. Auto Scaling doesn't use CloudFormation templates.
  6. A. Auto Scaling strives to maintain the number of instances specified in the desired capacity setting. If the desired capacity setting isn't set, Auto Scaling will attempt to maintain the number of instances specified by the minimum group size. Given a desired capacity of 5, there should be five healthy instances. If you manually terminate two of them, Auto Scaling will create two new ones to replace them. Auto Scaling will not adjust the desired capacity or minimum group size.
  7. A, D, E. Auto Scaling monitors the health of instances in the group using either ELB or EC2 instance and system checks. It can't use Route 53 health checks. Dynamic scaling policies can use CloudWatch Alarms, but these are unrelated to checking the health of instances.
  8. B, C. Scheduled actions can adjust the minimum and maximum group sizes and the desired capacity on a schedule, which is useful when your application has a predictable load pattern. To add more instances in proportion to the aggregate CPU utilization of the group, implement step scaling policies. Target tracking policies adjust the desired capacity of a group to keep the threshold of a given metric near a predefined value. Simple scaling policies simply add more instances when a defined CloudWatch alarm triggers, but the number of instances added is not proportional to the value of the metric.
  9. A, D. Enabling versioning protects objects against data corruption and deletion by keeping before and after copies of every object. The Standard storage class replicates objects across multiple availability zones in a region, guarding against the failure of an entire zone. Bucket policies may protect against accidental deletion, but they don't guard against data corruption. Cross‐region replication applies to new objects, not existing ones.
  10. C. The Data Lifecycle Manager can automatically create snapshots of an EBS volume every 12 or 24 hours and retain up to 1,000 snapshots. Backing up files to EFS is not an option because a spot instance may terminate before the cron job has a chance to complete. CloudWatch Logs doesn't support storing binary files.
  11. D. Aurora allows you to have up to 15 replicas. MariaDB, MySQL, and PostgreSQL allow you to have only up to five.
  12. B. When you enable automated snapshots, RDS backs up database transaction logs about every five minutes. Configuring multi‐AZ will enable synchronous replication between the two instances, but this is useful for avoiding failures and is unrelated to the time it takes to recover a database. Read replicas are not appropriate for disaster recovery because data is copied to them asynchronously, and there can be a significant delay in replication, resulting in an RPO of well over five minutes.
  13. A, C. AWS sometimes adds additional availability zones to a region. To take advantage of a new zone, you'll need to be able to add a new subnet in it. You also may decide later that you may need another subnet or tier for segmentation or security purposes. RDS doesn't require a separate subnet. It can share the same subnet with other VPC resources. Adding a secondary CIDR to a VPC doesn't require adding another subnet.
  14. A, D. Fifty EC2 instances, each with two private IP addresses, would consume 100 IP addresses in a subnet. Additionally, AWS reserves five IP addresses in every subnet. The subnet therefore must be large enough to hold 105 IP addresses. 172.21.0.0/25 and 10.0.0.0/21 are sufficiently large. 172.21.0.0/26 contains only 64 addresses (59 usable after the 5 that AWS reserves), which is too small. 10.0.0.0/8 is large enough, but a subnet prefix length must be at least /16.
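The subnet sizing can be verified with a short sketch (using the rules quoted above: five reserved addresses per subnet and a minimum /16 subnet prefix; `fits` is an illustrative helper):

```python
AWS_RESERVED = 5        # AWS reserves five addresses in every subnet
needed = 50 * 2         # 50 instances x 2 private IPs = 100 addresses

def fits(prefix_len):
    total = 2 ** (32 - prefix_len)   # addresses in the CIDR block
    usable = total - AWS_RESERVED
    return prefix_len >= 16 and usable >= needed   # subnet prefix must be /16 or longer

for p in (25, 26, 21, 8):
    print(f"/{p}: {2 ** (32 - p)} addresses, fits={fits(p)}")
```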
  15. A, D. Direct Connect offers consistent speeds and latency to the AWS cloud. Because Direct Connect bypasses the public Internet, it's more secure. For speeds, you can choose 1 Gbps or 10 Gbps, so Direct Connect wouldn't offer a bandwidth increase over using the existing 10 Gbps Internet connection. Adding a Direct Connect connection wouldn't have an effect on end‐user experience, since they would still use the Internet to reach your AWS resources.
  16. B. When connecting a VPC to an external network, whether via a VPN connection or Direct Connect, make sure the IP address ranges don't overlap. In‐transit encryption, though useful for securing network traffic, isn't required for proper connectivity. IAM policies restrict API access to AWS resources, but this is unrelated to network connectivity. Security groups are VPC constructs and aren't something you configure on a data center firewall.
  17. A, C. CloudFormation lets you provision and configure EC2 instances by defining your infrastructure as code. This lets you update the AMI easily and build a new instance from it as needed. You can include application installation scripts in the user data to automate the build process. Auto Scaling isn't appropriate for this scenario because you're going to sometimes terminate and re‐create the instance. Dynamic scaling policies are part of Auto Scaling, so they aren't appropriate here either.
  18. D. By running four instances in each zone, you have a total of 12 instances in the region. If one zone fails, you lose four of those instances and are left with eight. Running eight or 16 instances in each zone would allow you to withstand one zone failure, but the question asks for the minimum number of instances. Three instances per zone would give you nine total in the region, but if one zone fails, you'd be left with only six.
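The per-zone math generalizes: to survive the loss of one availability zone, size each zone to carry the required load with one zone removed. A sketch (the helper name is illustrative):

```python
import math

def per_zone_capacity(required_instances, zones):
    """Instances per AZ so one zone can fail and required_instances still run."""
    return math.ceil(required_instances / (zones - 1))

per_zone = per_zone_capacity(8, 3)
print(per_zone, per_zone * 3)   # 4 per zone, 12 in the region
```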
  19. C. Availability of 99.99 percent corresponds to about 52 minutes of downtime per year; 99 percent, 99.9 percent, and 99.95 percent entail significantly more downtime.
  20. A, C. Because users access a public domain name that resolves to an elastic load balancer, you'll need to update the DNS record to point to the load balancer in the other region. You'll also need to fail the database over to the other region so that the read replica can become the primary. Load balancers are not cross‐region, so it's not possible to point the load balancer in one region to instances in another. Restoring the database isn't necessary because the primary database instance asynchronously replicates data to the read replicas in the other region.

Chapter 11: The Performance Efficiency Pillar

  1. A, B, D. ECUs, vCPUs, and the Intel AES‐NI instruction set are all instance type parameters. Aggregate cumulative cost per request has nothing to do with EC2 instances but is a common key performance indicator (KPI). Read replicas are a feature used with database engines.
  2. A, B, C. A launch configuration, the EC2 AMI it points to, and an associated load balancer are all, normally, essential to an Auto Scaling operation. Passing a startup script to the instance at runtime may not be necessary, especially if your application is already set up as part of your AMI. OpsWorks stacks are orchestration automation tools and aren't necessary for successful Auto Scaling.
  3. B. Defining a capacity metric, minimum and maximum instances, and a load balancer are all done during Auto Scaling configuration. Only the AMI is defined by the launch configuration.
  4. A. Elastic Container Service is a good platform for microservices. Lambda function executions are short‐lived (15 minutes maximum) and wouldn't work well for this kind of deployment. Beanstalk operations aren't ideal for microservices. ECR is a repository for container images and isn't a deployment platform on its own.
  5. D. RAID optimization is an OS‐level configuration and can, therefore, be performed only from within the OS.
  6. C. Cross‐region replication can provide both low‐latency and resilience. CloudFront and S3 Transfer Acceleration deliver low latency but not resilience. RAID arrays can deliver both, but only on EBS volumes.
  7. A. S3 Transfer Acceleration makes use of CloudFront locations. Neither S3 Cross‐Region Replication nor EC2 Auto Scaling uses CloudFront edge locations, and the EBS Data Transfer Wizard doesn't exist (although perhaps it should).
  8. B. Scalability is managed automatically by RDS, and there is no way for you to improve it through user configurations. Indexes, schemas, and views should be optimized as much as possible.
  9. D, E. Automated patches, out‐of‐the‐box Auto Scaling, and updates are benefits of a managed service like RDS, not of custom‐built EC2‐based databases.
  10. B, D. Integrated enhanced graphics and Auto Scaling can both help here. Amazon Lightsail is meant for providing quick and easy compute deployments. Elasticsearch isn't likely to help with a graphics workload. CloudFront can help with media transfers, but not with graphics processing.
  11. C. The network load balancer is designed for any TCP‐based application and preserves the source IP address. The application load balancer terminates HTTP and HTTPS connections, and it's designed for applications running in a VPC, but it doesn't preserve the source IP address. The Classic load balancer works with any TCP‐based application but doesn't preserve the source IP address. There is no such thing as a Dynamic load balancer.
  12. A, B, D. The CloudFormation wizard, prebuilt templates, and JSON formatting are all useful for CloudFormation deployments. CloudDeploy and Systems Manager are not good sources for CloudFormation templates.
  13. A. There is no default node name in a CloudFormation configuration—nor is there a node of any sort.
  14. B, E. Chef and Puppet are both integrated with AWS OpsWorks. Terraform, SaltStack, and Ansible are not directly integrated with OpsWorks.
  15. A, C. Dashboards and SNS are important elements of resource monitoring. There are no tools named CloudWatch OneView or AWS Config dashboards.
  16. A, B. Advance permission from AWS is helpful only for penetration testing operations. A complete record of your account's resource configuration changes would make sense in the context of AWS Config, but not CloudWatch. Service Catalog helps you audit your resources but doesn't contribute to ongoing event monitoring.
  17. D. Config is an auditing tool. CloudTrail tracks API calls. CloudWatch monitors system performance. CodePipeline is a continuous integration/continuous deployment (CI/CD) orchestration service.
  18. B, C. ElastiCache executions can use either Redis or Memcached. Varnish and Nginx are both caching engines but are not integrated into ElastiCache.
  19. A, D. Redis is useful for operations that require persistent session states and/or greater flexibility. If you're after speed, Redis might not be the best choice; in many cases, Memcached will provide faster service. Redis configuration has a rather steep learning curve.
  20. B. Read replicas based on the Oracle database are not possible.

Chapter 12: The Security Pillar

  1. A, C. A password policy can specify a minimum password length but not a maximum. It can prevent a user from reusing a password they used before but not one that another user has used. A password policy can require a password to contain numbers. It can also require administrator approval to reset an expired password.
  2. B. The Condition element lets you require MFA to grant the permissions defined in the policy. The Resource and Action elements define what those permissions are but not the conditions under which those permissions are granted. The Principal element is not used in an identity‐based policy.
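As an illustration of that structure, here is a minimal sketch of an identity‐based policy whose Condition element requires MFA. The action and resource shown are hypothetical placeholders, and the policy is built as a Python dict purely for readability:

```python
import json

# Sketch of an identity-based policy. The Action and Resource values are
# placeholders; the Condition element is what makes MFA a requirement.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "ec2:TerminateInstances",   # placeholder permission
        "Resource": "*",
        "Condition": {                        # when the permission applies
            "Bool": {"aws:MultiFactorAuthPresent": "true"}
        }
    }]
}

print(json.dumps(policy, indent=2))
```

Note the absence of a Principal element, consistent with identity‐based policies.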
  3. A, D. IAM keeps five versions of every customer managed policy. When CloudTrail is configured to log global management events, it will record any policy changes in the request parameters of the CreatePolicyVersion operation. There is no such thing as a policy snapshot. CloudTrail data event logs will not log IAM events.
  4. B. When an IAM user assumes a role, the user gains the permissions assigned to that role but loses the permissions assigned to the IAM user. The RunInstances action launches a new instance. Because the role can perform the RunInstances action in the us‐east‐1 region, the user, upon assuming the role, can create a new instance in the us‐east‐1 region but cannot perform any other actions. StartInstances starts an existing instance but doesn't launch a new one.
  5. A. Granting a user access to use a KMS key to decrypt data requires adding the user to the key policy as a key user. Adding the user as a key administrator is insufficient to grant this access, as is granting the user access to the key using an IAM policy. Adding the user to a bucket policy can grant the user permission to access encrypted objects in the bucket but doesn't necessarily give the user the ability to decrypt those objects.
  6. C. VPC flow logs record source IP address information for traffic coming into your VPC. DNS query logs record the IP addresses of DNS queries, but those won't necessarily be the same IP addresses accessing your application. Because users won't directly connect to your RDS instance, RDS logs won't record their IP addresses. CloudTrail logs can record the source IP address of API requests but not connections to an EC2 instance.
  7. C, D. Athena lets you perform advanced SQL queries against data stored in S3. A metric filter can increment based on the occurrence of a value in a CloudWatch log group but can't tell you the most frequently occurring IP address.
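As a sketch of the kind of SQL Athena could run to surface the most frequent source address, assuming the flow log data has been cataloged under a table named vpc_flow_logs with a srcaddr column (both names are assumptions, not the actual schema):

```python
# Hypothetical Athena query; "vpc_flow_logs" and "srcaddr" are assumed
# table and column names for flow log data stored in S3.
query = """
SELECT srcaddr, COUNT(*) AS hits
FROM vpc_flow_logs
GROUP BY srcaddr
ORDER BY hits DESC
LIMIT 1
"""

print(query)
```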
  8. A. The Behavior finding type is triggered by an instance sending abnormally large amounts of data or communicating on a protocol and port that it typically doesn't use. The Backdoor finding type indicates that an instance has resolved a DNS name associated with a command‐and‐control server or is communicating on TCP port 25. The Stealth finding type is triggered by weakening password policies or modifying a CloudTrail configuration. The ResourceConsumption finding type is triggered when an IAM user launches an EC2 instance despite never having done so before.
  9. A, C. The AWS Config timeline will show every configuration change that occurred on the instance, including the attachment and detachment of security groups. CloudTrail management event logs will also show the actions that detached and attached the security group. Although AWS Config rules use Lambda functions, the Lambda logs for AWS managed rules are not available to you. VPC flow logs capture traffic ingressing a VPC, but not API events.
  10. D. The Security Best Practices rules package has rules that apply to only Linux instances. The other rules contain rules for both Windows and Linux instances.
  11. C, D. You can use an IAM policy or SQS access policy to restrict queue access to certain principals or those coming from a specified IP range. You cannot use network access control lists or security groups to restrict access to a public endpoint.
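A minimal sketch of an SQS access policy that restricts a queue to a specified IP range follows; the queue ARN, account ID, and CIDR block are all placeholders:

```python
import json

# Sketch of an SQS access (resource-based) policy limiting who can send
# messages to the queue by source IP. All identifiers are placeholders.
queue_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",                 # resource-based policies name a principal
        "Action": "sqs:SendMessage",
        "Resource": "arn:aws:sqs:us-east-1:123456789012:example-queue",
        "Condition": {
            "IpAddress": {"aws:SourceIp": "203.0.113.0/24"}  # allowed range
        }
    }]
}

print(json.dumps(queue_policy, indent=2))
```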
  12. A, C. HTTPS traffic traverses TCP port 443, so the security group should allow inbound access to this protocol and port. HTTP traffic uses TCP port 80. Because users need to reach the ALB but not the instances directly, the security group should be attached to the ALB. Removing the Internet gateway would prevent users from reaching the ALB as well as the EC2 instances directly.
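The ingress rule in question can be sketched as the structure you'd pass to the EC2 authorize‐security‐group‐ingress call; the wide‐open CIDR is an assumption appropriate for a public‐facing ALB:

```python
# Ingress rule for the ALB's security group: allow HTTPS (TCP 443) from
# anywhere. This mirrors the IpPermissions structure the EC2 API expects.
https_ingress = {
    "IpProtocol": "tcp",
    "FromPort": 443,   # HTTPS, not 80 (HTTP)
    "ToPort": 443,
    "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "public HTTPS"}],
}

print(https_ingress)
```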
  13. B. A security group to restrict inbound access to authorized sources is sufficient to guard against a UDP‐based DDoS attack. Elastic load balancers do not provide UDP listeners, only TCP. AWS Shield is enabled by default and protects against those UDP‐based attacks from sources that are allowed by the security group.
  14. A, C. WAF can block SQL injection attacks against your application, but only if it's behind an application load balancer. It's not necessary for the EC2 instances to have an elastic IP address. Blocking access to TCP port 3306, which is the port that MySQL listens on for database connections, may prevent direct access to the database server but won't prevent a SQL injection attack.
  15. B, D. Both WAF and Shield Advanced can protect against HTTP flood attacks, which are marked by excessive or malformed requests. Shield Advanced includes WAF at no charge. Shield Standard does not offer protection against Layer 7 attacks. GuardDuty looks for signs of an attack but does not prevent one.
  16. A, D. You can revoke and rotate both a customer‐managed CMK and a customer‐provided key at will. You can't revoke or rotate an AWS‐managed CMK or an S3‐managed key.
  17. C, D. Customer‐managed customer master keys (CMKs) can be rotated at will, whereas AWS‐managed CMKs are rotated only once a year. RDS and DynamoDB let you use a customer‐managed CMK to encrypt data. RedShift is not designed for highly transactional databases and is not appropriate for the application. KMS stores and manages encryption keys but doesn't store application data.
  18. B, D. To encrypt data on an unencrypted EBS volume, you must first take a snapshot. The snapshot will inherit the encryption characteristics of the source volume, so an unencrypted EBS volume will always yield an unencrypted snapshot. You can then simultaneously encrypt the snapshot as you copy it to another region.
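The two‐step workflow can be sketched as the parameters of the two EC2 API calls involved. The volume ID, snapshot ID, regions, and key alias below are placeholders:

```python
# Step 1: snapshot the unencrypted volume. The snapshot inherits the
# volume's lack of encryption.
create_snapshot_params = {
    "VolumeId": "vol-0123456789abcdef0",  # placeholder source volume
}

# Step 2: copy the snapshot, enabling encryption during the copy. The call
# is issued from the destination region; identifiers are placeholders.
copy_snapshot_params = {
    "SourceRegion": "us-east-1",
    "SourceSnapshotId": "snap-0123456789abcdef0",
    "Encrypted": True,            # encryption happens as part of the copy
    "KmsKeyId": "alias/my-key",   # placeholder customer-managed CMK alias
}

print(copy_snapshot_params)
```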
  19. B. You can enable encryption on an EFS filesystem only when you create it; therefore, the only option to encrypt the data using KMS is to create a new EFS filesystem and copy the data to it. A third‐party encryption program can't use KMS keys to encrypt data. Encrypting the EBS volume will encrypt the data stored on the volume, but not on the EFS filesystem.
  20. A, D. You can install an ACM‐generated certificate on a CloudFront distribution or application load balancer. You can't export the private key of an ACM‐generated certificate, so you can't install it on an EC2 instance. AWS manages the TLS certificates used by S3.
  21. C. Security Hub checks the configuration of your AWS services against AWS best practices.

Chapter 13: The Cost Optimization Pillar

  1. C. The Free Tier provides free access to basic levels of AWS services for a new account's first year.
  2. A. Standard provides the most replicated and quickest‐access service and is, therefore, the most expensive option. Storage rates for Standard‐Infrequent and One Zone‐Infrequent are lower than Standard but are still more expensive than Glacier.
  3. B. Cost Explorer provides usage and spending data. Organizations lets you combine multiple AWS accounts under a single administration. TCO Calculator lets you compare the costs of running an application on AWS versus locally.
  4. D. Cost Explorer provides usage and spending data, but without the ability to easily incorporate Redshift and QuickSight that Cost and Usage Reports offers. Trusted Advisor checks your account for best‐practice compliance. Budgets allows you to set alerts for problematic usage.
  5. A, B, D. As efficient as Organizations can be, the account consolidation they provide also magnifies the potential impact of a compromise, so the security threat grows accordingly. There is no such thing as a specially hardened organization‐level VPC. Security groups don't require any special configuration.
  6. B, C. Trusted Advisor monitors your EC2 instances for lower than 10 percent CPU and network I/O below 5 MB on four or more days. Trusted Advisor doesn't monitor Route 53 hosted zones or the status of S3 data transfers. Proper OS‐level configuration of your EC2 instances is your responsibility.
  7. B. The Pricing Calculator is the most direct tool for this kind of calculation. TCO Calculator helps you compare costs of on‐premises to AWS deployments. Trusted Advisor checks your account for best‐practice compliance. Cost and Usage Reports helps you analyze data from an existing deployment.
  8. A. Monitoring of EBS volumes for capacity is not within the scope of budgets.
  9. A, B. Tags can take up to 24 hours to appear and they can't be applied to legacy resources. You're actually allowed only two free budgets per account. Cost allocation tags are managed from the Cost Allocation Tags page.
  10. D. The most effective approach would be to run three reserved instances 12 months/year and purchase three scheduled reserved instances for the summer. Spot instances are not appropriate because they shut down automatically. Since it's possible to schedule an RI to launch within a recurring block of time, provisioning other instance configurations for the summer months would be wasteful.
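The cost logic behind that answer can be made concrete with invented hourly rates; none of the numbers below are real AWS prices:

```python
# Hypothetical hourly rates -- invented for illustration only.
on_demand = 0.10   # per instance-hour, on demand
reserved = 0.06    # effective standard RI rate
scheduled = 0.07   # effective scheduled RI rate, charged only while scheduled

hours_year = 365 * 24
summer_hours = 3 * 30 * 24   # roughly three summer months

# Workload: three instances year-round, plus three more during the summer.
# Option 1: everything on demand.
on_demand_total = 3 * on_demand * hours_year + 3 * on_demand * summer_hours

# Option 2: three year-round RIs plus three scheduled RIs for the summer.
mixed_total = 3 * reserved * hours_year + 3 * scheduled * summer_hours

print(f"all on-demand: ${on_demand_total:,.0f}  RIs + scheduled: ${mixed_total:,.0f}")
```

Under any rates where reserved pricing undercuts on demand, the mixed approach wins.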
  11. C. Interruption policies are relevant to spot instances, not reserved instances. Payment options (All Upfront, Partial Upfront, or No Upfront), reservation types (Standard or Convertible RI), and tenancy (Default or Dedicated) are all necessary settings for RIs.
  12. C. No Upfront is the most expensive option. The more you pay up front, the lower the overall cost. There's no option called Monthly.
  13. B, D. Containers are denser and more lightweight than full EC2 instances. Containers do tend to launch more quickly than EC2 instances and do make it easy to replicate server environments, but those are not primarily cost savings.
  14. B. Standard reserved instances make the most sense when they need to be available 24/7 for at least a full year, with even greater savings over three years. Irregular or partial workdays are not good candidates for this pricing model.
  15. D. A spot instance pool is made up of unused EC2 instances. There are three request types: Request, Request And Maintain, and Reserve For Duration. A spot instance interruption occurs when the spot price rises above your maximum. A spot fleet is a group of spot instances launched together.
  16. A. A spot instance interruption occurs when the spot price rises above your maximum. Workload completions and data center outages are never referred to as interruptions. Spot requests can't be manually restarted.
  17. B. Target capacity represents the maximum instances you want running. A spot instance pool contains unused EC2 instances matching a particular set of launch specifications. Spot maximum and spot cap sound good but aren't terms normally used in this context.
  18. A. The EBS Lifecycle Manager can be configured to remove older EBS snapshots according to your needs. Creating a script is possible, but it's nowhere near as simple and it's not tightly integrated with your AWS infrastructure. There is no “EBS Scheduled Reserve Instance” but there is an “EC2 Scheduled Reserve Instance.” Tying a string? Really? EBS snapshots are stored in S3, but you can't access the buckets that they're kept in.
  19. D. The command is request‐spot‐fleet. The ‐‐spot‐fleet‐request‐config argument points to a JSON configuration file.
  20. C. The availability zone, target capacity, and AMI are all elements of a complete spot fleet request.
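Pulling those elements together, a sketch of the JSON configuration file the ‐‐spot‐fleet‐request‐config argument points to might look like this; the role ARN, AMI ID, and other identifiers are placeholders:

```python
import json

# Sketch of a spot fleet request configuration. Every identifier is a
# placeholder; the structure is what request-spot-fleet consumes via
# --spot-fleet-request-config file://config.json.
config = {
    "SpotPrice": "0.05",                      # maximum price you'll pay
    "TargetCapacity": 4,                      # maximum instances you want running
    "IamFleetRole": "arn:aws:iam::123456789012:role/spot-fleet-role",
    "LaunchSpecifications": [{
        "ImageId": "ami-0123456789abcdef0",   # the AMI
        "InstanceType": "m5.large",
        "Placement": {"AvailabilityZone": "us-east-1a"},
    }],
}

print(json.dumps(config, indent=2))
```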

Chapter 14: The Operational Excellence Pillar

  1. C, D. It's a best practice to organize stacks by lifecycle (e.g., development, test, production) and ownership (e.g., network team, development team). You can store templates for multiple stacks in the same bucket, and there's no need to separate templates for different stacks into different buckets. Organizing stacks by resource cost doesn't offer any advantage since the cost is the same regardless of which stack a resource is in.
  2. A, B. Parameters let you input custom values into a template when you create a stack. The purpose of parameters is to avoid hard‐coding those values into a template. An AMI ID and EC2 key pair name are values that likely would not be hard‐coded into a template. Although you define the stack name when you create a stack, it is not a parameter that you define in a template. The logical ID of a resource must be hard‐coded in the template.
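A sketch of a template's Parameters section covering those two values follows; the parameter types shown are real CloudFormation AWS‐specific types, while the parameter names and descriptions are illustrative. It's expressed as a Python dict for readability:

```python
import json

# Sketch of the Parameters section of a CloudFormation template, letting
# the AMI ID and key pair name be supplied at stack creation rather than
# hard-coded. Parameter names are illustrative.
template_fragment = {
    "Parameters": {
        "AmiId": {
            "Type": "AWS::EC2::Image::Id",
            "Description": "AMI to launch instances from",
        },
        "KeyPairName": {
            "Type": "AWS::EC2::KeyPair::KeyName",
            "Description": "EC2 key pair for SSH access",
        },
    }
}

print(json.dumps(template_fragment, indent=2))
```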
  3. C. When using nested stacks, the parent stack defines a resource of the type AWS::CloudFormation::Stack, which points to the template used to generate the nested stack. Because of this, there's no need to define a VPC resource directly in the template that creates the parent stack. There is also no need to export stack output values because the nested stacks do not need to pass any information to stacks outside of the nested stack hierarchy. For this same reason, you don't need to use the Fn::ImportValue intrinsic function, since it is used to import values exported by another stack.
  4. A. A change set lets you see the changes CloudFormation will make before updating the stack. A direct update doesn't show you the changes before making them. There's no need to update or override the stack policy before using a change set to view the changes that CloudFormation would make.
  5. C. To use Git to access a repository as an IAM user, the developer must use a Git username and password generated by IAM. Neither an AWS access key and secret key combination nor an IAM username and password will work. Although SSH is an option, the developer would need a private key. The public key is what you'd provide to IAM.
  6. D. You can allow repository access for a specific IAM user by using an IAM policy that specifies the repository ARN as the resource. Specifying the repository's clone URL would not work, since the resource must be an ARN. Generating Git credentials also would not work, because the user still needs permissions via IAM. There is no such thing as a repository policy.
  7. A. CodeCommit offers differencing, allowing you (and the auditors) to see file‐level changes over time. CodeCommit offers at‐rest encryption using AWS‐managed KMS keys but not customer‐managed keys. S3 offers versioning and at‐rest encryption, but not differencing.
  8. B. The git clone command clones or downloads a repository. The git push command pushes or uploads changes to a repository. The git add command stages files for commit to a local repository but doesn't commit them or upload them to CodeCommit. The aws codecommit get‐repository command lists the metadata of a repository, such as the clone URL and ARN, but doesn't download the files in it.
  9. D. CodeDeploy can deploy from an S3 bucket or GitHub repository. It can't deploy from any other Git repository or an EBS snapshot.
  10. B. A blue/green instance deployment requires an elastic load balancer (ELB) in order to direct traffic to the replacement instances. An in‐place instance deployment can use an ELB but doesn't require it. A blue/green Lambda deployment doesn't use an ELB because ELB is for routing traffic to instances. There's no such thing as an in‐place Lambda deployment.
  11. C. The AllAtOnce deployment configuration considers the entire deployment to have succeeded if the application is deployed successfully to at least one instance. HalfAtATime and OneAtATime require the deployment to succeed on multiple instances. There's no preconfigured deployment configuration called OnlyOne.
  12. B. The AfterAllowTraffic lifecycle event occurs last in any instance deployment that uses an elastic load balancer. ValidateService and BeforeAllowTraffic occur before CodeDeploy allowing traffic to the instances. AllowTraffic is a lifecycle event, but you can't hook into it to run a script.
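As a sketch, the hooks section of an appspec.yml that runs a script during AfterAllowTraffic might look like the following, expressed here as a Python dict; the script path and timeout are placeholders. Note that AllowTraffic itself never appears under hooks, since it can't be hooked:

```python
# Sketch of an appspec.yml hooks section (as a dict). Only hookable events
# such as BeforeAllowTraffic and AfterAllowTraffic may appear here; the
# script location and timeout are placeholders.
appspec_hooks = {
    "hooks": {
        "AfterAllowTraffic": [
            {"location": "scripts/verify_traffic.sh", "timeout": 300}
        ]
    }
}

print(appspec_hooks)
```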
  13. A. CodePipeline stores pipeline artifacts in an S3 bucket. An artifact can serve as an input to a stage, an output from a stage, or both. A provider is a service that performs an action, such as building or testing. An asset is a term that often refers to the supporting files for an application, such as images or audio. S3 doesn't offer snapshots, but it does offer versioning for objects.
  14. B, C. You can implement an approval action to require manual approval before transitioning to the deploy stage. Instead of or in addition to this, you can disable the transition to the deploy stage, which would require manually enabling the transition to deploy to production. Because CodePipeline uses one bucket for all stages of the pipeline, you can't create a separate bucket for the deploy stage. Even if you could, disallowing developers access to that bucket would not prevent a deployment, since CodePipeline obtains its permission to the bucket by virtue of its IAM service role.
  15. A, D, E. A pipeline must consist of at least two stages. The first stage must contain only source actions. Since the templates are stored in CodeCommit, it must be the provider for the source action. The second stage of the pipeline should contain a deploy action with a CloudFormation provider, since it's the service that creates the stack. There's no need for a build stage, because CloudFormation templates are declarative code that don't need to be compiled. Hence, the pipeline should only be two stages. CodeCommit is not a valid provider for the deploy action.
  16. B. A pipeline can have anywhere from two to 10 stages. Each stage can have one to 20 actions.
  17. B. Automation documents let you perform actions against your AWS resources, including taking EBS snapshots. Although they're called automation documents, you can still manually execute them. A command document performs actions within a Linux or Windows instance. A policy document works only with State Manager and can't take an EBS snapshot. There's no manual document type.
  18. A. The AmazonEC2RoleforSSM managed policy contains permissions allowing the Systems Manager agent to interact with the Systems Manager service. There's no need to install the agent because Amazon Linux comes with it preinstalled. There's also no need to open inbound ports to use Systems Manager.
  19. A, D. Setting the patch baseline's auto‐approval delay to 0 and then running the AWS‐RunPatchBaseline document would immediately install all available security patches. Adding the patch to the list of approved patches would approve the specific patch for installation but not any other security updates released within the preceding seven days. Changing the maintenance window to occur Monday at midnight wouldn't install the patch until the following Monday.
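The patch‐baseline half of that answer can be sketched as the approval rule you'd supply when creating or updating the baseline; the classification filter shown is an assumption:

```python
# Sketch of a patch baseline approval rule with no auto-approval delay,
# so newly released security patches are approved immediately. The filter
# values are placeholders.
approval_rules = {
    "PatchRules": [{
        "PatchFilterGroup": {
            "PatchFilters": [
                {"Key": "CLASSIFICATION", "Values": ["Security"]},
            ]
        },
        "ApproveAfterDays": 0,   # approve immediately, no seven-day wait
    }]
}

print(approval_rules)
```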
  20. A, B. Creating a global inventory association will immediately run the AWS‐GatherSoftwareInventory policy document against the instance, collecting both network configuration and software inventory information. State Manager will execute the document against future instances according to the schedule you define. Simply running the AWS‐GatherSoftwareInventory policy document won't automatically gather configuration information for future instances. Of course, an instance must be running in order for the Systems Manager agent to collect data from it. The AWS‐SetupManagedInstance document is an automation document and thus can perform operations on AWS resources and not tasks within an instance.