- B
High I/O instances are storage optimized. Examples of this family are H1 (high disk throughput), I3 (high random I/O performance), and D2 (dense storage). EBS-optimized instances have dedicated capacity for EBS I/O operations.
RDS is not compatible with Oracle Enterprise High Availability options because it uses Multi-AZ and access to the OS is limited. Also, note that burstable performance instances are not compatible with EBS optimization.
References:
- Amazon EC2 Instance Types: https://aws.amazon.com/ec2/instance-types/
- Amazon RDS Instance Types: https://aws.amazon.com/rds/instance-types/
- Amazon RDS for Oracle Database FAQs: https://aws.amazon.com/rds/oracle/faqs/
- Instances optimized for Amazon EBS: https://docs.aws.amazon.com/es_es/AWSEC2/latest/UserGuide/EBSOptimized.html
- Determining the IOPS needs for Oracle Database on AWS: https://d1.awsstatic.com/whitepapers/determining-iops-needs-for-oracle-database-on-aws.pdf
- B
Short polling is the default behavior of an SQS queue (ReceiveMessageWaitTimeSeconds=0). Any value greater than zero enables long polling, which reduces CPU overhead and empty responses. This can also be overridden per call to ReceiveMessage by increasing the WaitTimeSeconds attribute. Extending the visibility timeout only prevents a message from being processed twice by multiple workers; it does not reduce polling overhead.
SQS FIFO queues handle a throughput of up to 300 messages per second (soft limit) and deduplicate messages. Duplicates are acceptable in this design because the question imposes no such restriction, and migrating an existing standard queue to a .fifo queue is not possible. Since the message volume per second is higher than the FIFO limit, a standard SQS queue is the right choice.
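The short-vs-long polling switch can be sketched with boto3-style receive parameters. The queue URL is a placeholder and the actual client call is shown only as a comment, so this is an illustrative sketch rather than a live example:

```python
# Receive-call parameters that switch a consumer from short to long
# polling: any WaitTimeSeconds > 0 enables long polling for that call,
# overriding a queue-level ReceiveMessageWaitTimeSeconds of 0.
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder

receive_params = {
    "QueueUrl": queue_url,
    "MaxNumberOfMessages": 10,   # batch up to 10 messages per call
    "WaitTimeSeconds": 20,       # 20 s is the maximum long-poll wait
}

# With boto3 this would be:
#   sqs = boto3.client("sqs")
#   response = sqs.receive_message(**receive_params)

def polling_mode(wait_time_seconds: int) -> str:
    """Classify the polling behavior implied by WaitTimeSeconds."""
    return "long" if wait_time_seconds > 0 else "short"

print(polling_mode(receive_params["WaitTimeSeconds"]))  # long
print(polling_mode(0))                                  # short
```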
References:
- Amazon SQS FAQs: https://aws.amazon.com/sqs/faqs/
- Amazon SQS FIFO (First-In-First-Out) queues: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues.html
- Amazon SQS Long Polling: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-long-polling.html
- Amazon SQS Visibility Timeout: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-timeout.html
- C
S3 Standard provides the highest durability by default and replicates objects across at least three AZs; FS is the most critical information. CS is derived data that can be recreated in case of loss, so a lower-cost option is acceptable for it.
RS requires the FS and CS files; Glacier is not an option for RS because it doesn't provide frequent access to data. Also, objects must reside in S3 Standard for at least 60 days before transitioning to Glacier.
References:
- Amazon S3 Storage Classes: https://aws.amazon.com/s3/storage-classes/
- Object Lifecycle Management: https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html
- Amazon S3 FAQs: https://aws.amazon.com/s3/faqs/
- Amazon S3 Reduced Redundancy Storage: https://aws.amazon.com/s3/reduced-redundancy/
- C
Option A could work, but it is not scalable, since bucket policies have a size limit of 20 KB. CloudFront URLs will improve the performance of the web app but do not guarantee confidentiality, because the resources remain public and static.
Signed URLs allow an expiration time that can match the user's web session, and accelerating delivery through CloudFront provides the required performance improvement. CloudFront signed cookies enable authentication and authorization for the platform's users.
References:
- Uploading Objects Using Presigned URLs: https://docs.aws.amazon.com/AmazonS3/latest/dev/PresignedUrlUploadObject.html
- Choosing Between Signed URLs and Signed Cookies: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-choosing-signed-urls-cookies.html
- IAM Policies, Bucket Policies, and ACLs!: https://aws.amazon.com/blogs/security/iam-policies-and-bucket-policies-and-acls-oh-my-controlling-access-to-s3-resources/
- C
Option A is not the best fit because the load balancer only acts as a proxy for balancing and resilience; it is a horizontally scaled service, so it does not represent a bottleneck.
Option B will have the opposite effect: dual-homed servers will bisect the available bandwidth.
Option C is the right answer: a NAT gateway is a horizontally scaled managed service, whereas a NAT instance is limited by the networking capabilities of its instance type. Multiple NAT instances with load balancing and scripted failover could be architected, but the managed service is the better option because it offloads all of the operational work.
Option D is a good option but solves only part of the problem, and only in the short term.
References:
- NAT Instances: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_NAT_Instance.html
- Comparison of NAT Instances and NAT Gateways: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-comparison.html
- Elastic Network Interfaces: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html
- Enhanced Networking on Linux: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html
- D
Regarding option A, S3 One Zone-Infrequent Access is not a good option because its redundancy level is low and it could lead to total data loss, since the AZ constitutes a single fault domain. In option B, Glacier by definition is not a good choice for DR because it doesn't give you real-time access to your data; this has to be considered from an RTO perspective.
Option C has the same problem as option B. Option D is the best option because it combines several backup services: EBS snapshots, Storage Gateway, and S3, which offers 99.999999999% durability and 99.99% availability.
References:
- AWS Storage Gateway FAQs: https://aws.amazon.com/storagegateway/faqs/?nc1=h_ls
- Configuring DNS Failover: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-configuring.html
- AWS Direct Connect Resiliency Recommendations: https://aws.amazon.com/directconnect/resiliency-recommendation/
- C
Option A: SSE-KMS gives you the flexibility to choose encryption keys generated via KMS, but bucket ACLs don't support conditions.
Option B: SSE-S3 is the right answer because it offloads all key management to S3. The bucket policy reinforces it with a StringNotEquals condition on "s3:x-amz-server-side-encryption": "AES256", denying unencrypted Put operations, and versioning prevents accidental deletion.
Option C: Object lifecycle management does not enforce security.
Option D: This only provides confidentiality and integrity from the customer side, with the customer taking full responsibility for the process.
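The deny-unencrypted-uploads policy referred to above can be sketched as a JSON document built in Python; the bucket name is a placeholder. Note that negated condition operators such as StringNotEquals also match when the header is absent, which is what makes a single statement sufficient in this sketch:

```python
import json

bucket = "my-pci-bucket"  # placeholder name

# Bucket policy that denies any PutObject request not carrying the
# SSE-S3 header; StringNotEquals fires when the header is set to
# anything other than AES256 (and, being a negated operator, also
# when the header is absent).
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedObjectUploads",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
        "Condition": {
            "StringNotEquals": {"s3:x-amz-server-side-encryption": "AES256"}
        },
    }],
}

print(json.dumps(policy, indent=2))
```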
References:
- PCI DSS: https://aws.amazon.com/compliance/pci-dss-level-1-faqs/
- How to Prevent Uploads of Unencrypted Objects to Amazon S3: https://aws.amazon.com/blogs/security/how-to-prevent-uploads-of-unencrypted-objects-to-amazon-s3/
- Using Versioning: https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html
- Protecting Data Using Encryption: https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingEncryption.html
- D
Option A: The AWS CLI can solve the problem, but it doesn't scale to petabytes within a one-week window. Option B: VM Import/Export is designed for virtual machine images and Storage Gateway and has a different use case; it is not meant for one-time bulk data transfer.
Option C: Direct Connect could do the job, but 1 Gbps is too little bandwidth to finish in one week. Option D: This is the correct choice, as several Snowballs can be used in parallel to copy the petabyte and complete the job within one week.
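The bandwidth argument against option C can be checked with quick arithmetic; the figures assume a 1 PB data set and a fully utilized 1 Gbps link with no protocol overhead:

```python
# Time to push 1 PB over a fully utilized 1 Gbps Direct Connect link.
data_bits = 1e15 * 8          # 1 PB (10^15 bytes) expressed in bits
link_bps = 1e9                # 1 Gbps
seconds = data_bits / link_bps
days = seconds / 86_400

print(f"{days:.0f} days")     # roughly 93 days -- far beyond one week
```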
References:
- Multipart Upload Overview: https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html
- FAQs about AWS Direct Connect: https://aws.amazon.com/es/directconnect/faqs/
- How to Transfer Petabytes of Data Efficiently: https://docs.aws.amazon.com/snowball/latest/ug/transfer-petabytes.html
- C
Option A: Read replicas perform asynchronous replication, so eventual consistency and replication lag are common scenarios.
Option B: The Multi-AZ deployment is designed to provide 99.95% availability. RDS can detect common hardware failures and perform failovers to provide business continuity.
Option C: RDS cannot be provisioned with less than 100 GB of SSD.
Option D: Transparent Data Encryption can be enabled only for SQL Server and Oracle Databases.
References:
- Amazon RDS Multi-AZ Deployments: https://aws.amazon.com/rds/details/multi-az/?nc1=h_ls
- Amazon RDS Read Replicas: https://aws.amazon.com/rds/details/read-replicas/
- Limits for Amazon RDS: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Limits.html
- Microsoft SQL Server Transparent Data Encryption Support: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.SQLServer.Options.TDE.html
- Oracle Transparent Data Encryption: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.Oracle.Options.AdvSecurity.html
- C
Option A: RAID 5 is not recommended for Amazon EBS because parity operations consume a portion of the available IOPS.
Option B: RAID 1 is a good option, but the array will be limited to the worst-performing volume.
Option C: EBS snapshots are the best option to provide fault tolerance while preserving performance; EBS already replicates each volume automatically within its Availability Zone.
Option D: RAID 0 is designed to improve performance at the cost of fault tolerance.
References:
- RAID Configuration on Linux: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/raid-config.html
- Automating the Creation of Consistent Amazon EBS Snapshots with Amazon EC2 Systems Manager (Part 1): https://aws.amazon.com/blogs/compute/automating-the-creation-of-consistent-amazon-ebs-snapshots-with-amazon-ec2-systems-manager-part-1/
- Amazon EBS FAQs: https://aws.amazon.com/ebs/faqs/?nc1=h_ls
- C
Option A: The Classic Load Balancer could work, but it doesn't provide routing flexibility.
Option B: The Network Load Balancer works at Layer 4 of the OSI model, so it cannot make HTTP-aware routing decisions.
Option C: The Application Load Balancer is the perfect fit for this scenario because it can route requests to containers on dynamic ports and supports sticky sessions to maintain affinity between client and server.
Option D: The requirement doesn't call for routing at the global level, so Route 53 is not necessary.
References:
- Load Balancer Types: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/load-balancer-types.html
- Configure Sticky Sessions for Your Classic Load Balancer: https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-sticky-sessions.html
- Service Load Balancing: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-load-balancing.html
- C
Option A: The application script can be modified and used to access other users' data.
Option B: The RDS table does not store the data in an encrypted format and can be vulnerable.
Option C: The instance metadata server is a great choice because it provides confidentiality and automation in the process. The metadata server can only be reached from inside the instance.
Option D: Email can be compromised, exposing sensitive data.
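Retrieving role credentials from the metadata server can be sketched as below. The HTTP call is shown only as a comment because the link-local endpoint is reachable solely from inside an EC2 instance, and the role name is a placeholder:

```python
# The instance metadata service lives at a link-local address, so only
# processes running on the instance itself can reach it.
IMDS_BASE = "http://169.254.169.254/latest/meta-data"
role_name = "my-app-role"  # placeholder; the real name comes from the
                           # iam/security-credentials/ listing

credentials_url = f"{IMDS_BASE}/iam/security-credentials/{role_name}"

# On an instance this would be (urllib shown, no extra dependencies):
#   import urllib.request, json
#   creds = json.load(urllib.request.urlopen(credentials_url))
#   creds["AccessKeyId"], creds["SecretAccessKey"], creds["Token"]

print(credentials_url)
```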
References:
- Instance Metadata and User Data: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html
- Encrypting Amazon RDS Resources: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html
- A
Option A: This is the best choice because CloudTrail maintains auditability and AWS Config rules can trigger Lambda functions to automate responses to events.
Option B: CloudWatch Events with filters is not the most scalable or effortless option.
Option C: A Marketplace instance is not scalable because it stores audit trails and system changes on local disk.
Option D: CloudTrail covers only half of the job; it records API activity but does not evaluate configuration compliance.
References:
- Best Practices for Monitoring: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring_best_practices.html
- AWS Config Rules – Dynamic Compliance Checking for Cloud Resources: https://aws.amazon.com/blogs/aws/aws-config-rules-dynamic-compliance-checking-for-cloud-resources/
- AWS Config vs. CloudTrail: https://www.sumologic.com/blog/amazon-web-services/aws-config-vs-cloudtrail/
- A
Option A: RDS is a container service and privileged access to the operating system is limited.
Option B: EMR allows users to SSH into the master instance to manage the core and task nodes in the cluster.
Option C: EC2 is an Infrastructure as a Service resource and SSH access is permitted.
Option D: Elastic Beanstalk is a container service but SSH access is permitted.
References:
- Amazon RDS FAQs: https://aws.amazon.com/rds/faqs/
- Working with Option Groups: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithOptionGroups.html
- Working with DB Parameter Groups: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithParamGroups.html
- Connect to the Master Node Using SSH: https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-connect-master-node-ssh.html
- eb ssh: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb3-ssh.html
- A, B, and C
References:
- Overview of Security Processes: https://aws.amazon.com/whitepapers/overview-of-security-processes/
- Shared Responsibility Model: https://aws.amazon.com/compliance/shared-responsibility-model/
- C
Option A: SNS can deliver notifications, but S3-style file-copy notifications are not available for Glacier.
Option B: When data is aggregated into vaults in Glacier, no upload notification is available.
Option C: Vault Inventory and Retrieval Job Complete are the only available SNS notifications from Glacier.
References:
- Configuring Vault Notifications in Amazon Glacier: https://docs.aws.amazon.com/amazonglacier/latest/dev/configuring-notifications.html
- Amazon Glacier FAQs: https://aws.amazon.com/glacier/faqs/
- A, B, and C
Option D: A VPC is not inherently highly available; it is the customer's responsibility to architect for availability using multiple AZs.
References:
- VPCs and Subnets: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html#vpc-sizing-ipv4
- Amazon VPC FAQs: https://aws.amazon.com/vpc/faqs/
- IP Addressing in Your VPC: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-ip-addressing.html
- C
Option A: Placement groups are a great way to achieve maximum packets per second between instances.
Option B: Jumbo frames increase the payload carried by every packet.
Option C: Spot Instances offer spare compute capacity at a discount but do not increase network throughput.
Option D: Enhanced networking improves instance capabilities by using the Intel 82599 Virtual Function (VF) interface or the Elastic Network Adapter (ENA).
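The jumbo-frame point in option B can be quantified: with a 9001-byte MTU each packet carries proportionally less header overhead than with the 1500-byte default. The 40-byte header figure assumes plain IPv4 + TCP with no options:

```python
HEADERS = 40  # IPv4 (20 B) + TCP (20 B), no options -- simplifying assumption

def payload_efficiency(mtu: int) -> float:
    """Fraction of each packet that is application payload."""
    return (mtu - HEADERS) / mtu

standard = payload_efficiency(1500)   # default Ethernet MTU
jumbo = payload_efficiency(9001)      # EC2 jumbo-frame MTU

print(f"standard: {standard:.1%}, jumbo: {jumbo:.1%}")
```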
References:
- Network Maximum Transmission Unit (MTU) for Your EC2 Instance: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/network_mtu.html
- Placement Groups: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
- Spot Instances: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-instances.html
- Enabling Enhanced Networking with the Elastic Network Adapter (ENA) on Linux Instances: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking-ena.html
- D
Option A: Using RDS would solve the problem only temporarily and would couple the application to the database.
Option B: Compression solves only part of the problem for now; a new design would have to be proposed in the future.
Option C: The problem is not in DynamoDB, so this option is useless.
Option D: SQS provides functionality (via the Extended Client Library) to store large message payloads in S3; this is a great solution that accommodates even bigger messages in the future.
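The store-in-S3 pattern is what the SQS Extended Client Library implements for Java. A minimal language-agnostic sketch of the idea, with an in-memory dict standing in for the S3 bucket and a hypothetical `prepare_message` helper:

```python
import uuid

SQS_MAX_BYTES = 256 * 1024  # SQS message size limit (256 KB)

fake_s3 = {}  # stands in for an S3 bucket in this sketch

def prepare_message(payload: bytes) -> dict:
    """Return an SQS-sized message; large payloads become S3 pointers."""
    if len(payload) <= SQS_MAX_BYTES:
        return {"body": payload, "s3_key": None}
    key = str(uuid.uuid4())
    fake_s3[key] = payload                  # real code: s3.put_object(...)
    return {"body": b"", "s3_key": key}     # the queue carries only a pointer

small = prepare_message(b"x" * 1024)            # fits in SQS directly
large = prepare_message(b"x" * (1024 * 1024))   # 1 MB -> offloaded to "S3"

print(small["s3_key"] is None, large["s3_key"] in fake_s3)  # True True
```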
References:
- Managing Large Amazon SQS Messages Using Amazon S3: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-s3-messages.html
- Limits in DynamoDB: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Limits.html
- AWS Storage Options: https://aws.amazon.com/whitepapers/storage-options-aws-cloud/
- B
Option A: ElastiCache adds complexity to fine-tune hot keys and requires application re-engineering.
Option B: CloudFront is the least intrusive solution; creating a distribution with cache behaviors and appropriate TTLs will do the job.
Option C: At some point the database size becomes a constraint, and it won't scale to meet demand.
Option D: Read replicas would require application re-engineering.
References:
- Amazon RDS Read Replicas: https://aws.amazon.com/rds/details/read-replicas/
- Overview of Distributions: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-overview.html
- Amazon ElastiCache FAQs: https://aws.amazon.com/elasticache/faqs/