Mock Test 1

  1. B

High I/O instances are storage optimized. Examples of this family are H1 (high disk throughput), I3 (high random I/O performance), and D2 (dense storage). EBS-optimized instances provide dedicated capacity for EBS I/O operations.

RDS is not compatible with Oracle Enterprise high-availability options because it relies on Multi-AZ deployments and access to the underlying OS is restricted. Also note that burstable performance instances do not support EBS optimization.

This question focuses only on the portability of the current features and on performance; costs and managed services are not mentioned in the question.

References:

  2. B

Short polling is the default behavior of an SQS queue (ReceiveMessageWaitTimeSeconds=0). Any value greater than zero enables long polling, which reduces empty responses and CPU overhead. This can also be overridden per request by setting the WaitTimeSeconds parameter on ReceiveMessage. Extending the visibility timeout only prevents the same message from being processed twice by multiple workers.

SQS FIFO queues can only handle a throughput of 300 messages per second (a soft limit) and do not accept duplicates. Duplicate messages are acceptable in this design because the question imposes no such restriction, and migrating an existing standard queue to a FIFO (.fifo) queue is not possible. Since the message volume per second is higher than the FIFO limit, we are talking about standard SQS queues.

Both A and B would work, but the question is focused on the simplest way of achieving long polling: option A requires reconfiguring every worker client, while option B only requires a change at the queue level, so all clients inherit the configuration. B is the simplest way.
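A minimal boto3 sketch of the two approaches, assuming a standard queue named worker-queue already exists; the queue name and wait times are illustrative:

```python
# Hedged sketch: enabling SQS long polling at the queue level and per request (boto3).
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName="worker-queue")["QueueUrl"]  # hypothetical queue

# Option B: set long polling once at the queue level; every consumer inherits it.
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={"ReceiveMessageWaitTimeSeconds": "20"},
)

# Option A: override per request by passing WaitTimeSeconds on ReceiveMessage.
response = sqs.receive_message(
    QueueUrl=queue_url,
    WaitTimeSeconds=20,        # long-poll for up to 20 seconds
    MaxNumberOfMessages=10,
)
```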

References:

  3. C

S3 Standard provides the highest durability by default and replicates objects across three AZs; the FS data is the most critical information. The CS data is derived and, in the case of data loss, can be recreated, so a low-cost option is acceptable for it.

RS requires the FS and CS files, so Glacier is not an option for RS because it doesn't provide frequent access to the data. Also, objects must reside in S3 Standard for at least 60 days before transitioning to Glacier.

This question focuses on durability and economics; trade-offs must be made to balance low cost, durability, and data resiliency.

References:

  4. C

Option A could work, but it is not scalable, since bucket policies have a size limit of 20 KB. CloudFront URLs will improve the performance of the web app but do not guarantee confidentiality, because the resource is still public and static.

Signed URLs allow an expiration time that can match the user's web session; accelerating delivery through CloudFront provides the required performance improvement but, on its own, no confidentiality. CloudFront signed cookies additionally enable authentication and authorization for the platform's users.
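A minimal sketch of generating a time-limited CloudFront signed URL with botocore's CloudFrontSigner; the key pair ID, private key file, distribution domain, object path, and the use of the third-party rsa package are illustrative assumptions, not prescribed by the question:

```python
# Hedged sketch: a CloudFront signed URL that expires with the user's web session.
from datetime import datetime, timedelta

import rsa  # third-party package used here to sign with the CloudFront key pair
from botocore.signers import CloudFrontSigner


def rsa_signer(message):
    with open("private_key.pem", "rb") as f:              # hypothetical key file
        private_key = rsa.PrivateKey.load_pkcs1(f.read())
    return rsa.sign(message, private_key, "SHA-1")         # CloudFront expects SHA-1 RSA signatures


signer = CloudFrontSigner("KXXXXXXXXXXXXX", rsa_signer)    # hypothetical key pair ID

signed_url = signer.generate_presigned_url(
    "https://dxxxxxxxxxxxx.cloudfront.net/private/report.pdf",  # hypothetical object
    date_less_than=datetime.utcnow() + timedelta(hours=1),       # roughly one web session
)
print(signed_url)
```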

References:

  5. C

Option A is not the best fit because the load balancer only acts as a proxy service providing balancing and resilience capabilities; it is a horizontally scaled service, so it doesn't represent a bottleneck.

Option B will have the opposite effect: dual-homed servers will result in bandwidth bisection.

Option C is the right answer: a NAT gateway is a horizontally scaled, managed service, whereas a NAT instance is limited by the networking capabilities of its instance type. Multiple NAT instances with load balancing and scripted failover could be architected, but that is not the best option when a managed service can offload all of the operational work.

Option D is a reasonable option, but it solves only part of the problem and only works in the short term.

References:

  6. D

Regarding option A, S3 One Zone-Infrequent Access is not a good option because its redundancy level is low and it could lead to total data loss, since the single AZ constitutes a single fault domain. In option B, Glacier by definition is not a good choice for DR because it doesn't give you real-time access to your data; this has to be considered from an RTO perspective.

Option C has the same problem as option B. Option D is the best option because it combines several backup services, such as EBS snapshots, Storage Gateway, and S3, with 99.999999999% durability and 99.99% availability.

References:

  7. C

Option A: SSE-KMS gives you the flexibility to choose encryption keys generated via KMS, but bucket ACLs don't give you the option to use conditions.

Option B: SSE-S3 is the right answer because it offloads all of the key management to S3. The bucket policy enforces encryption by denying Put operations through a StringNotEquals condition on "s3:x-amz-server-side-encryption": "AES256", and versioning prevents accidental deletion (see the sketch after these options).

Option C: Object lifecycle rules do not enforce security or security management.

Option D: This only provides confidentiality and integrity for the data from the customer side, with the customer taking full responsibility for the process.
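A minimal sketch of the bucket policy and versioning configuration the answer describes, applied with boto3; the bucket name is a placeholder:

```python
# Hedged sketch: deny unencrypted PutObject requests and enable versioning (boto3).
import json

import boto3

s3 = boto3.client("s3")
bucket = "example-sensitive-data-bucket"  # hypothetical bucket name

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedPuts",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
        # Reject any PutObject that does not request SSE-S3 (AES256) encryption.
        "Condition": {"StringNotEquals": {"s3:x-amz-server-side-encryption": "AES256"}},
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))

# Versioning protects against accidental deletion or overwrite.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)
```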

References:

  8. D

Option A: The AWS CLI could solve the problem, but it doesn't scale to petabytes within the one-week window. Option B: VM Import/Export is designed to work with virtual machine images and Storage Gateway and has a different use case; it is not meant for a one-time bulk data transfer.

Option C: Direct Connect could do the job, but a 1 Gbps link does not provide enough bandwidth to finish in one week. Option D: This is the correct choice, as several Snowball devices can be used in parallel to copy the petabyte of data and complete the job within one week.
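A back-of-the-envelope check of the 1 Gbps option, assuming an ideal, fully saturated link with no protocol overhead:

```python
# Hedged sketch: why a single 1 Gbps link cannot move a petabyte in one week.
petabyte_bits = 1e15 * 8               # 1 PB expressed in bits
link_bps = 1e9                         # 1 Gbps Direct Connect link
seconds = petabyte_bits / link_bps     # ~8,000,000 seconds of transfer time
weeks = seconds / (7 * 24 * 3600)      # ~13 weeks, far beyond the one-week target
print(f"{weeks:.1f} weeks")
```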

References:

  9. C

Option A: Read replicas perform asynchronous replication, so eventual consistency and replication lag are common scenarios.

Option B: The Multi-AZ deployment is designed to provide 99.95% availability. RDS can detect common hardware failures and perform failovers to provide business continuity.

Option C: RDS cannot be provisioned with less than 100 GB of SSD.

Option D: Transparent Data Encryption can be enabled only for SQL Server and Oracle Databases.

References:

  10. C

Option A: RAID 5 is not recommended on EC2 because its parity writes consume part of the available IOPS. 
Option B: RAID 1 is a good option, but the array will be limited by its worst-performing volume. 
Option C: EBS snapshots are the best option to provide fault tolerance and performance. EBS automatically replicates every block operation within the AZ.
Option D: RAID 0 is designed to improve performance at the cost of fault tolerance.

References:

  11. C

Option A: The Classic Load Balancer could work, but it doesn't provide routing flexibility. 
Option B: The Network Load Balancer works at Layer 4 of the OSI model, which does not provide HTTP-level routing. 
Option C: The Application Load Balancer is the perfect fit for this scenario because it can route requests to containers using ports and can enable sticky sessions to maintain affinity between the client and the server (see the sketch below). 
Option D: The requirement doesn't specify routing at the global level, so Route 53 is not necessary.
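A minimal boto3 sketch of the sticky-session configuration on an ALB target group; the names, VPC ID, and cookie duration are illustrative placeholders:

```python
# Hedged sketch: duration-based stickiness on an Application Load Balancer target group.
import boto3

elbv2 = boto3.client("elbv2")

target_group = elbv2.create_target_group(
    Name="web-containers",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",     # hypothetical VPC ID
    TargetType="instance",
)["TargetGroups"][0]

# Keep each client pinned to the same container target via a load balancer cookie.
elbv2.modify_target_group_attributes(
    TargetGroupArn=target_group["TargetGroupArn"],
    Attributes=[
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
        {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "3600"},
    ],
)
```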

References:

  12. C

Option A: The application script can be modified and used to access other users' data. 
Option B: The RDS table does not store the data in an encrypted format and can be vulnerable. 
Option C: The instance metadata server is a great choice because it provides confidentiality and automation in the process. The metadata server can only be reached from inside the instance (see the sketch below).
Option D: Email can be compromised, exposing sensitive data.
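A minimal sketch of how an application running on the instance can read the role's temporary credentials from the metadata service (IMDSv2 is assumed here; in practice the AWS SDKs resolve these credentials automatically):

```python
# Hedged sketch: fetching temporary IAM role credentials from the EC2 instance metadata service.
import json
import urllib.request

METADATA = "http://169.254.169.254/latest"

# IMDSv2 requires a session token before any metadata request.
token_req = urllib.request.Request(
    f"{METADATA}/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(token_req).read().decode()
headers = {"X-aws-ec2-metadata-token": token}

# Discover the attached IAM role, then fetch its temporary credentials.
role_req = urllib.request.Request(
    f"{METADATA}/meta-data/iam/security-credentials/", headers=headers
)
role = urllib.request.urlopen(role_req).read().decode().strip()

creds_req = urllib.request.Request(
    f"{METADATA}/meta-data/iam/security-credentials/{role}", headers=headers
)
creds = json.loads(urllib.request.urlopen(creds_req).read())
# creds contains AccessKeyId, SecretAccessKey, Token, and Expiration.
```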

References:

  13. A

Option A: This is the best choice because CloudTrail maintains auditability and AWS Config rules allow you to use Lambda functions to automate responses to configuration changes (see the sketch below). 
Option B: CloudWatch Events with filters is not the most scalable or effortless option. 
Option C: A Marketplace instance is not scalable because it stores audit trails and system changes on the local disk. 
Option D: CloudTrail covers only half of the job, and CloudTrail does not provide an API.
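A minimal sketch of a Lambda-backed custom AWS Config rule; the compliance check (a required CostCenter tag) is a placeholder assumption:

```python
# Hedged sketch: a custom AWS Config rule evaluating configuration-change events.
import json

import boto3

config = boto3.client("config")


def handler(event, context):
    invoking_event = json.loads(event["invokingEvent"])
    item = invoking_event["configurationItem"]

    # Placeholder check: require a 'CostCenter' tag on every evaluated resource.
    compliant = "CostCenter" in (item.get("tags") or {})

    config.put_evaluations(
        Evaluations=[{
            "ComplianceResourceType": item["resourceType"],
            "ComplianceResourceId": item["resourceId"],
            "ComplianceType": "COMPLIANT" if compliant else "NON_COMPLIANT",
            "OrderingTimestamp": item["configurationItemCaptureTime"],
        }],
        ResultToken=event["resultToken"],
    )
```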

References:

  14. A

Option A: RDS is a container service and privileged access to the operating system is limited. 
Option B: EMR allows users to SSH into the master instance to manage the core and task nodes in the cluster. 
Option C: EC2 is an Infrastructure as a Service resource and SSH access is permitted.
Option D: Elastic Beanstalk is a container service but SSH access is permitted.

References:

  15. A, B, and C

References:

  16. C

Option A: SNS is used to deliver notifications, but S3 file copy is not enabled for Glacier. 
Option B: When data is aggregated into vaults in Glacier, no notification is available for that event. 
Option C: Vault Inventory and Retrieval Job Complete are the only available SNS notifications from Glacier.
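A minimal boto3 sketch of wiring those two notifications to an SNS topic; the vault name and topic ARN are illustrative placeholders:

```python
# Hedged sketch: Glacier vault notifications for the two supported events.
import boto3

glacier = boto3.client("glacier")

glacier.set_vault_notifications(
    accountId="-",                     # '-' means the account owning the credentials
    vaultName="archive-vault",         # hypothetical vault
    vaultNotificationConfig={
        "SNSTopic": "arn:aws:sns:us-east-1:123456789012:glacier-jobs",  # hypothetical topic
        "Events": ["ArchiveRetrievalCompleted", "InventoryRetrievalCompleted"],
    },
)
```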

References:

  17. A, B, and C

Option D: The VPC is not inherently highly available; it is the customer's responsibility to architect for availability using multiple AZs.

References:

  18. C

Option A: Placement groups are a great way to achieve the maximum packets per second between instances. 
Option B: Jumbo frames increase the payload carried by each packet. 
Option C: Spot Instances provide more compute power but do not increase network throughput. 
Option D: Enhanced networking improves the instance's network capabilities by using SR-IOV with the Intel Virtual Function (VF) interface or the Elastic Network Adapter (ENA).
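A minimal boto3 sketch combining two of the techniques above, a cluster placement group and an ENA-capable instance type; the AMI ID, group name, and instance type are illustrative placeholders:

```python
# Hedged sketch: launching ENA-enabled instances into a cluster placement group.
import boto3

ec2 = boto3.client("ec2")

# Cluster placement groups pack instances close together for low latency
# and high packet-per-second throughput.
ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical ENA-enabled AMI
    InstanceType="c5.18xlarge",        # instance type that supports enhanced networking (ENA)
    MinCount=2,
    MaxCount=2,
    Placement={"GroupName": "hpc-cluster"},
)
```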

References:

  19. D

Option A: Using RDS would solve the problem only temporarily and would couple the application to the database. 
Option B: Compression solves only part of the problem now; in the future, a new design would have to be proposed. 
Option C: The problem is not in DynamoDB, so this option does not help.
Option D: SQS provides functionality to store the message blob in S3; this is a great solution that can accommodate even bigger messages in the future (see the sketch below).
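A minimal sketch of the pattern the answer describes, storing the payload in S3 and sending only a pointer through SQS; the bucket and queue names are illustrative, and the official SQS Extended Client Library automates the same idea:

```python
# Hedged sketch: large SQS payloads offloaded to S3 (claim-check pattern).
import json
import uuid

import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")

BUCKET = "large-message-payloads"                            # hypothetical bucket
QUEUE_URL = sqs.get_queue_url(QueueName="jobs")["QueueUrl"]  # hypothetical queue


def send_large_message(payload: bytes):
    key = f"payloads/{uuid.uuid4()}"
    s3.put_object(Bucket=BUCKET, Key=key, Body=payload)      # store the blob in S3
    sqs.send_message(                                        # send only a small pointer
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"s3_bucket": BUCKET, "s3_key": key}),
    )


def receive_large_message():
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1)
    for msg in resp.get("Messages", []):
        pointer = json.loads(msg["Body"])
        obj = s3.get_object(Bucket=pointer["s3_bucket"], Key=pointer["s3_key"])
        return obj["Body"].read()
    return None
```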

References:

  20. B

Option A: ElastiCache adds complexity in order to fine-tune hot keys and requires application re-engineering. 
Option B: CloudFront is the least intrusive solution; creating a distribution with cache behaviors and appropriate TTLs will do the job. 
Option C: At some point, the database size will be a constraint and it won't scale to meet demand. 
Option D: Read replicas will require application re-engineering.

References:
