Chapter 15
Risk and Compliance

THE AWS CERTIFIED ADVANCED NETWORKING – SPECIALTY EXAM OBJECTIVES COVERED IN THIS CHAPTER MAY INCLUDE, BUT ARE NOT LIMITED TO, THE FOLLOWING:

  • Domain 5.0: Design and Implement for Security and Compliance
  • 5.1 Evaluate design requirements for alignment with security and compliance objectives
  • 5.2 Evaluate monitoring strategies in support of security and compliance objectives
  • 5.3 Evaluate AWS security features for managing network traffic
  • 5.4 Utilize encryption technologies to secure network communications


It All Begins with Threat Modeling

Before building any kind of workload-bearing environment on AWS (or anywhere else), it is vitally important to consider the nature of the data you want to process from the perspective of regulatory, classification, and attendant requirements—usually in the context of Confidentiality, Integrity, and Availability—and determine whether the proposed environment is suitable. The human mind is poor at assessing risk objectively, so you should use or extend one of the numerous risk characterization and assessment frameworks that are available.

AWS has its own threat model for the environment on the AWS side of the shared responsibility model demarcation line, which drives many design decisions for the underlying AWS environment. For example, while AWS Application Programming Interface (API) endpoints are Internet-facing, they are highly scaled and tightly monitored, and their inbound network connections are traffic-shaped. All Internet-connected AWS production networks also have packet scoring and filtering capabilities that are invisible to customers. These capabilities are in place to protect the AWS infrastructure, and all customers benefit from them by processing their data on AWS. These packet-scoring and filtering capabilities form a significant part of the AWS Shield service, which is described in more detail later in this chapter.

Some of the ways in which AWS addresses its threat model, which we recommend that you use, include the following:

Separation of duty This is used in the context of specific, security-sensitive operations and requires multiple individuals to work in concert in order to perform an operation. Actions that require such collaboration between people are sometimes referred to as being subject to “multi-eyes rules.”

Least privilege This involves granting people and system processes only the permissions necessary to perform the actions they need to perform, at the time they need to perform them. In the context of AWS Identity and Access Management (IAM), this typically involves granting users and processes a minimal default set of permissions and requiring them to authenticate to a role in order to perform more privileged operations only when they need to do so.

Need to know This can be considered an extension of separation of duty. If you only need to interact with environments in certain ways to do your job, you only need to know enough about those environments to interact with them successfully in those ways, while having an escalation path in the event that something goes wrong. AWS Cloud services are driven by APIs, so for abstract services, you only need to know the API calls, their responses, and logging and alerting events to use them successfully, rather than needing the full details of “what goes on behind the scenes.”

Compliance and Scoping

After you have determined that your workload is broadly suitable for deployment on AWS, the next step is to determine which services may be used and how they may be used in order to meet the requirements of any legislative or regulatory frameworks that pertain to the workload. AWS maintains compliance with a number of external standards for many services. A matrix of which services are audited and in scope for which standards is maintained at https://aws.amazon.com/compliance/services-in-scope/. In this context, Amazon Virtual Private Cloud (Amazon VPC) elements such as security groups, network Access Control Lists (NACLs), subnets, Virtual Private Gateways (VGWs), Internet gateways, Network Address Translation (NAT) Gateways, VPC private endpoints, and the Domain Name System (DNS) service are all subsumed under Amazon VPC.

If a service is not in scope for a particular standard, this does not necessarily mean that it must be excluded from an environment that needs to be compliant with the standard. Rather, it cannot be used to process data that the standard defines as being sensitive. A common and recommended approach is to isolate environments that are in scope for specific compliance requirements from environments that are not, not only by containing them in separate VPCs, but also in separate AWS accounts. Accounts and VPCs also serve as clearly defined technical scope boundaries for your auditor to consider.

If you have been involved in compliance for any length of time, you understand that if you connect compliant thing A to compliant thing B, the result will not necessarily be a compliant thing. While AWS has individual services certified by third-party auditors against a number of external compliance standards, it remains possible to build non-compliant environments out of compliant parts. To make it easier to build environments that your auditor is more likely to approve, AWS provides assets under the Enterprise Accelerator program. These comprise a spreadsheet mapping the controls specified in the standard to the means of achieving and enforcing them in AWS, a modular set of AWS CloudFormation templates that can be plugged together to reasonably reflect your intended network design, and accompanying documentation.

Audit Reports and Other Papers

Ultimately, the arbiter of whether or not your environment meets compliance requirements is your auditor. To help you and your auditors understand AWS environments, we provide a free online training course, access to which can be requested by emailing [email protected]. This is in addition to numerous guidance whitepapers and assets in our Enterprise Accelerator program.

Available assets include the following:

AWS Overview of Security Processes whitepaper This covers human factors such as separation of duty, need to know, and service maintenance:

https://d0.awsstatic.com/whitepapers/aws-security-whitepaper.pdf

AWS Security Best Practices whitepaper This covers configuration recommendations, data deletion processes, and physical security:

https://d0.awsstatic.com/whitepapers/Security/AWS_Security_Best_Practices.pdf

AWS Risk and Compliance whitepaper This provides answers to questions commonly found in customer compliance questionnaires; this also contains a completed copy of the Cloud Security Alliance CAIQ questionnaire and an Information Security Registered Assessors Program (IRAP) assessment:

https://d0.awsstatic.com/whitepapers/compliance/AWS_Risk_and_Compliance_Whitepaper.pdf

AWS also makes a number of reports from external auditors available online. These reports enable you to gain third-party-vetted details on AWS technologies, organization, environments, and operational practices. They can also help your auditors determine whether the whole environment you are deploying, on both your and AWS’s side of the shared responsibility model demarcation, meets compliance requirements.

These audit reports are available for free via the AWS Artifact service (https://aws.amazon.com/artifact/), which presents the reports via an AWS Management Console-based portal and API. Note that a number of the reports and scoping documents require a separate click-through Non-Disclosure Agreement (NDA) before you download them. Even if you do not need to be Payment Card Industry Data Security Standard (PCI DSS)-compliant, the PCI DSS audit report contains information on AWS’s approach to assisting with forensic investigations and guest-to-guest separation in our hypervisor. If the PCI DSS guest-to-guest separation assurance is insufficient for your own threat model, Amazon Elastic Compute Cloud (Amazon EC2) Dedicated Instances are also available. These instances use a different placement algorithm to ensure that your guests are only launched on physical servers hosting guest instances belonging to you, rather than to any other customer.

Ownership Model and the Role of Network Management

As discussed in Chapter 8, “Network Security,” management of the underlying AWS network is AWS’s responsibility. Even though an Availability Zone comprises one or more data centers, the network for each Availability Zone is part of a contiguous Open Systems Interconnection (OSI) Layer 2 space in the Amazon EC2 network for that Availability Zone. This layer can be separated into VPCs and connected with Internet gateways to the Amazon border network, which links all public AWS Regions (except China) and houses AWS API endpoints.

The Amazon VPC environment is designed to reflect the most common ways of organizing traditional data center environments and access permissions over them. As such, it makes sense for network permissions to be assignable to different teams in your workforce. Separation of duty is a key mechanism within AWS for maintaining human security. You can use IAM policies to assign permissions over security groups, network ACLs, and more to different IAM roles. You can also consider moving to a complete DevOps/DevSecOps model to remove humans from the data and system management process as much as possible.

Controlling Access to AWS

All access to AWS APIs for non-root users is controlled by IAM, which can now be augmented with AWS Organizations Service Control Policies (SCPs). SCPs are discussed further in Chapter 8.

An IAM policy is a JSON document that follows a Principal, Action, Resource, Condition (PARC) model. An IAM policy contains the following components:

Effect The effect can be Allow or Deny. By default, IAM users do not have permission to use resources and API actions, so all requests are denied. An explicit Allow overrides the default. An explicit Deny overrides any number of Allows; this can be useful as policies grow in complexity.

Principal The entity or service associated with the policy. Most often, the IAM principal is the entity (for example, user, group, or role) against which the policy is applied.

Action The action is the specific API action for which you are granting or denying permission.

Resource The resource is what is affected by the action. Some Amazon EC2 API actions allow you to include specific resources in your policy that can be created or modified by the action. To specify a resource in the statement, you need to use its Amazon Resource Name (ARN). If the API action does not support ARNs, use the * wildcard to specify that all resources can be affected by the action.

Condition Conditions are optional. They can be used to control when your policy is in effect.

Each service has its own set of actions, details of which are typically found in that service’s developer guide.
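As a sketch of how these components fit together, the following is a minimal identity-based policy expressed as a Python dictionary. The account ID, instance ID, and chosen actions are illustrative assumptions, not prescriptions:

```python
import json

# A minimal IAM identity-based policy following the PARC model. Because an
# identity-based policy is attached to the principal itself, no Principal
# element appears in the statements.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Allow stopping and starting one specific instance, identified
            # by its ARN (placeholder account and instance IDs).
            "Effect": "Allow",
            "Action": ["ec2:StartInstances", "ec2:StopInstances"],
            "Resource": "arn:aws:ec2:us-east-1:111122223333:instance/i-0123456789abcdef0",
        },
        {
            # DescribeInstances does not support resource-level ARNs, so the
            # * wildcard is used for the Resource element.
            "Effect": "Allow",
            "Action": "ec2:DescribeInstances",
            "Resource": "*",
        },
    ],
}

print(json.dumps(policy, indent=2))
```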

Figure 15.1 shows the decision tree used when evaluating policies.

The flowchart shows the evaluation starting from a default decision of Deny. All applicable policies are evaluated; if any contains an explicit Deny, the final decision is Deny. Otherwise, if any contains an Allow, the final decision is Allow; if not, the final decision is the default Deny.

FIGURE 15.1 Policy evaluation decision flow
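The decision flow in Figure 15.1 can be sketched in a few lines of Python. This is a deliberate simplification: it looks only at Effect values and omits the matching of actions, resources, and conditions that real evaluation performs first:

```python
def evaluate(policies):
    """Simplified sketch of the Figure 15.1 flow: start from an implicit
    deny, let any explicit Deny win, otherwise look for an Allow. Real
    evaluation first filters statements by action, resource, and
    condition; that matching step is omitted here."""
    effects = [s["Effect"] for p in policies for s in p["Statement"]]
    if "Deny" in effects:   # an explicit Deny overrides any number of Allows
        return "Deny"
    if "Allow" in effects:  # an explicit Allow overrides the default
        return "Allow"
    return "Deny"           # no applicable statement: the default Deny

print(evaluate([{"Statement": [{"Effect": "Allow"}]}]))  # prints "Allow"
```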

IAM policies can also use the SourceIp condition, which can in many cases restrict the source IP addresses from which API calls may be made. Multiple IP ranges can be specified in the condition, and they are evaluated using a logical OR. For example:

"IpAddress" : {
"aws:SourceIp" : ["192.0.2.0/24", "203.0.113.0/24"]
}

The aws:SourceIp condition key works only in an IAM policy if you are calling the tested API directly as a user. If you instead use a service to call the target service on your behalf, the target service sees the IP address of the calling service rather than the IP address of the originating user. This can happen, for example, if you use AWS CloudFormation to call Amazon EC2 to construct instances for you. There is currently no way to pass the originating IP address through a calling service to the target service for evaluation in an IAM policy. For these types of service API calls, do not use the aws:SourceIp condition key.

AWS Organizations

Until the introduction of AWS Organizations, the root user in an AWS account was a very traditional omnipotent user, such that IAM policy constraints did not apply to it. With AWS Organizations, an SCP can be applied to a child account such that the root user (and all other IAM users) in that child account is not only subject to the constraints imposed by the SCP, but also cannot read or alter it.

SCPs closely resemble IAM policies but do not currently support IAM Conditions or fine-grained Resource elements. While the most common approach is to use SCPs at account creation to deny access to AWS Cloud services that are not required or desirable to use, you can also use them with the same granularity as IAM policy at any time in the account lifecycle to deny access to specific API calls.

For a networking example, an SCP that denies the ability to attach an Internet gateway, attach a VGW, or peer a VPC would enforce isolation from the Internet of any VPCs created while the SCP is in force.
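Such an SCP might be sketched as follows. The action list is one plausible choice; your own isolation requirements may call for more or fewer denied actions:

```python
import json

# Sketch of an SCP that enforces Internet isolation for VPCs in a child
# account by denying gateway attachment and VPC peering. SCPs do not
# currently support Conditions or fine-grained Resource elements, so the
# Resource is the * wildcard.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "ec2:AttachInternetGateway",
                "ec2:AttachVpnGateway",
                "ec2:CreateVpcPeeringConnection",
                "ec2:AcceptVpcPeeringConnection",
            ],
            "Resource": "*",
        }
    ],
}

print(json.dumps(scp, indent=2))
```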

SCPs can be assigned to individual child accounts in an organization or to accounts grouped into an Organizational Unit (OU). An AWS Organization’s master account cannot apply an SCP to itself that affects the root user, so the root user continues to behave like a classic omnipotent user in an AWS Organization’s master account only.

Amazon CloudFront Distributions

Amazon CloudFront has built-in support for IP-based georestriction if you need to restrict access to services that you host in AWS based on client geographical location (for example, to comply with denied and restricted parties lists). While mapping location by IP address is an inexact science, especially if the client is using a proxy, it is still a control that can be used as an argument in demonstrating compliance.

While Amazon CloudFront distributions are paid for based on anticipated geographical usage, data can potentially be cached in and emitted from any Amazon CloudFront Point of Presence (PoP), irrespective of how the distribution is configured. This is done in part so that, in the event of Distributed Denial of Service (DDoS) attacks, Amazon Route 53 sharding can be used to balance load transparently across PoPs that are not within the scope of the attack.

Encryption Options

AWS uses encryption, transaction authentication, and IAM authorization for the API call system, and we recommend that customers encrypt data at rest by default, where supported. Encryption in transit is more nuanced and is covered in greater detail in the following sections.

AWS API Calls and Internet API Endpoints

API calls are made using AWS’s Signature Version 4 (Sigv4) algorithm, which provides authentication and integrity of each transaction over an encrypted communications channel that has unidirectional cryptographic trust based on an API endpoint-side certificate/key pair. Cryptographic services on the API endpoint side are provided by AWS’s own s2n implementation of Transport Layer Security (TLS), a minimal, formally proven implementation written from scratch in C, which is open source and available for use and analysis at https://github.com/awslabs/. API endpoint certificates are signed using Amazon Trust Services, which is a global root Certificate Authority (CA) and is also used by AWS Certificate Manager.

Rather than authenticate your API-calling endpoint with a bidirectional cryptographic handshake (like TLS mutual authentication), AWS uses the Sigv4 algorithm to authenticate each individual transaction. All AWS Software Development Kits (SDKs) implement Sigv4, as does the AWS Command Line Interface (CLI), which uses boto3, the Python SDK. Currently, the only AWS Cloud service that performs a more traditional bidirectional cryptographic handshake between client and AWS-side API endpoint is the AWS Internet of Things (IoT) service.

s2n (as well as Elastic Load Balancing, Application Load Balancer, Amazon Relational Database Service [Amazon RDS], and Amazon CloudFront cipher suites, which are covered later) offers Secure Sockets Layer (SSL) 3.0 and all versions of TLS up to and including 1.2, with Ephemeral Diffie-Hellman (DHE) and Elliptic-Curve Ephemeral Diffie-Hellman (ECDHE) key exchange. While SSL has been deprecated following the uncovering of the Padding Oracle On Downgraded Legacy Encryption (POODLE) vulnerability, there are still a significant number of active customers working with devices that require it, which is why it is still offered for AWS API endpoints. If you are concerned about the choice of cipher used to establish the HTTPS connection to the AWS API endpoints, we recommend configuring your client to accept only offers to use protocols and ciphers of your choice.

Selecting Cipher Suites

As already discussed, API endpoints and AWS IoT offer a range of cipher options that are client-side selectable. These options are also available for Elastic Load Balancing, Application Load Balancer, and Amazon CloudFront, though server-side control is offered for these as well. Cipher suites for each load balancer and Amazon CloudFront distribution can be selected as part of their configuration. The AWS recommendation is always to choose the most recent cipher suite unless you have a compelling business need not to do so.
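As a hedged example, the following sketch shows the request parameters for pinning an Application Load Balancer HTTPS listener to a predefined TLS-1.2-only security policy via boto3's elbv2 client. The listener ARN is a placeholder:

```python
# Sketch: pinning an Application Load Balancer HTTPS listener to a
# TLS-1.2-only predefined security policy. The listener ARN below is a
# placeholder, not a real resource.
params = {
    "ListenerArn": (
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
        "listener/app/my-alb/0123456789abcdef/0123456789abcdef"
    ),
    "SslPolicy": "ELBSecurityPolicy-TLS-1-2-2017-01",
}

# In a credentialed environment, this would be applied with:
#   import boto3
#   boto3.client("elbv2").modify_listener(**params)
print(params["SslPolicy"])
```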

In the event of a cryptographic algorithm, key length, or mode being deprecated by AWS Security, a new cipher suite will be made available that removes the deprecated combination of algorithm, key length, and mode(s). In the case of POODLE, for example, Amazon CloudFront and Elastic Load Balancing were issued with a new TLS-only cipher suite within 24 hours. When such deprecation events occur, however, you should check your configurations to ensure that an appropriate cipher suite is in place for your needs.

Encryption in Transit Inside AWS Environments

Amazon CloudFront, Elastic Load Balancing, Amazon API Gateway, and AWS IoT all normally terminate connections at their network interfaces. It is normal to terminate encryption there too, although Elastic Load Balancing can also be configured in a pass-through mode that allows ciphertext to be proxied through for termination in (typically) an Amazon EC2 instance. Pass-through is typically used when protocols other than HTTPS are involved or when unconventional combinations of algorithm, key length, and mode are involved.

Amazon CloudFront has no pass-through mechanism; the principal purpose of a Content Distribution Network (CDN) is to cache content as close to the consumer as possible, so there is no practical purpose in caching ciphertext. In addition, in order to perform deep-packet inspection, AWS Web Application Firewall (AWS WAF) must be able to see cleartext.

Both the Application Load Balancer and Amazon CloudFront are able to use keys presented to them using AWS Certificate Manager. Domain-validated (DV) certificates generated by AWS Certificate Manager are provisioned with a 13-month validity and are automatically renewed approximately 30 days before expiration. Certificate/key pairs generated in AWS Certificate Manager are usable in Application Load Balancer and Amazon CloudFront, but the private keys are not available for you to download into, for example, an Amazon EC2 instance. Alternatively, if you need to use extended-validation (EV) keys, you should obtain these according to your normal process from your usual CA and upload them into AWS Certificate Manager.

Both Application Load Balancer and Amazon CloudFront can also re-encrypt data for transfer into your VPC and to your origin, respectively, using different keys if desired.

You have the option to encrypt all data in transit inside your VPC if your security policy or external regulatory requirements require it. It is worth considering, however, if encryption in transit is a compliance requirement, or if it is actually necessary according to your own threat model. A VPC is defined as being a private network at OSI Layer 2 and is asserted as being such in the Service Organization Controls (SOC) 1 and PCI DSS audit reports. Many customers encrypt all communication within the VPC; however, you may have no strict requirement to do so. When you make a decision on intra-VPC traffic encryption, make sure that you consult your threat model, review the relevant information assurance or compliance frameworks, and assess your organizational risk profile.

Encryption in Load Balancers and Amazon CloudFront PoPs

Application Load Balancer and Amazon CloudFront can use keys generated by, or imported into, AWS Certificate Manager to terminate TLS and SSL for inbound connections. AWS load balancers can only engage in unidirectional-trust connections; there is no means at the network level of mutually authenticating connections between client and load balancer using asymmetric cryptographic keys bound to each party. If this kind of mutual authentication is needed, you should examine third-party load balancers in the AWS Marketplace for suitable options.

Network Activity Monitoring

One of the strengths of a cloud environment is the fact that all asset creation and modification operations must be performed via an API. There is no mechanism for you or your colleagues to change the disposition of assets in an AWS environment at an AWS level without executing AWS API calls. This makes the API a single point of control, visibility, and audit. In a cloud environment, there are no virtual desks under which you can hide your virtual servers.

AWS provides a number of logging capabilities for the APIs themselves (AWS CloudTrail), the effects that API calls produce (AWS Config), network traffic (Amazon VPC Flow Logs), Amazon EC2 instance statistics (Amazon CloudWatch and Amazon CloudWatch Logs), and sessions processed by load balancers (Elastic Load Balancing logs), among others. When considering a management, monitoring, and alerting capability, remember that such a capability needs to be at least as robust, responsive, scalable, and secure as the live service environment that it is managing, monitoring, and alerting on. With all of the logging capabilities listed, AWS transparently handles the scaling of the services involved. AWS CloudTrail, AWS Config, Elastic Load Balancing, and Amazon CloudFront logs are sent to Amazon Simple Storage Service (Amazon S3) by default (AWS Config logs can also be sent to an Amazon Simple Notification Service [Amazon SNS] topic), and Amazon S3 bucket capacity expands to accommodate the data involved. Amazon CloudWatch, Amazon CloudWatch Logs, and Amazon VPC Flow Logs log to a separate stream mechanism.

Different log mechanisms have different delivery latencies. Currently Amazon CloudWatch has the lowest delivery latency, varying from milliseconds to seconds. For automated event analysis and response using AWS Lambda, Amazon CloudWatch Events is currently the preferred AWS Lambda triggering mechanism.

Each of these logging sources needs to be enabled in every region, Amazon VPC Flow Logs needs to be configured in each VPC, and the Amazon CloudWatch Logs agent needs to be installed and configured on each Amazon EC2 instance, unless you choose to use agents native to your preferred Security Information and Event Management (SIEM) to scrape operating system- and application-level logs instead.

AWS CloudTrail

AWS CloudTrail’s data sources are the API calls to AWS Cloud services that have AWS CloudTrail support. Mature production services that have an API will support AWS CloudTrail. Not all services have AWS CloudTrail support when they are in preview mode; some only integrate AWS CloudTrail when they move into production. The current list of supported services is available at:

http://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-supported-services.html

The Amazon S3 bucket to which AWS CloudTrail logs are sent can be encrypted using your preferred Amazon S3 encryption mechanism—we recommend server-side encryption with AWS KMS–managed keys (SSE-KMS). Uniquely among AWS logging services, AWS CloudTrail can also deliver a digest of a log record to the same bucket; if you use a bucket policy that partitions access grants on key prefix, you can use these digests as an integrity check for your actual AWS CloudTrail records. Digests are delivered hourly and contain SHA-256 digests of each object written during the hour. In the manner of a blockchain, digests also contain the digest of the previous digest record so that tampering with the digest objects can be detected over time. An empty digest file is delivered if there have been no objects written during the hour.

If you disable log file integrity validation, the chain of digest files is broken after one hour. AWS CloudTrail will not create digest files for log files that were delivered during a period in which log file integrity validation was disabled. The same applies whenever you stop AWS CloudTrail logging or delete a trail.

If logging is stopped or the trail is deleted, AWS CloudTrail will deliver a final digest file. This digest file can contain information for any remaining log files that cover events up to, and including, the StopLogging event.
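The chaining scheme described above can be illustrated with a short sketch. This is not the actual digest file format, only the underlying idea: each digest covers the period's log objects plus the previous digest, so tampering anywhere invalidates everything that follows:

```python
import hashlib
import json

def digest(log_objects, previous_digest):
    """Illustrative digest over a period's log objects, chained to the
    previous digest (None for the first period). Not the real CloudTrail
    digest file schema; just the chaining principle."""
    body = {
        "logFileDigests": [hashlib.sha256(o).hexdigest() for o in log_objects],
        "previousDigest": previous_digest,
    }
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

# Two hours of (made-up) log objects:
hour1 = digest([b"log-1", b"log-2"], previous_digest=None)
hour2 = digest([b"log-3"], previous_digest=hour1)

# Re-deriving the chain from the same inputs reproduces the same digests;
# tampering with an earlier object changes every later digest.
assert hour2 == digest([b"log-3"], previous_digest=hour1)
assert hour2 != digest([b"log-3"], previous_digest=digest([b"log-1", b"TAMPERED"], None))
```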

AWS CloudTrail records are delivered to Amazon S3 between 5 and 15 minutes after the API call is executed. AWS is continually working to reduce this delivery latency.

AWS Config

AWS Config can be viewed as a logical complement to AWS CloudTrail. While AWS CloudTrail records API calls, AWS Config records the changes that those API calls effect on AWS assets. If you work with an Information Technology Infrastructure Library (ITIL) model, AWS Config serves as your Configuration Management Database (CMDB) for services in scope at an AWS asset level.

The set of AWS Cloud services and assets within them that are enabled for AWS Config is available at:

http://docs.aws.amazon.com/config/latest/developerguide/resource-config-reference.html#supported-resources

AWS Config tracks changes in the configuration of your AWS resources, and it regularly sends updated configuration details to an Amazon S3 bucket that you specify. For each resource type that AWS Config records, it sends a configuration history file every six hours. Each configuration history file contains details about the resources that changed in that six-hour period. Each file includes resources of one type, such as Amazon EC2 instances or Amazon Elastic Block Store (Amazon EBS) volumes. If no configuration changes occur, AWS Config does not send a file.

Amazon SNS notifications for AWS Config changes are typically ready for delivery in less than one minute and are delivered in this time plus the latency associated with the delivery mechanism that you choose.

AWS Config sends a configuration snapshot to your Amazon S3 bucket when you use the deliver-config-snapshot command with the AWS CLI or when you use the DeliverConfigSnapshot action with the AWS Config API. A configuration snapshot contains configuration details for the resources that AWS Config records in your AWS account. The configuration history file and configuration snapshot are in JSON format.

AWS Config has its own AWS Lambda trigger; AWS Lambda functions that trigger on it are referred to as AWS Config Rules. In addition to enabling you to trigger your own functions to analyze and potentially act in response to changes, AWS curates a set of more than 20 functions—Managed Config Rules—that are popular with a broad range of customers. These functions can be used to analyze individual configuration items commonly involved in compliance and report issues to an Amazon SNS topic that you choose. Source code for these functions is available at: https://github.com/awslabs/aws-config-rules, and AWS welcomes contributions of new rules.
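As an illustration of the evaluation logic inside such a function, the following sketch flags security groups that allow SSH from anywhere. The configuration-item shape shown here is abbreviated, not the full AWS Config schema:

```python
def evaluate_compliance(configuration_item):
    """Simplified sketch of an AWS Config Rules evaluation: report a
    security group as NON_COMPLIANT if it allows port 22 from 0.0.0.0/0.
    The configuration-item structure here is abbreviated for illustration,
    not the full AWS Config schema."""
    if configuration_item["resourceType"] != "AWS::EC2::SecurityGroup":
        return "NOT_APPLICABLE"
    for perm in configuration_item["configuration"]["ipPermissions"]:
        open_to_world = any(
            r.get("cidrIp") == "0.0.0.0/0" for r in perm.get("ipRanges", [])
        )
        if open_to_world and perm.get("fromPort", 0) <= 22 <= perm.get("toPort", 65535):
            return "NON_COMPLIANT"
    return "COMPLIANT"
```

A real rule would receive this item inside a Lambda event and report the result back with the AWS Config PutEvaluations API.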

AWS Config Rules functions are normally triggered a few seconds after the change being recorded is made, but latencies of a few minutes between change event and log write are possible.

The AWS Management Console enables you to set up AWS Config data for consumption in one of three ways:

Timeline For each asset, you can view the history of its configuration since the time AWS Config was enabled, provided that AWS Config has been operating continuously.

Snapshot For the whole of your in-scope AWS assets within an account and a region, you can obtain a description of their disposition at any point in time since AWS Config was enabled, provided that AWS Config has been operating continuously.

Stream If you (and your SIEM) prefer to consume AWS Config records via a stream mechanism rather than by retrieving them from an Amazon S3 bucket, you have that option.

Amazon CloudWatch

You can use Amazon CloudWatch to gain system-wide visibility into resource utilization, application performance, and operational health. You can use these insights to react and keep your application running smoothly.

Amazon CloudWatch was originally created to make hypervisor-based statistics available as generally applicable performance metrics, perform statistical analysis on them, and provide an alarm system that could trigger Auto Scaling and other actions. In addition to monitoring and alerting on metrics specific to services, Amazon CloudWatch is also commonly used to monitor and alert on AWS account billing.

Amazon CloudWatch provides standard metrics for AWS services at various intervals, depending on the service. Some services, such as Amazon EC2, provide detailed metrics, typically at a one-minute interval. Amazon CloudWatch stores data about a metric as a series of data points. Each data point has an associated time stamp.

You can also publish your own metrics to Amazon CloudWatch using the AWS CLI or an API. A statistical graph of this information is published in the AWS Management Console. Custom metrics have a standard resolution of one minute; you may also publish high-resolution custom metrics with one-second granularity.
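For example, a high-resolution custom metric might be published with parameters like the following. The namespace and metric name are illustrative:

```python
# Sketch: publishing a high-resolution (one-second) custom metric.
# The metric name and values are made up for illustration.
metric_data = [
    {
        "MetricName": "QueueDepth",
        "Value": 42.0,
        "Unit": "Count",
        "StorageResolution": 1,  # 1 = high resolution; 60 = standard
    }
]

# With credentials in place, this would be sent with:
#   import boto3
#   boto3.client("cloudwatch").put_metric_data(
#       Namespace="MyApp", MetricData=metric_data)
print(metric_data[0]["MetricName"])
```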

Amazon CloudWatch includes an alarm capability. You can use an alarm to initiate actions automatically on your behalf. An alarm watches a single metric over a specified time period and performs one or more specified actions based on the value of the metric relative to a threshold over time. The action is a notification sent to an Amazon SNS topic or an Auto Scaling policy. You can also add alarms to dashboards.

Alarms invoke actions for sustained state changes only. Amazon CloudWatch alarms do not invoke actions simply because they are in a particular state. The state must have changed and been maintained for a specified number of periods.

When creating an alarm, select a period that is greater than or equal to the frequency of the metric to be monitored. For example, basic monitoring for Amazon EC2 provides metrics for your instances every five minutes. When setting an alarm on a basic monitoring metric, select a period of at least 300 seconds (five minutes). Detailed monitoring for Amazon EC2 provides metrics for your instances every one minute. When setting an alarm on a detailed monitoring metric, select a period of at least 60 seconds (one minute).

If you set an alarm on a high-resolution metric, you can specify a high-resolution alarm with a period of 10 seconds or 30 seconds, or you can set a regular alarm with a period of any multiple of 60 seconds. A maximum of five actions can be configured per alarm.
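Putting these constraints together, an alarm on an EC2 basic monitoring metric might be defined with parameters like these. The instance ID and topic ARN are placeholders:

```python
# Sketch: an alarm on EC2 basic monitoring, so the period is 300 seconds
# (five minutes), matching the metric frequency as recommended above.
alarm = {
    "AlarmName": "high-cpu",
    "Namespace": "AWS/EC2",
    "MetricName": "CPUUtilization",
    "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    "Statistic": "Average",
    "Period": 300,            # >= metric frequency (basic monitoring)
    "EvaluationPeriods": 2,   # state must be sustained for two periods
    "Threshold": 80.0,
    "ComparisonOperator": "GreaterThanThreshold",
    "AlarmActions": ["arn:aws:sns:us-east-1:111122223333:ops-alerts"],
}

# boto3.client("cloudwatch").put_metric_alarm(**alarm) would create it.
assert alarm["Period"] % 60 == 0 and len(alarm["AlarmActions"]) <= 5
```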

Amazon CloudWatch Logs

Amazon CloudWatch Logs processes AWS-originated and customer-originated textual log data. Amazon EC2 instance logs (which use the Amazon CloudWatch Logs agent), Amazon VPC Flow Logs, and AWS CloudTrail records (where redirection is set up) can all send records to Amazon CloudWatch Logs. AWS Lambda functions can also use the print() function to send arbitrary output to Amazon CloudWatch Logs; this is often used for debugging and logging.

The Amazon CloudWatch Logs agent will send log data every five seconds by default and is configurable by the user. Other Amazon CloudWatch Logs records are delivered in milliseconds to seconds.
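For reference, the agent's batching interval is set in its configuration file via `buffer_duration` (in milliseconds). A minimal sketch of such a configuration, with illustrative log group and stream names:

```ini
[general]
state_file = /var/lib/awslogs/agent-state

[/var/log/messages]
file = /var/log/messages
log_group_name = /my/app/messages
log_stream_name = {instance_id}
datetime_format = %b %d %H:%M:%S
buffer_duration = 5000   # batch interval in milliseconds (five-second default)
```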

The Amazon CloudWatch Logs agent invokes AWS API calls in order to submit log records. As Amazon CloudWatch Logs does not have a VPC private endpoint, this means that security groups, network ACLs, and routing require instances using the agent to have HTTPS access to the relevant API endpoint in the Amazon boundary network.

Amazon CloudWatch stores metrics for customers for up to 15 months, with retention depending on the resolution of the data. Amazon CloudWatch retains metric data as follows:

  • Data points with a period of less than 60 seconds are available for three hours. These data points are high-resolution custom metrics.
  • Data points with a period of 60 seconds (one minute) are available for 15 days.
  • Data points with a period of 300 seconds (five minutes) are available for 63 days.
  • Data points with a period of 3,600 seconds (one hour) are available for 455 days (15 months).
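The retention schedule above can be summarized as a lookup helper (a sketch; the function name is hypothetical):

```python
def metric_retention_days(period_s):
    """Return how long a datapoint of the given period (seconds) is retained."""
    if period_s < 60:
        return 3 / 24          # three hours: high-resolution custom metrics
    if period_s < 300:
        return 15              # one-minute data: 15 days
    if period_s < 3600:
        return 63              # five-minute data: 63 days
    return 455                 # one-hour data: 455 days (15 months)
```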

Amazon VPC Flow Logs

Amazon VPC Flow Logs are enabled on a per-VPC, per-subnet, or per-interface basis. They deliver NetFlow-like records of network data flows (potentially throughout the VPC), taken over a sample time window and from each elastic network interface in scope. Amazon VPC Flow Logs are delivered to an Amazon CloudWatch Logs-based log group comprising a linked list of records per elastic network interface.
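A default-format (version 2) flow log record is a space-separated line of 14 fields. A minimal parser might look like this (the helper name is hypothetical; the sample record follows the documented default format):

```python
FLOW_FIELDS = [
    "version", "account_id", "interface_id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log_status",
]

def parse_flow_record(line):
    """Split a default-format (version 2) flow log record into named fields."""
    values = line.split()
    if len(values) != len(FLOW_FIELDS):
        raise ValueError("unexpected field count: %d" % len(values))
    return dict(zip(FLOW_FIELDS, values))

# An ACCEPTed SSH flow (TCP port 22) captured on one elastic network interface.
rec = parse_flow_record(
    "2 123456789010 eni-abc123de 172.31.16.139 172.31.16.21 "
    "20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK"
)
```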

Amazon VPC Flow Logs do not record traffic to and from VPC-native DNS services, the Amazon EC2 metadata service, Dynamic Host Configuration Protocol (DHCP) services, or the Windows license activation server.

Like other Amazon CloudWatch Logs records, Amazon VPC Flow Logs data is delivered a matter of seconds after the end of the sample time window. The exception to this delivery schedule occurs when the aggregate delivery rate, including Amazon VPC Flow Logs data, exceeds the service limit and Amazon CloudWatch Logs rate limiting is applied.

As with other Amazon CloudWatch Logs records, the arrival of new log data can be surfaced as Amazon CloudWatch Events that trigger AWS Lambda functions to analyze and respond to log records at arrival time. Amazon CloudWatch Logs, as well as being a stream into which AWS CloudTrail can be piped, can itself be piped into Amazon Elasticsearch Service (for more information, refer to http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_ES_Stream.html). However, depending on your compliance requirements, you may need to perform some pre-processing on your log data before you commit it to long-term storage. For example, some legislation considers a tuple of a source IP address and a timestamp to constitute Personally Identifiable Information (PII).

One of the most popular uses of Amazon CloudWatch Logs Metric Filters is to scan Amazon VPC Flow Logs records for REJECT flags in the penultimate field. The principle here is that, once your security groups and network ACLs are set up suitably for your environment, a REJECT in an Amazon VPC Flow Logs record indicates that traffic is trying to get to or from somewhere it should not be. This trigger marks the instances associated with the source and destination IP addresses in the log record as worthy of further investigation. While Amazon VPC Flow Logs records do not provide full-packet capture, the Amazon VPC Flow Logs ➢ Amazon CloudWatch Metric Filter ➢ Amazon CloudWatch Alarms ➢ Amazon SNS architecture is a simple approach to a basic Network Intrusion Detection System (NIDS). This approach scales in line with your workloads without any operational effort on your part, because AWS handles those operations. More information on Amazon CloudWatch Logs Metric Filters is available at:

http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/FilterAndPatternSyntax.html
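Outside of CloudWatch, the same REJECT scan can be sketched in a few lines against raw records (the helper name and sample addresses are illustrative):

```python
from collections import Counter

def count_rejects(lines):
    """Tally REJECTed flows by source address; the penultimate field is the action."""
    rejects = Counter()
    for line in lines:
        fields = line.split()
        if len(fields) >= 4 and fields[-2] == "REJECT":
            rejects[fields[3]] += 1   # field 4 of a version-2 record is srcaddr
    return rejects

sample = [
    "2 123456789010 eni-abc123de 198.51.100.7 172.31.16.21 "
    "42424 22 6 1 40 1418530010 1418530070 REJECT OK",
    "2 123456789010 eni-abc123de 172.31.16.139 172.31.16.21 "
    "20641 80 6 20 4249 1418530010 1418530070 ACCEPT OK",
]
top = count_rejects(sample)
```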

Amazon CloudFront

You can configure Amazon CloudFront to create log files that contain detailed information about every user request that Amazon CloudFront receives. These access logs are available for both web and Real-Time Messaging Protocol (RTMP) distributions. When you enable logging, you also specify the Amazon S3 bucket in which you want Amazon CloudFront to save the files.

Amazon CloudFront’s data sources are the Amazon CloudFront endpoint PoPs and the HTTP, HTTPS, and RTMP connections that they serve.

Amazon CloudFront delivers access logs for a distribution up to several times an hour. In general, a log file contains information about the requests that Amazon CloudFront received during a given time period. Amazon CloudFront usually delivers the log file for that time period to your Amazon S3 bucket within an hour of the events that appear in the log. Note, however, that some or all of the log file entries for a time period can be delayed by up to 24 hours. When log entries are delayed, Amazon CloudFront saves them in a log file with a file name that includes the date and time of the period in which the requests occurred, rather than the date and time when the file was delivered.

As you can receive multiple access logs an hour, we recommend that you combine all of the log files that you receive for a given period into one file. You can then analyze the data for that period more quickly and accurately.
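A sketch of such a combiner, assuming the W3C-style log format in which header lines begin with `#` (the function name is hypothetical):

```python
def combine_access_logs(files):
    """Concatenate access log texts, keeping the header lines only once."""
    combined, header_done = [], False
    for text in files:
        for line in text.splitlines():
            if line.startswith("#"):
                if not header_done:
                    combined.append(line)   # keep headers from the first file only
                continue
            combined.append(line)
        header_done = True
    return "\n".join(combined)

# Two illustrative log files covering the same hour.
f1 = "#Version: 1.0\n#Fields: date time cs-uri\n2024-01-01\t00:00:00\t/index.html"
f2 = "#Version: 1.0\n#Fields: date time cs-uri\n2024-01-01\t00:05:00\t/logo.png"
merged = combine_access_logs([f1, f2])
```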

Other Log Sources

Most other AWS Cloud services have their own logging mechanisms. In the particular context of networking, Elastic Load Balancing can generate logs that are sent to an Amazon S3 bucket.

Malicious Activity Detection

AWS uses a variety of technologies to monitor and protect core infrastructure from attack as part of AWS’s side of the shared responsibility model. All AWS customers benefit from these technologies when they use the AWS Cloud.

Be sure to review Chapter 8 as well, which covers additional AWS Cloud services such as Amazon Macie and Amazon GuardDuty.

AWS Shield and Anti-DDoS Measures

Customers have the option to use AWS Cloud services such as AWS WAF, Amazon CloudFront, and Amazon Route 53 to protect environments. AWS also provides protection services that are enabled by default, such as the (customer-transparent) Blackwatch traffic-scoring system and AWS Shield. AWS Shield comes in two forms: Standard and Advanced.

AWS Shield Advanced provides expanded protection against many types of attacks, including:

User Datagram Protocol (UDP) reflection attacks An attacker can spoof the source of a request and use UDP to elicit a large response from the server. The extra network traffic directed toward the spoofed, attacked IP address can slow the targeted server and prevent legitimate users from accessing needed resources.

SYN flood The intent of a SYN flood attack is to exhaust the available resources of a system by leaving connections in a half-open state. When a client connects to a TCP service like a web server, it sends a SYN packet. The server responds with a SYN-ACK, and the client returns a final ACK, completing the three-way handshake. In a SYN flood, the final ACK is never sent, and the server is left waiting for a response. This can prevent other users from connecting to the server.

DNS query flood In a DNS query flood, an attacker uses multiple DNS queries to exhaust the resources of a DNS server. AWS Shield Advanced can help provide protection against DNS query flood attacks on Amazon Route 53 DNS servers.

HTTP flood/cache-busting (Layer 7) attacks With an HTTP flood, including GET and POST floods, an attacker sends multiple HTTP requests that appear to be from a real user of the web application. Cache-busting attacks are a type of HTTP flood that uses variations in the HTTP request’s query string that prevent use of edge-located cached content and forces the content to be served from the origin web server, causing additional and potentially damaging strain on the origin web server.

With AWS Shield Advanced, complex DDoS events can be escalated to the AWS DDoS Response Team (DRT), which has deep experience in protecting AWS, Amazon.com, and its subsidiaries.

For Layer 3 and Layer 4 attacks, AWS provides automatic attack detection and proactively applies mitigations on your behalf. For Layer 7 DDoS attacks, AWS attempts to detect and notify AWS Shield Advanced customers through Amazon CloudWatch alarms, but it does not apply mitigations proactively. This is to avoid inadvertently dropping valid user traffic.

AWS Shield Advanced customers have two options to mitigate Layer 7 attacks:

Provide your own mitigations AWS WAF is included with AWS Shield Advanced at no extra cost. You can create your own AWS WAF rules to mitigate DDoS attacks. AWS provides preconfigured templates to get you started quickly. The templates include a set of AWS WAF rules that are designed to block common web-based attacks. You can customize the templates to fit your business needs. For more information, see AWS WAF Security Automations:

https://aws.amazon.com/answers/security/aws-waf-security-automations/

In this case, the DRT is not involved. You can, however, engage the DRT for guidance on implementing best practices, such as AWS WAF common protections.

Engage the DRT If you want additional support in addressing an attack, you can contact the AWS Support Center. Critical and urgent cases are routed directly to DDoS experts. With AWS Shield Advanced, complex cases can be escalated to the DRT. If you are an AWS Shield Advanced customer, you also can request special handling instructions for high-severity cases.

The response time for your case depends on the severity that you select and the response times, which are documented on the AWS Support Plans page.

The DRT helps you triage the DDoS attack to identify attack signatures and patterns. With your consent, the DRT creates and deploys AWS WAF rules to mitigate the attack.

When AWS Shield Advanced detects a large Layer 7 attack against one of your applications, the DRT might proactively contact you. The DRT triages the DDoS incident and creates AWS WAF mitigations. The DRT then contacts you for consent to apply the AWS WAF rules.

Amazon VPC Flow Logs Analysis

Flow log data is statistical in nature because it is aggregated over a capture window. Even so, it can still be used to derive insight not only about top talkers (from the combination of timestamp and source IP address, filtered to exclude the addresses used by your Amazon EC2 and Amazon RDS instances), but also about whether attacks are in progress. We have already discussed the use of Amazon CloudWatch Metric Filters to look for REJECTs in flow log records, and further interesting information can be derived from plotting the data.

The cube in Figure 15.2 is from internal research, where a heavily hardened and minimized Amazon EC2 Linux instance was stood up on an Elastic IP address directly exposed to the Internet. Using gnuplot to graph time against destination port against activity, some rotation reveals a number of sets of points that form distinct lines across the space.

FIGURE 15.2 Rotated plot of Amazon VPC Flow Logs: time/destination port/activity. The three-dimensional scatter plot (activity level, 1 to 100,000; time, 0 to 60 seconds) shows points forming distinct horizontal lines across the space.

As these lines are invariant in the ActivityLevel axis and proceed up the Ports range as Time progresses, it is reasonable to assume that they represent simple port scans without any of the stealth or randomization options enabled.
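A crude detector for this pattern — one source address hitting strictly increasing destination ports over time — can be sketched as follows (thresholds and names are illustrative, not a production IDS):

```python
def looks_like_port_scan(events, min_run=5):
    """Flag sources whose flows hit strictly increasing ports as time progresses.

    events: iterable of (timestamp, srcaddr, dstport) tuples.
    """
    by_src = {}
    for ts, src, port in sorted(events):       # order by time first
        by_src.setdefault(src, []).append(port)
    suspects = set()
    for src, ports in by_src.items():
        run = 1
        for a, b in zip(ports, ports[1:]):
            run = run + 1 if b > a else 1      # reset when the sequence breaks
            if run >= min_run:
                suspects.add(src)
                break
    return suspects

# One scanning source walking up the port range, one benign repeat visitor.
events = [(t, "198.51.100.9", 1024 + t) for t in range(6)]
events += [(t, "10.0.0.5", 443) for t in range(6)]
suspects = looks_like_port_scan(events)
```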

Amazon CloudWatch Alerting and AWS Lambda

Amazon CloudWatch can also be used to generate statistical information and trigger alarms based on thresholding. A common example is raising an alarm if logs from an Amazon EC2 instance show there to be more than 10 unsuccessful Secure Shell (SSH) login attempts in a minute—such logs are a sound indication that an instance is under sustained probing. Amazon CloudWatch Events, another feature of Amazon CloudWatch, are the lowest-latency means of triggering AWS Lambda functions from another service.
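The thresholding logic behind such an alarm can be sketched offline like this (a toy illustration assuming syslog-style timestamps; in practice you would use a CloudWatch Logs metric filter and alarm rather than hand-rolled code):

```python
from collections import Counter

def failed_ssh_per_minute(log_lines):
    """Bucket 'Failed password' log lines into one-minute windows.

    Assumes syslog-style lines beginning 'MMM DD HH:MM:SS'.
    """
    buckets = Counter()
    for line in log_lines:
        if "Failed password" in line:
            buckets[line[:12]] += 1   # truncate timestamp to 'MMM DD HH:MM'
    return buckets

def breached(buckets, threshold=10):
    """Return the minutes in which the failure count exceeded the threshold."""
    return {minute for minute, n in buckets.items() if n > threshold}

lines = [
    "Jan 01 12:34:%02d ip-10-0-0-5 sshd[99]: Failed password for root "
    "from 198.51.100.7" % s
    for s in range(11)
]
lines.append("Jan 01 12:35:00 ip-10-0-0-5 sshd[99]: Accepted publickey for ec2-user")
buckets = failed_ssh_per_minute(lines)
```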

AWS Marketplace and Other Third-Party Offerings

The AWS Marketplace contains a large number of security tools from third parties. A description of some of the capabilities they provide follows.

Security Information and Event Management (SIEM)

If you want to manage your Security Information and Event Management (SIEM) capability in-house, take particular note of which AWS log sources the SIEMs that you are considering can ingest and parse, based on the services you intend to use.

Intrusion Detection System (IDS)/Intrusion Prevention System (IPS)/AWS Web Application Firewall (AWS WAF)

Unless it is part of your organizational structure or policy to have a hard separation between the team maintaining your IDS/Intrusion Prevention System (IPS) and the team maintaining the Amazon EC2 instances providing the service, there are three advantages to using on-instance rather than in-network IDS/IPS within your VPC.

  1. While some network-based IDS/IPS systems have Auto Scaling capability by virtue of being deployed out of AWS Marketplace as an AWS CloudFormation template rather than a simple Amazon Machine Image (AMI), if you run your IDS/IPS on your front-end servers, then you can be sure that your protection is Auto Scaling up and down in line with your service capability.
  2. You can be sure that you are going to be on the cleartext side of any crypto boundary when on-instance. If you are performing your IDS/IPS “in transit” with a separate box, you may need to decrypt and potentially re-encrypt for forwarding in the event that you need encryption in transit inside your VPC.
  3. More subtly, if you are performing IDS/IPS on-instance, then you can get a privileged view of how your server application reacts to specific requests in terms of logs, load, and more. If an attacker is trying to enact application/semantic-level breaks against your environment, then an inline network AWS WAF would recognize questionable activity because your serving instances would return 404 errors in response to a lot of deliberately-malformed probing queries. It would not, however, be able to see whether the processing of these bad URLs would have other adverse effects, such as generating excessive server-side load. An on-instance WAF would have a better opportunity to identify those additional adverse effects.

The downside is that WAF is computationally expensive, so if you are running small front-end instances, you may need to move up a size.

Amazon Inspector

Penetration testing involves connecting to the network listeners presented by service environments and attempting to provoke the services behind those listeners into behaving in a manner outside their design specification. Using Amazon Inspector, however, it is possible to have a much more comprehensive and privileged view of the behavior of an Amazon EC2 instance, both when it is being attacked in a penetration testing context and when it is not.

Amazon Inspector involves an agent, installed on an Amazon EC2 instance, that communicates outbound over HTTPS with an Internet-facing service endpoint in the AWS Regions where Amazon Inspector is available. The agent features a kernel module, which instruments the instance. Amazon Inspector provides a number of precompiled test suites to assess the configuration and behavior of the instance and application set running on it to identify issues based on the selected rules package.

In contrast to tools that are run sequentially as a point-in-time evaluation of configuration and status, the Amazon Inspector agent captures configuration data for periods of time up to 24 hours. This feature enables Amazon Inspector to identify and record transient issues, as well as persistent ones. Amazon Inspector is often used to instrument Amazon EC2 instances in a dev/test Continuous Integration (CI)/Continuous Delivery (CD) chain so that security issues can be found and characterized when release candidate code is exercised by its test harness. Amazon EC2 instances instrumented with Amazon Inspector must be able to communicate outbound over HTTPS with the Internet in order to reach the API endpoint. This communication can be directed via an Elastic IP address, a NAT Gateway, or a proxy such as Squid.

Other Compliance Tools

In addition to traditional SIEM capabilities, a number of companies are producing tools whose purpose goes beyond monitoring to near-real-time automated mitigation. It is possible to integrate your own detection and response system using Amazon CloudWatch Events and AWS Lambda, as detailed in the “Automating Security Event Response, from Idea to Code to Execution” presentation from AWS re:Invent 2016, available at https://www.youtube.com/watch?v=x4GkAGe65vE. The AWS Trusted Advisor tool implements various triggerable environment checks, which can feed into your compliance capabilities.

Penetration Testing and Vulnerability Assessment

AWS recognizes the importance of penetration testing, both to meet your potential regulatory requirements and as general good security practice. AWS performs and commissions thousands of penetration tests a year in order to maintain standards compliance and test services both in production and development. As network traffic associated with penetration testing is indistinguishable from network traffic associated with an actual attack, it is necessary to apply to AWS for authorization to perform your own penetration testing to or from AWS environments, subject to a few exceptions.

Penetration Test Authorization Scope and Exceptions

You can conduct and manage penetration testing against the following, subject to authorization:

  • Amazon EC2, except t1.micro, m1.small, and nano instance types
  • Amazon RDS, except micro, small, and nano instance types
  • AWS Lambda functions
  • Amazon CloudFront distributions
  • Amazon API Gateway gateways
  • Amazon Lightsail

For tests restricted to OSI Layer 4 and above, you can test “through” an Elastic Load Balancing load balancer, subject to authorization.

You can also test environments outside AWS from AWS environments (that is, traffic outbound from AWS rather than inbound to AWS), subject to the same authorization process.

AWS has worked with a number of AWS Marketplace vendors to vet and pre-authorize a select set of AWS Marketplace AMIs, such that testing traffic from Amazon EC2 instances built using these AMIs will not trigger abuse alarms. Use the search term “pre-authorized” in the AWS Marketplace to find these offerings.

You are prohibited from performing Denial of Service (DoS) attacks, or simulations of such, to or from any AWS asset. Other types of testing to investigate the security of a service or assets deployed in it, including fuzzing, are permitted.

Targets for testing must be resources you own (such as Amazon EC2 or on-premises instances). AWS-owned resources (such as Amazon S3 or the AWS Management Console) are prohibited from being tested by customers.

Applying for and Receiving Penetration Test Authorization

Penetration testing can be authorized for a time window of up to 90 days. AWS recognizes that for many customers, particularly those performing Continuous Deployment, penetration testing also needs to be a continuous process triggered by deployment events. Therefore, a penetration test authorization request can be made for a new time window while an existing time window is in effect. This enables multiple time windows to be “rolled together” into a contiguous and ongoing block. Penetration test authorization has a Service Level Agreement (SLA) of 48 working hours. AWS recommends applying for a new authorization at the start of the last full week of an existing time window, if the two windows are to be rolled together.
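The window arithmetic can be illustrated as follows (a sketch; the seven-day renewal lead time is an approximation of “the start of the last full week”):

```python
from datetime import date, timedelta

MAX_WINDOW = timedelta(days=90)

def valid_window_length(start, end):
    """A single authorization window may span at most 90 days."""
    return timedelta(0) < end - start <= MAX_WINDOW

def windows_roll_together(end_of_current, start_of_next):
    """Windows are contiguous if the new one begins before the current one ends."""
    return start_of_next <= end_of_current

def recommended_renewal_date(window_end):
    """Apply for renewal roughly a week before the current window expires."""
    return window_end - timedelta(days=7)
```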

To apply for a penetration test authorization, use the web-based application form or send an email. Email allows applications to be submitted by users who do not have access to the root user in the AWS account, whereas the web-based form currently requires root access.

If you are an IAM user in an account, complete the following fields and send the information to [email protected].

  • Account Name:
  • Account Number:
  • Email Address:
  • Additional Email address to cc:
  • Third Party Contact information (if any):
  • IPs to be Scanned:
  • Target or Source:
  • Amazon EC2/Amazon RDS Instance IDs:
  • Source IPs:
  • Region:
  • Time Zone:
  • Expected Peak Bandwidth in Gigabits per Second (Gbps):
  • Start Date/Time:
  • End Date/Time:

If you will be testing Amazon API Gateway/AWS Lambda or Amazon CloudFront, provide the following additional information.

  • API or Amazon CloudFront Distribution ID:
  • Region:
  • Source IPs (if a private IP is provided, clarify whether it is an AWS IP, include account if different, or an on-premises IP):
  • Penetration Test Duration:
  • Do you have an NDA with AWS?
  • If a third party is performing the testing (source), does AWS have an NDA with this entity?
  • What is your expected peak traffic (e.g., 10 rps, 100,000 rps)?
  • What is your expected peak bandwidth (e.g., 0.1 Mbps, 1 Mbps)?
  • Test Details/Strategy:
  • What criteria/metrics will you monitor to ensure the success of the pen-test?
  • Do you have a way to immediately stop the traffic if we/you discover any issues?
  • Phone and Email of Two Emergency Contacts:

When authorization requests are couched in terms of Classless Inter-Domain Routing (CIDR) blocks rather than individual IP addresses, IPv4 ranges must be no larger than /24, and IPv6 ranges must be no larger than /120.
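Python's standard `ipaddress` module can check a requested range against these limits (remembering that a larger prefix length means a smaller range):

```python
import ipaddress

def valid_scan_range(cidr):
    """Check a requested range: IPv4 no larger than /24, IPv6 no larger than /120."""
    net = ipaddress.ip_network(cidr, strict=False)
    minimum = 24 if net.version == 4 else 120
    return net.prefixlen >= minimum

# A /24 is the largest permitted IPv4 range; a /16 would be rejected.
assert valid_scan_range("203.0.113.0/24")
assert not valid_scan_range("203.0.113.0/16")
```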

Response from the Penetration Test Authorization team should be expected by email within 48 business hours. Approval will include a request approval number. Non-approval will include one or more requests for clarification regarding the information submitted in the request. Testing should proceed only if you get an authorization number.

Summary

AWS has a threat model and mitigating controls on the AWS side of the shared responsibility model demarcation line; you also need to take the same approach for your side of the demarcation.

Many of the mitigating controls AWS uses are described in the audit reports we publish free of charge and under click-through NDA, via the AWS Artifact service.

The human brain is notoriously poor at objectively assessing risk, so formal models and frameworks are necessary.

Compliance frameworks have scopes. If a service isn’t in scope for your compliance requirements, it doesn’t mean that you can’t use it, but you need to isolate it from the sensitive data signal path using mechanisms that satisfy your auditor. Building an environment from compliant services does not necessarily result in a compliant environment, but AWS provides materials to assist in designing and building for compliance.

The ability to implement separation of duty between your network management team and your server and serverless infrastructure management teams is designed into AWS services.

IAM has fine-grained action permissions and flexible principal, resource, and condition elements that you can use to give fine-grained scope to the actions you choose to allow and deny.

AWS Organizations’ SCPs can be used to implement Mandatory Access Control on the child accounts in an organization.

Amazon CloudFront and Amazon Route 53 should be used together when implementing effective DDoS mitigation. The AWS Shield service enables further useful DDoS mitigation capabilities.

AWS API calls are encrypted by default. Cipher suites for AWS services are updated promptly when cryptographic algorithms are deprecated. AWS Certificate Manager can provision, manage, and automatically renew domain-validated certificate/key pairs for use with Amazon CloudFront and AWS load balancers. Encryption of data at rest should be considered a default position. CloudHSM is available if you have regulatory requirements that mandate the use of an HSM; otherwise, AWS KMS is the recommended option. Encryption in transit within your VPCs is often a matter of individual risk appetite, and many external standards do not mandate it.

Most AWS services can generate logs. The scaling of log generation and storage is in line with the scalability of the services themselves. Different log sources have different latencies between event occurrence and log record delivery. These logs are useful for purposes as diverse as maintaining ITIL compliance and performing simple network-based intrusion detection, as well as being consumable by code triggered in response to the event of a log record being written, where the code can act automatically to address issues reflected in the log record content.

The AWS Marketplace contains many security-focused products from vendors with whom you are likely to be familiar, so it is often straightforward to deploy the same third-party security technologies you use in your data centers in AWS.

When used to instrument EC2 instances, Amazon Inspector can reveal both persistent and transient software misbehaviors and misconfigurations that are contrary to AWS recommendations and/or which have records in the public CVE database.

Penetration testing is permitted against certain AWS services, subject to AWS approval and the meeting of specific conditions.

Exam Essentials

Understand the compliance documentation AWS makes available. Review the Risk and Compliance whitepaper, and other AWS whitepapers, available from https://aws.amazon.com/security/security-resources/, including those which focus on designing for specific compliance standards such as HIPAA. Examine the set of audit reports available via the AWS Artifact service at https://aws.amazon.com/artifact/, and the NIST 800-53 and PCI-DSS documents available via the Enterprise Accelerator initiative.

Understand the compliance standards and certifications that AWS meets and understand scoping. Understand the information at https://aws.amazon.com/compliance and identify which services are in scope for at least one external standard. Know the circumstances in which it is likely to be appropriate to use a service that is not in scope for a specific standard in the context of supporting an environment that needs to maintain compliance against that standard. Also, be aware of the need to satisfy an auditor that circumstances cannot arise where such a service will come into contact with data that the standard defines as sensitive.

Understand Threat Modeling. Threat modeling is fundamental to the understanding of risk. Control frameworks are based on compliance requirements plus mechanisms needed to mitigate the items on your own risk register in order to meet your own risk appetite. There are many standards and frameworks for doing threat modeling, some of which have free public documentation; read a selection of them.

Understand what logs can be generated by which AWS services, what AWS logging services aggregate them, what tools exist to analyze them and alert you, and know how to act on events of interest. AWS has a number of presentations, as well as papers and service documentation, on log gathering, aggregation, monitoring, analysis, and remediation. Start with the service documentation, and also be aware of what the latencies are between event and record for each logging service.

Understand how the AWS Shield service works in concert with other AWS services to mitigate and manage different kinds of DoS attacks. In addition to reading the service documentation, https://www.youtube.com/watch?v=w9fSW6qMktA demonstrates how the different aspects of the service work in concert to reduce the volume of attack traffic.

Understand the requirements and scoping for penetration testing on AWS. This encompasses what services can and can’t be penetration tested, the duration of a test window, how to apply for initial testing authorization and authorization renewal, and how to know when authorization has been granted.

Resources to Review

Exercises

For assistance completing these exercises, refer to the User Guides and related documentation for each of the relevant AWS services below. These are located at:






Review Questions

  1. Amazon Virtual Private Cloud (Amazon VPC) Flow Logs reports accept and reject data based on which VPC features? (Choose two.)

    1. Security groups
    2. Elastic network interfaces
    3. Network Access Control Lists (ACLs)
    4. Virtual routers
    5. Amazon Simple Storage Service (Amazon S3)
  2. What is the minimum runtime for Amazon Inspector when initiated from the AWS Console?

    1. 1 minute
    2. 5 minutes
    3. 10 minutes
    4. 15 minutes
  3. Compliance documents are available from which of the following?

    1. AWS Artifact on the AWS Management Console
    2. Compliance portal on the AWS website
    3. Services in Scope page on the AWS website
    4. AWS Trusted Advisor on the AWS Management Console
  4. AWS Identity and Access Management (IAM) uses which access model?

    1. Principal, Action, Resource, Condition (PARC)
    2. Effect, Action, Resource, Condition (EARC)
    3. Principal, Effect, Resource, Condition (PERC)
    4. Resource, Effect, Action, Condition, Time (REACT)
  5. Which hash algorithm is used for AWS CloudTrail record digests?

    1. SHA-256
    2. MD5
    3. RIPEMD-160
    4. SHA-3
  6. Penetration requests may be submitted to AWS by which means?

    1. Postal mail
    2. Email
    3. Social media
    4. AWS Support
  7. What is the maximum duration of an AWS penetration testing authorization?

    1. 24 hours
    2. 48 hours
    3. 30 days
    4. 90 days
  8. Who is responsible for network traffic protection in Amazon Virtual Private Cloud (Amazon VPC)?

    1. AWS
    2. The customer
    3. It is a shared responsibility.
    4. The network provider
  9. What authorization feature can restrict the actions of an account’s root user?

    1. AWS Identity and Access Management (IAM) policy
    2. Bucket policy
    3. Service Control Policy (SCP)
    4. Lifecycle policy
  10. Which AWS Cloud service provides information regarding common vulnerabilities and exposures?

    1. AWS CloudTrail
    2. AWS Config
    3. AWS Artifact
    4. Amazon Inspector