Auditing

Now that we have gone through the process of setting up a new CloudTrail trail, we can move from the AWS web console to the AWS CLI, where we will cover how to audit CloudTrail to ensure that best practices are being followed.

First, we will want to see if there are any active trails in our target account. We can do this with the CloudTrail DescribeTrails API, which allows us to view trails across all AWS regions, even if they are managed by the account's organization. The command will look something like this:

   aws cloudtrail describe-trails --include-shadow-trails 

The --include-shadow-trails flag is what allows us to see trails from other regions and trails managed by our organization. The only trails that won't show up are region-specific trails in regions other than the one the command is run against, so it is possible that some CloudTrail logging is going on and you just need to find it. Even then, that would still be a poor setup, because those logs would not cover every region. The output of that command will give us most of the information that we are interested in.
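To make the output easier to scan, a short script can summarize what each trail actually covers. This is a minimal sketch that parses a hypothetical, trimmed sample of the describe-trails output; the key names (trailList, Name, HomeRegion, IsMultiRegionTrail) match the real DescribeTrails response:

```python
import json

# Hypothetical, trimmed sample of `aws cloudtrail describe-trails
# --include-shadow-trails` output; only the keys we need are shown.
sample = json.loads("""
{
  "trailList": [
    {"Name": "ExampleTrail",
     "HomeRegion": "us-east-1",
     "IsMultiRegionTrail": false}
  ]
}
""")

# Summarize each trail's logging scope: a multi-region trail covers all
# regions, otherwise only its home region is covered.
for trail in sample["trailList"]:
    scope = "all regions" if trail["IsMultiRegionTrail"] else trail["HomeRegion"]
    print(f"{trail['Name']}: logging {scope}")
```

Running this against real output would immediately show any trail whose coverage is limited to a single region.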

We'll want to ensure that CloudTrail logging covers all regions, which we can determine by looking at the IsMultiRegionTrail key of the trail we are inspecting. It should be set to true; if not, that is something that needs to be remediated. A single multi-region trail makes far more sense than one trail per region for many reasons, but especially because, as new AWS regions are released, you would need to create trails for each of them, whereas a multi-region trail will automatically cover them as they are added.

Then we want to ensure that IncludeGlobalServiceEvents is set to true, as that enables the trail to log API activity for non-region-specific AWS services, such as IAM, which is global. We will miss a lot of important activity if this is disabled. After that, we want to ensure LogFileValidationEnabled is set to true so that deletion and modification of logs can be detected and verified. Then we will look for the KmsKeyId key. If it is present, it will be the ARN of the KMS key being used to encrypt the log files; if it is absent, the log files are not being encrypted with SSE-KMS, which is another setting that should be added.
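The per-trail checks above can be sketched as a small audit function. The sample trail dict is hypothetical, but the key names match the real DescribeTrails response:

```python
def audit_trail_config(trail):
    """Return a list of findings for one trail entry from DescribeTrails."""
    findings = []
    if not trail.get("IsMultiRegionTrail"):
        findings.append("not multi-region")
    if not trail.get("IncludeGlobalServiceEvents"):
        findings.append("global service events (e.g. IAM) not logged")
    if not trail.get("LogFileValidationEnabled"):
        findings.append("log file validation disabled")
    if "KmsKeyId" not in trail:
        findings.append("log files not encrypted with SSE-KMS")
    return findings

# Hypothetical trail entry: multi-region with global events, but with log
# file validation off and no KmsKeyId key (so no SSE-KMS encryption).
trail = {
    "Name": "ExampleTrail",
    "IsMultiRegionTrail": True,
    "IncludeGlobalServiceEvents": True,
    "LogFileValidationEnabled": False,
}
print(audit_trail_config(trail))
# → ['log file validation disabled', 'log files not encrypted with SSE-KMS']
```

An empty findings list means the trail passes all four configuration checks.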

If we want to determine whether data events have been enabled, we can first check by looking at the HasCustomEventSelectors key and confirming it is set to true. If it is true, we'll then want to call the GetEventSelectors API in the region that the trail was created in to see what has been specified. The ExampleTrail that we created was created in the us-east-1 region, so we will run the following command to look at event selectors:

aws cloudtrail get-event-selectors --trail-name ExampleTrail --region us-east-1 

That API call returned the following data:

{
    "TrailARN": "arn:aws:cloudtrail:us-east-1:000000000000:trail/ExampleTrail",
    "EventSelectors": [
        {
            "ReadWriteType": "All",
            "IncludeManagementEvents": true,
            "DataResources": [
                {
                    "Type": "AWS::S3::Object",
                    "Values": [
                        "arn:aws:s3:::bucket-for-lambda-pentesting/"
                    ]
                },
                {
                    "Type": "AWS::Lambda::Function",
                    "Values": [
                        "arn:aws:lambda"
                    ]
                }
            ]
        }
    ]
}

The values for the different event selectors tell us what kinds of events are being logged by this trail. We can see that ReadWriteType is set to All, which means we are recording both read and write events, not just one of them. We can also see that IncludeManagementEvents is set to true, which means the trail is logging management events, as we want. Under DataResources, we can see that S3 object logging is enabled for the bucket with the ARN arn:aws:s3:::bucket-for-lambda-pentesting/, but no others, and that Lambda function invocation logging is enabled for functions with arn:aws:lambda in their ARN, which means all Lambda functions.
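That "covers everything" interpretation can be sketched as a check: a bare service ARN prefix (for example, arn:aws:lambda) in a data resource's Values matches all resources of that type. The data_resources list below is copied from the example output; the covers_all helper is a hypothetical name, not an AWS API:

```python
def covers_all(data_resources, resource_type, service_prefix):
    """True if a data resource of the given type matches every resource,
    i.e. one of its values is the bare service ARN prefix."""
    return any(
        res["Type"] == resource_type and service_prefix in res["Values"]
        for res in data_resources
    )

# DataResources from the GetEventSelectors output shown above.
data_resources = [
    {"Type": "AWS::S3::Object",
     "Values": ["arn:aws:s3:::bucket-for-lambda-pentesting/"]},
    {"Type": "AWS::Lambda::Function",
     "Values": ["arn:aws:lambda"]},
]

print(covers_all(data_resources, "AWS::Lambda::Function", "arn:aws:lambda"))  # → True
print(covers_all(data_resources, "AWS::S3::Object", "arn:aws:s3"))            # → False
```

Here, Lambda invocation logging covers every function, while S3 object logging is limited to the one named bucket.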

Ideally, read and write events should be logged, management events should be logged, and all S3 buckets/Lambda functions should be logged, but that might not always be possible.

Now that we have checked the configuration of the trail, we need to make sure it is enabled and logging! We can do this with the GetTrailStatus API from the same region the trail was created in:

aws cloudtrail get-trail-status --name ExampleTrail --region us-east-1 

It will return output that looks like the following:

{
    "IsLogging": true,
    "LatestDeliveryTime": 1546030831.039,
    "StartLoggingTime": 1546027671.808,
    "LatestDigestDeliveryTime": 1546030996.935,
    "LatestDeliveryAttemptTime": "2018-12-28T21:00:31Z",
    "LatestNotificationAttemptTime": "",
    "LatestNotificationAttemptSucceeded": "",
    "LatestDeliveryAttemptSucceeded": "2018-12-28T21:00:31Z",
    "TimeLoggingStarted": "2018-12-28T20:07:51Z",
    "TimeLoggingStopped": ""
}

The single most important thing to look for is that the IsLogging key is set to true. If it is set to false, the trail is disabled, and none of the configuration we just checked matters, because nothing is actually being logged.

Further, we can look at the LatestDeliveryAttemptTime and LatestDeliveryAttemptSucceeded keys to ensure that logs are being delivered correctly. If logs are being delivered, then those two values should be the same. If not, then there is something wrong that is preventing CloudTrail from delivering those logs to S3.
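Those two status checks can be sketched as follows. The status dict is copied (trimmed) from the example GetTrailStatus output above:

```python
# Trimmed GetTrailStatus output: the trail must be logging, and the last
# delivery attempt must have succeeded (the two timestamps match).
status = {
    "IsLogging": True,
    "LatestDeliveryAttemptTime": "2018-12-28T21:00:31Z",
    "LatestDeliveryAttemptSucceeded": "2018-12-28T21:00:31Z",
}

assert status["IsLogging"], "Trail is disabled - nothing is being logged!"

if status["LatestDeliveryAttemptTime"] == status["LatestDeliveryAttemptSucceeded"]:
    print("log delivery healthy")  # → log delivery healthy
else:
    print("last delivery attempt failed; logs are not reaching S3")
```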

That essentially wraps up the basics of CloudTrail setup and best practices, but it is possible to get even more in-depth and secure by creating a custom policy for the KMS encryption key used on the trail and by modifying the S3 bucket policy to restrict access to the logs even further, prevent the deletion of logs, and more.
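As an illustration of that last point, a bucket policy statement along the following lines could deny log deletion outright. This is a hypothetical sketch: the bucket name is a placeholder, and a real policy would need to be tailored to your account and combined with CloudTrail's required delivery permissions:

```
{
    "Sid": "DenyLogDeletion",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:DeleteObject",
    "Resource": "arn:aws:s3:::example-cloudtrail-bucket/AWSLogs/*"
}
```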
