Setting up a vulnerable Lambda function

The previous example of a Lambda function that's used to virus scan files in S3 is a similar but more complex version of what we are going to set up in our own environment. Our function will get triggered when a file is uploaded to an S3 bucket that we specify, where it will then download that file, inspect the contents, and then place tags on the object in S3, depending on what it finds. This function will have a few programming mistakes that open it up to exploitation for the sake of our demo, so don't go running this in your production account!

Before we get started on creating the Lambda function, let's first set up the S3 buckets that will trigger our function and the IAM role that our function will assume. Navigate to the S3 dashboard (click on the Services drop-down menu and search for S3) and click on the Create bucket button:

The Create bucket button on the S3 dashboard

Now, give your bucket a unique name; we will be using bucket-for-lambda-pentesting, but you'll likely need to choose something else. For the region, we are selecting US West (Oregon), which is also known as us-west-2. Then, click on Next, then Next again, and then Next again. Leave everything on those pages as the default. Now, you should be presented with a summary of your S3 bucket. Click on Create bucket to create it:

The final button to click to create your S3 bucket

Now, click on the bucket name when it shows up in your list of buckets, and that will complete the setup of the S3 bucket for our Lambda function (for now).
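
If you prefer to script this step, a roughly equivalent bucket setup with boto3 looks like the following (a minimal sketch; remember that bucket names are globally unique, so substitute your own):

import boto3

# Create the bucket in us-west-2; substitute your own globally unique name
s3 = boto3.client('s3', region_name='us-west-2')
s3.create_bucket(
    Bucket='bucket-for-lambda-pentesting',
    # Outside us-east-1, the region must be passed as a location constraint
    CreateBucketConfiguration={'LocationConstraint': 'us-west-2'}
)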

Leave that tab open in your browser, and in another tab, open the IAM dashboard (Services | IAM). Click on Roles in the list on the left side of the screen, and then click on the Create role button in the top left. Under Select type of trusted entity, choose AWS service, which should be the default. Then, under Choose the service that will use this role, choose Lambda, and then click on Next: Permissions:

Creating a new role for our Lambda function to assume

On this page, search for the AWS managed policy, AWSLambdaBasicExecutionRole, and click on the checkbox next to it. This policy will allow our Lambda function to push execution logs to CloudWatch, and it is, in a sense, the minimum set of permissions that a Lambda function should be provided. It is possible to revoke these permissions, but then the Lambda function will keep trying to write logs, and it will keep getting access denied responses, which would be noisy to someone watching.

Now, search for the AWS managed policy, AmazonS3FullAccess, and click on the checkbox next to it. This will provide our Lambda function with the ability to interact with the S3 service. Note that this policy is far too permissive for our use case: it allows full access to every S3 resource, when technically we only need a few S3 permissions on our single bucket-for-lambda-pentesting bucket. You will often find over-privileged resources like this in an AWS account that you are attacking, which only benefits you as an attacker, so over-privilege will be part of our demo scenario here.
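
For contrast, a least-privilege policy for this function would grant only the two S3 actions the code actually uses, scoped to our one bucket. The following is a hypothetical policy document for illustration only; we won't be attaching it in this demo:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObjectTagging"
            ],
            "Resource": "arn:aws:s3:::bucket-for-lambda-pentesting/*"
        }
    ]
}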

Now, click on the Next: Tags button on the bottom right of the screen. We don't need to add any tags to this role, as they are typically used for purposes we don't need to worry about right now, so just click on Next: Review. Give your role a name; we will be naming it LambdaRoleForVulnerableFunction for this demo, and we will leave the role description as the default, but you can write your own description in there if you would like. Finish this part off by clicking on Create role on the bottom right of the screen. If everything went smoothly, you should see a success message at the top of the screen:

Our IAM role was successfully created
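
If you would rather script the role setup, a rough boto3 equivalent follows (a sketch, assuming the role name we chose above); it creates the role with a trust policy that lets the Lambda service assume it, and then attaches the same two managed policies:

import json

import boto3

iam = boto3.client('iam')

# Trust policy allowing the Lambda service to assume this role
trust_policy = {
    'Version': '2012-10-17',
    'Statement': [{
        'Effect': 'Allow',
        'Principal': {'Service': 'lambda.amazonaws.com'},
        'Action': 'sts:AssumeRole'
    }]
}

iam.create_role(
    RoleName='LambdaRoleForVulnerableFunction',
    AssumeRolePolicyDocument=json.dumps(trust_policy)
)

# Attach the same two AWS managed policies we selected in the console
for policy_arn in [
    'arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole',
    'arn:aws:iam::aws:policy/AmazonS3FullAccess'
]:
    iam.attach_role_policy(
        RoleName='LambdaRoleForVulnerableFunction',
        PolicyArn=policy_arn
    )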

Finally, we can start to create the actual vulnerable Lambda function. To do so, navigate to the Lambda dashboard (Services | Lambda), and then click on Create a function, which should appear on the welcome page (because, presumably, you don't have any functions created already). Make sure you are still in the US West (Oregon)/us-west-2 region, so that the function lives in the same region as our S3 bucket.

Then, select Author from scratch at the top. Now, give your function a name. We will be naming it VulnerableFunction for this demo. Next, we need to select our runtime, which can be a variety of different programming languages. For this demo, we will choose Python 3.7 as our runtime.

For the Role option, select Choose an existing role, and then under the Existing role option, select the role that we just created (LambdaRoleForVulnerableFunction). To finish it off, click on Create function in the bottom right:

All the options set for our new vulnerable Lambda function
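
For reference, the same function could be created programmatically. This is only a sketch: it assumes a hypothetical deployment.zip archive containing lambda_function.py, and you would substitute your own account ID in the role ARN:

import boto3

lambda_client = boto3.client('lambda', region_name='us-west-2')

# 'deployment.zip' is a hypothetical archive containing lambda_function.py
with open('deployment.zip', 'rb') as f:
    lambda_client.create_function(
        FunctionName='VulnerableFunction',
        Runtime='python3.7',
        # Substitute your own account ID in the role ARN
        Role='arn:aws:iam::123456789012:role/LambdaRoleForVulnerableFunction',
        Handler='lambda_function.lambda_handler',
        Code={'ZipFile': f.read()}
    )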

You should now drop into the dashboard for the new vulnerable function, which lets you view and configure various settings for the Lambda function.

We can ignore most of the stuff on this page for the time being, but if you'd like to learn more about Lambda itself, I suggest reading the AWS user guide for it at: https://docs.aws.amazon.com/lambda/latest/dg/welcome.html. For now, scroll down to the Function code section. We can see that the value under Handler is lambda_function.lambda_handler. This means that when the function is invoked, the function named lambda_handler in the lambda_function.py file will be executed as the entry point for the Lambda function. The lambda_function.py file should already be open, but if it's not, double-click on it in the file list to the left of the Function code section:

The Lambda function handler and what those values are referencing
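
In other words, the Handler value takes the form <file name without the .py extension>.<function name>. A minimal handler for our runtime looks like this:

# lambda_function.py -- the file name supplies the first half of the Handler value
def lambda_handler(event, context):
    # 'event' carries the trigger data; 'context' carries runtime metadata
    return 'Hello from Lambda'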

If you chose a different programming language for the runtime of your function, you may encounter a slightly different format, but in general, they should be similar.

Now that we have created the Lambda function, the IAM role for the Lambda function, and our S3 bucket, we are going to create the event trigger on our S3 bucket that will invoke our Lambda function every time it fires. To do this, go back to the browser tab where your bucket-for-lambda-pentesting S3 bucket is open, click on the Properties tab, scroll down to the options under Advanced settings, and click on the Events button:

Accessing the Events setting of our S3 bucket

Next, click on Add notification and name this notification LambdaTriggerOnS3Upload. Under the Events section, check the box next to All object create events, which will suffice for our needs. We'll want to leave the Prefix and Suffix blank for this notification. Click on the Send to drop-down menu and select Lambda Function, which should show another drop-down menu where you can select the function we created, VulnerableFunction. To wrap it all up, click on Save:

The configuration we want for our new notification

After you have clicked on Save, the Events button should show 1 Active notifications:

The notification that we just set up
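
The same notification can also be configured through the API. The following boto3 sketch uses a placeholder account ID in the function ARN; note that, unlike the console, this route also requires separately granting S3 permission to invoke the function (via the Lambda AddPermission API), which the console handles for you behind the scenes:

import boto3

s3 = boto3.client('s3', region_name='us-west-2')

# Equivalent to the notification we just created in the console; replace the
# placeholder account ID in the function ARN with your own
s3.put_bucket_notification_configuration(
    Bucket='bucket-for-lambda-pentesting',
    NotificationConfiguration={
        'LambdaFunctionConfigurations': [{
            'Id': 'LambdaTriggerOnS3Upload',
            'LambdaFunctionArn': 'arn:aws:lambda:us-west-2:123456789012:function:VulnerableFunction',
            'Events': ['s3:ObjectCreated:*']
        }]
    }
)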

If you switch back to the Lambda function dashboard and refresh the page, you should see that S3 has been added as a trigger to our Lambda function on the left-hand side of the Designer section:

The Lambda function is aware that it will be triggered by the notification we just set up

Basically, what we have just done is told our S3 bucket that every time an object is created (uploaded, and so on), it should invoke our Lambda function. S3 will automatically invoke the Lambda function and pass in details related to the uploaded file through the event parameter, which is one of the two parameters that our function accepts (event and context). The Lambda function can read this data by looking at the contents of event during its execution.
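
To make that concrete, here is a heavily trimmed sketch of the S3 notification structure that arrives in event (most fields omitted); our function will walk this structure to find the bucket name and object key:

# A trimmed sketch of the event our handler receives from S3 (many fields omitted)
event = {
    'Records': [
        {
            's3': {
                'bucket': {'name': 'bucket-for-lambda-pentesting'},
                'object': {'key': 'some-folder/uploaded-file.zip'}
            }
        }
    ]
}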

To finish off the setup of our vulnerable Lambda function, we need to add some vulnerable code to it! On the Lambda function dashboard, under Function code, replace the default code with the following:

import boto3
import subprocess
import urllib.parse


def lambda_handler(event, context):
    s3 = boto3.client('s3')

    for record in event['Records']:
        try:
            # Pull the bucket name and object key out of the S3 event record
            bucket_name = record['s3']['bucket']['name']
            object_key = record['s3']['object']['key']
            # Keys arrive URL-encoded in S3 notifications, so decode them first
            object_key = urllib.parse.unquote_plus(object_key)

            if object_key[-4:] != '.zip':
                print('Not a zip file, not tagging')
                continue

            response = s3.get_object(
                Bucket=bucket_name,
                Key=object_key
            )

            # /tmp is the only writable path in the Lambda environment
            file_download_path = f'/tmp/{object_key.split("/")[-1]}'
            with open(file_download_path, 'wb+') as file:
                file.write(response['Body'].read())

            # Count the files inside the ZIP by shelling out to zipinfo/grep/wc
            file_count = subprocess.check_output(
                f'zipinfo {file_download_path} | grep ^- | wc -l',
                shell=True,
                stderr=subprocess.STDOUT
            ).decode().rstrip()
            s3.put_object_tagging(
                Bucket=bucket_name,
                Key=object_key,
                Tagging={
                    'TagSet': [
                        {
                            'Key': 'NumOfFilesInZip',
                            'Value': file_count
                        }
                    ]
                }
            )
        except Exception as e:
            print(f'Error on object {object_key} in bucket {bucket_name}: {e}')
            return

As we continue through this chapter, we will take a deeper look at what is going on in this function. In simple terms, this function gets triggered whenever a file is uploaded to our S3 bucket; it will confirm that the file has a .zip extension, and then it will download that file to the /tmp directory. Once it is downloaded, it will use the zipinfo, grep, and wc programs to count how many files are stored in the ZIP file. It will then add a tag to the object in S3 that specifies how many files are in that ZIP file. You may or may not already be able to see where some things could go wrong, but we will get to that later.
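
To confirm the pipeline works end to end, you could upload a test archive and read the tag back. Here is a minimal sketch, assuming a hypothetical local test.zip file:

import boto3

s3 = boto3.client('s3', region_name='us-west-2')

# Upload a local archive (hypothetical file name) to trigger the function
s3.upload_file('test.zip', 'bucket-for-lambda-pentesting', 'test.zip')

# After giving the function a few seconds to run, read the tag back;
# expect something like [{'Key': 'NumOfFilesInZip', 'Value': '3'}]
tags = s3.get_object_tagging(Bucket='bucket-for-lambda-pentesting', Key='test.zip')
print(tags['TagSet'])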

One last thing that we will do is drop down to the Environment variables section of the Lambda dashboard and add an environment variable with the key app_secret and the value 1234567890:

Adding the app_secret environment variable to our function
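
Inside the function, Lambda exposes environment variables like this one through the process environment, so the code (or anyone who gains code execution within it) can read the value with just a couple of lines:

import os

# Lambda injects configured environment variables into os.environ
app_secret = os.environ.get('app_secret')  # '1234567890' in our setup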

To finish off this section, just click on the big orange Save button in the top right of the screen to save this code to your Lambda function, and we will be ready to move on.
