Interacting using the SDK

We just put together an example in which a Lambda function is triggered by an S3 event and automatically receives the new object. Now we want to do some more processing on the file and then upload the result to a different location in the same bucket, one that doesn't trigger another event (otherwise we would create an infinite trigger loop). The following diagram shows this new functionality, where the Lambda function performs the putObject API action using the SDK:

Adding more actions to our event triggering sequence

For the upload, we're going to leverage the API operations in our SDK to put or upload the object into a folder called photos-processed.
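To make the overall flow concrete before we look at individual SDK calls, here is a minimal Python sketch of what such a Lambda handler could look like. The process_photo() helper is an illustrative placeholder, and the event parsing follows the standard S3 event record structure:

```python
import urllib.parse


def process_photo(data):
    """Placeholder for the real image-processing step."""
    return data


def processed_key(key):
    """Map an incoming object key to its destination under the
    photos-processed/ prefix (which has no event trigger attached)."""
    filename = key.rsplit('/', 1)[-1]
    return 'photos-processed/' + filename


def handler(event, context):
    # Imported inside the function so the module can be loaded
    # and tested without boto3 installed.
    import boto3
    s3 = boto3.client('s3')

    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        # Object keys in S3 event notifications are URL-encoded.
        key = urllib.parse.unquote_plus(record['s3']['object']['key'])

        body = s3.get_object(Bucket=bucket, Key=key)['Body'].read()
        result = process_photo(body)

        # Write to a prefix that does not fire another event.
        s3.put_object(Bucket=bucket, Key=processed_key(key), Body=result)
```

This is only a sketch under the assumptions above; the language-specific examples that follow show the upload call itself in more detail.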

Let's walk through some examples in different languages to get an idea about what is involved.

For Node.js, you can use either the upload() or putObject() method. The upload() method adds some extra logic on top of putObject(): when the file exceeds a configurable size threshold, it automatically switches to a multipart upload, parallelizing the parts to increase performance, and it can also retry parts that fail with errors. The parameters are passed in as a params object with three required properties: the bucket name (Bucket), the key (Key, the directory and filename), and the actual object being uploaded (Body). The upload function is part of the AWS SDK and handles the actual upload to the S3 bucket, as shown in the following code. We handle the success and error conditions in the same callback:

// Assumes an S3 client instance from the AWS SDK for JavaScript:
// const AWS = require('aws-sdk');
// const s3 = new AWS.S3();

const params = {
  Bucket: 'my-photo-bucket',
  Key: 'photos-processed/' + filename,  // destination key in the bucket
  Body: photoFile                       // file contents (Buffer, stream, or string)
};

s3.upload(params, (err, data) => {
  if (err) console.error(err);
  else console.log(data);
});

In Python, the SDK is called Boto3, and for uploading we can use the upload_file() method:

import boto3

s3 = boto3.client('s3')
# photo_file is the path to the local file to upload
s3.upload_file(photo_file, 'my-photo-bucket', 'photos-processed/' + filename)

Java is a little more complex. Once the s3Client object is set up, we can use the putObject() method. The following example uploads a JPEG file to the photos-processed directory in the bucket called my-photos-bucket, attaching a user metadata title of my-photo:

AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withRegion(yourRegion)
        .withCredentials(new ProfileCredentialsProvider())
        .build();

PutObjectRequest request = new PutObjectRequest(
        "my-photos-bucket",
        "photos-processed/" + fileName,
        new File(fileName));

ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentType("image/jpeg");
// The SDK adds the x-amz-meta- prefix to user metadata keys automatically.
metadata.addUserMetadata("title", "my-photo");
request.setMetadata(metadata);

s3Client.putObject(request);

The C#/.NET example is quite lengthy. I recommend that you consult the documentation for the scaffolding code in this language.

Here's a pro tip for high-use buckets: if you regularly exceed 3,500 write requests (PUT/COPY/POST/DELETE) per second on a prefix, you should start to distribute your key names across multiple prefixes. Avoid key names that all start with the same prefix or that end in an incrementing suffix. Instead, add some randomness at the beginning of the key name, such as a hash of the date/time.
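As a sketch of that idea (one illustrative approach, not the only one), the following Python helper prepends a short hex hash of the upload timestamp to a key name, so that consecutive uploads land under different prefixes:

```python
import hashlib
from datetime import datetime, timezone


def randomized_key(filename, now=None):
    """Prefix the key with a short hash of the current time so that
    writes are spread across many S3 prefixes."""
    now = now or datetime.now(timezone.utc)
    digest = hashlib.md5(now.isoformat().encode('utf-8')).hexdigest()[:4]
    return digest + '/' + filename
```

For example, randomized_key('photo-001.jpg') might return something like '3f7a/photo-001.jpg', with the four-character prefix varying from one upload to the next.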

In this section, we learned how to upload files into S3 using the AWS SDK for various languages. Another useful service we could use with our example is DynamoDB. We'll introduce this in the next section. 
