8

Simulating Production Locally

In the previous chapter, we modularized our microservice-based application into different Compose files and created different environments for it: an environment with mock services, an environment that captures traffic between services, and an environment with monitoring enabled.

By being able to use mock services, generate different environments, and monitor our applications, we are able to be more productive and efficient in everyday development. In this chapter, we shall focus on simulating production locally using Compose.

A development team can be productive from the start if it has fewer dependencies and a development environment ready for testing.

Our target scenario will be an AWS environment. We shall simulate AWS services locally and also make a representation of a Lambda-based AWS environment through a Docker Compose application.

The target environment will be a simple application receiving a JSON payload. The application shall store the information in DynamoDB and then send the updates to a Simple Queue Service (SQS) queue. Another Lambda application will read the SQS messages and store them in Simple Storage Service (S3) for archival purposes.

In a real AWS environment, all the components involved, including SQS, Simple Notification Service (SNS), S3, and DynamoDB, are well integrated, which streamlines the operation of the application. However, making this environment available for local testing will require some workarounds to approximate that integration. The components of our application will be a REST-based Lambda application storing the request in DynamoDB, an application simulating the delivery of SQS messages to a Lambda function, and the SQS-based Lambda application storing SQS events in S3.

The topics we shall cover in this chapter are the following:

  • Segregating private and public workloads
  • Setting up DynamoDB locally
  • Setting up SQS locally
  • Setting up S3 locally
  • Setting up a REST-based Lambda function
  • Setting up an SQS-based Lambda function
  • Connecting the Lambda functions

Technical requirements

The code for this book is hosted on the GitHub repository at https://github.com/PacktPublishing/A-Developer-s-Essential-Guide-to-Docker-Compose. If there is an update to the code, it will be updated on the GitHub repository.

Segregating private and public workloads

Since some of the interactions between our components happen internally on AWS, we should separate the workloads into private and public ones.

The REST-based Lambda application receiving the JSON payload needs to be on a public network, since it will interact with the end user. The SQS-based Lambda application, reading the SQS events and storing them in S3, needs to be private. The application simulating the SQS events to the SQS-based Lambda application will also be private.

The mock AWS components, such as DynamoDB, SQS, and S3, should use the private network.

We shall define the networks with the following Compose configuration:

networks:
  aws-internal:
  aws-public:

By having the private networks defined, we can now proceed with adding the mock AWS components to the Compose application.

Setting up DynamoDB locally

A commonly used database on AWS is DynamoDB, a serverless key/value NoSQL database. For local testing, AWS provides a local version of DynamoDB.

We shall use the Docker image provided by AWS and add it to the Compose configuration. For convenience, we shall expose the port locally.

As mentioned before, the DynamoDB service will use the private network defined previously:

services:
  dynamodb:
    image: amazon/dynamodb-local
    ports:
     - 8000:8000
    networks:
     - aws-internal

Now that DynamoDB is up and running locally, let’s create a table on it.

Creating DynamoDB tables

Unlike Redis, DynamoDB requires a table to be created beforehand. We shall add a container to the Compose application that creates the table in DynamoDB.

We did something similar to this in Chapter 2, Running the First Application Using Compose.

The container will use the AWS CLI image (https://hub.docker.com/r/amazon/aws-cli) and override its entrypoint to run a script that creates the table through the CLI's DynamoDB commands.

The initialization container will depend on the DynamoDB service, since DynamoDB needs to be available. The rest of the application will depend on the initialization service, since the table needs to exist before using it.
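
Note that the plain depends_on form we use in this chapter only controls startup order. If we want a consuming service to wait until the initializer has actually finished creating the table, recent Compose versions support a completion condition; the following is a minimal sketch, with app being an illustrative service name:

services:
  app:
    depends_on:
      dynamodb-initializer:
        condition: service_completed_successfully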

The script that will create the table will be the following:

#!/bin/sh
aws dynamodb create-table \
    --table-name newsletter \
    --attribute-definitions \
        AttributeName=email,AttributeType=S \
    --key-schema \
        AttributeName=email,KeyType=HASH \
    --provisioned-throughput \
        ReadCapacityUnits=5,WriteCapacityUnits=5 \
    --table-class STANDARD \
    --endpoint-url http://dynamodb:8000

Then, we shall add the initialization container to the Compose application:

services:
  dynamodb-initializer:
    image: amazon/aws-cli
    env_file:
      - ./mock_crentials.env
    entrypoint: "/create_table.sh"
    depends_on:
      - dynamodb
    volumes:
      - ./create_table.sh:/create_table.sh
    networks:
     - aws-internal

As you can see, we added some mock credentials in order to use the AWS CLI and also override the endpoint for DynamoDB. We can now test DynamoDB locally.
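
The contents of the mock credentials file only need to satisfy the AWS CLI and SDKs; any non-empty values will do for the local emulators. A minimal sketch could look like the following (the values are placeholders, not real credentials):

AWS_ACCESS_KEY_ID=fakeMyKeyId
AWS_SECRET_ACCESS_KEY=fakeSecretAccessKey
AWS_REGION=eu-west-2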

Interacting with the local DynamoDB

We can test the local DynamoDB we set up previously by running a small snippet.

First, let’s start DynamoDB:

$ docker compose -f base-compose.yaml up

We shall use Go for running the example; therefore, we can use an existing Go project or create an example project with the initialization commands we used in the previous chapters.

Once done, we need to include the following dependencies (the following commands need to be executed from the dynamodb-snippet directory):

$ go get github.com/aws/aws-sdk-go/aws

$ go get github.com/aws/aws-sdk-go/service/dynamodb

We can now use the following small snippet that puts an entry in the DynamoDB table:

// Errors are ignored here for brevity; handle them in real code.
sess, _ := session.NewSession(&aws.Config{
    Region:      aws.String("us-west-2"),
    Credentials: credentials.NewStaticCredentials("fakeMyKeyId", "fakeSecretAccessKey", ""),
})
// The region value is irrelevant for the local DynamoDB; what matters is the endpoint.
svc := dynamodb.New(sess, aws.NewConfig().WithEndpoint("http://localhost:8000").WithRegion("us-west-2"))
// Subscribe holds the email and topic fields used throughout this chapter.
item := Subscribe{
    Email: "[email protected]",
    Topic: "what I subscribed",
}
av, _ := dynamodbattribute.MarshalMap(item)
input := &dynamodb.PutItemInput{
    Item:      av,
    // The table name must match the one created by create_table.sh.
    TableName: aws.String("newsletter"),
}
svc.PutItem(input)
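
To double-check that the item was written, we can scan the table with the AWS CLI pointed at the local endpoint (assuming the mock credentials are exported in your shell):

$ aws dynamodb scan --table-name newsletter --endpoint-url http://localhost:8000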

We have successfully simulated DynamoDB locally. We managed to create a table using a container, and we ran a code example that persists items in the table we created. Our Compose application now has DynamoDB running, making it possible for our services to interact with it. The next step is to add a mock SQS component to our Compose application.

Setting up SQS locally

SQS will be used to notify us when a DynamoDB entry has been created. The REST-based Lambda application will send a message to SQS.

ElasticMQ is a very popular SQS emulator (https://github.com/softwaremill/elasticmq) that covers most of the features provided by SQS.

In order to push data to SQS, a queue must be created first. ElasticMQ provides the option to create queues on initialization.

The configuration will be the following:

//sqs.conf
include classpath("application.conf")
 
queues {
  subscription-event{}
}

Let’s now add the ElasticMQ service and its configuration to our Compose file:

services:
  sqs:
    image: softwaremill/elasticmq
    ports:
     - 9324:9324
     - 9325:9325
    networks:
     - aws-internal
    volumes:
      - ./sqs.conf:/opt/elasticmq.conf
...

As we did with DynamoDB, we shall expose the port locally for convenience. ElasticMQ also provides an administrator interface on port 9325 (http://localhost:9325/).

Let’s interact with the local SQS broker using a Go snippet.

The following modules need to be included (the commands need to be executed from the sqs-snippet directory):

$ go get github.com/aws/aws-sdk-go/aws

$ go get github.com/aws/aws-sdk-go/service/sqs

Our code snippet will print the available queues in the service:

// Errors are ignored here for brevity; handle them in real code.
sess, _ := session.NewSession(&aws.Config{
    Region:      aws.String("us-west-2"),
    Credentials: credentials.NewStaticCredentials("fakeMyKeyId", "fakeSecretAccessKey", ""),
})
// Point the client to the local ElasticMQ endpoint; the region value is irrelevant for the emulator.
svc := sqs.New(sess, aws.NewConfig().WithEndpoint("http://localhost:9324").WithRegion("us-west-2"))
result, _ := svc.ListQueues(nil)
for i, url := range result.QueueUrls {
    fmt.Printf("%d: %s\n", i, *url)
}
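
We can also push a test message to the queue with the AWS CLI. The queue URL below follows ElasticMQ's default account ID convention, which we shall also use later in the Compose files (again assuming the mock credentials are exported in your shell):

$ aws sqs send-message \
    --endpoint-url http://localhost:9324 \
    --queue-url http://localhost:9324/000000000000/subscription-event \
    --message-body '{"email":"john@example.com","topic":"Books"}'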

We have successfully run an SQS emulator locally. We created an SQS queue using the emulator's built-in queue-creation support, listed it with a code example pointed at the emulator endpoint, and pushed a test message to it. The services hosted on Compose should now be able to interact with SQS and publish messages. In the next section, we shall set up a mock S3 server on Compose to facilitate blob storage in our application.

Setting up S3 locally

S3 is a highly available object storage service provided by AWS. As with most AWS services, it provides a REST API to interact with as well as an SDK.

In order to simulate S3 locally, we shall use S3Mock (https://github.com/adobe/S3Mock), a highly rated project on GitHub.

A Docker image is available for it, which also provides a configuration option to create buckets at startup.

We shall add it to our Compose file and attach it to the internal network:

services:
...
  s3:
    image: adobe/s3mock
    ports:
     - 9090:9090
    networks:
     - aws-internal
    environment:
      - initialBuckets=subscription-bucket

We will add a code snippet for it; thus, the following packages need to be included (the commands need to be executed from the s3-snippet directory):

$ go get github.com/aws/aws-sdk-go/aws

$ go get github.com/aws/aws-sdk-go/service/s3

Our code snippet will list the available buckets:

// Credentials and region are read from the environment/shared configuration;
// errors are ignored here for brevity.
sess := session.Must(session.NewSessionWithOptions(session.Options{
    SharedConfigState: session.SharedConfigEnable,
}))
// Point the client to the local S3Mock endpoint exposed by Compose.
svc := s3.New(sess, aws.NewConfig().WithEndpoint("http://localhost:9090").WithRegion("us-west-2"))
buckets, _ := svc.ListBuckets(nil)
for i, bucket := range buckets.Buckets {
    fmt.Printf("%d: %s\n", i, *bucket.Name)
}
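
Alternatively, the same check can be done with the AWS CLI pointed at the mock endpoint (assuming mock credentials are available in your shell):

$ aws s3 ls --endpoint-url http://localhost:9090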

We managed to run an S3 emulator locally. We configured the emulator and initialized it with a bucket. We then ran a code example that points to the local S3 endpoint and lists the existing buckets. In the next section, we shall set up a REST-based Lambda function.

Setting up a REST-based Lambda function

AWS Lambda is a serverless computing offering that can be integrated and invoked in various ways. One way it can be utilized is as a backend for REST APIs.

The REST-based Lambda function that we shall implement will receive a JSON payload and store it in DynamoDB.

This can be easily simulated locally, since AWS provides docker-lambda.

By using docker-lambda, we can create a container image that can simulate our AWS Lambda function. AWS provides images for this purpose that also include a runtime interface client that facilitates the interaction between our function code and Lambda (https://github.com/lambci/docker-lambda).

Furthermore, this makes it feasible to simulate calls to the Lambda function locally.

Let’s start with the function’s code base.

Initially, we shall persist the request in DynamoDB:

type Subscribe struct {
	Email string `json:"email"`
	Topic string `json:"topic"`
}
func HandleRequest(ctx context.Context, subscribe Subscribe) (string, error) {
	dynamoDb, _ := dynamoDBSession()
	marshalled, _ := dynamodbattribute.MarshalMap(subscribe)
	input := &dynamodb.PutItemInput{
		Item:      marshalled,
		TableName: aws.String(TableName),
	}
 
	dynamoDb.PutItem(input)
	sendToSQS(subscribe)
 
	return fmt.Sprintf("You have been subscribed to the %s newsletter", subscribe.Topic), nil
}
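
The handler is wired into the Lambda Go runtime in the function's main; a minimal sketch is the following (the full source in the repository may differ slightly):

package main

import "github.com/aws/aws-lambda-go/lambda"

func main() {
	// Register the handler with the AWS Lambda Go runtime.
	lambda.Start(HandleRequest)
}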

Then, we shall send a message to SQS:

func sendToSQS(subscribe Subscribe) {
	if !isSimulated() {
		return
	}
	if session, err := sqsSession(); err == nil {
		if bytes, err := jsonutil.BuildJSON(subscribe); err == nil {
			smsInput := &sqs.SendMessageInput{
				MessageBody: aws.String(string(bytes)),
				QueueUrl:    aws.String(os.Getenv(SQS_TOPIC_ENV)),
			}
			if _, err := session.SendMessage(smsInput); err != nil {
				fmt.Println(err)
			}
[...]
func sqsSession() (*sqs.SQS, error) {
	session, _ := session.NewSession()
	return sqs.New(session, aws.NewConfig().WithEndpoint(os.Getenv(SQS_ENDPOINT_ENV)).WithRegion(os.Getenv(AWS_REGION_ENV))), nil
}
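
The dynamoDBSession and isSimulated helpers are not shown here. Based on the SIMULATED and DYNAMODB_ENDPOINT environment variables defined in the Compose file that follows, a plausible sketch of them could be the following; the actual implementation lives in the repository:

// Hypothetical sketch: the helpers in the repository may differ.
func isSimulated() bool {
	return os.Getenv("SIMULATED") == "true"
}

func dynamoDBSession() (*dynamodb.DynamoDB, error) {
	sess, err := session.NewSession()
	if err != nil {
		return nil, err
	}
	cfg := aws.NewConfig().WithRegion(os.Getenv("AWS_REGION"))
	if isSimulated() {
		// Point the client to the mock DynamoDB defined in Compose.
		cfg = cfg.WithEndpoint(os.Getenv("DYNAMODB_ENDPOINT"))
	}
	return dynamodb.New(sess, cfg), nil
}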

The full source example can be found on GitHub (https://github.com/PacktPublishing/A-Developer-s-Essential-Guide-to-Docker-Compose/blob/main/Chapter8/newsletter-lambda/newsletter.go).

Let’s now create the Dockerfile for the application:

FROM amazon/aws-lambda-go:latest as build 
RUN yum install -y golang
RUN go env -w GOPROXY=direct
COPY go.mod ./
COPY go.sum ./
RUN go mod download 
COPY *.go ./
RUN go build -o /main
FROM amazon/aws-lambda-go:latest
COPY --from=build /main /var/task/main
CMD [ "main" ]

Everything is set up to add a Compose file for the application:

services:
  newsletter-lambda:
    build: 
      context: ./newsletter-lambda/
    image: newsletter_lambda
    ports:
     - 8080:8080
    environment:
     - SIMULATED=true
     - DYNAMODB_ENDPOINT=http://dynamodb:8000
     - SQS_ENDPOINT=http://sqs:9324
     - SQS_TOPIC=/000000000000/subscription-event
    depends_on:
      - dynamodb-initializer
      - sqs
    env_file:
      - ./mock_crentials.env
    networks:
      aws-internal:
      aws-public:

As we can see, we reference the mock AWS services we created previously. Also, we build the Docker image through the Compose file. This is a public service and the entry point for our application; therefore, we expose the port locally.

Let’s run it using Compose:

docker compose -f docker-compose.yaml -f newsletter-lambda/docker-compose.yaml build

docker compose -f docker-compose.yaml -f newsletter-lambda/docker-compose.yaml up

As we can see, we combined the Compose files as we did in Chapter 7, Combining Compose Files.

Our service is up and running, and we can test it by issuing a request with curl:

curl -XPOST "http://localhost:8080/2015-03-31/functions/function/invocations" -d '{"email":"[email protected]","topic":"Books"}'

"You have been subscribed to the Books newsletter"

To sum up, we managed to create an AWS Lambda function on Compose and facilitated its interactions with the mock DynamoDB and SQS services. We managed to simulate an AWS serverless-based application through Compose without interacting with the AWS console. In the next section, we will go one step further and introduce an SQS-based AWS Lambda function to our Compose application.

Setting up an SQS-based Lambda function

Previously, we managed to run a REST-based AWS Lambda function locally. Our next component will also be a Lambda function, but a message-based one; more specifically, it will listen to the SQS events we emitted previously.

Upon receiving the SQS events, the Lambda application will persist them in S3. The same components we used previously will also be used for this application.

Let’s see the function handler:

func HandleRequest(ctx context.Context, sqsEvent events.SQSEvent) error {
    session := s3Session()
    for _, message := range sqsEvent.Records {
        var subscribe Subscribe
        json.Unmarshal([]byte(message.Body), &subscribe)

        key := fmt.Sprintf("%s.%d", hash(subscribe.Email), time.Now().UnixNano()/int64(time.Millisecond))

        marshalled, _ := json.Marshal(subscribe)

        session.PutObject(&s3.PutObjectInput{
            Bucket: aws.String(os.Getenv(SUBSCRIPTION_BUCKET_ENV)),
            Key:    aws.String(key),
            Body:   bytes.NewReader(marshalled),
        })
    }
    return nil
}

The function handler will receive an SQSEvent containing SQS messages. Each message will be unmarshalled and stored in S3 under a key generated from a hash of the email and the current time.

AWS streamlines SQS message handling: if the function invocation is successful, the message is removed from SQS; if not, the message stays in the queue.

In order to build the image, a Dockerfile is needed:

FROM amazon/aws-lambda-go:latest as build
RUN yum install -y golang
RUN go env -w GOPROXY=direct
COPY go.mod ./
COPY go.sum ./
RUN go mod download
COPY *.go ./
RUN go build -o /main
FROM amazon/aws-lambda-go:latest
COPY --from=build /main /var/task/main
CMD [ "main" ]

Due to this being a Lambda-based application, the Dockerfile is identical to the one we implemented for the REST-based Lambda application.

Next, we shall create the Compose file:

services:
  s3store-lambda:
    build: 
      context: ./s3store-lambda/
    image: s3store-lambda
    environment:
     - SIMULATED=true
     - S3_ENDPOINT=http://s3:9090
     - SUBSCRIPTION_BUCKET=subscription-bucket
     - AWS_REGION=eu-west-2
    links:
      - "s3:subscription-bucket.s3"
    depends_on:
      - s3
    env_file:
      - ./mock_crentials.env
    networks:
      aws-internal:

The application is accessed only internally; thus, it resides in the internal network. Also, we reference the mock AWS service we defined previously; however, there is a crucial detail in the links section.

Docker Compose links

Due to the way S3 works, when accessing a bucket, instead of going through the root of the S3 endpoint, the bucket name is prepended to the endpoint as a subdomain.

If the bucket name is my-bucket, the URL in order to interact with this bucket will be https://my-bucket.s3.your-region.amazonaws.com/.

This conflicts with our deployment, since our endpoint hostname is s3, while our code base will try to reach subscription-bucket.s3.

To tackle this, we shall utilize the links functionality that Compose provides us with.

By using links, we can define subscription-bucket.s3 as an extra alias for the s3 service; therefore, our service will be able to reach it at that hostname.
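
Note that links is a legacy Compose feature; an equivalent result can be achieved with a network alias on the s3 service, as in this sketch:

services:
  s3:
    networks:
      aws-internal:
        aliases:
          - subscription-bucket.s3

Unlike a link, which is only visible to the linking service, a network alias is resolvable by every service on that network, which is just as suitable here.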

So far, we have successfully created an SQS-based Lambda function and prepared it to run locally. We used S3 and an alias workaround for the bucket-based endpoint. In the next section, we shall combine the two applications through an intermediate, local-only application that simulates the AWS environment for SQS-based Lambda functions.

Connecting the Lambda functions

So far, we have set up the mock AWS components for S3 and SQS, and we created two Lambda functions, one for REST-based communication and one for SQS-based communication. In an AWS environment, both functions would be seamlessly integrated, since by publishing a message to SQS, AWS handles the dispatching of that message to the Lambda function that should process it.

This seamless integration is what we are missing in the current state of our Compose application. In order to facilitate this functionality, we shall create a service that pulls messages from SQS and pushes them to the SQS-based function.

The code base is very streamlined:

session, _ := sqsSession()
queueUrl := aws.String(os.Getenv(SQS_TOPIC_ENV))
msgResult, _ := session.ReceiveMessage(&sqs.ReceiveMessageInput{
    QueueUrl: queueUrl,
})
if msgResult != nil && len(msgResult.Messages) > 0 {
    sqsEvent := map[string][]*sqs.Message{
        "Records": msgResult.Messages,
    }

    marshalled, _ := json.Marshal(sqsEvent)
    http.Post(os.Getenv(S3STORE_LAMBDA_ENDPOINT_ENV), "application/json", bytes.NewBuffer(marshalled))
    for i := 0; i < len(msgResult.Messages); i++ {
        session.DeleteMessage(&sqs.DeleteMessageInput{
            QueueUrl:      queueUrl,
            ReceiptHandle: msgResult.Messages[i].ReceiptHandle,
        })
    }
}

The messages are pulled from SQS and transformed into the event format that the Lambda function expects to receive. Once the messages have been dispatched to the Lambda function, they are deleted from the queue.

This service will work only locally; thus, the image creation will be much simpler.

Let’s build the Dockerfile for the image:

# syntax=docker/dockerfile:1
FROM golang:1.17-alpine
WORKDIR /app 
COPY go.mod ./
COPY go.sum ./
RUN go mod download
COPY *.go ./
RUN go build -o /main
CMD [ "/main" ] 

Next, we shall create the Compose configuration:

services:
  sqs-to-lambda:
    build: 
      context: ./sqs-to-lambda/
    image: sqs-to-lambda
    environment:
     - SQS_ENDPOINT=http://sqs:9324
     - SQS_TOPIC=/000000000000/subscription-event
     - S3STORE_LAMBDA_ENDPOINT=http://s3store-lambda:8080/2015-03-31/functions/function/invocations
    depends_on:
      - sqs
      - s3store-lambda
    env_file:
      - ./mock_crentials.env
    networks:
      aws-internal:
networks:
  aws-internal:

The service is internal and, among the mock AWS services, uses only SQS. Since it executes requests against s3store-lambda, it also depends on it.

Note

If you have any active Compose sessions, ensure that they are stopped before moving on and executing the commands that will follow next.

Let’s run the entire application and see how the services interact together:

docker compose -f docker-compose.yaml -f newsletter-lambda/docker-compose.yaml -f s3store-lambda/docker-compose.yaml -f sqs-to-lambda/docker-compose.yaml build

docker compose -f docker-compose.yaml -f newsletter-lambda/docker-compose.yaml -f s3store-lambda/docker-compose.yaml -f sqs-to-lambda/docker-compose.yaml up

Let’s invoke the REST-based Lambda function the same way we did previously:

curl -XPOST "http://localhost:8080/2015-03-31/functions/function/invocations" -d '{"email":"[email protected]","topic":"Books"}'

"You have been subscribed to the Books newsletter"

We should be able to see logs on all services by now:

...

chapter8-newsletter-lambda-1     | START RequestId: f2dcc750-35a1-40d8-9c54-f7c2edc3bcfe Version: $LATEST

chapter8-newsletter-lambda-1     | END RequestId: f2dcc750-35a1-40d8-9c54-f7c2edc3bcfe

...

chapter8-sqs-to-lambda-1         | 2022/07/24 21:31:03 Dispatching 1 received messages

...

chapter8-s3store-lambda-1        | START RequestId: 7caff9ab-ddb4-46c7-b75f-0f726eaf2ae8 Version: $LATEST

chapter8-s3store-lambda-1        | END RequestId: 7caff9ab-ddb4-46c7-b75f-0f726eaf2ae8
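
As a final check, we can list the contents of the mock S3 bucket and confirm that the archived object landed there (assuming the mock credentials are exported in your shell):

$ aws s3 ls s3://subscription-bucket --endpoint-url http://localhost:9090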

Through this internal service, we managed to simulate a functionality that AWS provides out of the box. The limitations that we had initially were resolved by a Compose-driven solution. Being the entry point for our application, the REST-based service stores data on DynamoDB and sends messages to SQS. The SQS messages are then transmitted to the SQS-based Lambda function using this internal service.

Summary

In this chapter, we managed to spin up cloud-based infrastructure locally on our workstation in a seamless way. We configured mock equivalents of the AWS services DynamoDB, SQS, and S3. Through our Compose configuration, we configured them and also tackled some limitations that arise during local development. This gave us the option to develop our code base on top of those services without needing to interact with an actual production environment.

Next, we implemented services suitable for the AWS Lambda environment. We successfully ran those Lambda functions through our Compose application while keeping them eligible for deployment to a cloud environment. Last but not least, we simulated some functionality that AWS provides out of the box by introducing a local, private application. Throughout this chapter, there was no need to interact with the AWS console or a real production environment, and the focus remained on the development of the code base.

In the next chapter, we shall take advantage of this chapter’s code base and use Compose for our Continuous Integration and Continuous Deployment (CI/CD).
