Runtime API and bootstrapping

The Lambda service provides a runtime interface that lets us retrieve invocation events, along with metadata about each request, and submit an execution response. This functionality arrived when AWS released support for creating a function with a custom runtime. With a custom runtime, you essentially bring your own language interpreter (a binary or script), either packaged in your deployment package or deployed separately as a Lambda layer. This allows you to write your handler and logic in your preferred language and have Lambda bootstrap it for you.

In your code, you must implement the runtime API. This allows you to interface with the Lambda service to poll for the next invocation event, send responses, and report errors.

The paths you will be most interested in during the initial stages are as follows (a quick sketch of both calls appears after the list):

  • /runtime/invocation/next: Call this to get the next event and some metadata about the request.
  • /runtime/invocation/$REQUEST_ID/response: Call this to post your response from the handler.
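As a quick sketch of these two calls, assuming the same 2018-06-01 API version used in the example later in this section, a single round trip might look like this:

# Get the next event; the response headers include Lambda-Runtime-Aws-Request-Id
curl -sS -D headers.txt "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/next"
REQUEST_ID=$(grep -Fi Lambda-Runtime-Aws-Request-Id headers.txt | tr -d '[:space:]' | cut -d: -f2)

# Post the handler's result back for that request
curl -sS -X POST \
  "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/${REQUEST_ID}/response" \
  -d '{"message": "hello from a custom runtime"}'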

In order to initialize the runtime environment, we need to create a bootstrap file that we'll include in our deployment package, along with the function code. This is an executable file that the Lambda service runs in order to invoke your function handler. The file sits in a continuous loop: it calls the runtime API to get the invocation data, invokes the function handler to get the response data, and sends that response data back to the Lambda service.

Here's an example of this in bash:

#!/bin/sh
set -euo pipefail

# Initialisation: load up the handler and set the root
source "$LAMBDA_TASK_ROOT/$(echo "$_HANDLER" | cut -d. -f1).sh"

while true
do
  HEADERS="$(mktemp)"
  # Invocation data: get the next invocation event
  EVENT_DATA=$(curl -sS -LD "$HEADERS" -X GET "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/next")
  REQUEST_ID=$(grep -Fi Lambda-Runtime-Aws-Request-Id "$HEADERS" | tr -d '[:space:]' | cut -d: -f2)

  # Run your handler function, passing in the invocation data
  cd "$LAMBDA_TASK_ROOT"
  RESPONSE=$(echo "$EVENT_DATA" | ./bin/lci hai.lo)

  # Response: call the runtime API to return a response to the Lambda service
  curl -X POST "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/$REQUEST_ID/response" -d "$RESPONSE"
done

The example is written in bash for brevity, but you can create this file in whichever language you like, as long as it can be executed by your provided runtime. Along with processing the event and headers and returning the response, the bootstrap file also needs to manage error handling, as sketched below.
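As a rough sketch of how that might look, the runtime API also exposes /runtime/init/error for initialization failures and /runtime/invocation/$REQUEST_ID/error for failures while processing an event. The post_error helper and its errorMessage/errorType payload below are illustrative assumptions rather than part of the chapter's example:

# Hypothetical helper: report a failed invocation instead of posting a response.
# The errorMessage/errorType fields follow the runtime API's JSON error format.
post_error() {
  ERROR_PAYLOAD="{\"errorMessage\": \"$1\", \"errorType\": \"HandlerError\"}"
  curl -sS -X POST \
    "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/$REQUEST_ID/error" \
    -d "$ERROR_PAYLOAD"
}

# Inside the loop, guard the handler call and report failures:
if ! RESPONSE=$(echo "$EVENT_DATA" | ./bin/lci hai.lo); then
  post_error "handler exited with a non-zero status"
  continue
fi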

Remember to chmod 755 your bootstrap file! 
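As a minimal packaging and deployment sketch, assuming placeholder file names, function name, and role ARN, and using provided.al2 as the custom runtime identifier, the steps might look like this:

# Make the bootstrap file executable, then zip it up with the function code
chmod 755 bootstrap
zip function.zip bootstrap hai.lo bin/lci

# Create the function with a custom runtime; all names and the role ARN are placeholders
aws lambda create-function \
  --function-name my-custom-runtime-function \
  --runtime provided.al2 \
  --handler hai.lo \
  --zip-file fileb://function.zip \
  --role arn:aws:iam::123456789012:role/my-lambda-role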

In the execution environment, there are three environment variables that will be immediately useful when you're writing your own bootstrap file (a short sketch of how they fit together follows the list):

  • AWS_LAMBDA_RUNTIME_API: The host and port number for the runtime API
  • LAMBDA_TASK_ROOT: The location of the directory containing the function code
  • _HANDLER: The function handler, as defined in the Lambda configuration
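Putting those three variables together, the start of a bootstrap file might resolve the handler roughly as follows; the two-part file.function naming and the .sh extension are assumptions carried over from the earlier example:

# _HANDLER is set from the function configuration, typically in "<file>.<function>" form
HANDLER_FILE="$(echo "$_HANDLER" | cut -d. -f1)"
HANDLER_NAME="$(echo "$_HANDLER" | cut -d. -f2)"   # the function to call in the sourced file

# LAMBDA_TASK_ROOT points at the unpacked deployment package
cd "$LAMBDA_TASK_ROOT"
source "./$HANDLER_FILE.sh"

# AWS_LAMBDA_RUNTIME_API supplies the host and port for every runtime API URL
NEXT_EVENT_URL="http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/next"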

Implementing a bootstrap file is not a simple undertaking, and it's best to consult the AWS documentation from this point. A word of warning before you delve too deeply into providing your own runtime: you're removing a layer of abstraction that Lambda provides, and that abstraction is one of the core benefits of using serverless functions as a service, namely not having to manage your own runtime. From that point on, you will need to manage your runtime's life cycle and compatibility yourself, so think carefully before committing to it.

Now that we've learned about the components and stages of bootstrapping a runtime, let's put it together and create our own example.
