
4. Packaging and Deploying Quarkus Applications

Tayo Koleoso, Silver Spring, MD, USA

Typically, technical textbooks save the “packaging and deployment” talk for last, like vegetables on a dinner plate full of meat. I’m bringing it up front, because when we’re talking Quarkus, the deployment options are like half the point. This will be worth your while.

The cloud presents a new frontier and new challenges for packaging and deploying microservices. Microservices are so hot right now, but way too many teams aren’t prepared for the change. Now, the haters have said many hurtful things about Java: how big Java deployment kits are, how much RAM a Java application needs, and how slow it is. Really unkind stuff. In the cloud-everything world, these criticisms of Java become real hazards. Take Amazon Web Services (AWS), for example: a lot of the pricing models of their services revolve around two things.
  • CPU: How much CPU time your code or application requires

  • RAM: How much RAM your code consumes; its memory footprint

From the AWS Lambda service pricing page itself:

With AWS Lambda, you pay only for what you use. You are charged based on the number of requests for your functions and the duration, the time it takes for your code to execute … Duration is calculated from the time your code begins executing until it returns or otherwise terminates, rounded up to the nearest 100ms*…An increase in memory size triggers an equivalent increase in CPU available to your function.

Translation: You’re going to spend, spend, spend, if your application takes a “long” time to start or needs a bunch of RAM. Even CPU-efficient code that requires more RAM will wind up costing more because more RAM triggers more CPU allocation from AWS. Having code that starts sharply and consumes relatively little RAM can be the difference between running a service for free in AWS and skipping meals so you can afford to fund your startup. At the scale of a large enterprise, the multiplier effect is even more obvious. An organization that’s serving millions of requests a day in cloud infrastructure will start to see the hit to their bottom line when they’re spending a bunch of money in cloud operation costs. The way cloud pricing models are written on paper, you’d think “it’s just $0.0000008333 per GB/second. Doesn’t sound like much”. Multiply that enough times at scale, and you’ll start seeing your departmental heads asking questions about cost.
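To make that concrete, here’s some quick back-of-the-envelope arithmetic using the rate quoted earlier (the request volume, duration, and memory figures are made up, purely for illustration):
10,000,000 requests/day x 0.2s per request x 1GB allocated
  = 2,000,000 GB-seconds per day
  x $0.0000008333 per GB-second
  = ~$1.67/day, or roughly $50/month, for a single function
Now multiply that by hundreds of functions, across multiple environments and regions, and the bill stops being a rounding error.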

Let’s even leave the money out of it for now; suggesting that one can run a Java application in an embedded deployment environment has always raised eyebrows. “Are you sure Java is not too heavy to use in a Raspberry Pi?”; “Java is too slow to use in low-latency systems”. Individuals and organizations have had to make language and platform switches from Java to others, after considering the historical resource requirements of the Java platform.

So here we are with a supersonic subatomic platform promising to show them all. And show them, we shall. For my next demonstration, I’m going to kit up my Quarkus project with the following features:
  • PostgreSQL driver

  • Hibernate, with all the trimmings

  • Agroal connection pool manager

  • MicroProfile health

  • MicroProfile metrics

  • MicroProfile REST client

  • REST support

  • Narayana transaction manager

  • JSON marshaling support

  • Reactive transport

  • JWT

  • Scheduled batch processing

Between all these, you have the ingredients for a production-strength application. Let me show you the differences in outcome.

JVM Mode

There’s nothing fancy going on here. Deploying your Quarkus app in JVM mode simply refers to the vanilla Java way of running a JARred-up application. You build the JAR:
mvn clean install
and you run it:
java -jar <app-name>.jar

That’s it. Oh wait, one more thing: configure the quarkus.package.output-name property in the application.properties file to control the name of the output file from the build.
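For example, a minimal application.properties sketch (the name here is just an illustration):
quarkus.package.output-name=my-quarkus-app
With that in place, the build output in /target picks up that base name (e.g., my-quarkus-app-runner.jar) instead of the artifactId-version default.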

Native Mode

You know what a JAR is.1 Most, if not all, Java applications are fundamentally composed of JARs. Sure, you’ll have your WAR, but really, it’s still just an aggregation of JARs with some configuration files thrown in. JARs containing Java classes are the way they are because they were conceived in a WORA (write once, run anywhere) world where the JVM is expected to lug a bunch of fat around. You get to download/add a JAR to your project and just use it – no need to worry about any OS-specific conditions that could cause your code to work differently. At least that’s what the intention was. As you now know, that flexibility comes at the cost of speed and resource efficiency.

Native code is what you get when you don’t have to worry about any of that cross-OS overcompensation. You know all the classes and JVM resources your application will need. You also know what your target deployment environment is – serverless, containerized, embedded, whatever – so you should be able to target your code for that platform. That’s a .exe on Windows and a .dmg on macOS. This is what native code is all about. And it isn’t just sales-speak. Observe: this is a traditional JVM-mode Quarkus application’s startup:
04:49:21 INFO  [io.quarkus] (main) code-with-quarkus 1.0.0-SNAPSHOT (powered by Quarkus 1.5.0.Final) started in 2.285s. Listening on: http://0.0.0.0:8081
04:49:21 INFO  [io.quarkus] (main) Profile prod activated.
04:49:21 INFO  [io.quarkus] (main) Installed features: [agroal, cdi, hibernate-orm, hibernate-orm-panache, jdbc-postgresql, mutiny, narayana-jta, rest-client, resteasy, resteasy-jsonb, scheduler, security, servlet, smallrye-context-propagation, smallrye-health, smallrye-jwt, smallrye-metrics, smallrye-openapi, vertx, vertx-web]
started in 2.285 seconds. That’s after a bunch of warm-up startups, by the way. In the serverless or low-latency worlds, that might as well be an hour. Here’s exactly the same Quarkus code, with the same Quarkus dependencies, but now compiled as a native image:
04:34:46 INFO  [io.quarkus] (main) code-with-quarkus 1.0.0-SNAPSHOT (powered by Quarkus 1.5.0.Final) started in 0.028s. Listening on: http://0.0.0.0:8083
04:34:46 INFO  [io.quarkus] (main) Profile prod activated.
04:34:46 INFO  [io.quarkus] (main) Installed features: [agroal, cdi, hibernate-orm, hibernate-orm-panache, jdbc-postgresql, mutiny, narayana-jta, rest-client, resteasy, resteasy-jsonb, scheduler, security, servlet, smallrye-context-propagation, smallrye-health, smallrye-jwt, smallrye-metrics, smallrye-openapi, vertx, vertx-web]
started in 0.028 seconds! The Quarkus native image started up 81 times faster than the traditional JVM version; my jaw dropped the first time I saw this. The likes of Spring Boot couldn’t possibly compete with this, without going native themselves. How about RAM and CPU consumption?

App Mode       CPU%    RAM (MB)    % of Total RAM
JVM mode       0.3     381.13      10.1
Native mode    0.0     53.97       1.4

Look at that. It bears repeating: this is the same Quarkus project, with the same dependencies running in two different modes; both instances are at rest, not serving any requests. These are Nürburgring-worthy numbers, from the platform some would call “slow”. I feel like the lead character in that movie, Moneyball (I think Brad Moneyball was his name2). It’s not even close: while the traditional Java app is eating up 10.1% of available memory,3 the native image is using 1.4%, a more than 700% difference in memory consumption. Where the traditional Quarkus app is using 0.3% of CPU, the native image is not even registering at all. It gets even more impressive when you realize that, thanks to Quarkus optimizations for the target JVM, JVM mode Quarkus apps are already better performing than some of the competition.
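If you want to eyeball numbers like these yourself, one rough way on a *nix host is plain old ps, pointed at your app’s process ID (substitute your own PID):
ps -o %cpu,rss,comm -p <pid>
rss reports resident memory in kilobytes, so divide by 1024 for an MB figure comparable to the table above.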

What did I do to get the code to this point? Not too much. Let’s meet the main player in all of this.

GraalVM

GraalVM is a high-performance, polyglot JVM distributed by Oracle. You can get it at www.graalvm.org/downloads. It aims to be the JVM for all seasons and languages.

The secret sauce in GraalVM is its ability to take your .java files straight to operating system-specific machine code, the so-called native image. For Windows, you’ll get a .exe; for *nix, you’ll get a Unix executable. It is partially this wizardry that makes GraalVM unique among JVMs. GraalVM works with a set of tools and utilities to generate an OS-specific image that will do all that magic that I showed in the previous section. With those OS-specific tools, GraalVM scans your project’s code and maps out every class, method, and JVM feature that’s referenced in your code, directly or indirectly. It’s then able to AOT-compile the entire dependency tree and produce a native image that contains strictly what your code needs and nothing else. With Quarkus, it can also include resource and configuration files in that image. What you get at the end of it is a single deployment unit that you can run on the target OS.

GraalVM started off as a component in the Hotspot VM – the VM you’re probably most familiar with. Oracle then excised it from the standard VM and made it its own stand-alone VM – so you now have the privilege of paying for an “Enterprise” version. Don’t get me wrong, the Community Edition of GraalVM is fine as a standard VM as well – you can run any Java applications in it as normal without any of the native business and still get superior performance to the standard Hotspot. The latest incarnation of GraalVM now ships a Maven plugin that lets you cut out the middleman a little bit. The Maven plugin allows you to compile your code and generate a native image using the native image tool, all in one step.

Additionally, Red Hat has announced the Mandrel project, a Red Hat sponsored and supported build of GraalVM. With Mandrel, Red Hat offers features and support for Graal that you might not get from the Oracle-supplied GraalVM.

Now back to Quarkus! Ideally, you should only need to download and configure GraalVM and run the following Maven command on your Quarkus project:
mvn package -Pnative
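Under the hood, the -Pnative flag activates a Maven profile that the Quarkus archetype generates into your pom.xml. Here’s a sketch of what that profile typically looks like (details vary by Quarkus version):
<profile>
    <id>native</id>
    <activation>
        <property>
            <name>native</name>
        </property>
    </activation>
    <properties>
        <quarkus.package.type>native</quarkus.package.type>
    </properties>
</profile>
The quarkus.package.type property is what switches the build output from a JVM-mode JAR to a native executable.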
The reality is that reliably generating native images with Quarkus is not as straightforward as I would like. Also, I don’t recommend doing it in the standard “development environment” way and here’s why:
  • GraalVM as distributed directly by Oracle doesn’t “just work” because you downloaded it. To get the native image generation capability, there’s some configuration to do – not to mention having to manually install some of the tools it needs, which to me is a hassle.

  • As of this writing, support for Windows is experimental (read: it’s probably not going to work for you in many cases); Windows devs aren’t going to have a good time. I should know: one of my personal development computers is a Windows machine.4

  • WORP: You should be compiling the native image in the operating system for which you’re targeting the image. What’s going to be the point of building a native image in Windows, for example, for an application destined for a Unix environment? No es bueno. Predictability is key.

So, to recap, trying to get Quarkus apps natively generated on my raw Windows development environment was not fun. The first problem I ran into was some video drivers conflicting with the native image generator utility in Graal. Being the lazy developer that I am, I’m interested in this only if it’s plug-n-play – everything should come bundled and ready to run.

Tip

Java Reflection and Native mode are at odds: Native mode operates on a so-called “closed-world” basis; it requires being able to compile all the classes and dependencies that a Java app needs ahead of time. Wanton reflection is the opposite of that – it’s all about dynamic class loading. Quarkus closes this gap by providing the @RegisterForReflection annotation. Add this annotation to classes that will be candidates for reflection, for example, DTO classes that will be used for requests and responses in REST endpoints.
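Here’s a minimal sketch of what that looks like (GreetingDTO is a hypothetical DTO, not from the sample project):
import io.quarkus.runtime.annotations.RegisterForReflection;

// Tells the native image tool to retain this class's constructors,
// methods, and fields for reflective access at runtime
@RegisterForReflection
public class GreetingDTO {

    private String message;

    public String getMessage() {
        return message;
    }

    public void setMessage(String message) {
        this.message = message;
    }
}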

Native Java Image Limitations

Because we can’t have nice things. We know by now that native imagery requires upfront knowledge about what your Java application is going to need. Some other things you should know about generating native images with GraalVM are
  • Native images don’t do automatic heap or thread dump capture, which sucks for the site reliability engineering (SRE) folk.

  • Analyzing and AOT-compiling every class and dependency your application needs takes time and RAM. At a minimum, you’re going to need over 1GB of free RAM to complete a native compilation of a relatively small application. Configure quarkus.native.native-image-xmx in your application.properties to increase the RAM allocation for the native image (see the snippet after this list).

  • Monitoring and management via JMX is limited. Fortunately, MicroProfile provides the Metrics API, so you’re not flying blind. You will be able to expose your microservices to Prometheus and any other platform that implements the OpenMetrics standard. You can also use VisualVM to monitor your native application; you just won’t be able to trigger a heap dump from it. Add -H:+AllowVMInspection or quarkus.native.enable-vm-inspection=true to expose your application to introspection.

  • At the time of this writing, there’s limited support for the Java Flight Recorder (JFR) in GraalVM. JFR is my favorite JDK tool, by the way. The GraalVM team is working on improving support for JFR incrementally, so I don’t expect this to be a long-lasting limitation.

  • Native images aren’t suited for applications that will trigger frequent garbage collection. Native imagery uses a serial garbage collector which isn’t the most efficient garbage collector. You can mitigate this by sizing and partitioning your heap sufficiently, to minimize the need for frequent garbage collection.

  • There could be a slight increase in latency when running a Quarkus Java application in native mode. Nothing too bad; the serial garbage collector is not helping. You should performance test your native image application.

  • To get past these limitations, you’ll need to pay Oracle5 for the enterprise version of GraalVM.
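The application.properties knobs mentioned in this list look like the following (the 4g value is just an illustration; size it to your machine):
quarkus.native.native-image-xmx=4g
quarkus.native.enable-vm-inspection=true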

Overall, 9/10 recommend, #teamquarkus all the way. I pay a one-time upfront cost for continuous resource savings in production? Sign me up! The performance boost and cost savings make it all worth it.

Native Imagery in DevOps

Whether you’re running CircleCI, Jenkins, or something else, in most enterprises, build servers are shared infrastructure. The disk space, RAM, and CPU resources being used to build deployment kits need to be managed across many users and build jobs.

Quarkus’ native compilation is a hungry hungry hippo as I’ve already established. In a continuous integration/continuous deployment (CI/CD) shop, y’all have got to be mindful of how your builds affect others. Already it’s expected that an organization that takes CI/CD seriously must be prepared to allocate significant resources to a build server. This need will increase significantly if you’re introducing native mode compilation. To that end
  • Size your worker thread pools with the expectation that a single native build job could hold onto one thread for north of an hour. I’ve seen it happen. As a project grows to use more extensions or even more code, the length of time it’ll take to native compile is likely to grow. There’s a risk that native build jobs will starve their neighbors of CPU time.

  • The rate of growth of RAM requirements of a job is not linear; it could be exponential. The more extensions introduced to the project, the greater the thirst for build-time RAM. Some extensions will require less than others. Introducing some dependencies could double the RAM requirement overnight. If you’ve followed every example in this book up till this point, your Quarkus project will require a minimum of 2.5GB of free RAM to native compile.

  • If you run a pipeline where you run unit or integration tests as part of the build process, consider using the JVM mode deployment for the tests and only run the native image tests at the tail end of the release train. Alternatively, configure the quarkus.test.native-image-wait-time property to set a time limit for image building during a test run (see the snippet after this list).

  • Configure quarkus.native.native-image-xmx to limit the maximum amount of memory native image generation can consume.

  • Consider containerized build jobs. This way, each build in Jenkins is isolated and predictable. You can also manage the RAM utilization per build job with more granularity and oversight.
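For instance, the test wait time mentioned above can be capped in application.properties (a bare number is interpreted as seconds, so this is five minutes):
quarkus.test.native-image-wait-time=300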

An ideal build setup will be able to bundle everything that you’ll ever need for building native images in one neat package. The package should contain
  • GraalVM, installed and configured

  • Maven, installed and configured

  • The native image tool, installed and configured

How does one get essentially a running operating system, with software installed and preconfigured? Because really, all I want to do is drop my code somewhere and have my code converted into a native image, predictably and reliably. All of this stuff should…just work!

Enter containerization!

A Crash Course in Containerization

If you’re new to containerization and you’re a Java developer, here’s the elevator pitch, using Docker as the basis.

Docker is like a JVM for whole operating systems. A near-complete operating system environment is packaged as an “image” (like APIs and Java applications are packaged as JARs) and you can download the images to your local machine. An image you download to your local machine is a near-complete operating system (OS) bundle, and there are thousands of them. Just like you don’t need to worry about the implementation details of a JAR most of the time, you are generally able to pull down docker images and use them as is.

This concept is what the nerds call “containerization”: download a Docker image of an OS configured with anything you desire; run an instance of that image – called a “container” – inside the docker runtime; use it like you have another OS running inside your OS (your OS/machine is called the “host” in Docker-speak).

“But isn’t that just virtualization with extra steps?”, you ask in an oddly high-pitched voice, for some reason. Containerization with Docker serves a similar purpose as virtualization, but it offers far more portability and flexibility than vanilla virtualization. Think of the difference between containerization and virtualization this way: containerization is like using a JAR – on its own, a JAR is a completely functional, independent unit and ready to use. Virtualization is more akin to handing a third party your source code and the entire IDE you developed the code in. It’s not as portable is what I’m saying.

Hint

Just like JARs in Java can be used either to package complete, functional applications that are ready to use, or to package APIs that you build your own applications on, Docker images are either complete and ready to use as is, or they can serve as base images that you build your own images on top of.

How does any of this help with generating native Quarkus images? Remember how much of a hassle it could be to set up a native image capable GraalVM installation? And how counter-WORP it is to generate native code on your local development OS? What I need now is a docker image that comes preconfigured with GraalVM and all the tools it needs to do its thing. With that, I should only need to
  • Download the image and run it as a container

  • Copy my code into the running container

  • Use it to generate a native image of my Quarkus application

Remember: container images are functional OSes, so when I generate a native image, that image is targeted to the OS that the container is running. The first thing to do is to install Docker on your machine. Docker is the “JVM” in this scenario; first we get the “JVM,” then we get the “JAR” or images to use in it.

Don’t worry if you’ve never done this before – this is why you bought this book.

Install Docker

www.docker.com is where you go for your Docker installation. To keep things simple, just download Docker Desktop and follow the instructions to install.

Configure Docker

After successfully installing Docker, two things need configuring:
  • File System sharing: For me, to be able to transmit my code, written inside my IDE, into a docker container, I need to expose my local File System to the Docker runtime. Go to Settings ➤ Resources ➤ File Sharing to configure the path or paths that you’d like to expose to the Docker runtime.

  • Machine resource configuration: This one bit me. The process of generating a native image is CPU and RAM intensive and takes a lot longer than I’m used to with traditional Java code compilation. For this reason, it’s important to allocate enough RAM and CPU to the Docker runtime. Without doing this, you may find that the native image generation step seems to stall and error out mysteriously. If you’re on a resource-poor machine, you can hold off on this until you hit that wall, if you do at all. Otherwise, it’s something to be aware of. Go to Settings ➤ Resources ➤ Advanced and tweak the numbers there based on your needs.

Having installed Docker, I can validate my installation by opening a terminal or command-line window and running the following command:
docker info

This command prints diagnostic information about the installation and the OS environment. Now that I have my OS “JVM,” I need “JARs” or docker images to run in it. You can also run docker run hello-world to download and run a “hello-world” image, as proof that all is well.

Install the CentOS Image

Docker images are collected in image repositories or registries, much like Java JARs are collected in Artifactory, Nexus, or the global Maven repo. Traditionally, you would go to hub.docker.com to search for any images you want. Images are created and published with relevant info to that site. Vendors like Oracle, Redis, and even individuals like you and me can publish images containing canned OSes with preconfigured distributions of their products. Users can then go and “pull” those images into their deployment machines and run the images as containers. What I want now is a complete Linux distribution that comes with GraalVM, all its dependencies and tooling preconfigured, the native imager installed, and Maven.

Meet https://quay.io. The Quarkus team has distributed several useful images to the quay.io registry. Pull the CentOS-based builder image with docker pull quay.io/quarkus/centos-quarkus-maven:20.0.0-java11, then run docker images at the command line to see the list of images available in the docker runtime. Here’s what it looks like for me:
REPOSITORY                                TAG                    IMAGE ID            CREATED               SIZE
quay.io/quarkus/centos-quarkus-maven       20.0.0-java11        39d6594a5a6a        15 hours ago          1.86GB
quay.io/quarkus/ubi-quarkus-native-image     20.0.0-java11        91fb19e82ebc        16 hours ago          1.43GB

Now that we have an image, let’s start a container based on that image.

Run the CentOS Image

Let’s fire up a container from the CentOS image. When I “run” an image, I’m telling the Docker runtime that I want a virtual operating system running on my machine, using the image as the template. So, from my CentOS image, I want CentOS and all the goodies bundled within to start up on my PC. To make this a fulfilling run, I’d like a couple more things:
  • I want my Quarkus project code to be made available inside the virtual OS. I want changes I make in my local File System reflected inside the containerized OS.

  • Since the container I want to run is a fully self-contained computer, running inside my own computer, I should be able to run my Quarkus code inside the container. Not just that, but from my own environment outside the docker container, I should be able to send REST requests to the Quarkus service running inside the container.

Here’s the command that does all that:
docker run -it --name my-quarkus-app -p 8080:8080 -v //c/eclipse-workspace/code-with-quarkus:/my-quarkus/app quay.io/quarkus/centos-quarkus-maven:20.0.0-java11  bash -l
Here’s the breakdown of “all that”:
  1. docker is the actual docker tool – can’t do anything without this.

  2. run is the command to run the image.

  3. -it asks the tool for an interactive session.

  4. -p stipulates that when a request hits port 8080 on my host machine, it should be forwarded to port 8080 inside the docker container. This way, I can initiate REST resource requests from my dev environment and have them executed by the code running inside the fake computer I’m about to start.

  5. -v tells the docker runtime to take C:\eclipse-workspace\code-with-quarkus and mirror it as /my-quarkus/app, inside the virtual computer I’m about to run. This option makes my project code available inside the container. This way, changes I make in C:\eclipse-workspace\code-with-quarkus will be reflected inside the /my-quarkus/app directory in the container and vice versa.

  6. quay.io/quarkus/centos-quarkus-maven is the fully qualified name of the CentOS docker image, kinda like saying javax.ws.rs.core.Application instead of just Application. 20.0.0-java11 is the version of this image that I want to use. Be sure to check quay.io for the latest version of this image, in case version 20.0.0-java11 has been deprecated by the time you get this book.

  7. bash instructs the Docker engine to immediately launch a bash shell session inside the container as soon as it has been created.
You should be able to substitute your own values into the various command positions and run the command as is.

Inside the newly created container’s bash shell, I should be able to check to be sure certain things are available and configured in this OS. From inside the open bash shell:
  1. Check that Maven is installed with mvn -v:

     OpenJDK 64-Bit Server VM warning: forcing TieredStopAtLevel to full optimization because JVMCI is enabled
     Apache Maven 3.6.3 (cecedd343002696d0abb50b32b541b8a6ba2883f)
     Maven home: /usr/share/maven
     ...

  2. Check that GraalVM is installed with echo $JAVA_HOME:

     [quarkus@1eaf2b1b569f project]$ echo $JAVA_HOME
     /opt/graalvm

  3. Confirm that my code is available inside the OS by navigating to the directory I mounted.

If everything is looking good, on to the next step.

Build Native Images Inside a Docker Container

From this point on, it’s straightforward. First navigate to your project directory inside the bash shell
cd /my-quarkus/app
then run the following command to generate the native image:
mvn package -Pnative -DskipTests

This is what begins the compilation and generation process. If you’re the curious type, add the -X and -e flags to the maven command to see debug-level information while the image generation executes. If it fails for non-compilation or dependency-related reasons, the most likely suspect is host OS resources. This is why I recommended that you allocate enough RAM to the Docker Desktop app. The native image generation is RAM intensive – it’s a lot of upfront hard work. Allocate more RAM and run it again. When all goes well, you should have a native image generated in the /target subdirectory. It’s the file without an extension: code-with-quarkus-1.0.0-SNAPSHOT-runner.6

Hint

Compare the sizes of the artifacts generated from native images and what you get from JVM mode JARs. The native image is slightly bigger than the complete JVM jar.

Native image secured. Fire it up:
./code-with-quarkus-1.0.0-SNAPSHOT-runner

I should now be able to hit the RESTful service deployed inside the container, from within my host operating system. It’s this native image that becomes your deployment package, targeted at Unix/Linux environments. I’ve personally validated the generated image, not only in CentOS but also in Red Hat and Ubuntu Linux distros. Not only is it good to run in related distros of Linux; it no longer needs the JDK or JRE to run – not even GraalVM needs to be present once the native image has been generated. It’s now a self-contained executable with amazing throughput. All of this leads to the holy grail: a small, lightweight Java application that can run in resource-starved environments. Quarkus gives you flexibility and control of the native image generation process with a bunch of configuration options.
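For example, assuming the /hello endpoint from the generated sample project and the -p 8080:8080 mapping from the earlier docker run command, this should work from a terminal on the host:
curl http://localhost:8080/hello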

Build Native Images with Maven: A Shortcut

Muahahahahah! Yes, there’s a one-step way to generate native images, using Docker, but without having to manually go inside the container and execute mvn package -Pnative. Are you ready? Okay, here goes:
mvn package -Pnative -Dquarkus.native.container-build=true -DskipTests
and that’s it. Simple, yes? A few things:
  1. This example defaults to Docker for the image generation; you still need to have Docker installed and configured. Set the -Dquarkus.native.container-runtime=<runtime-name> option to select a different container runtime (otherwise, Docker is used as default).

  2. This is still targeted at a Linux Docker image, specifically quay.io/quarkus/ubi-quarkus-native-image. So, while you can run this from the comfort of, say, a Windows development environment, the generated native artifact is still runnable inside a *nix OS only. Use the quarkus.native.builder-image property to select a different image for use.

  3. You can run into memory problems (typically java.lang.OutOfMemoryError) during execution of this command. For this, be sure to set MAVEN_OPTS as an environment variable in your host machine, with ample heap space settings for the JDK (see the example after this list). Even more fun when you realize it can fail for memory reasons, and you might not get a satisfying exception or error message indicating this specific problem.

The result will still be the same, assuming all goes well for you – you still get the native executable generated into the target subdirectory of your Quarkus project.
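For example, on a *nix host (the 4g figure is just an illustration; size it to your machine and the project’s appetite):
export MAVEN_OPTS="-Xmx4g"
mvn package -Pnative -Dquarkus.native.container-build=true -DskipTests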

SSL Support

When your app’s going native, you’re going to need to be explicit about a couple of things. This is so that the native image utility bundles the dependencies necessary to support those features. One of them is SSL support. To ensure your native executable can execute HTTPS calls, you need
quarkus.ssl.native=true

Third-Party Class Support

Kudos to the Quarkus team: they’ve gone ahead and ported a healthy number of popular open source frameworks into Quarkus to provide native imaging support. Then there’s the @RegisterForReflection annotation that you can add to your custom classes to make them available for AOT compilation. What about third-party classes that you have no control over and haven’t been properly “Quarkused”?

When you inevitably try to use a library that doesn’t expose itself for GraalVM’s AOT,7 you will need a way to manually expose that class to the native image tool. You will get some variety of the following exception at runtime, only when the application is packaged in native mode:
org.jboss.resteasy.spi.UnhandledException: io.smallrye.jwt.build.JwtException: JwtProvider io.smallrye.jwt.build.impl.JwtProviderImpl could not be instantiated: java.lang.InstantiationException: Type `io.smallrye.jwt.build.impl.JwtProviderImpl` can not be instantiated reflectively as it does not have a no-parameter constructor or the no-parameter constructor has not been added explicitly to the native image
No biggie: you resolve this by manually declaring the class’s layout and structure to the native image tool in GraalVM. Create a file named “reflection-config.json” in the resources directory of your Quarkus project. To address the runtime exception earlier, here’s an example reflection-config.json:
[
    {
      "name" : "io.smallrye.jwt.build.impl.JwtProviderImpl",
      "allDeclaredConstructors" : true,
      "allPublicConstructors" : true,
      "allDeclaredMethods" : true,
      "allPublicMethods" : true,
      "allDeclaredFields" : true,
      "allPublicFields" : true
    }
  ]

OK, what’s all this then? I’ve declared io.smallrye.jwt.build.impl.JwtProviderImpl as a class that needs AOT compilation, specifically requiring that all its constructors, public or private, all fields and all its methods, all of them, should be imaged by the native image tool. GraalVM will be well advised to pay attention to my declaration, for I am a powerful man!8

All that’s left is to deliver the instruction to GraalVM at maven build time:
<properties>
        <quarkus.package.type>native</quarkus.package.type>
        <quarkus.native.additional-build-args>-H:ReflectionConfigurationFiles=reflection-config.json</quarkus.native.additional-build-args>
</properties>

-H:ReflectionConfigurationFiles is a GraalVM parameter that you use to pass the reflection-config.json file to the native image tool; you can also set it in the application.properties file. I prefer using the pom.xml, because it keeps all the build-time configuration in one place. Now dance, build!
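If you’d rather keep it in application.properties, the equivalent setting looks like this:
quarkus.native.additional-build-args=-H:ReflectionConfigurationFiles=reflection-config.json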

Package a Quarkus App As a Docker Image

The previous exercises were to show you how to turn your boring vanilla JVM code into a blazing fast and lightweight (Linux) OS-native bundle (Windows support is still experimental as of this writing). For true WORP goodness, you should bundle the whole thing as a Docker image unto itself. This is true WORP thinking, with a microservice twist: it’s bundled as a functional unit, in its own little container bubble, and you get to distribute it as a single package. Yeah, that’s right: we’re going to package a Quarkus app as a Docker image.

Dockerfile

The Dockerfile is the instruction set that the Docker runtime uses when it’s creating an image. When you run the docker build command, the Docker runtime will look for a Dockerfile in the current directory; using the instructions in the file, it will create a Docker image that you can publish or run. For the purposes of this demo, you’ll use the Dockerfile to configure
  • The operating system you want to base your Docker image on.

  • File/directory sharing instructions – you need your code to pass into this image somehow.

  • TCP ports to expose by default – the port on which your microservice container will expose the microservice.

  • Shell commands, scripts, or programs to run immediately after the container is started.

The Quarkus maven archetype generates two kinds of Dockerfile in the /src/main/docker directory:
  • Dockerfile.jvm9 for creating a Docker image that runs your Quarkus app in JVM mode (read: a traditional Java application run)

  • Dockerfile.native for creating a Docker image that runs your Quarkus app as a native image

Clearly, I’m in this only for native business, but the same steps apply for JVM mode images. For the remainder of this section, I’ll be using the “.native” Dockerfile alone. Here’s what’s going on inside the native Dockerfile:
FROM registry.access.redhat.com/ubi8/ubi-minimal:8.1
WORKDIR /work/
COPY target/*-runner /work/application
RUN chmod 775 /work /work/application
EXPOSE 8080
CMD ["./application", "-Dquarkus.http.host=0.0.0.0"]
and this is what all of that means:
  • FROM stipulates the base image, that is, an existing docker image on top of which I want to build my own image.

  • WORKDIR defines a directory to hold transient files and data that the entire image generation process can use.

  • COPY asks that the files ending in “-runner” in the “target” directory be copied into /work/ as a file named application. Remember: I defined /work/ in the WORKDIR directive immediately before this directive.

  • RUN will run the specified Unix command.

  • EXPOSE will ask that the container open up port 8080 on itself.

  • CMD will run the defined commands when the image is launched as a container.

Don’t worry if some of this feels uncomfortable – go on and read this section as many times as you like. If you’d like more formal definitions of these Dockerfile commands, check out the Dockerfile reference.

Build a Docker Image

The native Dockerfile comes with some instructions and sensible defaults. So sensible and instructional that you can take the instructions and execute them as is. So, after you’ve built your native executable with either mvn package -Pnative or the in-docker manual build, you should be able to execute
docker build -f src/main/docker/Dockerfile.native -t quarkus/code-with-quarkus .
Breakdown time!
  1. build is the command that the docker utility uses to construct an image.

  2. The -f flag directs the docker utility to a custom-named Dockerfile (src/main/docker/Dockerfile.native in this case).

  3. The -t flag tells the docker runtime what to name and tag the image. The format I’m using here is <namespace>/<application-name>. Pay attention to the period at the end up there; it’s important. The build command has a tendency to choke without that being there.
and that’s it. If all goes well, you should have your very own docker-imaged Quarkus app. This is an image based on Native mode Quarkus. If you prefer, use Dockerfile.jvm for JVM mode Quarkus. Verify that docker has the image stored by running docker images in your terminal window. Here’s what I get:
REPOSITORY                                 TAG                 IMAGE ID               SIZE
lambci/lambda                              java11              3bc93227d833           421MB
quay.io/quarkus/centos-quarkus-maven       20.0.0-java11       39d6594a5a6a           1.86GB
quay.io/quarkus/ubi-quarkus-native-image   20.0.0-java11       91fb19e82ebc           1.43GB
lambci/lambda                              provided            81c66411cd01           698MB
quarkus/code-with-quarkus                  latest              0dc7b0de59d6           220MB

There I have quarkus/code-with-quarkus in my list of images. It means I can run my entire Quarkus app as a “black-box” application, preconfigured and contained inside a complete operating system. It also means I can distribute the app as a canned, ready-to-run platform, by publishing this image to the docker hub; just like you can create a Java JAR, use it locally, or send it to the central maven repository.

Run a Docker Container from an Image

With the app baked as an image, you can run a container from that image with
docker run -i --rm -p 8081:8080 quarkus/code-with-quarkus

Here, I’m mapping port 8081 on my local machine to port 8080 on the running container instance. This way, when I send an HTTP request to port 8081 in my host machine’s browser, the request is forwarded to my running container’s port 8080, on which the code-with-quarkus app is listening for connections.
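Again assuming the /hello endpoint from the generated sample project, the round trip looks like this from the host:
curl http://localhost:8081/hello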

In a real-world application, there will be multiple instances of an app like code-with-quarkus, running as containers. The containers could all be running on one machine or spread out among multiple machines in a cluster formation. When we start talking about multiple instances of the same container, we need some sort of load balancer or orchestrator that will sit and distribute incoming service requests among the containers. Fundamentally, this is the promise of containers and microservices: the ability to run insular deployments of software products in a fashion that can automatically scale to a high degree of granularity. When demand for a particular microservice increases, a container orchestration layer can spawn new containers of that microservice alone to deal with the increased demand.

Docker provides Swarm as its orchestration and load balancing module. An alternative and more sophisticated orchestrator is Kubernetes, or K8s, as we cool kids call it. In the AWS cloud, you have
  • Elastic Container Service (ECS)

  • Elastic Kubernetes Service (EKS)

Microsoft Azure has Azure Kubernetes Service (AKS). They didn’t come up with a clever name for their raw container orchestrator.

Serverless Microservices

I’ve been throwing “serverless” around all over this book so far, maybe even defined it. Here it is once again.

The serverless deployment of an application means simply to deploy your application without having to configure an application server or a host server. You shouldn’t care about sizing the server, securing the server, and clustering, scaling, monitoring, or managing it. The actual machine, physical or virtual, that your application will be deployed to is not your problem. In most cases, you only need to provide the deployment unit (jar, .js, .java, .py, etc.) to the serverless platform provider. Quarkus provides support for the major players in the serverless/cloud space (Amazon, Microsoft, Red Hat).

Most serverless platforms are event-driven: you deploy your code in a serverless platform and your code is triggered by events within the provider’s cloud ecosystem. You could have your code triggered by an HTTP request as in a vanilla web service; your code could be triggered by a database event (insert, update, delete); your code could be triggered by messages from a queueing service like SQS (from Amazon Web Services). These event connections are typically provided by your cloud provider, and you should only need to supply the code and configure the event connection.

Serverless applications are designed to be focused, efficient, and scalable. Quick in ‘n’ out. They’re not for long-running applications. In general, you’ll find that most serverless providers use the term “serverless functions” – this is important: serverless functions are treated like functional programming implementations. They’re not supposed to maintain persistent state (at least not without auxiliary storage like a database or message queue); they’re not made to be long-lived. Your cloud provider expects to trigger your application code with a supported event, run your code, and exit the function, preferably with a brisk response time.

Now, the perks that a specific cloud provider can provide with its serverless offering will vary from provider to provider – Microsoft, Amazon, OpenShift, and so on – all provide their own unique perks in their serverless services. On the whole though, the serverless deployment model uniquely meshes well with microservice architecture:
  • Microservices being lightweight and narrowly focused by design sit well in a serverless deployment model. Think about it: “micro”service that executes a narrow function, plus serverless deployment that’s lightweight and brisk in execution.

  • Amazon Web Services (AWS) and Microsoft’s Azure cloud platforms provide seamless versioning of serverless functions so that you can update the code, deployment after deployment, and maintain reasonable backward compatibility and contract stability with your service consumers.

  • In true WORP fashion, you won’t need to care about the OS, server, or cluster you’re deploying your application to. The native image is packaged as a completely self-sufficient deployment unit.

Yawn. Okay, let’s try Amazon Web Services serverless deployments.10

Amazon Web Services Serverless Deployment

Traditional Java applications aren’t suited to serverless deployment. Don’t take it from me; take it from the most popular cloud platform in the market:
  • Minimize your deployment package size to its runtime necessities.

  • Minimize the complexity of your dependencies. Prefer simpler frameworks that load quickly on execution context startup. For example, prefer simpler Java dependency injection (IoC) frameworks like Dagger or Guice over more complex ones like Spring Framework.

  • Reduce the time it takes Lambda to unpack deployment packages authored in Java by putting your dependency .jar files in a separate /lib directory. This is faster than putting all your function’s code in a single jar with a large number of .class files.

Wow, harsh. But their words, not mine. All of these basically rule out:
  • Spring Boot

  • A fat JAR

  • Vanilla JavaEE

Quarkus is going to show them. Quarkus is going to show them all.

Don’t worry; you won’t have to wrangle “the cloud” for this demonstration. I’m going to show you how to deploy a native Quarkus application to a local simulation of Amazon’s serverless platform, running on your computer. The Amazon Web Services (AWS) serverless platform is a product called Lambda. To deploy code to Lambda, you’ll need to select what’s called a “runtime”.

AWS Lambda Runtimes

You have your choice of different runtimes, roughly corresponding to the programming language you’re working in; so there’s the PHP, JavaScript, Python, Java, and so on runtimes. Because a native image is no longer clearly bound to a programming language (at least as far as the OS is concerned), I’m going to opt for what AWS calls the custom runtime.

Package Quarkus for Lambda

The Lambda execution environment treats your code like…code. By that, I mean that it doesn’t treat your Quarkus application like a stand-alone, independently deployable application. Rather, it sees it more like a java method or class that it needs to call. So in this execution context, Lambda doesn’t need your Quarkus application to “start up” in the way I’ve been showing it: no need to start the embedded Netty app server, no need to care about ports or host IPs or the REST URL paths I’ve defined per the JAX-RS standards – in fact, it can’t use any of the JAX-RS REST annotations I’ve defined. From the perspective of the AWS Lambda runtime, this is the only question that needs answering:

When an event message is sent to Lambda, intended for your Quarkus application, what method does Lambda need to call?

What we need is a…

Quarkus Lambda Event Handler
Remember: Serverless applications are event-driven. The events are supplied as message payloads from other services in your cloud platform. A database action (insert, update, delete), an HTTP/S request, and a messaging queue payload are all viable event sources (among others). The actual data from these sources will be embedded inside the event payload that will be delivered to your serverless function. I’ll need to define an entrypoint for events to be delivered to my Quarkus application, separate from the JAX-RS resource classes I have defined. To support Lambda event handling, I need the Quarkus Lambda plugin:
mvn quarkus:add-extension -Dextension=quarkus-amazon-lambda
Then I can add a Lambda handler class to my Quarkus project:
import java.util.logging.Logger;
import javax.inject.Inject;
import javax.inject.Named;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

@Named("http-lambda-handler")
public class LambdaHandlerImpl implements RequestHandler<HelloRequest, HelloResponse> { // (1)

    @Inject
    ExampleResource restResource; // (2)

    final Logger logger = Logger.getLogger(LambdaHandlerImpl.class.getName());

    @Override
    public HelloResponse handleRequest(HelloRequest input, Context context) { // (3)
        logger.info("Received serverless request: " + context.getAwsRequestId()
                + "; function version: " + context.getFunctionVersion());
        return restResource.hello(input);
    }
}
There’s not too much going on here:
  1. The AWS Java SDK provides the RequestHandler interface. Implementing this interface marks this class as a Lambda handler class. It is typesafe – I’m supplying the expected request and response classes as type parameters. The AWS SDK will extract the core message data from the Lambda event payload and cast it into the required types. There can be only one active handler per serverless Quarkus deployment. Having used the @Named annotation from CDI to name this Lambda handler bean class, I now need to configure the handler in application.properties:

     quarkus.lambda.handler=http-lambda-handler

     This tells Quarkus to mount the named bean as the Lambda event handler. You can then use profiles to set up different handlers, like %test.quarkus.lambda.handler=test-http-lambda-handler.

  2. Because I’ve baked a lot of my business logic into the ExampleResource JAX-RS REST resource class, I now need to inject it into this handler so I can reuse the business logic. This isn’t an ideal design, so let this be a lesson to you! Encapsulate your core business logic in reusable classes and patterns like the Data Access Object (DAO) or Command patterns.

  3. handleRequest is the method inherited from the RequestHandler interface. It supplies the Lambda event message payload, as well as some contextual information about the Lambda runtime and the function that was invoked.

This concludes the code changes necessary to support AWS lambda deployment. To be clear, none of this is native image deployment-specific; it’s all standard Lambda stuff here.

“Monolambda” Serverless Application

Because Quarkus currently supports just one Lambda handler per app, you might be wondering what to do when you have multiple API endpoints to support via REST. This is what the so-called “monolambda” approach solves: treat the single Lambda handler as an entrypoint or Facade for the rest of your application. From inside your handler, you are free to dispatch the Lambda event to any other part of your application, using any part of the event’s content and metadata to decide what to do with it (see the sketch below).
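Here’s a minimal sketch of that idea. The “action” discriminator field and the two injected service beans are hypothetical; substitute whatever your real event payload and business classes look like:
import javax.inject.Inject;
import javax.inject.Named;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

@Named("http-lambda-handler")
public class MonoLambdaHandler implements RequestHandler<HelloRequest, HelloResponse> {

    @Inject
    GreetingService greetingService;   // hypothetical business beans
    @Inject
    FarewellService farewellService;

    @Override
    public HelloResponse handleRequest(HelloRequest input, Context context) {
        // One entrypoint, many destinations: route on the event's content
        switch (input.getAction()) {
            case "greet":
                return greetingService.greet(input);
            case "farewell":
                return farewellService.sayGoodbye(input);
            default:
                throw new IllegalArgumentException("Unknown action: " + input.getAction());
        }
    }
}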

Tip

Be mindful of the Quarkus extensions, producers, and beans you define in your application. Quarkus spends some time during startup to clear unused components out of memory. This can be a couple of seconds, depending on how much clutter there is in your application. You can control this behavior by setting quarkus.arc.remove-unused-beans to none or false. Setting it to framework is also an option, so that only non-custom (framework) beans are removed.

To deploy your Quarkus project to Lambda, AWS mandates the following:
  1. The deployment unit must be named “bootstrap”, if it is going to be using the custom runtime (i.e., a native image).

  2. The deployment unit must be packaged as a zip file. You can also use a JAR if you’re deploying your app in JVM mode.

  3. The deployment unit can be packaged with what is called a Serverless Application Model (SAM) file, in YAML format. This file provides crucial deploy-time metadata about your serverless application to the Lambda platform.

  4. A LAMBDA_ROLE_ARN environment variable that corresponds to the lambda execution role that you have created in the Identity Access Manager, in the AWS console. For the purposes of this book, you can ignore this step – all of the samples here are locally executable.
When you add the quarkus-amazon-lambda extension to your project and build it with mvn package, Quarkus automatically
  • Creates two files named sam.jvm.yaml and sam.native.yaml.

  • Creates manage.sh and bootstrap-example.sh files containing helpful shell scripting functions for deploying and running your application in AWS Lambda.

  • Creates a “function.zip” file that is supposed to be the complete deployable kit. You should be able to straight up upload this file to Amazon’s S3 service and point a Lambda function at it.

What your function.zip contains depends on the compilation mode; what you’ll get inside for a JVM mode app is different for a native image app. The JVM mode kit is not interesting (or recommended) for serverless deployment, so I won’t go into detail about it. No, I’m here for that native business.

AWS requires that custom runtime serverless projects be supplied to lambda
  • With the executable named “bootstrap”

  • And bootstrap be delivered in a zip file named function.zip

And that’s what quarkus-amazon-lambda does. It will rename your deployable output to bootstrap and then add it to a function.zip file in the target directory. Additionally, you’ll find the utility shell scripts, sam.jvm.yaml, and sam.native.yaml in the same directory. Let’s check it out.

AWS Serverless Application Model
This is the pièce de résistance of all of this. The Serverless Application Model (SAM) is a framework that AWS provides for defining metadata for a serverless application. It’s mostly defined in a YAML file and consumed by different AWS tools to provision a serverless application. I’m not going to go into too much detail of the model – the whole thing is a robust schema of options. Instead, I’ll show you only what you need to know to get a simple Quarkus serverless deployment going. Here’s some SAM file content you can use:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: AWS Serverless Quarkus - com.apress.samples::lambda
Globals:
  Api:
    BinaryMediaTypes:
      - "*/*"
Resources:
  LambdaNativeFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: not.needed.for.provided.runtime
      Runtime: provided
      CodeUri: target/function.zip
      MemorySize: 128
      Policies: AWSLambdaBasicExecutionRole
      Timeout: 15
      ProvisionedConcurrencyConfig:
        ProvisionedConcurrentExecutions: 5
      Environment:
        Variables:
          DISABLE_SIGNAL_HANDLERS: true
A lot of this is boilerplate stuff that won’t vary too much from deployment to deployment; I’ve highlighted the salient bits:
  1. Handler: If this were a JVM mode serverless deployment, I would have io.quarkus.amazon.lambda.runtime.QuarkusStreamHandler::handleRequest in there instead of the dummy text I’ve placed there. The QuarkusStreamHandler class (and the handleRequest method in that class) is a dedicated Quarkus-provided class that’s required for AWS serverless deployment. For the native deployment scenario, any string will do. The field is mandatory, per AWS.

  2. Runtime: This is where I would ordinarily specify “java8” or “java11”, were this a JVM mode package. Native mode packaging requires the “provided” runtime, a.k.a. custom runtime.

  3. CodeUri: The path to the deployment package. This tells SAM where to find the deployment package/code to load for a serverless deployment. Since my deployment package is in the /target subdirectory of the Quarkus project, that’s what I’ve configured here. In a real AWS deployment, you would supply a path to an Amazon S3 bucket where you would have previously uploaded the zip file containing your Quarkus app.

  4. MemorySize: What’s the maximum amount of RAM I’d like AWS to allocate to my serverless app?

  5. ProvisionedConcurrencyConfig: This defines the minimum number of concurrent instances of this Lambda I’d like to run. When you think of Lambdas as threads, this option defines how many Lambda instances should be pooled and ready to serve requests. It’s a great choice for minimizing latency or variability in the latency. This is what AWS calls “Provisioned Concurrency.” It helps you blunt the effects of Lambda cold starts. The downside is that you’ll now always be paying for those lambdas, whether or not they’re running. Note that this isn’t a requirement – you can run a perfectly fine Lambda function without this feature. I’m including it here only for completeness, because AWS just introduced this feature in December 2019.

  6. Environment: This section of the SAM template defines environment variables that I expect the Lambda runtime to load into whatever OS it’s using. These variables are also available for consumption by my code. Here, I’ve defined only the DISABLE_SIGNAL_HANDLERS variable as recommended by the Quarkus team. Disabling signal handlers means that the native image will not respond to OS-level signals like SIGKILL and SIGABRT.

I simply save this template into a YAML file named sam.native.yaml and go about my business. Finally, I can choose to now deploy my app either to the AWS cloud or run it locally in a simulated Lambda runtime. I’ll take #2 please; otherwise, you’ll need to go sign up for an AWS account and configure stuff, which is out of the scope of this fine book. To the simulated environment!

AWS SAM CLI Deployment

With my Quarkus application outfitted for serverless deployment for the AWS Lambda platform, I have the choice of uploading my kit named “function.zip” to the AWS online console. I could also just bring the Lambda platform to my local machine for a simulated deployment.

The SAM command-line interface (CLI) is a portable serverless application toolkit provided by AWS. Use it to test, deploy, and manage your serverless applications both locally and in the AWS cloud.11 It’s packaged as a Docker image (of course); you already have Docker Desktop installed by now, don’t you? Download and install the CLI for your operating system and let’s crack on!

Once your installation is complete, you should be able to run the following command in a terminal window and get results:
sam --version

With a correctly packaged serverless Quarkus application, the SAM CLI will load and deploy my application, using the SAM template I supply. It’ll use the sam.native.yaml file as the definition to launch the app. I then need to trigger my app, simulating an actual Lambda event as it would be fired in the AWS cloud. The SAM CLI’s generate-event command produces sample payloads for different AWS services. Observe:

sam local generate-event apigateway aws-proxy
Running the preceding command will generate a sample JSON document representing what AWS’s API Gateway service would pass to your Lambda. API Gateway is a service that you can use as the front-facing entry point to your microservice – think of it as a web server provided by Amazon. Here’s a trimmed-down version of the generated document:
{
  "body": "This is the HTTP request body",
  "resource": "/{proxy+}",
  "path": "/path/to/resource",
  "httpMethod": "POST",
  "isBase64Encoded": true,
  "queryStringParameters": {
    "foo": "bar"
  },
  "multiValueQueryStringParameters": {
    "foo": [
      "bar"
    ]
  },
  "pathParameters": {
    "proxy": "/path/to/resource"
  },
  "headers": {
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
    "Accept-Encoding": "gzip, deflate, sdch",
    "Accept-Language": "en-US,en;q=0.8",
    "Cache-Control": "max-age=0",
    "Host": "1234567890.execute-api.us-east-1.amazonaws.com"
  },
  "multiValueHeaders": {
    "Accept": [
      "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8"
    ],
    "Accept-Encoding": [
      "gzip, deflate, sdch"
    ],
    "Accept-Language": [
      "en-US,en;q=0.8"
    ],
    "Cache-Control": [
      "max-age=0"
    ]
  },
  "requestContext": {
    "accountId": "123456789012",
    "resourceId": "123456",
    "stage": "prod",
    "requestId": "c6af9ac6-7b61-11e6-9a41-93e8deadbeef",
    "requestTime": "09/Apr/2015:12:34:56 +0000",
    "requestTimeEpoch": 1428582896000,
    "path": "/prod/path/to/resource",
    "resourcePath": "/{proxy+}",
    "httpMethod": "POST",
    "apiId": "1234567890",
    "protocol": "HTTP/1.1"
  }
}
The generated JSON document shown here is a sample Lambda event as it would arrive from the API Gateway service. The whole document is converted into a Java object and delivered to your code by the Lambda runtime. I’ll save this sample as a file named payload.json. All that’s left to do is to invoke my Lambda-packaged application using payload.json as the event. For that, I’ll run the following command from within my Quarkus project folder:
sam local invoke -t sam.native.yaml --event payload.json
Here’s what I’m doing with this command:
  1. sam local invoke executes a local invocation of the serverless application that’s…

  2. …defined in sam.native.yaml. The -t flag loads the template file for the SAM CLI. If I wanted to deploy my app in JVM mode, I’d use the generated sam.jvm.yaml instead.

  3. The --event flag loads the JSON payload from the file named payload.json, using it as the request event payload to pass to my Lambda function. This effectively treats the interaction like a REST service request. I have access to a lot of metadata in this event, so I could add conditional logic to inspect the URL or HTTP method that was invoked to generate this event.
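As an aside, the JVM mode alternative from step 2 would simply swap in the generated JVM template:

sam local invoke -t sam.jvm.yaml --event payload.json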
Here’s the result of my command execution:
[START RequestId: 4de58426-2477-1dc6-72f2-b5f4dc42b611 Version: $LATEST]
{"timestamp":"2020-04-12T12:49:56.99Z","sequence":633,"loggerClassName":"org.jboss.slf4j.
JBossLoggerAdapter","loggerName":"com.apress.samples.handlers.LambdaHandlerImpl","level":"INFO",
"message":"Received serverless request: 4de58426-2477-1dc6-72f2-b5f4dc42b611; function version: $LATEST","threadName":"Lambda Thread","threadId":15,"mdc":{},"ndc":"","hostName":"4241f4c86237","processName":
"NativeImageGeneratorRunner$JDK9Plus","processId":488}
[END RequestId: 4de58426-2477-1dc6-72f2-b5f4dc42b611]
[REPORT RequestId: 4de58426-2477-1dc6-72f2-b5f4dc42b611    Init Duration: 799.91 ms    Duration: 17.88 ms    Billed Duration: 100 ms    Memory Size: 128 MB    Max Memory Used: 58 MB]
{"responsContent":"Hello Quarkus Person, Welcome to AWS Lambda!"}
This command’s output shows me a few things that are specifically provided by the Lambda runtime:

  1. I can see the running version of my Lambda app as $LATEST, because I didn’t deliberately configure a version number.

  2. I can see how long it took for the Lambda runtime to start up, “Init Duration” – 799.91ms.

  3. It shows me how long my serverless function took to completely execute, “Duration” – 17.88ms.

  4. I can then see how much time I would be billed for, “Billed Duration” (were this running in the AWS cloud and not on my local machine) – 100ms. How come? The minimum billable duration in AWS is 100ms, regardless of how much less time your serverless function runs for.

  5. I see the maximum amount of RAM my app consumed at any point during its execution, “Max Memory Used” – 58MB.

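To put that “Billed Duration” in perspective, here’s some back-of-the-envelope math, assuming AWS’s published Lambda compute rate at the time of writing (roughly $0.0000166667 per GB-second) and ignoring the separate per-request charge:

0.125 GB (128 MB) x 0.1 s billed       = 0.0125 GB-seconds per invocation
0.0125 GB-s x $0.0000166667 per GB-s   ≈ $0.00000021 per invocation
x 1,000,000 invocations                ≈ $0.21

Pennies per million calls at this size – but notice that the 100ms minimum means my 17.88ms function is billed at more than five times its actual runtime, and that multiplier never goes away at scale.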
And that, ladies and gents, is how you deploy an AWS Lambda function in your local development environment. You can use the SAM CLI to deploy your lambda to an actual AWS environment as well; you just need to register for a free AWS account. Get a more comprehensive introduction to SAM starting with the wonderful AWS documentation (seriously, Oracle and AWS write some killer docs).
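If you do take the cloud route, the SAM CLI’s guided deploy walks you through packaging, uploading, and creating the stack interactively. Something along these lines should do it, though check sam deploy --help for your CLI version:

sam deploy --guided -t sam.native.yaml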

The model of deployment I’ve just walked through is the most flexible. My serverless Quarkus application can field events from any other AWS component – databases, message queues, CloudWatch, or anything. My application doesn’t expose an HTTP endpoint, but it is kitted out for flexibility. If I want to deploy my Lambda as a RESTful web application, complete with a REST endpoint that can be hit with HTTP, I’m going to need to get funky. Cue the bass!

Funqy Serverless Apps

Funqy – not a typo – is an effort by the Quarkus team to standardize the writing and deployment of serverless functions. With Funqy, you write a function once, and Quarkus can make it readily deployable to multiple different serverless platforms and deployment scenarios. So with code like
@Funq("generate-anagram")
public AnagramResponse getAnagram(AnagramRequest request) {
     ...
}
@Funq marks this method as a “function”. From this point, the only question I need to answer is “what platform do I want to deploy this function to?”. Here are some answers:
  • AWS Lambda? I just add the quarkus-funqy-amazon-lambda extension to expose getAnagram as a Lambda function. Additionally, I’ll configure quarkus.funqy.export=generate-anagram. Funqy and the Quarkus Lambda extension work together to manage the handler for receiving and unmarshalling Lambda events. Note, however, that you still get just one @Funq per deployable.

  • Azure Function? I’ll need the quarkus-azure-functions-http extension. The nice bit of this extension is that it allows me to expose multiple @Funq-decorated methods as proper REST endpoints, complete with URL and all.

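To make that concrete, here’s a minimal sketch of what a complete Funqy class could look like. The package name, the POJO shapes, and the string-reversal “logic” are all my own placeholders:

package com.apress.samples.funqy;

import io.quarkus.funqy.Funq;

public class AnagramFunction {

    // Hypothetical request/response POJOs; Funqy marshals them to and from JSON
    public static class AnagramRequest {
        public String phrase;
    }

    public static class AnagramResponse {
        public String anagram;
    }

    @Funq("generate-anagram")
    public AnagramResponse getAnagram(AnagramRequest request) {
        AnagramResponse response = new AnagramResponse();
        // Placeholder: reverse the phrase; a real anagram generator goes here
        response.anagram = new StringBuilder(request.phrase).reverse().toString();
        return response;
    }
}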
Funqy is still under development at the time of this writing, but watch this space! It’s an exciting direction for the platform as a whole.

AWS Serverless Success with Quarkus

If deploying your code serverlessly feels a little strange to you, it’s completely normal and you’re not alone. It requires a mindset shift that might take some time. In addition to all that I’ve already talked about here, bear these in mind:
  • There’s a hard time limit imposed on Lambda functions in AWS – 15 minutes at the time of this writing, with a much shorter default timeout. Whatever you do, your serverless app must complete its processing within that limit. This is not ideal for batch processes, so be careful what you try to execute in a lambda.

  • Static variables, class-level variables, and singletons are reused between invocations of a lambda function. So, get really cozy with the @ApplicationScoped and @Singleton annotations, as there could be some serious savings there; see the sketch after this list.

  • If you’re caching data inside your function (instead of inside an externally managed cache), know that that cache is isolated to that instance of your function. AWS supports up to 1,000 concurrent executions of your Lambda functions by default (an account-level quota that can be raised). There’s no guarantee that the same instance of your function will be invoked successively enough to make an internal cache worth it.

  • If you’re caching data inside your function, you’re spending RAM. Take that into account when sizing the RAM usage of your function in your sam.yaml file. Sure, a bunch of Strings won’t cost too much, but if you’re caching large objects like media content, size appropriately.

  • Consider using AWS’s Simple Storage Service (S3) to persist data between Lambda invocations. S3 offers cheap replication, security, accelerated delivery, and durability, which could help in certain storage and caching scenarios.

  • Brand new from AWS is Elastic File System (EFS) support for Lambda. When I learned about this, I was shaking with excitement. It gives Lambda functions a file system that’s shared between concurrent invocations of a function, with locking and multi-availability-zone access. This is poised to be a game changer for managing state, data, and configuration for Lambdas.

  • Avoid or minimize recursive invocations inside your function code. Unchecked recursion will not only cost you more financially – each invocation of a Lambda is billed – but your function could also simply run out of memory.

  • Treat your lambda function exactly like a function: don’t hold mutable state in it. Your function could be invoked once or possibly multiple times for the same request.
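To make the reuse point from the second bullet concrete, here’s a minimal sketch of a cache bean whose state survives warm invocations; the class name and the placeholder lookup are mine:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class ReferenceDataCache {

    // Field state survives across invocations handled by the same warm Lambda
    // instance, but is NOT shared between concurrent instances of the function
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    public String lookup(String key) {
        // Pay the expensive load only when the entry isn't already cached
        return cache.computeIfAbsent(key, this::loadFromSource);
    }

    private String loadFromSource(String key) {
        // Placeholder for an expensive fetch (database, S3, HTTP call, ...)
        return "value-for-" + key;
    }
}

On a cold start, the first lookup pays the load cost; subsequent requests routed to the same warm instance get the cached value for free.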
