Chapter 7. Where Do We Go from Here?

We have covered a lot in this small book, but certainly not everything: we are just scratching the surface, and there is much more to consider in a microservices environment than we can cover here. In this last chapter, we'll very briefly discuss a few additional concepts you must consider. We'll leave it as an exercise for the reader to dig into more detail on each topic!

Configuration

Configuration is a very important part of any distributed system and becomes even more difficult with microservices. We need to find a good balance between configuration and immutable delivery, because we don't want to end up with snowflake services. For example, we'll need to be able to change logging levels, switch on features for A/B testing, configure database connections, or use secret keys or passwords. We saw in some of our examples how to configure our microservices using each of the three Java frameworks, but each framework handles configuration slightly differently. What if we have microservices written in Python, Scala, Golang, Node.js, and so on?

To manage configuration across technologies and within containers, we need to adopt an approach that works regardless of what's actually running in the container. In a Docker environment we can inject environment variables and have our application consume them. Kubernetes allows us to do that as well, and it's considered a good practice. Kubernetes also adds APIs for mounting Secrets, which let us safely decouple usernames, passwords, and private keys from our applications and inject them into the Linux container when needed. Kubernetes also recently added ConfigMaps, which are similar to Secrets in that application-level configuration can be managed and decoupled from the application's Docker image; like Secrets, they let us inject configuration via environment variables and/or files on the container's file system.

If an application can consume configuration files from the filesystem (which we saw with all three Java frameworks) or read environment variables, it can leverage this Kubernetes configuration functionality. Taking this approach, we don't have to set up additional configuration services and complex clients to consume them. Configuration for our microservices running inside containers (or even outside), regardless of technology, is now baked into the cluster-management infrastructure.
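
As a concrete illustration, here is a minimal sketch of a Java service reading its configuration the two ways Kubernetes can inject it: as an environment variable or as a file mounted from a ConfigMap or Secret volume. The variable name, the mount path, and the class itself are our own illustrative choices, not anything mandated by Kubernetes:

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class ConfigLoader {

        public static String databaseUrl() throws IOException {
            // Prefer an environment variable injected into the container;
            // this works the same under plain Docker and under Kubernetes.
            String fromEnv = System.getenv("DATABASE_URL");
            if (fromEnv != null && !fromEnv.isEmpty()) {
                return fromEnv;
            }

            // Fall back to a file mounted from a ConfigMap or Secret volume.
            Path mounted = Paths.get("/etc/config/database-url");
            if (Files.exists(mounted)) {
                return new String(Files.readAllBytes(mounted), StandardCharsets.UTF_8).trim();
            }

            throw new IllegalStateException("no database configuration found");
        }
    }

The application neither knows nor cares whether the value originated in a plain Docker environment variable, a ConfigMap, or a Secret, which is exactly what keeps this approach technology-neutral.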

Logging, Metrics, and Tracing

Without a doubt, many of the drawbacks of implementing a microservices architecture revolve around managing the services in terms of logging, metrics, and tracing. The more you break a system into individual parts, the more tooling and forethought you need to invest to see the big picture. When we run services at scale, especially assuming a model where things fail, we need a way to grab information about services and correlate it with other data (like metrics and tracing), regardless of whether the containers are still alive. There are a handful of approaches to consider when devising your logging, metrics, and tracing strategy:

  • Developers exposing their logs

  • Aggregation/centralization

  • Search and correlation

  • Visualization and charting

Kubernetes has add-ons to enable cluster-wide logging and metrics collection for microservices. Typical technologies for solving these issues include syslog, Fluentd, or Logstash for getting logs out of services and streaming them to a centralized aggregator. Some folks use messaging solutions to provide additional reliability for these logs if needed. Elasticsearch is an excellent choice for aggregating logs in a central, scalable search index; and if you layer Kibana on top, you get nice dashboards and search UIs. Other tools, like Prometheus, Zipkin, Grafana, Hawkular, Netflix Servo, and many others, should be considered as well.
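
Whatever the aggregation stack, correlating log lines across services is far easier if every line carries a shared request identifier. Here is a minimal sketch using SLF4J's MDC (Mapped Diagnostic Context); the MDC key, the class, and where the incoming ID comes from (for example, an HTTP header) are illustrative assumptions, and the logging backend must reference the key in its output pattern:

    import java.util.UUID;

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.slf4j.MDC;

    public class CorrelatedLogging {

        private static final Logger log = LoggerFactory.getLogger(CorrelatedLogging.class);

        public static void handleRequest(String incomingCorrelationId) {
            // Reuse the caller's ID if one arrived with the request;
            // otherwise mint a new one at the edge of the system.
            String correlationId = (incomingCorrelationId != null)
                    ? incomingCorrelationId
                    : UUID.randomUUID().toString();

            // Everything logged on this thread now carries the ID, provided
            // the backend's pattern includes it (e.g., %X{correlationId} in Logback).
            MDC.put("correlationId", correlationId);
            try {
                log.info("processing request");
            } finally {
                // Clean up so pooled threads don't leak IDs between requests.
                MDC.remove("correlationId");
            }
        }
    }

With the ID on every line, a single search in Kibana can reconstruct one request's path across all of the services it touched, even after the originating containers are gone.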

Continuous Delivery

Deploying microservices with immutable images, as discussed earlier in Chapter 5, is paramount. When we have many more, smaller services than before, our existing manual processes will not scale. Moreover, with each team owning and operating its own microservices, we need a way for teams to make immutable delivery a reality without bottlenecks and human error. Once we release our microservices, we need insight and feedback about their usage to help drive further change. As business requests change, and as we get more feedback loops into the system, we will be doing more releases, more often. To make this a reality, we need a capable software-delivery pipeline. This pipeline may be composed of multiple subpipelines with gates and promotion steps, but ideally we want to automate the build, test, and deploy mechanics as much as possible.

Tools like Docker and Kubernetes also give us built-in capabilities for rolling upgrades, blue-green deployments, canary releases, and other deployment strategies. Obviously these tools are not required to deploy in this manner (places like Amazon and Netflix have done it for years without Linux containers), but the advent of containers does give us the isolation and immutability that make this easier. You can use CI/CD tooling like Jenkins and Jenkins Pipeline in conjunction with Kubernetes to build out flexible yet powerful build and deployment pipelines. Take a look at the Fabric8 and OpenShift projects for more details on an implementation of CI/CD with Kubernetes based on Jenkins Pipeline.
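
For instance, a rolling upgrade of a Kubernetes Deployment can be driven with a few kubectl commands, which are also the kind of commands a pipeline step might run; the deployment and image names below are placeholders of our own choosing:

    # Point the Deployment at a new immutable image; Kubernetes swaps
    # pods in incrementally rather than all at once
    kubectl set image deployment/hello-service hello-service=example/hello-service:v2

    # Watch the rollout until the new version is fully live
    kubectl rollout status deployment/hello-service

    # If health checks or metrics look bad, revert to the previous version
    kubectl rollout undo deployment/hello-service

Because each release is a distinct immutable image, rolling back is just pointing the Deployment at the previous image rather than rebuilding anything.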

Summary

This book was meant as a hands-on, step-by-step guide for getting started with some popular Java frameworks to build distributed systems following a microservices approach. As we discussed in the opening chapter, microservices are not a technology-only solution. People are the most important part of a complex system (a business), and to scale and stay agile, you must consider scaling the organizational structure as well as the technology systems involved.

After building microservices with any of the Java frameworks we discussed, we need to build, deploy, and manage them. Doing this at scale with our traditional techniques and primitives is overly complex and costly. We can turn to newer technology like Docker and Kubernetes to help us build, deploy, and operate services following best practices like immutable delivery.

When getting started with microservices built and deployed in Docker and managed by Kubernetes, it helps to have a local environment for development purposes. For this we looked at the Red Hat Container Development Kit, which is a small, local VM running Red Hat OpenShift inside a free edition of Red Hat Enterprise Linux (RHEL). OpenShift provides a production-ready Kubernetes distribution, and RHEL is a popular, secure, supported operating system for running production workloads. This allows us to develop applications using the same technologies that will run in production, and to take advantage of the application packaging and portability provided by Linux containers.

Lastly, we touched on a few additional important concepts to keep in mind, like configuration; logging, metrics, and tracing; and continuous, automated delivery. We didn't touch on security, self-service, and countless other topics, but make no mistake: they are very much a part of the microservices story.

We hope you’ve found this book useful. Please follow @openshift, @kubernetesio, @fabric8io, @christianposta, and @RedHatNews for more information, and take a look at the source code repository.
