Chapter 8. Where Do We Go from Here?

We have covered a lot in this report, but we certainly didn’t cover everything! We are just scratching the surface here, and there are many more things to consider in a microservices environment than we could explore in this report. In this final chapter, we’ll very briefly discuss a few additional concepts you should be aware of, and we’ll leave it as an exercise for the reader to dig into each one in more detail!

Configuration

Configuration is a very important part of any distributed system, and it becomes even more difficult with microservices. We need to find a good balance between configuration and immutable delivery because we don’t want to end up with snowflake services. For example, we’ll need to be able to change logging, switch on features for A/B testing, configure database connections, and use secret keys or passwords. We saw in some of our examples how to configure our microservices using each of the three Java frameworks presented here, but each framework does configuration slightly differently. What if we have microservices written in Python, Scala, Golang, Node.js, etc.?

To be able to manage configuration across technologies and within containers, we need to adopt an approach that works regardless of what’s actually running in the containers. In a Docker environment, we can inject environment variables and allow our application to consume them. Kubernetes allows us to do that as well, and considers it a good practice. Kubernetes also includes APIs for mounting Secrets, which allow us to safely decouple usernames, passwords, and private keys from our applications and inject them into the Linux container when needed. Kubernetes also recently added ConfigMaps, which are very similar to Secrets in that they allow application-level configuration to be managed and decoupled from the application’s Docker image; they too allow us to inject configuration via environment variables and/or files on the container’s filesystem. If an application can consume configuration files from the filesystem (which we saw with all three Java frameworks) or read environment variables, it can leverage this Kubernetes configuration functionality. Taking this approach, we don’t have to set up additional configuration services and complex clients for consuming it. Configuration for our microservices running inside containers (or even outside them), regardless of technology, is now baked into the cluster management infrastructure.
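
As a minimal sketch of this approach, the plain Java class below resolves a setting from an injected environment variable first, then falls back to a file mounted into the container (for example, from a Secret or ConfigMap), and finally to a default. The environment-variable name, mount path, and class name are illustrative assumptions, not APIs from the frameworks covered earlier:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class ConfigLoader {

        // Resolve a setting from an environment variable first (for example,
        // one injected from a ConfigMap), then from a file mounted into the
        // container (for example, from a Secret), then from a default value.
        static String resolve(String envVar, String mountedFile, String defaultValue) {
            String fromEnv = System.getenv(envVar);
            if (fromEnv != null && !fromEnv.isEmpty()) {
                return fromEnv;
            }
            Path path = Paths.get(mountedFile);
            if (Files.isReadable(path)) {
                try {
                    return new String(Files.readAllBytes(path)).trim();
                } catch (IOException ignored) {
                    // Unreadable file: fall through to the default value.
                }
            }
            return defaultValue;
        }

        public static void main(String[] args) {
            // "DB_PASSWORD" and "/etc/secrets/db-password" are hypothetical
            // names chosen for this example.
            String dbPassword = resolve("DB_PASSWORD", "/etc/secrets/db-password", "changeme");
            System.out.println("Resolved a database password of " + dbPassword.length() + " characters");
        }
    }

Because the lookup relies only on environment variables and the filesystem, the same immutable image runs unchanged from development through production; only the values injected by the cluster differ.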

Logging and Metrics

Without a doubt, a lot of the drawbacks to implementing a microservices architecture revolve around management of the services in terms of logging, metrics, and tracing. The more you break a system into individual parts, the more tooling, forethought, and insight you need to see the big picture. When you run services at scale, especially assuming a model where things fail, you need a way to grab information about services and correlate that with other data (like metrics and tracing), regardless of whether the containers are still alive. There are a handful of approaches to consider when devising your logging and metrics strategy:

  • Developers exposing their logs

  • Aggregation/centralization

  • Searching and correlating

  • Visualizing and charting

Kubernetes has add-ons to enable cluster-wide logging and metrics collection for microservices. Typical technologies for solving these issues include syslog, Fluentd, or Logstash for getting logs out of services and streaming them to a centralized aggregator. Some folks use messaging solutions to provide some reliability for these logs if needed. Elasticsearch is an excellent choice for aggregating logs in a central, scalable, searchable index, and if you layer Kibana on top of it, you get nice dashboards and search UIs. Other tools, like Prometheus, Jaeger, Grafana, Hawkular, Netflix Servo, and many others, should be considered as well.
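
To make the first of those bullets concrete, here is a minimal sketch assuming SLF4J with a binding such as Logback on the classpath. The service logs to stdout, where Docker and Kubernetes can collect the stream, and tags each entry with a correlation ID via the MDC so a centralized index can group the log lines for a single request across services. The class and method names are hypothetical:

    import java.util.UUID;

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.slf4j.MDC;

    public class OrderResource {

        private static final Logger log = LoggerFactory.getLogger(OrderResource.class);

        // Tag every log line for this request with a correlation ID. In a
        // real system, the ID would usually come from an incoming header so
        // it is shared across services; generating one here is a
        // simplification for the example.
        public void placeOrder(String orderId) {
            MDC.put("correlationId", UUID.randomUUID().toString());
            try {
                log.info("Placing order {}", orderId);
                // ... business logic ...
                log.info("Order {} placed", orderId);
            } finally {
                MDC.remove("correlationId");
            }
        }
    }

With a pattern layout that includes %X{correlationId}, the ID appears on every log line and becomes a searchable field once the logs reach the aggregator.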

Continuous Delivery

Deploying microservices with immutable images, as discussed earlier in Chapter 5, is paramount. When we have many more (if smaller) services than before, our existing manual processes will not scale. Moreover, with each team owning and operating their own microservices, we need a way for teams to make immutable delivery a reality without bottlenecks and human error. Once we release our microservices, we need to have insight and feedback about their usage to help drive further change. As the business requests change, and as we get more feedback loops into the system, we will be doing more releases more often. To make this a reality, we need a capable software delivery pipeline. This pipeline may be composed of multiple subpipelines with gates and promotion steps, but ideally, we want to automate the build, test, and deploy mechanics as much as possible.

Tools like Docker and Kubernetes also give us built-in capabilities for implementing rolling upgrades, blue-green deployments, canary releases, and other deployment strategies. Obviously these tools are not required to deploy in this manner (companies like Amazon and Netflix have done it for years without Linux containers), but the advent of containers does give us the isolation and immutability that make this easier. You can use your CI/CD tooling, like Jenkins and Jenkins Pipeline, in conjunction with Kubernetes to build out flexible yet powerful build and deployment pipelines. Take a look at OpenShift for more details on an implementation of CI/CD with Kubernetes based on Jenkins Pipeline.

Summary

This report was meant as a hands-on, step-by-step guide for getting started with building distributed systems with some popular Java frameworks following a microservices approach. Microservices is not a technology-only solution, as we discussed in the opening chapter. People are the most important part of a complex system (a business), and to scale and stay agile, you must consider scaling the organizational structure as well as the technology systems involved.

After building microservices with whatever Java framework you choose, you need to build, deploy, and manage them. Doing this with traditional techniques and primitives is overly complex, costly, and does not scale. Fortunately, we can turn to new technologies like Docker and Kubernetes that can help us build, deploy, and operate at scale while following best practices like immutable delivery.

When getting started with microservices built and deployed in Docker and managed by Kubernetes, it helps to have a local environment used for development purposes. For this we recommend the Red Hat Container Development Kit, which is a small, local VM that has Red Hat OpenShift running inside a free edition of Red Hat Enterprise Linux (RHEL). OpenShift provides a production-ready Kubernetes distribution, and RHEL is a popular, secure, supported operating system for running production workloads. This allows you to develop applications using the same technologies that will be running in production and take advantage of the application packaging and portability provided by Linux containers.

We’ve touched on a few additional important concepts to keep in mind here, like configuration, logging, metrics, and continuous, automated delivery. We didn’t touch on security, self-service, and countless other topics, but make no mistake: they are very much a part of the microservices story.

We hope you’ve found this report useful. Please follow @openshift, @kubernetesio, @rhdevelopers, @rafabene, @christianposta, and @RedHat on Twitter for more information, and take a look at the source code repository.
