Chapter 4. Putting Your Reactive Toolbox to Work

The previous chapter focused on how the techniques and tools in your reactive toolbox help you get the most work out of your compute platforms. These help you create an efficient, responsive service, but that’s not sufficient when you’re aiming to create a reactive system. To create a fully reactive system, you need to consider the messaging within and between your services, the infrastructure it runs on, and the integration of other capabilities.

Going from Services to Systems: Being Message Driven

Microservices in a reactive system work together to achieve responsiveness, resilience, and elasticity, and this is largely achieved through being message driven. There are two levels of “message driven-ness” that we look at now: intra-service, or how the components of a service communicate; and inter-service, or messaging between services, as illustrated in Figure 4-1.

Creating multiple instances is one way of achieving scale and resilience within a given service. Ideally, you could scale your microservice in a way that’s transparent to other parts of the system. For example, load balancing and routing between instances would be handled internally by the microservice, and additional instances would be spun up or down as needed in response to load and other external conditions. None of this is trivial, but it’s made possible by messaging between the elements that make up the microservice collective.

Messaging within and between services
Figure 4-1. Messaging within and between services

There are open source libraries that can help you. One example is Akka, which provides libraries that allow you to manage the components of your microservice, actor systems in this case, as a “system of systems.” Another open source library is Vert.x, which uses an event bus between components of your microservices (verticles) and supports both point-to-point and publish/subscribe (pub/sub) messaging.
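To make the two delivery modes concrete, here is a toy in-process event bus in plain Java. It is not the Vert.x or Akka API; the class and method names are invented for illustration, and the sketch only shows the difference between point-to-point and pub/sub delivery.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// A toy in-process event bus (illustrative, not the Vert.x or Akka API).
public class ToyEventBus {
    private final Map<String, List<Consumer<String>>> handlers = new ConcurrentHashMap<>();
    private final Map<String, Integer> roundRobin = new ConcurrentHashMap<>();

    public void subscribe(String address, Consumer<String> handler) {
        handlers.computeIfAbsent(address, a -> new CopyOnWriteArrayList<>()).add(handler);
    }

    // Publish/subscribe: every handler registered at this address gets the message.
    public void publish(String address, String message) {
        handlers.getOrDefault(address, List.of()).forEach(h -> h.accept(message));
    }

    // Point-to-point: exactly one handler gets it, chosen round-robin.
    public void send(String address, String message) {
        List<Consumer<String>> hs = handlers.getOrDefault(address, List.of());
        if (hs.isEmpty()) return; // "at most once": no receiver, message is dropped
        int next = roundRobin.merge(address, 1, Integer::sum);
        hs.get(next % hs.size()).accept(message);
    }
}
```

Note the `send` path silently drops a message when no handler is registered, which is exactly the “at most once” behavior the Caution below warns about.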

Caution

By default, both Akka and Vert.x support “best effort,” or “at most once,” message delivery. This acknowledges that “failures happen”: your application must be prepared to handle lost messages between the components of your service.

Messaging between services is a different matter, and durable messaging is a prerequisite for a microservice-based application, reactive or otherwise. This is where a pub/sub system is really useful. In a pub/sub integration pattern, a microservice producing an event (data) publishes it to an event bus, and services subscribed to that event bus take note and consume the event or data. Communication between microservices is completely asynchronous and location independent. It is important that the event bus is “durable” so that events persist long enough for subscribers to pick them up; in case of a failure, the message stream can be reconstructed with the events intact. Apache Kafka is one of the better-known platforms for implementing durable pub/sub messaging.
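The durability idea can be sketched in a few lines of plain Java: producers append events to a log that outlives any single process, and each consumer reads from its own offset, so a subscriber that was down during a failure can replay the stream and catch up. This is an illustration of the concept, not Kafka’s API; the class name and file-backed storage are invented for the sketch.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.List;

// A minimal file-backed event log illustrating durable pub/sub
// (not Kafka's API; real logs add partitioning, retention, and replication).
public class DurableLog {
    private final Path file;

    public DurableLog(Path file) throws IOException {
        this.file = file;
        if (Files.notExists(file)) Files.createFile(file);
    }

    // Producers append; the event survives process restarts.
    public void publish(String event) throws IOException {
        Files.writeString(file, event + System.lineSeparator(),
                StandardOpenOption.APPEND);
    }

    // Each consumer tracks its own offset and can replay from any point,
    // reconstructing the message stream after a failure.
    public List<String> readFrom(long offset) throws IOException {
        List<String> all = Files.readAllLines(file, StandardCharsets.UTF_8);
        return all.subList((int) Math.min(offset, all.size()), all.size());
    }
}
```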

Reactive streaming takes inter-service messaging to the next level. As explained in Chapter 2, reactive streams implementations give you a graceful way to handle unbounded streams of data across asynchronous boundaries with back pressure. This makes it perfect to integrate your reactive microservices with external systems and to implement messaging between services within your reactive system. There are several frameworks and libraries from which to choose that implement the Reactive Streams specification, including Akka Streams and Vert.x Reactive Streams.
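The Reactive Streams interfaces are also part of the JDK itself, as `java.util.concurrent.Flow`. The sketch below shows the back-pressure mechanism directly: the subscriber signals demand one element at a time, so the publisher can never overwhelm it. The class name is invented for illustration.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

// Back pressure with the JDK's Reactive Streams API (java.util.concurrent.Flow):
// the subscriber requests one element at a time, pulling data at its own pace.
public class OneAtATimeSubscriber implements Flow.Subscriber<Integer> {
    final List<Integer> received = new CopyOnWriteArrayList<>();
    final CountDownLatch done = new CountDownLatch(1);
    private Flow.Subscription subscription;

    @Override public void onSubscribe(Flow.Subscription s) {
        subscription = s;
        s.request(1);              // initial demand: one element
    }
    @Override public void onNext(Integer item) {
        received.add(item);        // process the element...
        subscription.request(1);   // ...then signal demand for the next one
    }
    @Override public void onError(Throwable t) { done.countDown(); }
    @Override public void onComplete() { done.countDown(); }

    public static void main(String[] args) throws InterruptedException {
        OneAtATimeSubscriber sub = new OneAtATimeSubscriber();
        try (SubmissionPublisher<Integer> pub = new SubmissionPublisher<>()) {
            pub.subscribe(sub);
            for (int i = 1; i <= 5; i++) pub.submit(i); // submit blocks if demand lags
        }
        sub.done.await();
        System.out.println(sub.received); // [1, 2, 3, 4, 5]
    }
}
```

Libraries such as Akka Streams build a much richer operator vocabulary on top of these same interfaces, but the demand-signaling contract is identical.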

What type of infrastructure will allow us to reap the full benefits of this message-driven approach to application architecture?

Distributed Infrastructure 

As the sheer quantity of data and the scale of connected devices, sessions, and transactions continue to rise exponentially, businesses have turned to distributed, cloud-based infrastructure to build applications that can process these huge volumes of data and support vast numbers of concurrent online users. Cloud native systems have become the norm for many applications. However, getting the most out of this compute platform entails writing highly concurrent, distributed software. That isn’t easy! It requires properly managing threads, implementing synchronization, preventing race conditions, dealing with persistence and state, scaling the application, and responding to failures. Fortunately, this is exactly what reactive systems were designed for. Reactive systems embrace distributed infrastructure, creating a consistent and responsive experience that works as expected even in the face of failure and unpredictable loads.

Orchestrated Cloud Infrastructure

How can you go about deploying and managing your reactive system in a cloud environment? The answer is containers and Kubernetes. Containerization—packaging up your microservices and all of their dependencies into lightweight packages that can run anywhere—is a basic tenet of building cloud native software, and Kubernetes has become the de facto open source standard for managing containerized applications in production.

Kubernetes is fundamentally a cluster orchestration system that brings “reactive systems” characteristics to container management. We touch on just a few basic Kubernetes concepts here, but there are plenty of good resources available.

Here are the basics in a nutshell:

Clusters and nodes

A Kubernetes cluster consists of a set of nodes (VMs or physical machines) on which Kubernetes services are running.

Pods

A pod is the fundamental unit of deployment in a Kubernetes cluster. It essentially wraps one or more app containers along with the necessary network and storage resources.

Controllers

Lastly, controllers are Kubernetes services that watch over the cluster and take corrective action when the current state of the cluster strays from the desired state.
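The controller idea, often called a reconciliation loop, can be sketched in a few lines: repeatedly compare the desired state with the current state and take corrective action until they match. This toy version is illustrative only; real controllers watch the Kubernetes API server and manage actual pods.

```java
// A toy reconciliation loop sketching what a Kubernetes controller does
// (illustrative; real controllers watch the API server and manage pods).
public class ToyController {
    int desiredReplicas;
    int currentReplicas;

    ToyController(int desired, int current) {
        this.desiredReplicas = desired;
        this.currentReplicas = current;
    }

    // One reconcile pass: close the gap between current and desired state.
    void reconcile() {
        while (currentReplicas < desiredReplicas) currentReplicas++; // "start a pod"
        while (currentReplicas > desiredReplicas) currentReplicas--; // "stop a pod"
    }
}
```

Because the loop always converges toward the declared desired state, the same mechanism handles both failure recovery (a pod dies, so current drops below desired) and scaling (desired changes, so the loop adjusts current).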

The cluster management capacity of Kubernetes gives you both resilience and elasticity at the infrastructure layer. Elasticity is achieved with Kubernetes autoscaling, which can dynamically adjust the number of nodes in your cluster as well as scale the number of pods running in your cluster based on workload. Automated pod recovery features enhance resilience by restarting failed pods or re-creating pods that have been deleted. An application created with reactive architectural principles will expand, contract, and redistribute itself with changes in the underlying infrastructure.

Running your reactive microservices application on a Kubernetes orchestrated infrastructure can provide multiple levels of resilience and elasticity.

Tip

A blog series on IBM Developer illustrates the concepts described in this section. The series includes code for a simple Akka application deployed to Kubernetes as well as a simple tool to visualize the interaction of the Kubernetes pods, Akka clusters, and Akka actors.

Reactive Meets Machine Learning

Now that we’ve covered the application architecture and infrastructure approach for creating a responsive, elastic, and resilient application, the next step in your journey to become a cognitive business is to instrument your application to take advantage of data-driven insights in real time. This means combining vast amounts of data with machine learning to make your reactive system “smarter” in ways that create value for your customers.

First, we need to (briefly) introduce the concepts of machine learning (ML) and ML models, given that these are the means of infusing your applications with intelligence. ML is a subset of artificial intelligence in which computers “learn” from exposure to data, being trained to find patterns in it. The result is a computational model that can respond with reasonable accuracy when presented with new data. Models can be developed for identification (such as image or voice recognition) or prediction (such as forecasting the weather).
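A deliberately tiny example makes the train-then-predict life cycle concrete: a nearest-centroid classifier learns one mean feature vector per label from historical examples, then labels new data by distance. Real models are vastly richer, but the shape of the cycle is the same; the class and its API are invented for this sketch.

```java
import java.util.HashMap;
import java.util.Map;

// A toy "model": a nearest-centroid classifier. Train on historical labeled
// data, then respond to new data points. (Illustrative only.)
public class CentroidModel {
    private final Map<String, double[]> centroids;

    private CentroidModel(Map<String, double[]> centroids) { this.centroids = centroids; }

    // "Training": compute the mean feature vector for each label.
    public static CentroidModel train(Map<String, double[][]> examples) {
        Map<String, double[]> c = new HashMap<>();
        examples.forEach((label, rows) -> {
            double[] mean = new double[rows[0].length];
            for (double[] row : rows)
                for (int i = 0; i < row.length; i++) mean[i] += row[i] / rows.length;
            c.put(label, mean);
        });
        return new CentroidModel(c);
    }

    // "Inference": return the label of the centroid nearest the new point.
    public String predict(double[] x) {
        String best = null;
        double bestDist = Double.MAX_VALUE;
        for (var e : centroids.entrySet()) {
            double d = 0;
            for (int i = 0; i < x.length; i++) d += Math.pow(x[i] - e.getValue()[i], 2);
            if (d < bestDist) { bestDist = d; best = e.getKey(); }
        }
        return best;
    }
}
```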

Figure 4-2 illustrates the basic elements and flow of an ML-enabled reactive system. It’s generalized to show devices as well as users as sources of data on which the models are based.

Let’s say you are shopping at a brick-and-mortar store for new running shoes and find a pair you like, but you don’t love the price. You pull out your mobile phone and open an app for an online retailer to see if it has the shoes at a better price. (Admit it. You’ve done this.) The online retailer will have modeled your buying behaviors and that of millions of others, so with the right supplemental data (for example, your location near a running shoe store), it can present you with an offer for a pair of great running shoes at an attractive price even before you begin your search in its app. You’re delighted.

It’s worth noting that ML-enriched applications can be created with virtually any application technology. However, beginning from within a reactive systems context gives you the ability to dynamically manage the entire ML model-serving life cycle: streaming data from a wide array of sources, retraining models, and automatically redeploying them in real time.

Figure 4-2. The ML-infused application flow begins in the lower right with the model development. The model is created and trained (1) using historical data, then deployed (2), and then made available to an application. Data arrives into the system from users (3a) and other sources (3b) and becomes the source data used by the model (4) to produce a prediction, or recommendation (5). This additional data (6) can be used to retrain the machine learning model to improve its accuracy.

Conclusions

Enterprises have been striving to create ultra-fast, ultra-responsive applications since the dawn of the internet. But, with the vast number of connected devices, huge quantities of data, and ever-growing number of consumers of our applications, traditional methods of trying to achieve this just don’t cut it. Reactive systems enable you to achieve this responsiveness through elasticity and resiliency in an autonomous, cost-effective manner—no longer are specialist teams needed to redesign and redeploy applications when they need to scale due to load changes; no longer do applications go down every time a new feature is introduced. Throughout this report we’ve listed the many benefits that reactive systems give your enterprise applications—in summary, the ability to design and build truly responsive, cognitive applications that manage themselves.

When Are Reactive Systems the Right Choice?

The right time to consider using reactive application architecture to transform your enterprise applications into reactive systems is when you begin caring about any of the cornerstones of the Reactive Manifesto—resiliency, elasticity, or responsiveness. If your application is dealing with vast volumes of data and you want your system to remain responsive and provide the same quality of service to every user regardless of changes in load, reactive architecture is definitely worth considering. Our old architecture patterns just weren’t designed to cope with the changing world of data we live in and the huge fluctuations in load on our systems.

Reactive isn’t for everyone, though, just as microservices are not the answer to modernizing every application. Before jumping in, consider whether your enterprise cares about maintaining responsiveness by having a more resilient and elastic system. If you do, maybe it’s time you had a go at creating your own reactive system.

How to Get Started

You’ve seen that there are some important decisions to make before diving into the design of your own reactive system. The first step in redesigning a traditional application as a reactive system is deciding how best to split up the application. Event storming1 combined with domain-driven design is a great technique for breaking down an application into its distinct business domains.

The next important decision is which programming framework to use. There are many different ways to implement a reactive system. We hope this overview has given you enough information and resources to decide which implementation is best for you. Each programming framework achieves concurrency, parallelism, resiliency, and messaging in a variety of ways and interacts with the underlying infrastructure differently. So, it’s important to fully understand the capabilities of the implementation you choose.

After you break down your application into its respective microservices and select your programming framework, it’s time to decide which reactive patterns are best for your system. There’s a whole laundry list of patterns that you can combine to build a reactive system. For example, Event Sourcing and Command Query Responsibility Segregation (CQRS) are often used together to optimize performance and scalability.
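The Event Sourcing/CQRS combination can be sketched in miniature: commands append immutable events to a log (the write side), while queries are served from a separate read model projected from those events (the read side). The class below is an invented illustration; production systems add persistent storage, snapshots, and asynchronous projection.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// A minimal Event Sourcing + CQRS sketch (illustrative only).
public class AccountService {
    record Event(String accountId, long amountCents) {}

    private final List<Event> eventLog = new ArrayList<>();     // write side
    private final Map<String, Long> balances = new HashMap<>(); // read side

    // Command: validate, then record the fact as an event.
    public void deposit(String accountId, long amountCents) {
        if (amountCents <= 0) throw new IllegalArgumentException("invalid amount");
        Event e = new Event(accountId, amountCents);
        eventLog.add(e);
        project(e); // synchronous here; often done asynchronously
    }

    private void project(Event e) {
        balances.merge(e.accountId(), e.amountCents(), Long::sum);
    }

    // Query: served entirely from the read model, never from the log.
    public long balance(String accountId) {
        return balances.getOrDefault(accountId, 0L);
    }

    // The read model is disposable: it can be rebuilt from the log at any time.
    public void rebuildReadModel() {
        balances.clear();
        eventLog.forEach(this::project);
    }
}
```

Because the read model is just a projection of the event log, it can be rebuilt after a failure or restructured for a new query pattern without touching the write side, which is where the performance and scalability benefits come from.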

We hope this book has given you a solid understanding of what reactive systems are and why they may just be the next step forward in the evolution of your own enterprise systems. We wish you the best as you improve the responsiveness, resiliency, and elasticity of your applications, providing better customer interactions.

1 Event storming brings together the IT, business, and service delivery teams in a collaborative workshop to model business processes. For an example, see https://oreil.ly/mr83f.
