Message brokers as an elastic, reliable layer for message transfer

Fortunately, the Reactive Manifesto offers a solution to the problems related to server-side and client-side balancing:

"Employing explicit message-passing enables load management, elasticity, and flow control by shaping and monitoring the message queues in the system and applying back-pressure when necessary."

This statement can be interpreted as a recommendation to employ an independent message broker for transferring messages. Consider the following diagram:

Diagram 8.6. Example of load balancing with message queue as a service

In the preceding diagram, the numbered points mean the following:

  1. These are the caller services. As we can see here, a caller service only knows the location of the message queue and the recipient's service name, which decouples the caller from the actual target service instance. This communication model is similar to what we have with server-side balancing. However, one significant difference here is the asynchronous nature of the communication between the caller and the final recipient. Here, we do not have to keep the connection open while the request is being processed.
  2. This is the outgoing message representation. In this example, an outgoing message may hold the recipient's service name and the message correlation ID (see the caller-side sketch after this list).
  3. This is the representation of a message queue. Here, the message queue works as an independent service and allows callers to send messages to the group of Service C instances so that any available instance may process them.
  4. These are the recipient's service instances. Each Service C instance has roughly the same average load because each worker controls backpressure by signaling its demand (6), so the message queue only sends incoming messages (5) that the worker has asked for.
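To make points 1 and 2 more concrete, here is a minimal caller-side sketch. It is not taken from the diagram itself; it assumes a RabbitMQ broker running on localhost and a queue named service-c, and the correlation ID value and payload are illustrative placeholders:

import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

import java.nio.charset.StandardCharsets;
import java.util.UUID;

public class CallerService {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");                        // location of the message queue

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            // The caller only knows the queue (the recipient's service name), not the instances.
            channel.queueDeclare("service-c", true, false, false, null);

            AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
                    .correlationId(UUID.randomUUID().toString()) // message correlation ID (point 2)
                    .build();

            // Fire-and-forget: no connection is held open while the request is being processed.
            channel.basicPublish("", "service-c", props,
                    "process this request".getBytes(StandardCharsets.UTF_8));
        }
    }
}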

First of all, all requests are sent over the message queue, which may then dispatch them to an available worker. Moreover, the message queue may keep a message persisted until one of the workers asks for new messages. In this way, the message queue knows how many interested parties there are in the system and can manage the load based on that information. In turn, each worker manages backpressure internally and signals demand according to its own capacity. Just by monitoring the number of pending messages, we can increase the number of active workers. Likewise, by watching the amount of outstanding worker demand, we can scale down dormant workers.
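On the worker side, the demand from point 6 can be expressed as a bounded prefetch. The following sketch again assumes a local RabbitMQ broker and the hypothetical service-c queue; it limits the number of unacknowledged messages so that the broker never pushes more work than this instance has asked for:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

import java.nio.charset.StandardCharsets;

public class ServiceCWorker {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");

        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();
        channel.queueDeclare("service-c", true, false, false, null);

        // Demand: at most 10 unacknowledged messages in flight for this worker.
        channel.basicQos(10);

        DeliverCallback onMessage = (consumerTag, delivery) -> {
            String body = new String(delivery.getBody(), StandardCharsets.UTF_8);
            // ... process the message ...
            // Acknowledging frees one slot of demand, so the queue may send the next message.
            channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        };

        // autoAck = false: the broker respects the prefetch limit set above.
        channel.basicConsume("service-c", false, onMessage, consumerTag -> { });
    }
}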

Although a message queue solves the problem of client-side load balancing, it might seem that we are falling back to a solution very similar to the server-side load balancing that we had earlier, and that the message queue might become a hotspot in the system. However, this is not true. First of all, the communication model here is a bit different. Instead of searching for available services and deciding which instance to send the request to, the message queue simply puts the incoming message into the queue. Then, when a worker declares its intent to receive messages, the enqueued messages are transferred. Therefore, there are two separate, possibly independent stages here (sketched after the following list). These are as follows:

  • Receiving the messages and putting them into the queue (which may be very fast)
  • Transferring data when consumers show demand
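The following single-threaded sketch is not production code and deliberately ignores concurrency and persistence; it only makes the separation of the two stages explicit. The enqueue() stage merely appends a message, while delivery happens only against demand that a worker has previously requested:

import java.util.ArrayDeque;
import java.util.Queue;
import java.util.function.Consumer;

final class DemandQueue<T> {
    private final Queue<T> buffer = new ArrayDeque<>();
    private long demand = 0;
    private Consumer<T> worker;

    void subscribe(Consumer<T> worker) {
        this.worker = worker;
    }

    // Stage 1: receiving a message is just an O(1) append, so it stays fast.
    void enqueue(T message) {
        buffer.add(message);
        drain();
    }

    // Stage 2: a worker declares how many messages it can handle right now.
    void request(long n) {
        demand += n;
        drain();
    }

    private void drain() {
        while (demand > 0 && worker != null && !buffer.isEmpty()) {
            demand--;
            worker.accept(buffer.poll());
        }
    }
}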

On the other hand, we may replicate the message queue for each group of recipients. In that way, we can improve the system's scalability and avoid turning the queue itself into a bottleneck. Consider the following diagram:

Diagram 8.7. Example of elasticity with message queues as a service

Each section of the numbered diagram is explained as follows:

  1. This represents the message queue with data replication enabled. In this example, we have a few replicated message queues, each dedicated to a group of service instances.
  2. This represents the state synchronization between replicas for the same group of recipients.
  3. This represents possible load balancing (for example, client-side balancing) between replicas.

Here, we have a queue per recipient group and a replication set for each queue in that group. However, the load may vary from group to group, so one group may be overloaded while another simply lies dormant without any work, which may be wasteful. Consequently, instead of having a dedicated message queue as a separate service per group, we might rely on a message broker that supports virtual queues. By doing that, we can reduce the cost of the infrastructure, since when the load on the system decreases, one message broker may be shared between different recipient groups. In turn, the message broker may be a reactive system as well. Consequently, the message broker can be elastic and resilient, and can share its internal state by employing asynchronous, non-blocking message passing. Consider the following diagram:
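As one way of illustrating a single shared broker hosting a replicated queue per recipient group, the following sketch declares quorum queues (RabbitMQ's replicated queue type) on one broker cluster. The group names are hypothetical and the broker location is assumed to be localhost:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

import java.util.List;
import java.util.Map;

public class SharedBrokerSetup {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            // One replicated (quorum) queue per recipient group,
            // all hosted by the same shared broker cluster.
            Map<String, Object> replicated = Map.of("x-queue-type", "quorum");
            for (String group : List.of("service-a", "service-b", "service-c")) {
                channel.queueDeclare(group, true, false, false, replicated);
            }
        }
    }
}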

Diagram 8.8. Elasticity with distributed message broker

In the previous diagram, the numbered points mean the following:

  1. This is the caller service with a partitioned client-side load balancer. In general, the message broker may use the previously mentioned techniques to organize the discovery of partitions and share that information with its clients.
  2. This is the representation of a message broker partition. In this example, each partition has a number of assigned recipients (topics). In turn, along with partitioning, each partition may also have a replica.
  3. This refers to the rebalancing of partitions. A message broker may employ an additional rebalancing mechanism, so that when new recipients or new nodes appear in the cluster, such a message broker can scale easily.
  4. This is an example of a recipient, which may listen to messages from different partitions (see the consumer sketch after this list).
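Apache Kafka is one broker built around exactly this model: a topic is split into partitions, partitions are replicated, and partition ownership is rebalanced across the members of a consumer group as recipients join or leave. Here is a minimal consumer sketch, assuming a local broker and a topic named requests (both names are placeholders):

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class ServiceCKafkaWorker {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "service-c"); // all Service C instances share one group
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // The broker assigns a subset of the topic's partitions to this instance
            // and rebalances that assignment whenever instances join or leave the group.
            consumer.subscribe(List.of("requests"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // ... process record.value() ...
                }
            }
        }
    }
}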

The preceding diagram depicts a possible design of the message broker as a system that can be a reliable backbone for the target application. As the diagram shows, a message broker may hold as many virtual queues as the system requires. Modern message brokers adopt state-sharing techniques, such as eventual consistency and message multicasting, and consequently achieve elasticity out of the box. A message broker can therefore be a reliable layer for asynchronous message transfer with backpressure support and replayability guarantees.

For example, a message broker's reliability may be achieved by employing an effective technique for message replication and persistence on fast storage. However, mileage may vary, since such brokers may be slower than brokers that do not persist messages, or than setups in which messages are sent peer-to-peer.
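To make this durability-versus-latency trade-off concrete, here is a hedged Kafka producer sketch: waiting for all in-sync replicas to acknowledge each record (acks=all) is what buys the reliability described above, at the cost of a higher publish latency than acks=0 or a direct peer-to-peer call. The topic, key, and broker address are placeholders:

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class DurablePublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Durability over latency: a send completes only after all in-sync replicas
        // have persisted the record, so a broker crash does not lose the message.
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("requests", "correlation-42", "process this request"));
            producer.flush();
        }
    }
}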

What does this mean for us? It means that in the case of a message broker crash, messages are not lost; once the messaging layer becomes available again, all undelivered messages can find their destinations.

In summary, we may conclude that the message broker technique improves the overall scalability of the system. In this case, we may build an elastic system easily just because the message broker can behave as a reactive system. Hence, communication is no longer a bottleneck.
