Load balancing across a number of endpoints

When you need the ability to distribute a sequence of messages between a predefined set of endpoints, the Load Balancer EIP is a good choice. This is useful for tasks such as web service load balancing at the application level, when a hardware load balancer is not available for use. This EIP allows you to plug in a number of strategies to define how the messages should be distributed amongst the endpoints.

This recipe will show you how to load balance (route) a message across a set of endpoints using a specified policy (for example, round robin).

Getting ready

The Java code for this recipe is located in the org.camelcookbook.routing.loadbalancer package. The Spring XML files are located under src/main/resources/META-INF/spring and prefixed with loadBalancer.

How to do it...

The following demonstrates how you might distribute messages using a round-robin strategy.

In the XML DSL, this routing logic is written as follows:

<route>
  <from uri="direct:start"/>
  <loadBalance>
    <roundRobin/>
    <to uri="mock:first"/>
    <to uri="mock:second"/>
    <to uri="mock:third"/>
  </loadBalance>
  <to uri="mock:out"/>
</route>

In the Java DSL, the same thing is expressed as:

from("direct:start")
  .loadBalance().roundRobin()
    .to("mock:first")
    .to("mock:second")
    .to("mock:third")
  .end()
  .to("mock:out");

How it works...

The Load Balancer EIP can be thought of as a processing phase that has a number of producer endpoints to choose from when a message is fed into it. It decides which endpoint should get the next message based on the provided load-balancing strategy.

The preceding example uses the simplest of the pre-defined strategies, round-robin. The first message goes to the first endpoint, the second to the second endpoint, the third to the third endpoint, the fourth back to the first endpoint, and so on. In that respect, it can be thought of as a stateful switch.
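
The "stateful switch" behavior can be sketched in a few lines of plain Java. This is an illustrative model of the strategy, not Camel's actual implementation; the class and method names here are invented for the example:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch of the state kept by a round-robin strategy.
public class RoundRobinSketch {
    private final List<String> endpoints;
    private final AtomicInteger counter = new AtomicInteger(-1);

    public RoundRobinSketch(List<String> endpoints) {
        this.endpoints = endpoints;
    }

    // Each call advances the "stateful switch" to the next endpoint,
    // wrapping back to the first after the last one.
    public String next() {
        return endpoints.get(counter.incrementAndGet() % endpoints.size());
    }

    public static void main(String[] args) {
        RoundRobinSketch lb =
            new RoundRobinSketch(List.of("mock:first", "mock:second", "mock:third"));
        for (int i = 0; i < 4; i++) {
            System.out.println(lb.next());
        }
        // Prints first, second, third, then first again.
    }
}
```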

There are a number of other strategies that can be used out of the box.

The random strategy is the most straightforward. It behaves as the name suggests, picking one of the endpoints at random for each message. In the Java DSL, it is used as follows:

.loadBalance().random()

The same thing written in the XML DSL appears as:

<loadBalance>
  <random/>
  <!-- ... -->
</loadBalance>

The sticky load-balancing strategy works similarly to round-robin in that it distributes messages evenly between endpoints; however, all messages that share the same result for a provided Expression will be routed to the same endpoint. You might use this strategy to ensure that the same server handles all processing requests for a given customer.

The following demonstrates its use in the Java DSL:

.loadBalance().sticky(header("customerId"))

In the XML DSL, it is expressed slightly differently:

<loadBalance>
  <sticky>
    <correlationExpression>
      <header>customerId</header>
    </correlationExpression>
  </sticky>
  <!-- ... -->
</loadBalance>

The sticky load balancer evaluates the correlation Expression against each message, and hashes the result to determine which endpoint the message belongs to. Messages that produce the same value are thereby bucketed to the same load-balanced endpoint.
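
This bucketing can be sketched in plain Java as follows. The class is a hypothetical model of the idea, not Camel's internal code:

```java
import java.util.List;

// Illustrative sketch of sticky selection: hash the correlation
// value into a bucket so the same value always picks the same endpoint.
public class StickySketch {
    private final List<String> endpoints;

    public StickySketch(List<String> endpoints) {
        this.endpoints = endpoints;
    }

    // The same correlation key (e.g. the customerId header value)
    // always hashes to the same bucket, and so to the same endpoint.
    public String choose(Object correlationKey) {
        int bucket = Math.abs(correlationKey.hashCode() % endpoints.size());
        return endpoints.get(bucket);
    }

    public static void main(String[] args) {
        StickySketch lb = new StickySketch(List.of("mock:first", "mock:second"));
        // Repeated requests for the same customer land on the same endpoint.
        System.out.println(lb.choose("customer-42").equals(lb.choose("customer-42")));
    }
}
```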

The failover strategy allows you to define a set of steps that will be tried in sequence until one of them succeeds, or the maximum number of retries is reached. By default, the steps are attempted in top-down order each time, but you can configure the pattern to instead move through the endpoints in a round-robin fashion.

In the Java DSL, this is written as:

.loadBalance()
  .failover(-1,    // max retry attempts 
            false, // whether the current route's error handler 
                   // should come into play
            true)  // round-robin
  .to("direct:first")
  .to("direct:second")
.end()

The XML DSL version is a lot simpler to read:

<loadBalance>
  <failover roundRobin="true"/>
  <to uri="direct:first"/>
  <to uri="direct:second"/>
</loadBalance>
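
The retry loop at the heart of the default (top-down) failover behavior can be sketched in plain Java. The class name, the send method, and the use of Function to stand in for an endpoint are all invented for this illustration:

```java
import java.util.List;
import java.util.function.Function;

// Hand-rolled sketch of a failover loop; not Camel internals.
public class FailoverSketch {
    // Tries each endpoint in top-down order, moving on when an
    // attempt throws, until one succeeds or all have been tried.
    public static String send(List<Function<String, String>> endpoints,
                              String message) {
        RuntimeException lastFailure = null;
        for (Function<String, String> endpoint : endpoints) {
            try {
                return endpoint.apply(message);
            } catch (RuntimeException e) {
                lastFailure = e; // remember, and fall through to the next endpoint
            }
        }
        throw lastFailure; // every endpoint failed
    }

    public static void main(String[] args) {
        Function<String, String> broken =
            msg -> { throw new IllegalStateException("down"); };
        Function<String, String> healthy = msg -> "handled: " + msg;
        // The first endpoint fails, so the message fails over to the second.
        System.out.println(send(List.of(broken, healthy), "order-1"));
    }
}
```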

It is also possible to configure the failover to occur only on certain exceptions, with all other exceptions falling back to the route's regular exception handling.

In the Java DSL, you express this as follows:

.failover(IllegalStateException.class)

In the XML DSL, this is written as:

<failover>
  <exception>java.lang.IllegalStateException</exception>
</failover>

It is also possible to use weighted load balancing strategies to favor certain steps over others. You may want to do this if you have a set of servers that are being integrated, some of which are more capable than others. The general idea is to provide a list of weightings that are used as a ratio.

In the XML DSL, this strategy is used as follows:

<loadBalance>
  <weighted roundRobin="true" distributionRatio="4,2,1"/>
  <to uri="mock:first"/>
  <to uri="mock:second"/>
  <to uri="mock:third"/>
</loadBalance>

In the Java DSL, the same thing is written as:

.loadBalance().weighted(true, // true = round-robin,
                              // false = random
                        "4,2,1") // distribution ratio
  .to("mock:first")
  .to("mock:second")
  .to("mock:third")
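
One simple way to model the "4,2,1" ratio is to expand it into a repeating schedule of endpoint indices. This is only a sketch of the idea (Camel's real implementation interleaves its choices differently, though the overall ratio is the same), and the class name is invented:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative weighted round-robin: each endpoint index is repeated
// in the schedule according to its weight, then cycled through.
public class WeightedSketch {
    private final List<Integer> schedule = new ArrayList<>();
    private int position = 0;

    public WeightedSketch(int... weights) {
        for (int i = 0; i < weights.length; i++) {
            for (int j = 0; j < weights[i]; j++) {
                schedule.add(i); // endpoint index repeated per its weight
            }
        }
    }

    public int nextEndpointIndex() {
        int index = schedule.get(position);
        position = (position + 1) % schedule.size();
        return index;
    }

    public static void main(String[] args) {
        WeightedSketch lb = new WeightedSketch(4, 2, 1);
        int[] counts = new int[3];
        for (int i = 0; i < 7; i++) {
            counts[lb.nextEndpointIndex()]++;
        }
        // Over one full cycle the distribution matches the 4:2:1 ratio.
        System.out.printf("first=%d second=%d third=%d%n",
                          counts[0], counts[1], counts[2]);
        // first=4 second=2 third=1
    }
}
```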

A topic strategy also exists, which behaves in a similar manner to the Multicast EIP, but without the full options of that pattern.

There's more...

If one of the pre-defined strategies does not suit your use case, it is possible to define your own custom load balancing strategy and use it within this EIP. To do this, you would extend the abstract org.apache.camel.processor.loadbalancer.LoadBalancerSupport class (see the Camel implementations for details), and provide it to the Load Balancer EIP.

In the XML DSL, the ref attribute refers to a bean defined in your Camel context:

<loadBalance>
  <custom ref="myCustomLoadBalancingStrategy"/>
  <!-- ... -->
</loadBalance>

In the Java DSL, you can pass the instance directly into the custom statement:

.loadBalance().custom(new MyCustomLoadBalancingStrategy())

See also
