Chapter 6. Parallel Processing

In this chapter, we will cover:

  • Increasing message consumption through multiple endpoint consumers
  • Spreading the load within a route using a set of threads
  • Routing a request asynchronously
  • Using custom thread pools
  • Using thread pool profiles
  • Working with asynchronous APIs

Introduction

In this chapter, we will take a deeper look at using Camel's support for increasing throughput inside a single JVM by processing exchanges in parallel.

So far we have seen parallel processing mentioned in the context of a number of EIPs including Multicast, Splitter, and Aggregator. This chapter will introduce you to the ability to easily define processing phases in an ad hoc manner, in order to scale out your integrations as required.
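For example, the Splitter EIP can process each split fragment in parallel simply by enabling its parallelProcessing option. The following is a minimal Java DSL sketch; the direct:in and mock:out endpoint URIs are placeholders rather than part of the cookbook examples:

  import org.apache.camel.builder.RouteBuilder;

  public class ParallelSplitRoute extends RouteBuilder {
      @Override
      public void configure() throws Exception {
          // Each fragment produced by the Splitter is processed on a
          // thread from the Splitter's own thread pool.
          from("direct:in")
              .split(body()).parallelProcessing()
                  .to("mock:out")
              .end();
      }
  }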

Parallel processing is a useful tool as it allows you to do more work in a shorter space of time by distributing work across a set of worker threads.

Imagine that you have 100 messages, each requiring 0.1 seconds to execute. Using only one thread, this load should be processed in about 10 seconds. However, if you get 10 threads to work on this batch of messages, you might reasonably expect to process it in 1 second. That is the promise of parallelism.
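In Camel, spreading such a batch across a pool of worker threads can be as simple as adding a threads() step to a route. Here is a minimal sketch of that idea, assuming a hypothetical slowService bean that takes roughly 0.1 seconds per message; the endpoint names and pool size are illustrative only:

  import org.apache.camel.builder.RouteBuilder;

  public class WorkerPoolRoute extends RouteBuilder {
      @Override
      public void configure() throws Exception {
          // Hand each exchange off to a pool of 10 worker threads, so ten
          // 0.1 second invocations can run concurrently instead of serially.
          from("seda:incomingMessages")
              .threads(10)
              .to("bean:slowService?method=process");
      }
  }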

Parallelism is not a silver bullet for performance problems. It will not scale infinitely; it is unlikely that your JVM will be able to run thousands of threads productively at the same time. You will inevitably hit a limit somewhere along the line; at that point, adding more worker threads will decrease your throughput, as the overhead of servicing a large pool of threads cuts into the useful processing cycles of your CPU. You may also encounter contention problems as threads compete with each other for limited resources. What you do with each thread is as important as, if not more important than, the number of threads in use.

Camel gives you a set of tools to scale out your processing where needed. It is up to you to use those mechanisms to tune your application. Performance tuning is often seen as a dark art, but it does not need to be. The general process is:

  1. Test your application's performance by applying a message load to it through your favorite load-testing tool and measuring how long your application takes to process it.
  2. Make a single change that you hope will improve performance, and test again. If it improved things, keep going. If it made things worse, go back to what you had.

When tuning your application for performance, keep in mind what else might be going on inside your JVM in a production setting. Five different integrations that are perfectly tuned individually might suffer from poor performance when executed at the same time. This can be due to contention around the CPU, memory, I/O, or other external resources.

Tip

Always try to test an application as a whole after individual tunings to ensure that you have not introduced a problem.

A number of Camel architectural concepts are used throughout this chapter. There is a broader overview of Camel concepts in the Preface. Full details can be found on the Apache Camel website at http://camel.apache.org.

The code for this chapter is contained within the camel-cookbook-parallel-processing module of the examples.
