High availability pipelines

Previously, we spent most of our time working with socket-based communication between nodes in a cluster. This model makes intuitive sense to most people and has tooling built around it in almost every programming language, so it is the first tool that people transitioning their classic infrastructure to containers usually reach for. However, at large scales where you are dealing with pure data processing, it simply does not work well: when one stage exceeds its capacity, the resulting back-pressure spreads through the rest of the processing pipeline.

If you imagine each cluster service as a consecutive set of transformation steps, a socket-based system would loop through steps similar to these (a minimal sketch follows the list):

  • Opening a listening socket.
  • Looping forever doing the following:
    • Waiting for data on a socket from the previous stage.
    • Processing that data.
    • Sending the processed data to the next stage's socket.

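As a rough illustration of that loop, the following sketch implements a single stage over plain TCP sockets. The addresses, buffer size, and the process_data() transformation are all hypothetical placeholders, not taken from the original text.

```python
# A minimal sketch of one socket-based pipeline stage.
# LISTEN_ADDR, NEXT_STAGE_ADDR, and process_data() are assumed for illustration.
import socket

LISTEN_ADDR = ("0.0.0.0", 9000)         # where the previous stage sends data (assumed)
NEXT_STAGE_ADDR = ("next-stage", 9001)  # where this stage forwards results (assumed)


def process_data(data: bytes) -> bytes:
    """Placeholder transformation for this stage."""
    return data.upper()


def run_stage():
    # Open a listening socket.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(LISTEN_ADDR)
    server.listen()

    # Loop forever: wait for data, process it, forward it.
    while True:
        conn, _ = server.accept()
        with conn:
            data = conn.recv(65536)                # wait for data from the previous stage
            result = process_data(data)            # process that data
            with socket.create_connection(NEXT_STAGE_ADDR) as nxt:
                nxt.sendall(result)                # send to the next stage's socket


if __name__ == "__main__":
    run_stage()
```

Note that the final sendall() is exactly where this design breaks down: if the next stage cannot accept the data, this stage either blocks or fails.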
But what happens in that last step if the next stage is already at maximum capacity? Most socket-based systems will either throw an exception and fail the processing pipeline for that particular piece of data, or block execution and keep retrying to send the data to the next stage until it succeeds. We don't want to fail the pipeline, since the data itself was not in error, and we don't want our worker to sit idle waiting for the next stage to unblock. What we need is something that can hold the inputs to each stage in an ordered structure, so that the previous stage can move on to its own new set of inputs.
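To picture that decoupling, the sketch below places an ordered buffer between a fast producer and a slow consumer. It uses Python's in-process queue.Queue purely as an illustration of the idea; in a real cluster this role would be played by an external queueing system rather than an in-memory structure.

```python
# Illustrative only: an ordered (FIFO) buffer between two stages so the
# producer never blocks on a slow consumer.
import queue
import threading
import time

buffer = queue.Queue()  # ordered holding area between the two stages


def fast_producer():
    # The "previous stage": hands off work and immediately moves on.
    for i in range(10):
        item = f"payload-{i}"          # hypothetical piece of data
        buffer.put(item)               # returns immediately; no waiting on the next stage
        print(f"produced {item}")


def slow_consumer():
    # The "next stage": pulls work at its own pace.
    while True:
        item = buffer.get()
        time.sleep(0.5)                # simulate a stage running at capacity
        print(f"consumed {item}")
        buffer.task_done()


threading.Thread(target=slow_consumer, daemon=True).start()
fast_producer()
buffer.join()                          # wait until the consumer drains the buffer
```

The producer finishes its work long before the consumer does, yet nothing is lost and nothing fails: the buffer absorbs the difference in throughput between the stages.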
