At least once processing

The at least once processing paradigm saves the position of the last event received only after the event has actually been processed and its results persisted somewhere. If there is a failure and the consumer restarts, it reads from the last saved position and processes the old events again. Because there is no guarantee whether the events received before the failure were fully processed, partially processed, or not processed at all, fetching them again can duplicate events. The resulting behavior is that every event is processed at least once.

At least once processing is well suited to any application that updates an instantaneous ticker or gauge to show a current value, because reprocessing a duplicate event simply overwrites the gauge with the same result. Any cumulative sum, counter, or computation that depends on the accuracy of aggregations (sum, groupBy, and so on) does not fit this processing model, simply because duplicate events produce incorrect results.
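To make the distinction concrete, the following is a minimal, self-contained sketch (the event value 5.0 and the variable names are purely illustrative) showing how the same event, delivered twice after a restart, affects a gauge versus a counter:

    public class DuplicateEffect {
        public static void main(String[] args) {
            double gauge = 0.0;    // instantaneous value: overwritten by each event
            double counter = 0.0;  // cumulative sum: incremented by each event

            // The same event (value 5.0) is delivered twice after a restart.
            double[] delivered = {5.0, 5.0};
            for (double v : delivered) {
                gauge = v;        // idempotent: still 5.0 after the duplicate
                counter += v;     // not idempotent: 10.0 instead of 5.0
            }
            System.out.println("gauge = " + gauge + ", counter = " + counter);
        }
    }

Running this prints gauge = 5.0, counter = 10.0: the gauge absorbs the duplicate, while the counter double-counts it.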

The sequence of operations for the consumer is as follows:

  1. Save results
  2. Save offsets
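As an illustration of this sequence, here is a minimal sketch assuming a Kafka consumer (the text's use of "offsets" suggests Kafka, but the topic name events, the group id, and the saveResult sink are hypothetical placeholders, not part of the original):

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class AtLeastOnceConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "at-least-once-demo");
            // Disable auto-commit so offsets are saved only after processing.
            props.put("enable.auto.commit", "false");
            props.put("key.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("events"));
                while (true) {
                    ConsumerRecords<String, String> records =
                            consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        // 1. Save results: process and persist each event first.
                        saveResult(record.value());
                    }
                    // 2. Save offsets: commit only after results are persisted.
                    // A crash before this line means the batch is re-read and
                    // re-processed on restart -- hence "at least once".
                    consumer.commitSync();
                }
            }
        }

        // Hypothetical sink; stands in for any external store (DB, file, etc.).
        private static void saveResult(String value) {
            System.out.println("persisted: " + value);
        }
    }

The ordering is the whole point: committing offsets before persisting results would flip the guarantee to at most once.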

The following figure illustrates what happens if there is a failure and the consumer restarts. Since the events have already been processed but the offsets have not been saved, the consumer reads again from the previously saved offsets, causing duplicates. Event 0 is processed twice in the figure:

[Figure: after a failure and restart, the consumer re-reads from the last saved offset and Event 0 is processed twice]