Asynchronous processing with platform events

Platform events and the Event Bus offer an opportunity to execute background processing continuously, in real time, rather than relying on ad hoc workers or jobs triggered by a schedule or user action. This means there is less chance of processing hitting limits or being queued due to an interim build-up of unprocessed data or other jobs.

Earlier in this chapter, you ran the TestData.createVolumeData script that created some synthetic race data and inserted it in bulk into the RaceData__c object. Later, you ran a Batch Apex job to post-process that data and associate it with the Contestant__c records. Recall that Batch Apex jobs have limits on the number of concurrent jobs, which might place them in a queue if the customer is running other jobs, further delaying the post-processing of the data and its usefulness to the rest of the application and its users.

I am using a sequence diagram to show how the race data flows in real time from the driver's car into the system and results in updates to race statistics, such as fastest sector times. By using platform events, all of this happens continuously within a few seconds. Salesforce Event Bus is engineered to handle high levels of ingestion and has built-in retry and recovery features.
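
To make the ingestion step concrete, the following is a minimal sketch of how telemetry readings could be published to the Event Bus with EventBus.publish. The RaceTelemetry__e event is referenced later in this section; the field names used here (DriverId__c, Sector__c, Time__c) are illustrative assumptions rather than the sample application's actual schema:

```apex
// Sketch only: field names on RaceTelemetry__e are assumed for illustration
List<RaceTelemetry__e> telemetry = new List<RaceTelemetry__e>();
telemetry.add(new RaceTelemetry__e(
    DriverId__c = 'HAM44',   // assumed field: driver identifier
    Sector__c = 2,           // assumed field: sector number (1 to 3)
    Time__c = 28.445));      // assumed field: sector time in seconds

// Publish to the Event Bus; each SaveResult reports whether that event
// was successfully queued for publishing
List<Database.SaveResult> results = EventBus.publish(telemetry);
for (Database.SaveResult result : results) {
    if (!result.isSuccess()) {
        for (Database.Error error : result.getErrors()) {
            System.debug('Publish error: ' + error.getMessage());
        }
    }
}
```

Note that publishing is asynchronous: the call returns once the events are queued, and delivery to subscribers happens in the background via the Event Bus.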

What is a race sector? A lap is broken into three sectors, and timings are taken as each car passes the sector markers. During the race, fans and the teams monitor who completes each sector fastest.

The following sequence diagram shows how the RaceService.ingestTelemetry and RaceService.processTelemetry services are connected to two platform events. The first event occurs when the driver's car sends data to the RaceTelemetry__e event, and the second event occurs when the RaceData__c records are inserted into the system through a feature called Change Data Capture:

Using the publish and subscribe model to decompose processing enforces a separation of concerns, supports future extensibility, and helps to manage resources and associated limits, since each batch of events is handled within its own execution context. This is an example of applying event-driven architecture patterns to achieve scale and improve responsiveness. The following sections further elaborate on how this has been implemented through the sample code included with this chapter.
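
As a rough illustration of this decomposition, each event stream can be consumed by its own Apex trigger that delegates to the service layer. This is a sketch only; the trigger names and the parameters passed to the RaceService methods are assumptions rather than the chapter's actual sample code:

```apex
// Sketch only: trigger names and RaceService parameter types are assumed.
// Each trigger is defined in its own file, and each batch of events it
// receives runs in its own execution context with its own governor limits.

// Subscriber for the RaceTelemetry__e platform event sent by the car
trigger RaceTelemetrySubscriber on RaceTelemetry__e (after insert) {
    // Delegate raw telemetry to the service layer, which inserts
    // RaceData__c records in bulk
    RaceService.ingestTelemetry(Trigger.new);
}

// Subscriber for Change Data Capture events raised when RaceData__c
// records are inserted
trigger RaceDataChangeSubscriber on RaceData__ChangeEvent (after insert) {
    // Post-process the newly captured race data, for example, to update
    // fastest sector times on the Contestant__c records
    RaceService.processTelemetry(Trigger.new);
}
```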
