How it works...

Some microservices are built to run independently in the background, with the objective of capturing real-time, unbiased, and optimal results without user intervention or errors. These kinds of microservices are typically designed for FTP transfers, data loading, data rendition, report generation, data warehousing, and archiving and software auditing.

Spring Batch is not new to Spring; it is still used in many current applications that require data spooling. The preceding recipe shows the step-by-step process of building a complete scheduled batch process, one that runs a set of executions repeatedly after a fixed period of time. A Spring Batch execution is essentially about reading items from a source medium and transferring them to another medium, with or without alterations along the way. The API provides an ItemReader<T> interface that allows items to be read from a text file, CSV file, XML file, or database schema. Once registered in the Spring container, the ItemReader<T> bean, normally a singleton, should be annotated with @StepScope so that a new reader instance is created for each step execution and a fresh set of items is sifted from the source periodically. This annotation is essential here, given that these sources may be updated in real time every once in a while.
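A step-scoped reader bean along these lines could look as follows. This is a minimal sketch, not the recipe's actual code: the Department POJO, the departments.csv file, and the field names deptId and deptName are all assumptions for illustration.

```java
@Bean
@StepScope
public FlatFileItemReader<Department> deptReader() {
    // Tokenize each CSV line into named fields (field names are hypothetical)
    DelimitedLineTokenizer tokenizer = new DelimitedLineTokenizer();
    tokenizer.setNames(new String[] { "deptId", "deptName" });

    // Map the tokenized fields onto the hypothetical Department POJO
    BeanWrapperFieldSetMapper<Department> fieldSetMapper = new BeanWrapperFieldSetMapper<>();
    fieldSetMapper.setTargetType(Department.class);

    DefaultLineMapper<Department> lineMapper = new DefaultLineMapper<>();
    lineMapper.setLineTokenizer(tokenizer);
    lineMapper.setFieldSetMapper(fieldSetMapper);

    // @StepScope ensures a fresh reader (and a fresh read of the file)
    // for every step execution of the scheduled job
    FlatFileItemReader<Department> reader = new FlatFileItemReader<>();
    reader.setResource(new FileSystemResource("departments.csv"));
    reader.setLineMapper(lineMapper);
    return reader;
}
```

Because the bean is step-scoped, Spring creates a new proxy target each time the step runs, which is what lets the scheduled job pick up changes to the source file between runs.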

After the ItemReader<T> fetches the items, it passes them, one at a time or in chunks, to an ItemProcessor<I,O> (or a ValidatingItemProcessor<T>) to filter, scrutinize, and validate them before they are handed, as a list of processed items, to the ItemWriter<O> for the final execution stage. This API writes all the items received from the ItemProcessor<I,O> to a text file, CSV file, XML file, or database schema, and signals the last stage of the execution. Since our implementation is an asynchronous but scheduled batch process, one Step execution is spawned after another to run the read-process-write cycle again and again. This recipe showed how to provide custom implementations of these readers, processors, and writers.
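A processor and writer pair in this style could be sketched as below, again assuming a hypothetical Department type; this is not the recipe's code, just an illustration of the contract.

```java
// Sketch of a processor that validates and normalizes items.
// Returning null from an ItemProcessor drops the item from the chunk,
// which is how simple filtering is done in Spring Batch.
@Bean
public ItemProcessor<Department, Department> deptProcessor() {
    return dept -> {
        if (dept.getDeptName() == null || dept.getDeptName().isEmpty()) {
            return null; // filtered out; never reaches the writer
        }
        dept.setDeptName(dept.getDeptName().trim().toUpperCase());
        return dept;
    };
}

// Sketch of a writer that receives the processed chunk as a list.
// A real recipe would typically write to a file or database instead.
@Bean
public ItemWriter<Department> deptWriter() {
    return items -> items.forEach(dept ->
            System.out.println("writing: " + dept));
}
```

Both ItemProcessor and ItemWriter are single-method interfaces, so lambdas are an idiomatic way to define lightweight custom implementations.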

After establishing the core processes, the next step is to build the Step instances to be executed using StepBuilderFactory; each step execution is either a Tasklet or a chunk-oriented process. Finally, to create the job, we compose a lineup of step executions using the JobBuilderFactory methods start() and next().
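Wired together, the configuration could look roughly like this sketch. The class name, the chunk size of 10, and the cleanupStep are assumptions; only the deptBatchJob name comes from the recipe's log output.

```java
@Configuration
@EnableBatchProcessing
public class DeptBatchConfig {

    @Autowired
    private StepBuilderFactory steps;

    @Autowired
    private JobBuilderFactory jobs;

    // Chunk-oriented step: read, process, and write items ten at a time
    @Bean
    public Step deptStep(ItemReader<Department> reader,
                         ItemProcessor<Department, Department> processor,
                         ItemWriter<Department> writer) {
        return steps.get("deptStep")
                .<Department, Department>chunk(10)
                .reader(reader)
                .processor(processor)
                .writer(writer)
                .build();
    }

    // Tasklet step: a single unit of work, e.g. hypothetical cleanup
    @Bean
    public Step cleanupStep() {
        return steps.get("cleanupStep")
                .tasklet((contribution, chunkContext) -> RepeatStatus.FINISHED)
                .build();
    }

    // The job lines up the step executions via start() and next()
    @Bean
    public Job deptBatchJob(Step deptStep, Step cleanupStep) {
        return jobs.get("deptBatchJob")
                .start(deptStep)
                .next(cleanupStep)
                .build();
    }
}
```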

To close the implementation and run the microservice, we launch the job, together with the needed job parameters, from a @Scheduled method; each launch yields a JobExecution describing that run.
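Such a scheduler could be sketched as follows; the 60-second interval and the runAt parameter name are assumptions, not taken from the recipe.

```java
@Component
public class DeptBatchScheduler {

    @Autowired
    private JobLauncher jobLauncher;

    @Autowired
    private Job deptBatchJob;

    // Launch the batch job on a fixed interval (hypothetical 60 seconds)
    @Scheduled(fixedRate = 60000)
    public void runBatch() throws Exception {
        // A unique parameter value forces a new JobInstance per run;
        // otherwise Spring Batch refuses to re-run a completed instance
        JobParameters params = new JobParametersBuilder()
                .addLong("runAt", System.currentTimeMillis())
                .toJobParameters();

        JobExecution execution = jobLauncher.run(deptBatchJob, params);
        // execution.getStatus() reports STARTED until the job completes
    }
}
```

Note that @EnableScheduling must be active somewhere in the configuration for @Scheduled methods to fire.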

Spring Boot 2.0 provides a straightforward solution as long as we declare @EnableBatchProcessing in the @Configuration context and properly inject all the readers, processors, and writers with @StepScope, since the recipe is a scheduled, continuously running batch process.

Each job execution is a live object containing properties such as the following:

JobExecution: id=136, version=1, startTime=2017-07-26 14:50:06.0, endTime=null, lastUpdated=2017-07-26 14:50:06.0, status=STARTED, exitStatus=exitCode=UNKNOWN;exitDescription=, job=[JobInstance: id=136, version=0, Job=[deptBatchJob]], jobParameters=[{}].

Spring Batch automatically generates its metadata tables for recovery and retry purposes. This can be disabled by setting the spring.batch.initializer.enabled=false property in the application.properties file.

The job runs inside the JVM until its algorithm finishes. If a job encounters exceptions and needs to be killed, Spring Batch provides a command-line runner that can be invoked to stop these jobs.
