Reactive I/O

Another dramatic improvement related to reactive support is the reinforcement of the core I/O package. First of all, the Spring Core module introduced an additional abstraction over byte buffers called DataBuffer. The main reason for not using java.nio.ByteBuffer directly is to provide an abstraction that supports different byte-buffer implementations without requiring additional conversions between them. For example, in order to convert io.netty.buffer.ByteBuf to ByteBuffer, we have to access the stored bytes, which may require copying them from off-heap space onto the heap. This may break the efficient memory usage and buffer recycling (reusing the same byte buffers) provided by Netty. In contrast, Spring's DataBuffer abstracts over the particular implementation and allows us to work with the underlying buffers in a generic way. The PooledDataBuffer subinterface additionally enables reference counting and allows efficient memory management out of the box.
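The heap-copy cost mentioned above can be seen with plain JDK buffers. The following sketch (pure java.nio, no Netty or Spring involved; the class name DirectBufferCopy is ours) shows that a direct, off-heap buffer exposes no backing array, so reading its content forces a copy into a heap-allocated byte[]:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class DirectBufferCopy {
    public static void main(String[] args) {
        // A direct buffer lives off-heap, much like Netty's pooled ByteBuf memory.
        ByteBuffer offHeap = ByteBuffer.allocateDirect(64);
        offHeap.put("to be or not to be".getBytes(StandardCharsets.UTF_8));
        offHeap.flip();

        // Direct buffers have no accessible backing array on the heap.
        System.out.println("has backing array: " + offHeap.hasArray()); // prints false

        // To look at the bytes as a byte[], we must copy them onto the heap,
        // which is exactly the extra work DataBuffer lets implementations avoid.
        byte[] onHeap = new byte[offHeap.remaining()];
        offHeap.get(onHeap); // bulk copy: off-heap -> heap
        System.out.println(new String(onHeap, StandardCharsets.UTF_8));
    }
}
```

By abstracting over the buffer type, DataBuffer lets a Netty-based implementation keep bytes off-heap and recycled, while heap-based implementations work through the same interface.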

Furthermore, the fifth version of Spring Core introduces an extra DataBufferUtils class that permits interaction with I/O (interaction with a network, resources, files, and so on) in the form of Reactive Streams. For example, we may read Shakespeare's Hamlet reactively and with backpressure support in the following way:

Flux<DataBuffer> reactiveHamlet = DataBufferUtils
        .read(
                new DefaultResourceLoader().getResource("hamlet.txt"),
                new DefaultDataBufferFactory(),
                1024
        );

As we may have noticed, DataBufferUtils.read returns a Flux of DataBuffer instances. Therefore, we may use all of Reactor's functionality in order to read Hamlet.

Finally, the last significant and vital feature related to Reactive in Spring Core is reactive codecs. Reactive codecs provide a convenient way to convert a stream of DataBuffer instances to the stream of objects and back. For that purpose, there are the Encoder and Decoder interfaces, which provide the following API for encoding/decoding streams of data:

interface Encoder<T> {
    ...

    Flux<DataBuffer> encode(Publisher<? extends T> inputStream, ...);
}

interface Decoder<T> {
    ...

    Flux<T> decode(Publisher<DataBuffer> inputStream, ...);

    Mono<T> decodeToMono(Publisher<DataBuffer> inputStream, ...);
}

As we can see from the preceding example, both interfaces operate with the Reactive Streams' Publisher and allow the encoding/decoding of a stream of DataBuffer instances to and from objects. The central benefit of such an API is that it offers a non-blocking way to convert serialized data to Java objects and vice versa. Furthermore, this style of encoding/decoding may decrease processing latency, since the nature of Reactive Streams allows independent element processing: we do not have to wait for the last byte to arrive before starting to decode the dataset. Similarly, we do not need the complete list of objects in order to start encoding and sending them to the I/O channel, so both directions benefit.
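The latency benefit of decoding elements as they arrive can be illustrated without Reactor at all. The following sketch (plain Java; IncrementalDecoder and its line-based framing are our own illustration, not Spring's Decoder API) emits complete lines from each incoming chunk of bytes immediately, carrying any partial trailing line over to the next chunk, which is the same idea a reactive codec uses to emit objects before the whole stream has arrived:

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class IncrementalDecoder {
    private final StringBuilder carry = new StringBuilder();

    // Decode one incoming chunk into complete lines, keeping any partial
    // trailing line buffered until the next chunk completes it.
    public List<String> decode(byte[] chunk) {
        carry.append(new String(chunk, StandardCharsets.UTF_8));
        List<String> lines = new ArrayList<>();
        int nl;
        while ((nl = carry.indexOf("\n")) >= 0) {
            lines.add(carry.substring(0, nl));
            carry.delete(0, nl + 1);
        }
        return lines;
    }

    public static void main(String[] args) {
        IncrementalDecoder decoder = new IncrementalDecoder();
        // The first chunk already yields a full line -- no need to wait
        // for the rest of the stream.
        System.out.println(decoder.decode(
                "Hamlet\nOphe".getBytes(StandardCharsets.UTF_8))); // prints [Hamlet]
        // The next chunk completes the second line.
        System.out.println(decoder.decode(
                "lia\n".getBytes(StandardCharsets.UTF_8))); // prints [Ophelia]
    }
}
```

A real reactive Decoder applies the same principle to a Flux of DataBuffer instances, emitting each decoded object downstream as soon as enough bytes have arrived.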

To learn more about Reactive I/O in Spring Core, please visit https://docs.spring.io/spring/docs/current/spring-framework-reference/core.html#databuffers.

To summarize, we can say that we have an excellent foundation for reactive programming in the Spring Framework with the fifth version of Spring Core. In turn, Spring Boot delivers that foundation as a backbone component to any application. It also makes it possible to write reactive applications and at the same time put less effort into inventing ways to convert reactive types, working with I/O in the reactive style, and encoding/decoding data on-the-fly.
