Akka Streams

The purpose of Akka Streams (https://doc.akka.io/docs/akka/2.5.13/stream/stream-introduction.html) is to offer an intuitive and safe way to formulate stream processing setups such that we can execute them efficiently and with bounded resource usage.

Akka Streams fully implements the Reactive Streams standard in order to interoperate with other compliant Reactive Streams libraries, but this fact is usually considered an implementation detail.

The initial motivation for Akka Streams was the observation that Akka actor systems share the same set of technical problems, which add accidental complexity and need to be solved separately for almost every project, over and over again. For example, Akka has no general flow-control mechanism, so preventing actors' mailboxes from overflowing has to be implemented as a home-grown solution within every application. Another common pain point is the at-most-once messaging semantics, which is less than ideal in most cases but is also dealt with on an individual basis. Yet another inconvenience Akka is criticized for is its untyped nature: the absence of types makes it impossible to check the soundness of possible interactions between actors at compile time.

Akka Streams aims to solve these problems by placing a streaming layer on top of the actor system. This layer adheres to a small set of architectural principles in order to provide a consistent user experience. These principles are a comprehensive domain model for stream processing and compositionality. The focus of the library lies in modular data transformation. In this sense, Reactive Streams is just an implementation detail for how data is passed between the steps of a flow, and Akka actors are an implementation detail for the individual steps themselves.

The principle of a complete domain model for distributed bounded stream processing means that Akka Streams has a rich DSL that lets you express all aspects of the domain: single processing and transformation steps and their interconnections, streams with complex graph topologies, back-pressure, error and failure handling, buffering, and so on.
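As a small taste of that DSL, the following sketch (assuming Akka Streams 2.5 on the classpath; the actor system name is illustrative) combines several of the aspects mentioned above in a single pipeline: a per-stage supervision strategy for error handling, an explicit buffer with a back-pressure overflow strategy, and a run method that collects the results:

```scala
import akka.actor.ActorSystem
import akka.stream.{ActorAttributes, ActorMaterializer, OverflowStrategy, Supervision}
import akka.stream.scaladsl.{Sink, Source}
import scala.concurrent.Await
import scala.concurrent.duration._

implicit val system: ActorSystem = ActorSystem("dsl-demo")
implicit val materializer: ActorMaterializer = ActorMaterializer()

// Elements divisible by 10 throw; the resuming supervision strategy
// drops the failing element and lets the stream continue.
val processed = Source(1 to 30)
  .map(n => if (n % 10 == 0) throw new ArithmeticException(s"bad: $n") else n)
  .withAttributes(ActorAttributes.supervisionStrategy(Supervision.resumingDecider))
  .buffer(16, OverflowStrategy.backpressure) // explicit buffer; back-pressures upstream when full
  .runWith(Sink.seq)

val kept = Await.result(processed, 3.seconds)
println(kept.size) // 27 -- the elements 10, 20, and 30 were dropped
system.terminate()
```

Note that the supervision strategy is attached to a specific stage via attributes rather than configured globally, which keeps error handling as composable as the rest of the pipeline.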

The modularity principle means that the definition of a single transformation, of multiple transformations connected in specific ways, or even of a whole graph must be freely shareable. This principle leads to the design decision to keep the description of a stream separate from its execution. Therefore, a user of Akka Streams goes through the following three steps to execute a stream:

  1. Describe the stream in the form of building blocks and connections between them. The result of this step is usually called a blueprint in Akka documentation.
  2. Materialize the blueprint, which creates an instance of the flow. Materialization is done by providing a materializer, which in Akka uses an actor system or an actor context to create actors for each processing stage.
  3. Execute a materialized stream using one of the run methods.
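The three steps above can be sketched as follows (a minimal example assuming Akka Streams 2.5; the names `source`, `double`, `sum`, and `blueprint` are illustrative):

```scala
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Flow, Keep, RunnableGraph, Sink, Source}
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._

// Step 1: describe the stream -- a blueprint; nothing runs yet.
val source: Source[Int, akka.NotUsed] = Source(1 to 10)
val double: Flow[Int, Int, akka.NotUsed] = Flow[Int].map(_ * 2)
val sum: Sink[Int, Future[Int]] = Sink.fold[Int, Int](0)(_ + _)
val blueprint: RunnableGraph[Future[Int]] =
  source.via(double).toMat(sum)(Keep.right)

// Step 2: provide a materializer backed by an actor system.
implicit val system: ActorSystem = ActorSystem("demo")
implicit val materializer: ActorMaterializer = ActorMaterializer()

// Step 3: run the stream; actors for each stage are created here.
val result: Int = Await.result(blueprint.run(), 3.seconds)
println(result) // 110 = 2 * (1 + 2 + ... + 10)
system.terminate()
```

Because `blueprint` is just an immutable description, it can be shared and run any number of times; each `run()` materializes a fresh set of actors.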

In practice, usually the last two steps are combined and the materializer is provided as an implicit parameter.
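For example, with the materializer in implicit scope, materialization and execution collapse into a single call such as `runWith`, `runFold`, or `runForeach` (a minimal sketch assuming Akka Streams 2.5; the actor system name is illustrative):

```scala
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.Source
import scala.concurrent.Await
import scala.concurrent.duration._

implicit val system: ActorSystem = ActorSystem("combined")
implicit val materializer: ActorMaterializer = ActorMaterializer()

// runFold materializes and runs the stream in one call,
// picking up the materializer implicitly.
val total = Await.result(Source(1 to 5).runFold(0)(_ + _), 3.seconds)
println(total) // 15
system.terminate()
```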

With this theory in mind, let's take a look at what building and executing streams with Akka Streams looks like in practice.
