Spring Cloud Data Flow

In early 2016, Spring Cloud introduced a new module called Spring Cloud Data Flow. The official module description, available at https://cloud.spring.io/spring-cloud-dataflow, reads as follows:

"Spring Cloud Data Flow is a toolkit for building data integration and real-time data-processing pipelines."

To generalize, the main idea of this module is to separate the development of functional business transformations from the actual interaction between the developed components. In other words, it separates functions from their composition in the business flow. To solve this problem, Spring Cloud Data Flow gives us a user-friendly web interface that makes it possible to upload deployable Spring Boot applications, set up data flows from the uploaded artifacts, and deploy the composed pipe to the chosen platform, such as Cloud Foundry, Kubernetes, Apache YARN, or Mesos. Moreover, Spring Cloud Data Flow provides an extensive list of out-of-the-box connectors to sources (databases, message queues, and files), different built-in processors for data transformation, and sinks, which represent different ways of storing results.

To learn more about supported sources, processors, and sinks, please visit the following links: https://cloud.spring.io/spring-cloud-task-app-starters/ and https://cloud.spring.io/spring-cloud-stream-app-starters/.

As previously mentioned, Spring Cloud Data Flow employs the idea of stream processing. Hence, all deployed flows are built on top of the Spring Cloud Stream module, and all communication is done via distributed, elastic message brokers such as Apache Kafka, or highly scalable variants of RabbitMQ.
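To make this more concrete, the following is a minimal sketch of a custom processor application that Spring Cloud Data Flow could register and wire into a stream. It uses the annotation-based Spring Cloud Stream programming model; the PaymentValidationProcessor name, the Payment payload, and the validation logic are hypothetical and shown only for illustration:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Processor;
import org.springframework.messaging.handler.annotation.SendTo;

// A standalone Spring Boot application that consumes messages from the
// binder's input destination (Kafka or RabbitMQ), applies a business
// transformation, and publishes the result to the output destination.
// Spring Cloud Data Flow composes such applications into a pipeline.
@SpringBootApplication
@EnableBinding(Processor.class)
public class PaymentValidationProcessor {

    public static void main(String[] args) {
        SpringApplication.run(PaymentValidationProcessor.class, args);
    }

    // Hypothetical validation step: mark the payment as VALID or INVALID
    // before handing it over to the next application in the stream.
    @StreamListener(Processor.INPUT)
    @SendTo(Processor.OUTPUT)
    public Payment validate(Payment payment) {
        payment.setStatus(payment.getAmount() > 0 ? "VALID" : "INVALID");
        return payment;
    }

    // Minimal payload type used only for this sketch.
    public static class Payment {
        private double amount;
        private String status;
        public double getAmount() { return amount; }
        public void setAmount(double amount) { this.amount = amount; }
        public String getStatus() { return status; }
        public void setStatus(String status) { this.status = status; }
    }
}

The important point is that this application knows nothing about the rest of the pipeline; the input and output destinations are bound by the platform at deployment time, which is exactly the separation between functions and their composition described earlier.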

To understand the power of distributed reactive programming with Spring Cloud Data Flow, we are going to build a payments processing flow. As we might already know, payment processing is quite complicated. However, here is a simplified diagram of this process:

Diagram 8.10. The flow diagram of payment processing

As we may have noticed, a user's payment has to transit through a few vital steps, such as validation, account limits checking, and payment approval. In Chapter 6, WebFlux Async Non-Blocking Communication, we built a similar application in which one service orchestrated the entire flow. Although the whole interaction was distributed among asynchronous calls to several independent microservices, the state of the flow was held by one service inside Reactor 3 internals. This means that, in the case of a failure of that service, recovering the state might be challenging.

Fortunately, Spring Cloud Data Flow relies on Spring Cloud Stream, which in turn relies on a resilient message broker. Consequently, in case of a failure, the message broker does not receive an acknowledgment for the message, which is therefore redelivered to another executor without additional effort.
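As a rough illustration of this at-least-once behavior, consider a sketch of a consumer whose handler fails before the message is acknowledged. The exact retry, requeue, and dead-letter behavior depends on the binder configuration; the LimitsCheckApplication name and the failure condition are assumptions made for this sketch:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;

// If the handler throws, the message is not acknowledged to the broker.
// After the binder's local retries are exhausted, it is requeued or
// dead-lettered (depending on configuration), so another instance in the
// same consumer group can pick it up and continue the flow.
@SpringBootApplication
@EnableBinding(Sink.class)
public class LimitsCheckApplication {

    public static void main(String[] args) {
        SpringApplication.run(LimitsCheckApplication.class, args);
    }

    @StreamListener(Sink.INPUT)
    public void checkLimits(String payment) {
        // Hypothetical failure: the limits check could not be completed,
        // so we throw and let the broker redeliver the message.
        if (payment == null || payment.isEmpty()) {
            throw new IllegalStateException("Limits check failed, message will be redelivered");
        }
        // ...otherwise, continue processing the payment
    }
}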

Now that we have a basic understanding of the core principles behind Spring Cloud Data Flow and the business requirements for the payment flow, we can implement that service using this technology stack.

First of all, we have to define the entry point, which is usually accessible as an HTTP endpoint. Spring Cloud Data Flow offers an HTTP source that may be defined with the Spring Cloud Data Flow DSL, as in the following example:

SendPaymentEndpoint=Endpoint: http --path-pattern=/payment --port=8080

The preceding example represents a small part of the Spring Cloud Data Flow pipes DSL. In the following examples, we will show more samples of how to build complete Spring Cloud Data Flow pipes. To learn more about the Stream Pipeline DSL, visit https://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/#spring-cloud-dataflow-stream-intro-dsl.

Before starting any manipulations, ensure that all supported applications and tasks have already been registered. The list of supported applications and tasks is available at https://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/#supported-apps-and-tasks.

In the preceding example, we defined a new data flow function, which represents all HTTP requests as a stream of messages. Consequently, we can react to them in a defined way.
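For example, once such a stream is deployed, any HTTP client can feed it. The following is a minimal sketch using Spring's WebClient; the JSON payload is hypothetical, while the /payment path and port 8080 are taken from the source definition shown earlier:

import org.springframework.http.MediaType;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;

public class PaymentClient {

    public static void main(String[] args) {
        // Hypothetical JSON payload representing a payment request.
        String payment = "{\"amount\": 42.0, \"currency\": \"USD\"}";

        WebClient.create("http://localhost:8080")
                .post()
                .uri("/payment")
                .contentType(MediaType.APPLICATION_JSON)
                .body(Mono.just(payment), String.class)
                .retrieve()
                .bodyToMono(Void.class)
                // Each request accepted by the http source becomes a message
                // on the stream's output destination.
                .block();
    }
}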
