Introduction

We live in a digital world. Many of our daily interactions, in both personal and professional contexts, are proxied through digitized processes that create the opportunity to capture and analyze the messages those interactions produce. Let’s take something as simple as our daily cup of coffee: whether it’s liking our favorite coffee shop’s Facebook page, posting a picture of our latte macchiato on Instagram, pushing the Amazon Dash Button for a refill of our usual brand, or placing an online order for Kenyan coffee beans, our coffee experience generates plenty of events that produce direct and indirect results.

For example, pressing the Amazon Dash Button sends an event message to Amazon. As a direct result of that action, the message is processed by an order-taking system that produces a purchase order and forwards it to a warehouse, eventually resulting in a package being delivered to us. At the same time, a machine learning model consumes that same message to add coffee as an interest to our user profile. A week later, we visit Amazon and see a new suggestion based on our coffee purchase. Our initial single push of a button is now persisted in several systems and in several forms. We could consider our purchase order as a direct transformation of the initial message, while our machine-learned user profile change could be seen as a sophisticated aggregation.
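The fan-out described above, where a single event drives both a direct transformation (a purchase order) and an aggregation (a user-profile update), can be sketched in a few lines of Python. The event type, consumer functions, and field names here are all hypothetical, chosen only to illustrate the pattern:

```python
from dataclasses import dataclass

# Hypothetical event representing a single Dash-button press.
@dataclass
class ButtonPressEvent:
    user_id: str
    product: str

# Consumer 1: a direct transformation -- the event becomes a purchase order.
def create_purchase_order(event: ButtonPressEvent) -> dict:
    return {"user": event.user_id, "item": event.product, "status": "ordered"}

# Consumer 2: an aggregation -- the event updates a running interest profile.
def update_profile(profile: dict, event: ButtonPressEvent) -> dict:
    interests = profile.setdefault("interests", {})
    interests[event.product] = interests.get(event.product, 0) + 1
    return profile

# One published event fans out to both independent consumers.
event = ButtonPressEvent(user_id="u-42", product="coffee")
order = create_purchase_order(event)
profile = update_profile({}, event)

print(order)    # {'user': 'u-42', 'item': 'coffee', 'status': 'ordered'}
print(profile)  # {'interests': {'coffee': 1}}
```

In a real deployment the two consumers would read the same message independently from a durable log such as a message broker, rather than being called in sequence; the point here is only that the original event is persisted in several systems and in several forms.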

To remain competitive in a market that demands real-time responses to these digital pulses, organizations are adopting Fast Data applications as a key asset in their technology portfolio. This application development is driven by the need to accelerate the extraction of value from the data entering the organization. The streaming workloads that underpin Fast Data applications often complement or work alongside existing batch-oriented processes. In some cases, they even completely replace legacy batch processes as maturing streaming technology becomes able to deliver the data consistency guarantees that organizations require.

Fast Data applications take many forms, from streaming ETL (extract, transform, and load) workloads, to crunching data for online dashboards, to estimating your purchase likelihood in a machine learning–driven product recommendation. Although the requirements for Fast Data applications vary wildly from one use case to the next, we can observe common architectural patterns that form the foundations of successful deployments.

This report identifies the key architectural characteristics of Fast Data application architectures, breaks these architectures into functional blocks, and explores some of the leading technologies that implement these functions. After reading this report, the reader will have a broad understanding of Fast Data applications; their key architectural characteristics; and how to choose, combine, and run available technologies to build resilient, scalable, and responsive systems that deliver the Fast Data application that their industry requires.
