To further understand machine learning workflows, let us review a few examples.
In the later chapters of this book, we will work on risk modelling, fraud detection, customer view, churn prediction, and recommendation. For many of these projects, the goal is often to identify the causes of certain problems, or to build a causal model. Below is one example of a workflow for developing a causal model.
A central step in such a workflow is selecting the independent variables (exogenous variables) that are hypothesized to drive the outcome. For a step-by-step treatment of this kind of research workflow, also refer to http://www.researchmethods.org/step-by-step1.pdf.

Spark Pipelines
The Apache Spark team has recognized the importance of machine learning workflows and has developed Spark Pipelines to support them well.
Spark ML represents an ML workflow as a pipeline, which consists of a sequence of PipelineStages to be run in a specific order.
PipelineStages include Spark Transformers and Spark Estimators; Spark Evaluators are used alongside Pipelines to assess the models they produce.
ML workflows can be very complicated, and creating and tuning them is very time consuming. The Spark ML Pipeline was created to make the construction and tuning of ML workflows easy, and especially to represent their main stages: feature extraction, model estimation, and model evaluation.
With regard to these stages, Spark Transformers can be used to extract features, Spark Estimators can be used to train and estimate models, and Spark Evaluators can be used to evaluate models.
Technically, in Spark, a Pipeline is specified as a sequence of stages, and each stage is either a Transformer or an Estimator. These stages are run in order, and the input dataset is modified as it passes through each stage. For Transformer stages, the transform() method is called on the dataset. For Estimator stages, the fit() method is called to produce a Transformer (which becomes part of the PipelineModel, or fitted Pipeline), and that Transformer's transform() method is called on the dataset.
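As a minimal sketch of these mechanics, the following Scala snippet chains two Transformers (Tokenizer and HashingTF) and one Estimator (LogisticRegression) into a Pipeline, fits it, and then scores the fitted model with an Evaluator. The tiny DataFrame, its column names, and the application name are illustrative assumptions, not part of any particular project in this book.

import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
import org.apache.spark.ml.feature.{HashingTF, Tokenizer}
import org.apache.spark.sql.SparkSession

// Hypothetical training data: (id, text, label) rows; names and values are made up.
val spark = SparkSession.builder.appName("PipelineSketch").getOrCreate()
val training = spark.createDataFrame(Seq(
  (0L, "spark pipelines are useful", 1.0),
  (1L, "completely unrelated text", 0.0)
)).toDF("id", "text", "label")

// Transformer stages: Tokenizer and HashingTF extract features from the raw text.
val tokenizer = new Tokenizer().setInputCol("text").setOutputCol("words")
val hashingTF = new HashingTF().setInputCol("words").setOutputCol("features")

// Estimator stage: fitting LogisticRegression produces a Transformer (the model).
val lr = new LogisticRegression().setMaxIter(10)

// The Pipeline chains the stages; fit() returns a PipelineModel (fitted Pipeline).
val pipeline = new Pipeline().setStages(Array(tokenizer, hashingTF, lr))
val model = pipeline.fit(training)

// The fitted PipelineModel is itself a Transformer.
val predictions = model.transform(training)

// An Evaluator is used alongside the Pipeline to assess the fitted model.
val evaluator = new BinaryClassificationEvaluator().setLabelCol("label")
println(s"Area under ROC: ${evaluator.evaluate(predictions)}")

In practice, the same fitted PipelineModel would be applied with transform() to a held-out test dataset rather than to the training data.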
The specifications given above are all for linear Pipelines. It is possible to create non-linear Pipelines as long as the data flow graph forms a Directed Acyclic Graph (DAG).
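One common non-linear arrangement is a feature-merging flow, where two independent feature branches feed a VectorAssembler. The sketch below, again with made-up column names and data, illustrates the idea: the text branch (Tokenizer then HashingTF) and the categorical branch (StringIndexer) run independently, and their outputs are merged into a single feature vector.

import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.feature.{HashingTF, StringIndexer, Tokenizer, VectorAssembler}
import org.apache.spark.sql.SparkSession

// Hypothetical input with a text column and a categorical column.
val spark = SparkSession.builder.appName("DagPipelineSketch").getOrCreate()
val df = spark.createDataFrame(Seq(
  ("good service, will return", "retail"),
  ("account closed after complaint", "banking")
)).toDF("text", "category")

// Text branch.
val tokenizer = new Tokenizer().setInputCol("text").setOutputCol("words")
val hashingTF = new HashingTF().setInputCol("words").setOutputCol("textFeatures")
// Categorical branch.
val indexer = new StringIndexer().setInputCol("category").setOutputCol("categoryIndex")
// Merge the two branches into one feature vector.
val assembler = new VectorAssembler()
  .setInputCols(Array("textFeatures", "categoryIndex"))
  .setOutputCol("features")

// The data flow forms a DAG; the stages only need to be listed in a
// topologically valid order.
val dagPipeline = new Pipeline().setStages(Array(tokenizer, hashingTF, indexer, assembler))
val featurized = dagPipeline.fit(df).transform(df)
featurized.select("features").show(truncate = false)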