Book Description

Unlock the complexities of machine learning algorithms in Spark to generate useful data insights through this data analysis tutorial

About This Book

  • Process and analyze big data in a distributed and scalable way
  • Write sophisticated Spark pipelines that incorporate elaborate feature extraction
  • Build and use regression models to predict flight delays

Who This Book Is For

Are you a developer with a background in machine learning and statistics who feels limited by today's slow, "small data" machine learning tools? Then this is the book for you! In this book, you will create scalable machine learning applications to power a modern data-driven business using Spark. We assume that you are already familiar with machine learning concepts and algorithms, have Spark up and running (whether on a cluster or locally), and have basic knowledge of the various libraries contained in Spark.

What You Will Learn

  • Use Spark streams to cluster tweets online
  • Run the PageRank algorithm to compute user influence
  • Perform complex manipulation of DataFrames using Spark
  • Define Spark pipelines to compose individual data transformations
  • Utilize generated models for offline/online prediction
  • Transfer the learning from an ensemble to a simpler Neural Network
  • Understand basic graph properties and important graph operations
  • Use GraphFrames, an extension of DataFrames to graphs, to study graphs using an elegant query language
  • Use the K-means algorithm to cluster a movie reviews dataset

In Detail

The purpose of machine learning is to build systems that learn from data. Being able to understand trends and patterns in complex data is critical to success; it is one of the key strategies to unlock growth in today's challenging marketplace. With the meteoric rise of machine learning, developers are now keen on finding out how they can make their Spark applications smarter.

This book shows you how to transform data into actionable knowledge. It begins by defining machine learning primitives with the MLlib and H2O libraries. You will learn how to use binary classification to detect the Higgs boson particle in the huge amount of data produced by the CERN particle collider, and how to classify daily health activities using ensemble methods for multi-class classification.

Next, you will solve a typical regression problem involving flight delay predictions and write sophisticated Spark pipelines. You will analyze Twitter data with the help of the doc2vec algorithm and K-means clustering. Finally, you will build different pattern mining models using MLlib, perform complex manipulation of DataFrames using Spark and Spark SQL, and deploy your app in a Spark Streaming environment.
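The pipeline-based workflow described above, composing individual data transformations and feeding them to a clustering algorithm, can be sketched roughly as follows in Scala. This is a minimal illustration, not an excerpt from the book: the input file name, the column names, and the number of clusters are all illustrative assumptions.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.feature.{Tokenizer, HashingTF}
import org.apache.spark.ml.clustering.KMeans

object ReviewClusters {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("ReviewClusters").getOrCreate()

    // Hypothetical input: a CSV of movie reviews with a "text" column.
    val reviews = spark.read.option("header", "true").csv("reviews.csv")

    // Individual transformations, composed into a single Spark ML pipeline.
    val tokenizer = new Tokenizer().setInputCol("text").setOutputCol("words")
    val hashingTF = new HashingTF().setInputCol("words").setOutputCol("features")
    val kmeans = new KMeans().setK(5).setFeaturesCol("features")

    val pipeline = new Pipeline().setStages(Array(tokenizer, hashingTF, kmeans))

    // Fitting returns a reusable model; transform assigns a cluster per review.
    val model = pipeline.fit(reviews)
    model.transform(reviews).select("text", "prediction").show()

    spark.stop()
  }
}
```

The same `Pipeline` abstraction carries over to the classification and regression chapters: only the final stage changes, while the feature-extraction stages stay composable and reusable.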

Style and approach

This book takes a practical approach to help you get to grips with using Spark for analytics and to implement machine learning algorithms. We'll teach you about advanced applications of machine learning through illustrative examples. These examples will equip you to harness the potential of machine learning, through Spark, in a variety of enterprise-grade systems.

Table of Contents

  1. Preface
    1. What this book covers
    2. What you need for this book
    3. Who this book is for
    4. Conventions
    5. Reader feedback
    6. Customer support
      1. Downloading the example code
      2. Downloading the color images of this book
      3. Errata
      4. Piracy
      5. Questions
  2. Introduction to Large-Scale Machine Learning and Spark
    1. Data science
    2. The sexiest role of the 21st century – data scientist?
      1. A day in the life of a data scientist
      2. Working with big data
      3. The machine learning algorithm using a distributed environment
      4. Splitting of data into multiple machines
      5. From Hadoop MapReduce to Spark
      6. What is Databricks?
      7. Inside the box
    3. Introducing H2O.ai
      1. Design of Sparkling Water
    4. What's the difference between H2O and Spark's MLlib?
    5. Data munging
    6. Data science - an iterative process
    7. Summary
  3. Detecting Dark Matter - The Higgs-Boson Particle
    1. Type I versus type II error
      1. Finding the Higgs-Boson particle
      2. The LHC and data creation
      3. The theory behind the Higgs-Boson
      4. Measuring for the Higgs-Boson
      5. The dataset
    2. Spark start and data load
      1. Labeled point vector
        1. Data caching
      2. Creating a training and testing set
        1. What about cross-validation?
      3. Our first model – decision tree
        1. Gini versus Entropy
      4. Next model – tree ensembles
        1. Random forest model
          1. Grid search
        2. Gradient boosting machine
      5. Last model - H2O deep learning
      6. Build a 3-layer DNN
        1. Adding more layers
        2. Building models and inspecting results
    3. Summary
  4. Ensemble Methods for Multi-Class Classification
    1. Data
    2. Modeling goal
      1. Challenges
      2. Machine learning workflow
        1. Starting Spark shell
        2. Exploring data
          1. Missing data
          2. Summary of missing value analysis
        3. Data unification
          1. Missing values
          2. Categorical values
          3. Final transformation
      3. Modelling data with Random Forest
        1. Building a classification model using Spark RandomForest
        2. Classification model evaluation
          1. Spark model metrics
        3. Building a classification model using H2O RandomForest
    3. Summary
  5. Predicting Movie Reviews Using NLP and Spark Streaming
    1. NLP - a brief primer
    2. The dataset
      1. Dataset preparation
    3. Feature extraction
      1. Feature extraction method – bag-of-words model
      2. Text tokenization
        1. Declaring our stopwords list
        2. Stemming and lemmatization
    4. Featurization - feature hashing
      1. Term Frequency - Inverse Document Frequency (TF-IDF) weighting scheme
    5. Let's do some (model) training!
      1. Spark decision tree model
      2. Spark Naive Bayes model
      3. Spark random forest model
      4. Spark GBM model
      5. Super-learner model
    6. Super learner
      1. Composing all transformations together
      2. Using the super-learner model
    7. Summary
  6. Word2vec for Prediction and Clustering
    1. Motivation of word vectors
    2. Word2vec explained
      1. What is a word vector?
      2. The CBOW model
      3. The skip-gram model
      4. Fun with word vectors
      5. Cosine similarity
    3. Doc2vec explained
      1. The distributed-memory model
      2. The distributed bag-of-words model
    4. Applying word2vec and exploring our data with vectors
    5. Creating document vectors
    6. Supervised learning task
    7. Summary
  7. Extracting Patterns from Clickstream Data
    1. Frequent pattern mining
      1. Pattern mining terminology
        1. Frequent pattern mining problem
        2. The association rule mining problem
        3. The sequential pattern mining problem
    2. Pattern mining with Spark MLlib
      1. Frequent pattern mining with FP-growth
      2. Association rule mining
      3. Sequential pattern mining with prefix span
      4. Pattern mining on MSNBC clickstream data
    3. Deploying a pattern mining application
      1. The Spark Streaming module
    4. Summary
  8. Graph Analytics with GraphX
    1. Basic graph theory
      1. Graphs
      2. Directed and undirected graphs
      3. Order and degree
      4. Directed acyclic graphs
      5. Connected components
      6. Trees
      7. Multigraphs
      8. Property graphs
    2. GraphX distributed graph processing engine
      1. Graph representation in GraphX
      2. Graph properties and operations
      3. Building and loading graphs
      4. Visualizing graphs with Gephi
        1. Gephi
        2. Creating GEXF files from GraphX graphs
      5. Advanced graph processing
        1. Aggregating messages
        2. Pregel
      6. GraphFrames
    3. Graph algorithms and applications
      1. Clustering
      2. Vertex importance
      3. GraphX in context 
    4. Summary
  9. Lending Club Loan Prediction
    1. Motivation
      1. Goal
      2. Data
      3. Data dictionary
    2. Preparation of the environment
    3. Data load
    4. Exploration – data analysis
      1. Basic clean up
        1. Useless columns
        2. String columns
        3. Loan progress columns
        4. Categorical columns
        5. Text columns
        6. Missing data
      2. Prediction targets
        1. Loan status model
          1. Base model
          2. The emp_title column transformation
          3. The desc column transformation
        2. Interest rate model
      3. Using models for scoring
      4. Model deployment
        1. Stream creation
        2. Stream transformation
        3. Stream output
    5. Summary