Understand the fundamentals and develop your own AI solutions in this updated edition packed with many new examples

Key Features

  • AI-based examples to guide you in designing and implementing machine intelligence
  • Build and develop machine intelligence from scratch using real artificial intelligence examples

Book Description

AI has the potential to replicate human decision-making in a wide range of fields. Artificial Intelligence By Example, Second Edition serves as a starting point for you to understand how AI is built, with the help of intriguing and exciting examples.

This book will help you become an adaptive thinker and apply AI concepts to real-world scenarios. Working through some of the most interesting AI examples, from a simple chess engine to cognitive chatbots, you will learn how to tackle the machines you are competing with. You will study some of the most advanced machine learning models, understand how to apply AI to blockchain and the Internet of Things (IoT), and develop emotional intelligence in chatbots using neural networks such as recurrent neural networks (RNNs) and convolutional neural networks (CNNs).

This edition also introduces new examples: hybrid neural networks that combine reinforcement learning (RL) and deep learning (DL); chained algorithms that combine unsupervised learning with decision trees and random forests; DL combined with genetic algorithms; conversational user interfaces (CUIs) for chatbots; neuromorphic computing; and quantum computing.

By the end of this book, you will understand the fundamentals of AI and have worked through a number of examples that will help you develop your own AI solutions.

What you will learn

  • Apply k-nearest neighbors (KNN) to language translations and explore the opportunities in Google Translate
  • Understand chained algorithms combining unsupervised learning with decision trees
  • Solve the XOR problem with feedforward neural networks (FNN) and build its architecture to represent a data flow graph
  • Learn about meta learning models with hybrid neural networks
  • Create a chatbot and optimize its emotional intelligence deficiencies with tools such as Small Talk and data logging
  • Build conversational user interfaces (CUIs) for chatbots
  • Write genetic algorithms that optimize deep learning neural networks
  • Build quantum computing circuits

Who this book is for

This book is for developers and anyone interested in AI who wants to understand the fundamentals of artificial intelligence and implement them practically. Prior experience with Python programming and a working knowledge of statistics are essential to get the most out of this book.

Table of Contents

  1. Preface
    1. Who this book is for
    2. What this book covers
    3. To get the most out of this book
    4. Get in touch
  2. Getting Started with Next-Generation Artificial Intelligence through Reinforcement Learning
    1. Reinforcement learning concepts
    2. How to adapt to machine thinking and become an adaptive thinker
    3. Overcoming real-life issues using the three-step approach
      1. Step 1 – describing a problem to solve: MDP in natural language
        1. Watching the MDP agent at work
      2. Step 2 – building a mathematical model: the mathematical representation of the Bellman equation and MDP
        1. From MDP to the Bellman equation
      3. Step 3 – writing source code: implementing the solution in Python
    4. The lessons of reinforcement learning
      1. How to use the outputs
        1. Possible use cases
      2. Machine learning versus traditional applications
    5. Summary
    6. Questions
    7. Further reading
  3. Building a Reward Matrix – Designing Your Datasets
    1. Designing datasets – where the dream stops and the hard work begins
      1. Designing datasets
      2. Using the McCulloch-Pitts neuron
      3. The McCulloch-Pitts neuron
      4. The Python-TensorFlow architecture
    2. Logistic activation functions and classifiers
      1. Overall architecture
      2. Logistic classifier
      3. Logistic function
      4. Softmax
    3. Summary
    4. Questions
    5. Further reading
  4. Machine Intelligence – Evaluation Functions and Numerical Convergence
    1. Tracking down what to measure and deciding how to measure it
      1. Convergence
        1. Implicit convergence
        2. Numerically controlled gradient descent convergence
    2. Evaluating beyond human analytic capacity
    3. Using supervised learning to evaluate a result that surpasses human analytic capacity
    4. Summary
    5. Questions
    6. Further reading
  5. Optimizing Your Solutions with K-Means Clustering
    1. Dataset optimization and control
      1. Designing a dataset and choosing an ML/DL model
        1. Approval of the design matrix
    2. Implementing a k-means clustering solution
      1. The vision
        1. The data
        2. The strategy
      2. The k-means clustering program
        1. The mathematical definition of k-means clustering
        2. The Python program
      3. Saving and loading the model
      4. Analyzing the results
        1. Bot virtual clusters as a solution
        2. The limits of the implementation of the k-means clustering algorithm
    3. Summary
    4. Questions
    5. Further reading
  6. How to Use Decision Trees to Enhance K-Means Clustering
    1. Unsupervised learning with KMC on large datasets
      1. Identifying the difficulty of the problem
        1. NP-hard – the meaning of P
        2. NP-hard – the meaning of non-deterministic
      2. Implementing random sampling with mini-batches
      3. Using the LLN
      4. The CLT
        1. Using a Monte Carlo estimator
      5. Trying to train the full training dataset
      6. Training a random sample of the training dataset
      7. Shuffling as another way to perform random sampling
      8. Chaining supervised learning to verify unsupervised learning
        1. Preprocessing raw data
      9. A pipeline of scripts and ML algorithms
        1. Step 1 – training and exporting data from an unsupervised ML algorithm
        2. Step 2 – training a decision tree
        3. Step 3 – a continuous cycle of KMC chained to a decision tree
      10. Random forests as an alternative to decision trees
    2. Summary
    3. Questions
    4. Further reading
  7. Innovating AI with Google Translate
    1. Understanding innovation and disruption in AI
      1. Is AI disruptive?
        1. AI is based on mathematical theories that are not new
        2. Neural networks are not new
      2. Looking at disruption – the factors that are making AI disruptive
        1. Cloud server power, data volumes, and web sharing of the early 21st century
        2. Public awareness
      3. Inventions versus innovations
      4. Revolutionary versus disruptive solutions
      5. Where to start?
    2. Discover a world of opportunities with Google Translate
      1. Getting started
      2. The program
        1. The header
        2. Implementing Google's translation service
      3. Google Translate from a linguist's perspective
        1. Playing with the tool
        2. Linguistic assessment of Google Translate
    3. AI as a new frontier
      1. Lexical field and polysemy
      2. Exploring the frontier – customizing Google Translate with a Python program
      3. k-nearest neighbor algorithm
        1. Implementing the KNN algorithm
        2. The knn_polysemy.py program
        3. Implementing the KNN function in Google_Translate_Customized.py
        4. Conclusions on the Google Translate customized experiment
        5. The disruptive revolutionary loop
    4. Summary
    5. Questions
    6. Further reading
  8. Optimizing Blockchains with Naive Bayes
    1. Part I – the background to blockchain technology
      1. Mining bitcoins
      2. Using cryptocurrency
    2. Part II – using blockchains to share information in a supply chain
      1. Using blockchains in the supply chain network
      2. Creating a block
      3. Exploring the blocks
    3. Part III – optimizing a supply chain with naive Bayes in a blockchain process
      1. A naive Bayes example
        1. The blockchain anticipation novelty
        2. The goal – optimizing storage levels using blockchain data
      2. Implementation of naive Bayes in Python
        1. Gaussian naive Bayes
    4. Summary
    5. Questions
    6. Further reading
  9. Solving the XOR Problem with a Feedforward Neural Network
    1. The original perceptron could not solve the XOR function
      1. XOR and linearly separable models
        1. Linearly separable models
        2. The XOR limit of a linear model, such as the original perceptron
    2. Building an FNN from scratch
      1. Step 1 – defining an FNN
      2. Step 2 – an example of how two children can solve the XOR problem every day
      3. Implementing a vintage XOR solution in Python with an FNN and backpropagation
        1. A simplified version of a cost function and gradient descent
        2. Linear separability was achieved
    3. Applying the FNN XOR function to optimizing subsets of data
    4. Summary
    5. Questions
    6. Further reading
  10. Abstract Image Classification with Convolutional Neural Networks (CNNs)
    1. Introducing CNNs
      1. Defining a CNN
      2. Initializing the CNN
      3. Adding a 2D convolution layer
        1. Kernel
        2. Shape
        3. ReLU
      4. Pooling
      5. Next convolution and pooling layer
      6. Flattening
      7. Dense layers
        1. Dense activation functions
    2. Training a CNN model
      1. The goal
      2. Compiling the model
        1. The loss function
        2. The Adam optimizer
        3. Metrics
      3. The training dataset
        1. Data augmentation
        2. Loading the data
      4. The testing dataset
        1. Data augmentation on the testing dataset
        2. Loading the data
      5. Training with the classifier
      6. Saving the model
        1. Next steps
    3. Summary
    4. Questions
    5. Further reading and references
  11. Conceptual Representation Learning
    1. Generating profit with transfer learning
      1. The motivation behind transfer learning
        1. Inductive thinking
        2. Inductive abstraction
        3. The problem AI needs to solve
      2. The Γ gap concept
      3. Loading the trained TensorFlow 2.x model
        1. Loading and displaying the model
        2. Loading the model to use it
        3. Defining a strategy
        4. Making the model profitable by using it for another problem
    2. Domain learning
      1. How to use the programs
        1. The trained models used in this section
        2. The trained model program
      2. Gap – loaded or underloaded
      3. Gap – jammed or open lanes
      4. Gap datasets and subsets
        1. Generalizing the Γ (the gap conceptual dataset)
      5. The motivation of conceptual representation learning metamodels applied to dimensionality
        1. The curse of dimensionality
        2. The blessing of dimensionality
    3. Summary
    4. Questions
    5. Further reading
  12. Combining Reinforcement Learning and Deep Learning
    1. Planning and scheduling today and tomorrow
      1. A real-time manufacturing process
        1. Amazon must expand its services to face competition
        2. A real-time manufacturing revolution
    2. CRLMM applied to an automated apparel manufacturing process
      1. An apparel manufacturing process
      2. Training the CRLMM
        1. Generalizing the unit training dataset
        2. Food conveyor belt processing – positive pγ and negative nγ gaps
        3. Running a prediction program
    3. Building the RL-DL-CRLMM
      1. A circular process
      2. Implementing a CNN-CRLMM to detect gaps and optimize
      3. Q-learning – MDP
        1. MDP inputs and outputs
      4. The optimizer
        1. The optimizer as a regulator
        2. Finding the main target for the MDP function
      5. A circular model – a stream-like system that never starts nor ends
    4. Summary
    5. Questions
    6. Further reading
  13. AI and the Internet of Things (IoT)
    1. The public service project
    2. Setting up the RL-DL-CRLMM model
      1. Applying the model of the CRLMM
        1. The dataset
        2. Using the trained model
    3. Adding an SVM function
      1. Motivation – using an SVM to increase safety levels
      2. Definition of a support vector machine
      3. Python function
    4. Running the CRLMM
      1. Finding a parking space
      2. Deciding how to get to the parking lot
        1. Support vector machine
        2. The itinerary graph
        3. The weight vector
    5. Summary
    6. Questions
    7. Further reading
  14. Visualizing Networks with TensorFlow 2.x and TensorBoard
    1. Exploring the output of the layers of a CNN in two steps with TensorFlow
      1. Building the layers of a CNN
      2. Processing the visual output of the layers of a CNN
        1. Analyzing the visual output of the layers of a CNN
    2. Analyzing the accuracy of a CNN using TensorBoard
      1. Getting started with Google Colaboratory
      2. Defining and training the model
      3. Introducing some of the measurements
    3. Summary
    4. Questions
    5. Further reading
  15. Preparing the Input of Chatbots with Restricted Boltzmann Machines (RBMs) and Principal Component Analysis (PCA)
    1. Defining basic terms and goals
    2. Introducing and building an RBM
      1. The architecture of an RBM
      2. An energy-based model
      3. Building the RBM in Python
        1. Creating a class and the structure of the RBM
        2. Creating a training function in the RBM class
        3. Computing the hidden units in the training function
        4. Random sampling of the hidden units for the reconstruction and contrastive divergence
        5. Reconstruction
        6. Contrastive divergence
        7. Error and energy function
      4. Running the epochs and analyzing the results
    3. Using the weights of an RBM as feature vectors for PCA
      1. Understanding PCA
        1. Mathematical explanation
      2. Using TensorFlow's Embedding Projector to represent PCA
      3. Analyzing the PCA to obtain input entry points for a chatbot
    4. Summary
    5. Questions
    6. Further reading
  16. Setting Up a Cognitive NLP UI/CUI Chatbot
    1. Basic concepts
      1. Defining NLU
      2. Why do we call chatbots "agents"?
      3. Creating an agent to understand Dialogflow
      4. Entities
      5. Intents
      6. Context
    2. Adding fulfillment functionality to an agent
      1. Defining fulfillment
      2. Enhancing the cogfilmdr agent with a fulfillment webhook
      3. Getting the bot to work on your website
    3. Machine learning agents
      1. Using machine learning in a chatbot
      2. Speech-to-text
      3. Text-to-speech
      4. Spelling
      5. Why are these machine learning algorithms important?
    4. Summary
    5. Questions
    6. Further reading
  17. Improving the Emotional Intelligence Deficiencies of Chatbots
    1. From reacting to emotions, to creating emotions
      1. Solving the problems of emotional polysemy
        1. The greetings problem example
        2. The affirmation example
        3. The speech recognition fallacy
        4. The facial analysis fallacy
      2. Small talk
        1. Courtesy
        2. Emotions
    2. Data logging
    3. Creating emotions
    4. RNN research for future automatic dialog generation
      1. RNNs at work
        1. RNN, LSTM, and vanishing gradients
      2. Text generation with an RNN
      3. Vectorizing the text
      4. Building the model
      5. Generating text
    5. Summary
    6. Questions
    7. Further reading
  18. Genetic Algorithms in Hybrid Neural Networks
    1. Understanding evolutionary algorithms
      1. Heredity in humans
        1. Our cells
        2. How heredity works
      2. Evolutionary algorithms
        1. Going from a biological model to an algorithm
        2. Basic concepts
      3. Building a genetic algorithm in Python
        1. Importing the libraries
        2. Calling the algorithm
        3. The main function
        4. The parent generation process
        5. Generating a parent
        6. Fitness
        7. Display parent
        8. Crossover and mutation
        9. Producing generations of children
        10. Summary code
      4. Unspecified target to optimize the architecture of a neural network with a genetic algorithm
        1. A physical neural network
        2. What is the nature of this mysterious S-FNN?
        3. Calling the algorithm cell
        4. Fitness cell
        5. ga_main() cell
    2. Artificial hybrid neural networks
      1. Building the LSTM
      2. The goal of the model
    3. Summary
    4. Questions
    5. Further reading
  19. Neuromorphic Computing
    1. Neuromorphic computing
    2. Getting started with Nengo
      1. Installing Nengo and Nengo GUI
      2. Creating a Python program
      3. A Nengo ensemble
        1. Nengo neuron types
        2. Nengo neuron dimensions
        3. A Nengo node
      4. Connecting Nengo objects
      5. Visualizing data
      6. Probes
    3. Applying Nengo's unique approach to critical AI research areas
    4. Summary
    5. Questions
    6. References
    7. Further reading
  20. Quantum Computing
    1. The rising power of quantum computers
      1. Quantum computer speed
      2. Defining a qubit
      3. Representing a qubit
      4. The position of a qubit
        1. Radians, degrees, and rotations
        2. The Bloch sphere
      5. Composing a quantum score
        1. Quantum gates with Quirk
        2. A quantum computer score with Quirk
        3. A quantum computer score with IBM Q
    2. A thinking quantum computer
      1. Representing our mind's concepts
      2. Expanding MindX's conceptual representations
      3. The MindX experiment
        1. Preparing the data
        2. Transformation functions – the situation function
        3. Transformation functions – the quantum function
        4. Creating and running the score
        5. Using the output
    3. Summary
    4. Questions
    5. Further reading
  21. Answers to the Questions
    1. Chapter 1 – Getting Started with Next-Generation Artificial Intelligence through Reinforcement Learning
    2. Chapter 2 – Building a Reward Matrix – Designing Your Datasets
    3. Chapter 3 – Machine Intelligence – Evaluation Functions and Numerical Convergence
    4. Chapter 4 – Optimizing Your Solutions with K-Means Clustering
    5. Chapter 5 – How to Use Decision Trees to Enhance K-Means Clustering
    6. Chapter 6 – Innovating AI with Google Translate
    7. Chapter 7 – Optimizing Blockchains with Naive Bayes
    8. Chapter 8 – Solving the XOR Problem with a Feedforward Neural Network
    9. Chapter 9 – Abstract Image Classification with Convolutional Neural Networks (CNNs)
    10. Chapter 10 – Conceptual Representation Learning
    11. Chapter 11 – Combining Reinforcement Learning and Deep Learning
    12. Chapter 12 – AI and the Internet of Things
    13. Chapter 13 – Visualizing Networks with TensorFlow 2.x and TensorBoard
    14. Chapter 14 – Preparing the Input of Chatbots with Restricted Boltzmann Machines (RBMs) and Principal Component Analysis (PCA)
    15. Chapter 15 – Setting Up a Cognitive NLP UI/CUI Chatbot
    16. Chapter 16 – Improving the Emotional Intelligence Deficiencies of Chatbots
    17. Chapter 17 – Genetic Algorithms in Hybrid Neural Networks
    18. Chapter 18 – Neuromorphic Computing
    19. Chapter 19 – Quantum Computing
  22. Other Books You May Enjoy
  23. Index