Index
C
- categorical cross-entropy loss, Compiling the Model
- CelebFaces Attributes (CelebA) dataset, Using VAEs to Generate Faces
- character tokens, Tokenization
- CIFAR-10 dataset, Loading the Data, Analysis of the WGAN
- CMA-ES (covariance matrix adaptation evolution strategy), Training the Controller-Output from the Controller Training
- CNTK, Keras and TensorFlow
- code examples, obtaining and using, Prerequisites, Using Code Examples, Setting Up Your Environment
- comments and questions, How to Contact Us
- composition (see music generation)
- concatenate layer, The Generators (U-Net)
- content loss, Content Loss-Content Loss
- convolutional layers in neural networks, Convolutional Layers-Convolutional Layers, Summary
- convolutional transpose layers, The Decoder-The Decoder
- covariate shift, Batch Normalization
- CycleGAN (cycle-consistent adversarial network)
- analysis of, Analysis of the CycleGAN
- benefits of, CycleGAN
- compiling, Compiling the CycleGAN-Compiling the CycleGAN
- CycleGAN versus pix2pix, CycleGAN
- discriminators, The Discriminators
- generators (ResNet), The Generators (ResNet)-The Generators (ResNet)
- generators (U-Net), The Generators (U-Net)-The Generators (U-Net)
- Keras-GAN code repository, Your First CycleGAN
- Monet-style transfer example, Creating a CycleGAN to Paint Like Monet-Neural Style Transfer
- overview of, Overview
- published paper on, CycleGAN
- training, Training the CycleGAN
- training data, Your First CycleGAN
G
- gated recurrent units (GRUs), Long Short-Term Memory Networks, Gated Recurrent Units
- generative adversarial networks (GANs)
- challenges of, GAN Challenges-Tackling the GAN Challenges
- defining, Your First GAN-The Generator
- discriminators, The Discriminator, The Discriminators
- "ganimal" example, Ganimals-Ganimals
- published paper on, Generative Adversarial Networks
- theory underlying, Introduction to GANs
- training, Training the GAN-Training the GAN
- Wasserstein GAN, Wasserstein GAN-Analysis of the WGAN
- WGAN-GP, WGAN-GP-Analysis of WGAN-GP
- generative deep learning
- additional resources, Other Resources
- advances in generative modeling, The Rise of Generative Modeling-The Rise of Generative Modeling, The Transformer-AI Music
- challenges of, The Challenges of Generative Modeling-Representation Learning
- future of, Conclusion-Conclusion
- Generative modeling framework, The Generative Modeling Framework-The Generative Modeling Framework
- history of generative modeling, Five Years of Progress-Five Years of Progress
- introduction to, What Is Generative Modeling?-Advances in Machine Learning
- learning objectives and approach, Objective and Approach
- learning prerequisites, Prerequisites
- probabilistic generative models, Probabilistic Generative Models-Hello Wrodl! Continued
- generators
- GloVe (“Global Vectors”), Model Architecture, BERT
- Goodfellow, Ian, Generative Adversarial Networks
- Google Colaboratory, Other Resources
- GPT-2 language model, The Rise of Generative Modeling, GPT-2
- gradient descent, Neural Style Transfer
- Gram matrices, Style Loss
H
- Ha, David, Play, The MDN-RNN, Parallelizing CMA-ES
- Hello Wrodl! example, Hello Wrodl!-Hello Wrodl! Continued
- hidden layers, Deep Neural Networks
- hidden state, The LSTM Layer, The LSTM Cell
- Hinton, Geoffrey, Advances in Machine Learning, Dropout Layers
- Hochreiter, Sepp, Long Short-Term Memory Networks
- Hou, Xianxu, Analysis of the VAE
- Hull, Jonathan, The Lipschitz Constraint
- hyperparameters, Hyperparameters
I
- identity, Compiling the CycleGAN
- image generation (see also facial image generation; neural style transfer technique)
- BigGAN, BigGAN
- CIFAR-10 dataset for, Loading the Data
- generative modeling process, What Is Generative Modeling?
- generative versus discriminative modeling, Generative Versus Discriminative Modeling
- key breakthrough in, Advances in Machine Learning
- ProGAN, ProGAN
- progress in facial image generation, The Rise of Generative Modeling
- representation learning for, Representation Learning
- rise of generative modeling for, The Rise of Generative Modeling
- Self-Attention GAN (SAGAN), Self-Attention GAN (SAGAN)
- StyleGAN, StyleGAN
- ImageNet dataset, Content Loss
- ImageNet Large Scale Visual Recognition Challenge (ILSVRC), Advances in Machine Learning
- in-dream training, In-Dream Training-Challenges of In-Dream Training
- inference, Inference
- instance normalization layers, The Generators (U-Net)
K
- Keras
- attention mechanisms in, Building an Attention Mechanism in Keras-Building an Attention Mechanism in Keras
- autoencoder creation in, The Encoder
- backends for, Keras and TensorFlow
- benefits of, Keras and TensorFlow
- content loss calculation in, Content Loss
- Conv2DTranspose layer, The Decoder, The Generator
- custom loss function creation, The MDN-RNN Loss Function
- CycleGAN creation and training, Your First CycleGAN-Analysis of the CycleGAN
- data loading, Loading the Data
- decoder creation in, The Decoder
- documentation, Compiling the Model
- fit_generator method, Training the VAE
- GAN discriminator creation in, The Discriminator
- importing, Setting Up Your Environment
- inference model in, Inference
- LSTM in, Long Short-Term Memory Networks
- model building, Building the Model-Building the Model
- model compilation, Compiling the Model
- model evaluation, Evaluating the Model
- model improvement, Improving the Model-Putting It All Together
- model training, Training the Model
- MuseGAN generator in, Putting It All Together
- PatchGAN discriminators in, The Discriminators
- residual blocks in, The Generators (ResNet)
- U-Net generators in, The Generators (U-Net)
- VAE creation in, The Encoder
- Keras layers
- Activation, Putting It All Together
- BatchNormalization, Batch Normalization
- Bidirectional, Bidirectional Cells
- Concatenate, The Generators (U-Net)
- Conv2D, Convolutional Layers
- Conv2DTranspose, The Decoder
- Conv3D, The Critic
- Dense, Building the Model
- Dropout, Dropout Layers
- Embedding, The Embedding Layer
- Flatten, Building the Model
- GRU, Long Short-Term Memory Networks
- Input, Building the Model
- Lambda, The Encoder
- LeakyReLU, Putting It All Together
- LSTM, The LSTM Layer
- Reshape, Building an Attention Mechanism in Keras
- UpSampling2D, The Generator
- Kingma, Diederik, Variational Autoencoders
- Kullback–Leibler (KL) divergence, The Loss Function
L
- L-BFGS-B algorithm, Running the Neural Style Transfer
- labels, Generative Versus Discriminative Modeling
- language translation, Encoder–Decoder Models, Attention
- layers, Deep Neural Networks
- LeakyReLU, Building the Model
- likelihood, Probabilistic Generative Models
- Lipschitz constraint, The Lipschitz Constraint, Analysis of the WGAN
- loss functions, Compiling the Model
- LSTM (long short-term memory) networks
- with attention mechanism, Your First Music-Generating RNN
- dataset used, Your First LSTM Network
- embedding layer, The Embedding Layer
- generating datasets, Building the Dataset
- generating new text, Generating New Text-Generating New Text
- history of, Long Short-Term Memory Networks
- LSTM architecture, The LSTM Architecture
- LSTM cell, The LSTM Cell-The LSTM Cell
- LSTM layer, The LSTM Layer-The LSTM Layer
- published paper on, Long Short-Term Memory Networks
- tokenizing the text, Tokenization-Tokenization
M
- machine learning
- machine painting (see style transfer)
- machine writing (see text data generation)
- Machine-Learning-as-a-Service (MLaaS), Advances in Machine Learning
- Maluuba NewsQA dataset, A Question-Answer Dataset
- masked language model, BERT
- maximum likelihood estimation, Probabilistic Generative Models
- MDN (mixture density network), The MDN-RNN, Collecting Data to Train the RNN-The MDN-RNN Loss Function
- mean squared error loss, Compiling the Model
- MIDI files, Musical Notation
- mode collapse, Mode Collapse
- models
- CycleGAN, CycleGAN-Analysis of the CycleGAN
- deep neural networks, Your First Deep Neural Network-Putting It All Together
- encoder-decoder models, Encoder–Decoder Models-Encoder–Decoder Models
- generative adversarial networks (GANs), Generative Adversarial Networks-Tackling the GAN Challenges
- generative modeling, What Is Generative Modeling?-The Generative Modeling Framework
- generative versus discriminative modeling, Generative Versus Discriminative Modeling
- improving models, Improving the Model-Putting It All Together
- LSTM (long short-term memory) networks, Your First LSTM Network-The LSTM Cell
- neural style transfer, Neural Style Transfer-Analysis of the Neural Style Transfer Model
- parametric modeling, Probabilistic Generative Models
- probabilistic generative models, Probabilistic Generative Models-Hello Wrodl! Continued
- probabilistic versus deterministic, What Is Generative Modeling?
- question-answer generators, A Question and Answer Generator-Model Results
- RNNs (recurrent neural networks), Your First Music-Generating RNN-Generating Polyphonic Music
- variational autoencoders (VAEs), Variational Autoencoders-Summary
- Wasserstein GAN, Wasserstein GAN-Analysis of the WGAN
- WGAN-GP, WGAN-GP-Analysis of WGAN-GP
- World Model architecture, World Model Architecture-The Controller
- Monet-to-photo dataset, Creating a CycleGAN to Paint Like Monet
- multihead attention module, Multihead Attention
- multilayer RNNs, Stacked Recurrent Networks
- MuseGAN, Your First MuseGAN-Analysis of the MuseGAN
- MuseNet, MuseNet, AI Music
- MuseScore, Preliminaries
- music generation
- challenges of, Compose
- data extraction, Musical Notation
- dataset used, Preliminaries
- generating polyphonic music, Generating Polyphonic Music
- importing MIDI files, Musical Notation
- MuseGAN analysis, Analysis of the MuseGAN
- MuseGAN creation, Your First MuseGAN-Putting It All Together
- MuseGAN critic, The Critic
- MuseGAN example, The Musical Organ
- music versus text generation, Compose
- musical notation, Musical Notation
- prerequisites to, Preliminaries
- RNN (recurrent neural network) for, Your First Music-Generating RNN-Generating Polyphonic Music
- music21 library, Musical Notation
P
- padding, Convolutional Layers
- painting (see style transfer)
- Papers with Code, Other Resources
- parameters, trainable and nontrainable, Batch Normalization
- parametric modeling, Probabilistic Generative Models
- PatchGAN discriminators, The Discriminators
- pix2pix, CycleGAN
- positional encoding, Positional Encoding
- probabilistic generative models
- probability density function, Probabilistic Generative Models, The Encoder
- ProGAN, ProGAN
- Project Gutenberg, Your First LSTM Network
- Python, Setting Up Your Environment
S
- sample space, Probabilistic Generative Models
- scaled dot-product attention, Multihead Attention
- Schmidhuber, Jürgen, Long Short-Term Memory Networks, Play
- self-attention, Multihead Attention
- Self-Attention GAN (SAGAN), Self-Attention GAN (SAGAN)
- sequence modeling, Five Years of Progress
- Sequential models (Keras), Building the Model-Building the Model
- sigmoid activation, Building the Model
- skip connections, The Generators (U-Net)
- softmax activation, Building the Model
- stacked recurrent networks, Stacked Recurrent Networks
- standard deviation, The Encoder
- standard normal curves, The Encoder
- stemming, Tokenization
- stochastic (random) elements, What Is Generative Modeling?
- strides parameter (Keras), Convolutional Layers
- structured data, Structured and Unstructured Data
- style loss, Style Loss-Style Loss
- style transfer
- StyleGAN, The Rise of Generative Modeling, StyleGAN
- supervised learning, Generative Versus Discriminative Modeling
T
- TensorFlow, Keras and TensorFlow
- text data generation
- text summarization, Encoder–Decoder Models
- Theano, Keras and TensorFlow
- tokenization, Tokenization-Tokenization
- total variance loss, Total Variance Loss
- trainable parameters, Batch Normalization
- training data, What Is Generative Modeling?
- training process, Deep Neural Networks
- Transformer
- analysis of, Analysis of the Transformer
- BERT model, BERT
- decoder layers, The Decoder
- GPT-2 language model, GPT-2
- history of, Your First Music-Generating RNN
- model architecture, The Transformer
- multihead attention layer, Multihead Attention
- MuseNet model, MuseNet
- positional encoding function, Positional Encoding
- published paper on, The Transformer
- truncation trick, BigGAN
V
- validity, Compiling the CycleGAN
- vanishing gradient problem, The Generators (ResNet), Long Short-Term Memory Networks
- variance, The Encoder
- variational autoencoders (VAEs)
- autoencoder analysis, Analysis of the Autoencoder-Analysis of the Autoencoder
- autoencoder example, Your First Autoencoder-Joining the Encoder to the Decoder
- autoencoder parts, Autoencoders
- decoders, The Decoder-The Decoder
- encoders, The Encoder-The Encoder
- facial image generation using, Using VAEs to Generate Faces-Morphing Between Faces
- generative art example, The Art Exhibition-The Art Exhibition, The Variational Art Exhibition-The Variational Art Exhibition
- published paper on, Variational Autoencoders, Analysis of the VAE
- VAE analysis, Analysis of the Variational Autoencoder
- VAE build in Keras, The Encoder
- VAE diagram, The Encoder
- VAE loss function, The Loss Function
- VAE parts, Building a Variational Autoencoder-The Loss Function
- World Model architecture, The Variational Autoencoder
- World Model training, Training the VAE-The decoder model
- VGG19 network, Content Loss
- virtual environments, Setting Up Your Environment
W
- Wasserstein GANs (WGANs)
- Wasserstein GAN–Gradient Penalty (WGAN-GP)
- weight clipping, Weight Clipping, Analysis of the WGAN
- weights, Deep Neural Networks
- Welling, Max, Variational Autoencoders
- World Models
- collecting random rollout data, Collecting Random Rollout Data
- collecting RNN training data, Collecting Data to Train the RNN
- model setup, Setup
- published paper on, Play
- training in-dream, In-Dream Training-Challenges of In-Dream Training
- training overview, Training Process Overview
- training the controller, Training the Controller-Output from the Controller Training
- training the MDN-RNN, Training the MDN-RNN-The MDN-RNN Loss Function
- training the VAE, Training the VAE-The decoder model
- World Model architecture, World Model Architecture-The Controller
- World Models paper, Play, The MDN-RNN, The Future of Generative Modeling
- Wrodl!, Hello Wrodl!