Examples

5.1     Software dependencies for shallow net in Keras

5.2     Loading MNIST data

5.3     Flattening two-dimensional images to one dimension

5.4     Converting pixel integers to floats

5.5     Converting integer labels to one-hot

5.6     Keras code to architect a shallow neural network

5.7     Keras code to train our shallow neural network

8.1     Keras code to architect an intermediate-depth neural network

8.2     Keras code to compile our intermediate-depth neural network

8.3     Keras code to train our intermediate-depth neural network

9.1     Weight initialization with values sampled from a standard normal distribution

9.2     Architecture for a single dense layer of sigmoid neurons

9.3     Weight initialization with values sampled from a Glorot normal distribution

9.4     Additional dependencies for deep net in Keras

9.5     Deep net in Keras model architecture

9.6     Deep net in Keras model compilation

9.7     Regression model dependencies

9.8     Regression model network architecture

9.9     Compiling a regression model

9.10   Fitting a regression model

9.11   Predicting the median house price in a particular suburb of Boston

9.12   Using TensorBoard while fitting a model in Keras

10.1   Dependencies for LeNet in Keras

10.2   Retaining two-dimensional image shape

10.3   CNN model inspired by LeNet-5

10.4   CNN model inspired by AlexNet

10.5   CNN model inspired by VGGNet

10.6   Loading the VGGNet19 model for transfer learning

10.7   Adding classification layers to transfer-learning model

10.8   Defining data generators

10.9   Training the transfer-learning model

11.1   Converting a sentence to lowercase

11.2   Removing stop words and punctuation with a list comprehension

11.3   Adding word stemming to our list comprehension

11.4   Detecting collocated bigrams

11.5   Removing capitalization and punctuation from Project Gutenberg corpus

11.6   Detecting collocated bigrams with more conservative thresholds

11.7   Creating a “clean” corpus that includes bigrams

11.8   Running word2vec

11.9   t-SNE for dimensionality reduction

11.10 Static two-dimensional scatterplot of word-vector space

11.11 Interactive bokeh plot of two-dimensional word-vector data

11.12 Loading sentiment classifier dependencies

11.13 Setting dense sentiment classifier hyperparameters

11.14 Loading IMDb film review data

11.15 Printing the number of tokens in six reviews

11.16 Printing a review as a character string

11.17 Printing a full review as a character string

11.18 Standardizing input length by padding and truncating

11.19 Dense sentiment classifier architecture

11.20 Compiling our sentiment classifier

11.21 Creating an object and directory for checkpointing model parameters after each epoch

11.22 Fitting our sentiment classifier

11.23 Loading model parameters

11.24 Predicting ŷ for all validation data

11.25 Printing a full validation review

11.26 Plotting a histogram of validation data ŷ values

11.27 Calculating ROC AUC for validation data

11.28 Creating a ydf DataFrame of y and ŷ values

11.29 Ten cases of negative validation reviews with high ŷ scores

11.30 Ten cases of positive validation reviews with low ŷ scores

11.31 Additional CNN dependencies

11.32 Convolutional sentiment classifier hyperparameters

11.33 Convolutional sentiment classifier architecture

11.34 RNN sentiment classifier hyperparameters

11.35 RNN sentiment classifier architecture

11.36 LSTM sentiment classifier hyperparameters

11.37 LSTM sentiment classifier architecture

11.38 Bidirectional LSTM sentiment classifier architecture

11.39 Stacked recurrent model architecture

11.40 Multi-ConvNet sentiment classifier hyperparameters

11.41 Multi-ConvNet sentiment classifier architecture

12.1   Generative adversarial network dependencies

12.2   Loading the Quick, Draw! data

12.3   Discriminator model architecture

12.4   Compiling the discriminator network

12.5   Generator model architecture

12.6   Adversarial model architecture

12.7   Compiling the adversarial network

12.8   GAN training

12.9   Plotting our GAN training loss

12.10 Plotting our GAN training accuracy

13.1   Cart-Pole DQN hyperparameters

13.2   A deep Q-learning agent

13.3   DQN agent interacting with an OpenAI Gym environment

14.1   Dependencies for building a Keras layer-based deep net in TensorFlow without loading the Keras library
