© Isaiah Hull 2021
I. Hull, Machine Learning for Economics and Finance in TensorFlow 2
https://doi.org/10.1007/978-1-4842-6373-0_9

9. Generative Models

Isaiah Hull
Nacka, Sweden
Machine learning models can be divided into two categories: discriminative and generative. Discriminative models are trained to perform classification or regression. That is, we input a set of features and expect to receive probabilities of class labels or predicted values as outputs. In contrast, generative models are trained to learn the underlying distribution of the data. Once we have trained a generative model, we can use it to produce new examples of a class. Figure 9-1 illustrates the difference between the two categories of model.
Figure 9-1. Comparison of discriminator and generator models

Thus far, we have focused on discriminative models in this book; however, there was one exception: latent Dirichlet allocation (LDA; Blei et al. 2003), which we introduced in Chapter 6. The LDA model took a text corpus as an input and returned a set of topics, where each topic was defined as a distribution over the vocabulary.

There has recently been considerable progress in the generative machine learning literature, and much of it has been concentrated in the development of two types of models: variational autoencoders (VAEs) and generative adversarial networks (GANs). With respect to image, text, and music generation, these two categories of model have delivered considerable breakthroughs.

For the most part, this progress hasn't yet reached the economics and finance disciplines; however, some work in economics has begun to make use of GANs. In the final section of the chapter, we will briefly discuss two recent applications of GANs in economics (Athey et al. 2019; Kaji et al. 2018) and speculate on potential future uses.

Variational Autoencoders

In Chapter 8, we introduced the concept of an autoencoder, which consisted of two networks with shared weights: an encoder and a decoder. The encoder transformed the model inputs into a latent state. The decoder took the latent state as an input and produced a reconstruction of the features input into the encoder. We trained the model by computing a reconstruction loss, which was a transformation of the difference between the inputs and their predicted values.

We used an autoencoder to perform dimensionality reduction, but also discussed other uses of autoencoders, which primarily involved generative tasks, such as the creation of novel images, music, and text. What we did not mention is that autoencoders suffer from two problems that hinder their performance on such tasks. Both problems, discussed below, relate to the way in which they generate latent states:
  1. The location and distribution of latent states: The latent states of an autoencoder with N latent nodes are points in ℝ^N. For many problems, these points will tend to cluster in the same area; however, the autoencoder does not allow us to explicitly determine how and where such points cluster in ℝ^N. This might seem unimportant, but it will ultimately determine which latent states can be fed into the model. If, for instance, we are attempting to generate an image, it would be useful to know what constitutes a valid latent state and, thus, what can be fed into the model. Otherwise, we will use states that are far away from anything the model has observed, which will yield a novel, but perhaps unconvincing, image. (See the short sketch after this list.)

  2. The performance of latent states not present in training: An autoencoder is trained to reconstruct inputs for a set of examples. For the latent state associated with a set of features, the decoder should yield outputs that resemble the input features. If, however, we perturb the latent vector slightly, there's no guarantee that the decoder will have the capacity to generate a convincing example from a point it has never visited.
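To make the first problem concrete, the following toy sketch (not from the book's listings; the data and model here are hypothetical) trains a small autoencoder on simulated data and then inspects where its latent states land. Nothing in the training objective pins down their location or spread, which is precisely what makes it unclear which points constitute valid decoder inputs.
import numpy as np
import tensorflow as tf
# Simulated data: 100 observations of 20 features.
X = np.random.normal(size = (100, 20)).astype('float32')
# Minimal dense autoencoder with a 2-node latent state.
toyInput = tf.keras.layers.Input(shape = (20,))
toyLatent = tf.keras.layers.Dense(2)(toyInput)
toyOutput = tf.keras.layers.Dense(20, activation = 'linear')(toyLatent)
toyAutoencoder = tf.keras.Model(toyInput, toyOutput)
toyEncoder = tf.keras.Model(toyInput, toyLatent)
# Train briefly on the reconstruction task.
toyAutoencoder.compile(loss = 'mse', optimizer = 'adam')
toyAutoencoder.fit(X, X, epochs = 5, verbose = 0)
# The latent states' location and spread are byproducts of training,
# not something the autoencoder lets us control.
states = toyEncoder.predict(X)
print(states.mean(axis = 0), states.std(axis = 0))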

Variational autoencoders (VAEs) were developed to overcome these limitations. Rather than having a latent state layer, VAEs have a mean layer, a log variance layer, and a sampling layer. The sampling layer draws from a normal distribution defined by the mean and log variance parameters in the preceding layers. The output of the sampling layer is then passed to the decoder as the latent state during the training process. Consequently, passing the same features to the encoder twice will yield a different latent state each time.

Beyond the differences in architecture, VAEs also modify the loss function to include the Kullback-Leibler (KL) divergence for each normal distribution in the sampling layer. The KL divergence penalizes the distance between each of the normal distributions and a normal distribution with both a mean and log variance of zero.
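For a single latent node with mean μ and log variance λ (so that σ² = exp(λ)), this KL divergence has a standard closed form, which is the expression we will implement in the loss function later in the chapter:

$$ D_{KL}\left(\mathcal{N}\left(\mu, \sigma^2\right) \,\|\, \mathcal{N}\left(0, 1\right)\right) = -\frac{1}{2}\left(1 + \log \sigma^2 - \mu^2 - \sigma^2\right) $$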

The combination of these features accomplishes three things. First, it eliminates the determinism of latent states. Each set of features will now be associated with a distribution of latent states, rather than a single latent state. This will tend to improve generative performance by forcing the model to treat each latent feature as a continuous variable. Second, it eliminates the sampling problem. We can now draw valid states randomly by making use of the sampling layer. And third, it corrects the issue with the location and distribution of latent states: the KL divergence component of the loss will push the distribution means toward zero and force the distributions to have similar variances.

The remainder of this section will focus on the implementation of VAEs in TensorFlow. For an extended overview of the development of VAE models and a detailed exploration of their theoretical properties, see Kingma and Welling (2019).

The example we’ll use in this chapter makes use of the GDP growth data we introduced in Chapter 8. As a refresher, it consisted of quarterly time series that spanned the period between 1961:Q2 and 2020:Q1 for 25 different OECD countries. In Chapter 8, we used dimensionality-reduction techniques to extract a small number of common components from the 25 series at each point in time.

In this chapter, we will instead use the GDP growth data to train a VAE that is capable of generating similar series. We will start in Listing 9-1 by importing the libraries we’ll use in this exercise and will then load and prepare the data. Notice that we transpose the GDP data, so that the columns correspond to a specific quarter and the rows correspond to countries. We’ll then convert the data to a np.array() and set parameters for the batch size and the number of output nodes in the latent space.
import tensorflow as tf
import pandas as pd
import numpy as np
# Define data path.
data_path = '../data/chapter9/'
# Load and transpose data.
GDP = pd.read_csv(data_path+'gdp_growth.csv',
        index_col = 'Date').T
# Print data preview.
print(GDP.head())
Time    4/1/61    7/1/61   10/1/61    1/1/62
AUS  -1.097616 -0.715607  1.139175  2.806800 ...
AUT  -0.349959  1.256452  0.227988  1.463310 ...
BEL   1.167163  1.275744  1.381074  1.346942 ...
CAN   2.529317  2.409293  1.396820  2.650176 ...
CHE   1.355571  1.242126  1.958044  0.575396 ...
# Convert data to numpy array.
GDP = np.array(GDP)
# Set number of countries and quarters.
nCountries, nQuarters = GDP.shape
# Set number of latent nodes and batch size.
latentNodes = 2
batchSize = 1
Listing 9-1

Prepare GDP growth data for use in a VAE

The next step is to define the VAE model architecture, which will consist of an encoder and a decoder, similar to the autoencoder model of Chapter 8. In contrast to the autoencoder, however, latent states will be sampled from a set of independent normal distributions during the training process. We’ll start by defining a function that performs the sampling task in Listing 9-2.
# Define function for sampling layer.
def sampling(params, batchSize = batchSize, latentNodes = latentNodes):
        mean, lvar = params
        epsilon = tf.random.normal(shape = (batchSize, latentNodes))
        return mean + tf.exp(lvar / 2.0) * epsilon
Listing 9-2

Define function to perform sampling task in VAE

Notice that the sampling layer does not contain any parameters of its own. Rather, it takes a pair of parameters as inputs, draws epsilon from a standard normal distribution for each output node in the latent state, and then transforms each draw using the mean and lvar parameters that correspond to the nodes in that state.
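As a quick check (not part of the book's listings), we can call the sampling function twice with the same parameters; it returns a draw with one value per latent node, and the two draws will almost surely differ, which is what allows the same input to map to different latent states.
# Hypothetical check: zero means and zero log variances, i.e.,
# draws from a standard normal for each latent node.
meanExample = tf.zeros((batchSize, latentNodes))
lvarExample = tf.zeros((batchSize, latentNodes))
draw1 = sampling([meanExample, lvarExample])
draw2 = sampling([meanExample, lvarExample])
print(draw1.shape)               # (1, 2)
print(np.allclose(draw1, draw2)) # Almost surely False.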

Once we have defined a sampling layer, we can also define an encoder model, which will closely resemble the one we constructed for the autoencoder model. We’ll do this in Listing 9-3. The only initial difference is that we’ll take the full time series for a country as an input, rather than the cross-section of values across countries at a point in time.

Another difference appears in the mean and lvar layers, which were not present in the autoencoder. These layers have the same number of nodes as the latent state. This is because they consist of mean and log variance parameter values for normal distributions that are associated with each of the nodes in the latent state.

We next define a Lambda layer, which accepts the sampling function we defined earlier and passes it the mean and lvar parameters. We can see that the sampling layer generates an output for each of the features (nodes) in the latent state. Finally, we define a functional model, encoder, which takes the input features – quarterly GDP growth observations – and returns a mean layer, a log variance layer, and sampled outputs using the means and log variances to parameterize normal distributions.
# Define input layer for encoder.
encoderInput = tf.keras.layers.Input(shape = (nQuarters,))
# Define latent state.
latent = tf.keras.layers.Input(shape = (latentNodes,))
# Define mean layer.
mean = tf.keras.layers.Dense(latentNodes)(encoderInput)
# Define log variance layer.
lvar = tf.keras.layers.Dense(latentNodes)(encoderInput)
# Define sampling layer.
encoded = tf.keras.layers.Lambda(sampling, output_shape=(latentNodes,))([mean, lvar])
# Define model for encoder.
encoder = tf.keras.Model(encoderInput, [mean, lvar, encoded])
Listing 9-3

Define encoder model for VAE
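
As a brief usage check (assuming the data from Listing 9-1 and the model above), we can pass a single country's series through the encoder and confirm that it returns three arrays, each with one row per example and one column per latent node.
# Encode the first country's GDP growth series.
meanOut, lvarOut, sampleOut = encoder.predict(GDP[0:1, :])
# Each output has shape (1, latentNodes).
print(meanOut.shape, lvarOut.shape, sampleOut.shape)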

In Listing 9-4, we’ll define functional models for the decoder model and the entire variational autoencoder. Similar to the decoder component of an autoencoder, it accepts the latent state as an input from the encoder and then produces a reconstruction of the inputs as an output. The full VAE model also bears similarity to an autoencoder, taking a time series as an input and transforming it into a reconstruction of the same time series.

The final step is to define the loss function, which consists of two components – the reconstruction loss and the KL divergence – and append it to the model, which we do in Listing 9-5. The reconstruction loss is no different from the one we used for the autoencoder. The KL divergence measures how far each of the sampling layer distributions is from a standard normal distribution. The further away they are, the higher the penalty.
# Define output for decoder.
decoded = tf.keras.layers.Dense(nQuarters, activation = 'linear')(latent)
# Define the decoder model.
decoder = tf.keras.Model(latent, decoded)
# Define functional model for autoencoder.
vae = tf.keras.Model(encoderInput, decoder(encoded))
Listing 9-4

Define decoder model for VAE

# Compute the reconstruction component of the loss.
reconstruction = tf.keras.losses.binary_crossentropy(
        vae.inputs[0], vae.outputs[0])
# Compute the KL loss component.
kl = -0.5 * tf.reduce_mean(1 + lvar - tf.square(mean) - tf.exp(lvar), axis = -1)
# Combine the losses and add them to the model.
combinedLoss = reconstruction + kl
vae.add_loss(combinedLoss)
Listing 9-5

Define VAE loss
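
One caveat worth flagging: binary cross-entropy implicitly treats the inputs and outputs as values in [0, 1], whereas GDP growth is real-valued. A minimal alternative sketch, not from the book's listings, swaps in a mean squared error reconstruction term while keeping the same KL penalty.
# Alternative reconstruction term (hypothetical): mean squared error.
reconstructionMSE = tf.keras.losses.mean_squared_error(
        vae.inputs[0], vae.outputs[0])
altLoss = reconstructionMSE + kl
# vae.add_loss(altLoss)  # Use in place of the add_loss() call above.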

Finally, in Listing 9-6, we compile and train the model. This leaves us with a trained variational autoencoder, which we can use to perform a variety of generative tasks, as we demonstrate in Listing 9-7. We can, for instance, use the predict() method of vae to generate the reconstruction for a given time series input. We can also generate a realization of the latent state for a given input, such as GDP growth for the United States. Finally, we can perturb that latent state by adding random noise and then use the predict() method of decoder to generate an entirely new time series from the modified latent state.
# Compile the model.
vae.compile(optimizer='adam')
# Fit model.
vae.fit(GDP, batch_size = batchSize, epochs = 100)
Listing 9-6

Compile and fit VAE

# Generate series reconstruction.
prediction = vae.predict(GDP[0,:].reshape(1, nQuarters))
# Generate (random) latent state realization from inputs.
meanState, lvarState, latentState = encoder.predict(GDP[0,:].reshape(1, nQuarters))
# Perturb latent state with random noise.
latentState = latentState + np.random.normal(size = latentState.shape)
# Pass perturbed latent state to decoder.
decoder.predict(latentState)
Listing 9-7

Generate latent states and time series with trained VAE.

Finally, in Figure 9-2, we show 25 generated time series that are based on a latent state realization for the US GDP growth series. We then perturb that original state over a 5x5 grid, where the rows add evenly spaced values over the [–1, 1] interval to the first latent feature and the columns add evenly spaced values over the [–1, 1] interval to the second latent feature. The series in the center of the grid, shown in red, adds [0, 0] and, thus, corresponds to the original latent state.
Figure 9-2. VAE-generated time series for GDP growth for the United States
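
A minimal sketch of how such a grid can be constructed (assuming the trained encoder and decoder above; the series index is hypothetical and should match the row of GDP used in the example) is the following.
# Encode one series and take the sampled latent state (third output).
seriesIdx = 0
baseState = encoder.predict(GDP[seriesIdx, :].reshape(1, nQuarters))[2]
# Build a 5x5 grid of perturbations over [-1, 1] for each latent feature.
offsets = np.linspace(-1, 1, 5)
generatedSeries = []
for rowOffset in offsets:
        for colOffset in offsets:
                perturbed = baseState + np.array([[rowOffset, colOffset]])
                generatedSeries.append(decoder.predict(perturbed))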

While this example was simple and the latent state contained only two nodes for the purpose of demonstration, the VAE architecture can be applied to a wide variety of problems. We can, for instance, add convolutional layers to the encoder and decoder and change the input and output shapes. That will give us a VAE that generates images. Alternatively, we could add LSTM cells to the encoder and decoder, which would give us a VAE that could generate text or music.1 Furthermore, an LSTM-based architecture could yield some improvements in time series generation over the dense network approach we adopted in this example.

Generative Adversarial Networks

Two families of models have dominated the generative machine learning literature: variational autoencoders and generative adversarial networks. VAEs, as we’ve seen, provide granular control over the generation of examples through the manipulation of latent states and the features they encode. GANs, in contrast, have been more successful at producing highly convincing examples of classes. For example, some of the most convincing generated images are produced using GANs.

As we discussed in the previous section, VAEs are a combination of two models: an encoder and a decoder, joined by a sampling layer. Similarly, GANs also consist of two models: a generator and a discriminator. The generator takes a random input vector, which we may think of as a latent state, and generates an example of a class, such as a real GDP growth time series (or an image, a sentence, or a musical score).

Once the generator component of a GAN has produced several examples of a class, they are passed to the discriminator, along with an equal number of true examples. In our case, this would be a combination of true and generated real GDP growth series. The discriminator is then trained to differentiate between the real and fake examples.

After the discriminator has finished the classification task, we can train the generator using an adversarial network, which combines both the generator and discriminator models. Just as was the case for the encoder and decoder components of the VAE, an adversarial network will share weights with both networks. The adversarial network will train the generator to maximize the loss of the discriminator network.

As Goodfellow et al. (2017) discuss, we may view the two networks as trying to maximize their respective payoffs in a zero-sum game, where the discriminator receives v(g, d) and the generator receives −v(g, d). The generator chooses samples, g, to trick the discriminator; and the discriminator chooses probabilities, d, for each of those samples. The equilibrium, characterized by a set of generated images, g, is given in Equation 9-1.

Equation 9-1. The equilibrium condition for image generation in a GAN.
$$ g^{\ast} = \arg\, \underset{g}{\min}\; \underset{d}{\max}\; v\left(g, d\right) $$

Consequently, when we train the adversarial part of the network, we must freeze the discriminator weights. This will constrain the network to improve the generation process, rather than weakening the discriminator. Iterating over these steps in the training process will ultimately yield the evolutionary equilibrium described in Equation 9-1.

Figure 9-3 illustrates the generator and discriminator networks of a GAN. To summarize, the generator yields novel examples, which are not drawn from the data. The discriminator combines those examples with true examples and then performs classification. And the adversarial network trains the generator by attaching it to a discriminator, but with frozen weights. Training over the network occurs iteratively.

Following the example from the section on VAEs, we’ll again make use of the GDP growth data, which we load and prepare in Listing 9-8. Our intention will be to train a GAN to generate credible GDP growth time series from a randomly drawn vector input. We will follow the approach to GAN construction described in Krohn et al. (2020).
Figure 9-3. Depiction of the generator and discriminator from a GAN

import tensorflow as tf
import pandas as pd
import numpy as np
# Define data path.
data_path = '../data/chapter9/'
# Load and transpose data.
GDP = pd.read_csv(data_path+'gdp_growth.csv',
        index_col = 'Date').T
# Convert pandas DataFrame to numpy array.
GDP = np.array(GDP)
Listing 9-8

Prepare GDP growth data for use in a GAN

In Listing 9-9, we define the generative model. We again follow the simple VAE example and draw a vector with two elements as an input to the generator. Since the input to the generator is analogous to the latent vector in a VAE, we can view the generator as playing the role of a decoder. This means we'll start with a narrow, bottleneck-type layer and upsample to the output, which will be a generated GDP growth time series.

The simplest version of the generator would consist of an input layer that accepts the latent vector and an output layer, which upsamples the input layer. Since our output layer consists of GDP growth values, we’ll use a linear activation function. We’ll also include a hidden layer with a relu activation, since the model will otherwise be unable to capture non-linearities.
# Set dimension of latent state vector.
nLatent = 2
# Set number of countries and quarters.
nCountries, nQuarters = GDP.shape
# Define input layer.
generatorInput = tf.keras.layers.Input(shape = (nLatent,))
# Define hidden layer.
generatorHidden = tf.keras.layers.Dense(16, activation="relu")(generatorInput)
# Define generator output layer.
generatorOutput = tf.keras.layers.Dense(nQuarters, activation="linear")(generatorHidden)
# Define generator model.
generator = tf.keras.Model(inputs = generatorInput, outputs = generatorOutput)
Listing 9-9

Define the generative model of a GAN

We’ll next define the discriminator in Listing 9-10. It will take real and generated GDP growth series as inputs, each of which will have a length of nQuarters. It will then produce a probability of being a real GDP growth series for each of the input series. Note that we did not compile generator, but did compile discriminator. This is because we will use an adversarial network to train generator.
# Define input layer.
discriminatorInput = tf.keras.layers.Input(shape = (nQuarters,))
# Define hidden layer.
discriminatorHidden = tf.keras.layers.Dense(16, activation="relu")(discriminatorInput)
# Define discriminator output layer.
discriminatorOutput = tf.keras.layers.Dense(1, activation="sigmoid")(discriminatorHidden)
# Define discriminator model.
discriminator = tf.keras.Model(inputs = discriminatorInput, outputs = discriminatorOutput)
# Compile discriminator.
discriminator.compile(loss='binary_crossentropy', optimizer=tf.optimizers.Adam(0.0001))
Listing 9-10

Define and compile the discriminator model of a GAN

We have now defined a generator model and a discriminator model. We have also compiled the discriminator. The next step is to define and compile an adversarial model, which will be used to train the generator. The adversarial model will share weights with the generator and will use a frozen version of the weights for the discriminator – that is, the weights will not update when we train the adversarial network, but they will update when we train the discriminator.

Listing 9-11 defines the adversarial network. The input to the adversarial network is a latent vector, so it will have the same size as the input to generator. We will next define the output of the generator model as timeSeries, which will be a fake GDP growth time series. We can then set the trainability of discriminator to False, so that it does not update while we’re training the adversarial network. Finally, we’ll set the output of the network to be the discriminator’s output and define and compile a functional model, adversarial. In Listing 9-12, we’ll train discriminator and adversarial.
# Define input layer for adversarial network.
adversarialInput = tf.keras.layers.Input(shape=(nLatent,))
# Define generator output as generated time series.
timeSeries = generator(adversarialInput)
# Set discriminator to be untrainable.
discriminator.trainable = False
# Compute predictions from discriminator.
adversarialOutput = discriminator(timeSeries)
# Define adversarial model.
adversarial = tf.keras.Model(adversarialInput, adversarialOutput)
# Compile adversarial network.
adversarial.compile(loss='binary_crossentropy', optimizer=tf.optimizers.Adam(0.0001))
Listing 9-11

Define and compile the adversarial model of a GAN

# Set batch size.
batch, halfBatch = 12, 6
for j in range(1000):
        # Draw real training data.
        idx = np.random.randint(nCountries, size = halfBatch)
        real_gdp_series = GDP[idx, :]
        # Generate fake training data.
        latentState = np.random.normal(size=[halfBatch, nLatent])
        fake_gdp_series = generator.predict(latentState)
        # Combine input data.
        features = np.concatenate((real_gdp_series, fake_gdp_series))
        # Create labels.
        labels = np.ones([batch, 1])
        labels[halfBatch:, :] = 0
        # Train discriminator.
        discriminator.train_on_batch(features, labels)
        # Generate latent state for adversarial net.
        latentState = np.random.normal(size=[batch, nLatent])
        # Generate labels for adversarial network.
        labels = np.ones([batch, 1])
        # Train adversarial network.
        adversarial.train_on_batch(latentState, labels)
Listing 9-12

Train the discriminator and the adversarial network

We start by defining the batch size. We then enter the training loop, which consists of several steps. First, we draw random integers and use them to select rows of the GDP matrix, each of which contains a GDP growth time series. These will serve as the real samples in the discriminator's training set. Next, we generate the fake data by drawing latent vectors and passing them to generator. We then combine both types of series and assign them the corresponding labels (i.e., 1 = real and 0 = fake). We can now pass this data to the discriminator to perform a single batch of training.

We next perform an iteration of training for the adversarial network. Here, we’ll generate a batch of latent states, input them into generator, and then train with the objective of tricking the discriminator into classifying them as real. Notice that we’re iterating over the training of two models and won’t use normal stopping criteria for the training process. Rather, we will look for a stable evolutionary equilibrium where neither model appears to be able to gain an advantage.

In Figure 9-4, we plot the model losses over time. We can see that after approximately 500 training iterations, neither model appears to improve substantially, indicating that we have reached a stable evolutionary equilibrium.
Figure 9-4. Discriminator and adversarial model losses by training iteration
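
Listing 9-12 does not itself record the losses plotted in Figure 9-4. A minimal sketch of how they might be collected (assuming the models defined above; matplotlib is not imported in the book's listings) follows. The train_on_batch() method returns the scalar loss for the batch, so we can store it at each iteration and plot both series afterward.
import matplotlib.pyplot as plt
# Record losses at each training iteration (hypothetical monitoring code).
batch, halfBatch = 12, 6
dLosses, aLosses = [], []
for j in range(1000):
        # Real and fake samples for the discriminator.
        idx = np.random.randint(nCountries, size = halfBatch)
        real_gdp_series = GDP[idx, :]
        latentState = np.random.normal(size = [halfBatch, nLatent])
        fake_gdp_series = generator.predict(latentState)
        features = np.concatenate((real_gdp_series, fake_gdp_series))
        labels = np.ones([batch, 1])
        labels[halfBatch:, :] = 0
        dLosses.append(discriminator.train_on_batch(features, labels))
        # Adversarial update against the frozen discriminator.
        latentState = np.random.normal(size = [batch, nLatent])
        aLosses.append(adversarial.train_on_batch(latentState, np.ones([batch, 1])))
# Plot the two loss series.
plt.plot(dLosses, label = 'discriminator')
plt.plot(aLosses, label = 'adversarial')
plt.legend()
plt.show()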

Finally, we plot one of the GDP growth series produced by the GAN in Figure 9-5. Taking nothing more than white noise vector inputs and information about the discriminator’s performance, the adversarial network managed to train the generator to produce a fairly credible fake GDP growth series after 1000 training iterations. Of course, we could have improved performance considerably by allowing for more latent features and a more advanced model architecture, such as an LSTM.
Figure 9-5. Example fake GDP growth series
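
For reference, a series like the one in Figure 9-5 can be produced by passing a freshly drawn latent vector to the trained generator (a small sketch, assuming the models above).
# Draw a random latent vector and generate a fake GDP growth series.
latentState = np.random.normal(size = [1, nLatent])
fakeSeries = generator.predict(latentState)
print(fakeSeries.shape)  # (1, 236)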

Applications in Economics and Finance

Throughout this chapter, we concentrated on what might seem like an obscure example: generating simulated GDP growth series through the use of generative machine learning models; however, such exercises are common in Monte Carlo simulation studies, which are used to test the small sample properties of estimators in econometrics. Without generating realistic series and adequately capturing interdependencies between series, it is challenging to accurately evaluate the properties of estimators.

In fact, one of the earliest applications of GANs in the economics literature was intended to achieve precisely this objective. Athey et al. (2019) consider the possibility of using Wasserstein GANs to simulate data that appears similar to observations from an existing dataset that is insufficiently large to be used in a Monte Carlo simulation. The value of this is that it allows an econometrician to avoid the two common alternatives to this approach: (1) drawing randomly from the small dataset itself, which will result in many repetitions of the same observations, and (2) generating simulated series that typically fail to accurately capture dependencies between series in the dataset. Athey et al. (2019) demonstrate the value of their approach (and GANs more generally) by evaluating estimators using artificial data generated by a WGAN.

In addition to Athey et al. (2019), recent work in the economics literature (Kaji et al. 2018) examines whether WGANs can be used to perform indirect inference, which is typically used to estimate structural models in economics and finance. Kaji et al. (2018) attempt to estimate a model in which workers of different types choose from a menu of wages and locations. The parameters they want to recover are structural and cannot be estimated directly from the data, which requires them to use an indirect inference method. Their approach couples model simulation with a discriminator, training the model until the simulated data is indistinguishable from the true data.

Beyond the existing applications, which are currently focused on model estimation, GANs and VAEs could also be used in off-the-shelf applications to image and text generation. While the use of image data remains limited in economics – even in discriminative models – GANs and VAEs offer the possibility of performing visual counterfactual simulations with economic data. In urban economics, for instance, we could infer how the placement of public infrastructure would have changed depending on the state of public policy and other factors.

Similarly, the growing natural language processing literature in economics and finance could make use of text generation to examine how, for instance, company press releases would differ when the underlying state of the economy or state of the industry changes.

Summary

Prior to this chapter, this book primarily discussed discriminative machine learning models. Such models perform classification or regression. That is, they take features from a training set and attempt to discriminate between different classes or make a continuous prediction for a target. Generative machine learning differs from discriminative machine learning, in that it generates new examples, rather than discriminating among examples.

Outside of the economics and finance disciplines, generative machine learning has been used to create compelling images, music, and text. It has also been used to improve Monte Carlo simulation (Athey et al. 2019) and perform indirect inference for structural models (Kaji et al. 2018) in economics.

In this chapter, we focused on two generative models: the variational autoencoder (VAE) and the generative adversarial network (GAN). The VAE model extended the autoencoder by including mean, log variance, and sampling layers. This improved the autoencoder by imposing restrictions on its latent space, forcing the latent distributions to cluster around the origin and to have log variances close to zero.

Similar to autoencoders and VAEs, GANs also consist of multiple component models: a generator model, a discriminator model, and an adversarial model. The generator model creates novel examples. The discriminator model attempts to classify them. And the adversarial model trains the generator to create compelling examples that trick the discriminator. The training process for GANs involves finding a stable evolutionary equilibrium.

Finally, we demonstrated how both VAEs and GANs can be used to generate artificial GDP growth data. We also discussed how they are being applied within economics currently and how they might be applied in the future if they gain more widespread adoption.

Bibliography

Athey, S., G.W. Imbens, J. Metzger, and E. Munro. 2019. “Using Wasserstein Generative Adversarial Networks for the Design of Monte Carlo Simulations.” Working Paper No. 3824.

Blei, D.M., A.Y. Ng, and M.I. Jordan. 2003. “Latent Dirichlet Allocation.” Journal of Machine Learning Research 3: 993–1022.

Goodfellow, I., Y. Bengio, and A. Courville. 2017. Deep Learning. Cambridge, MA: MIT Press.

Goodfellow, I.J., J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. 2014. “Generative Adversarial Networks.” NIPS 2014.

Kaji, T., E. Manresa, and G. Pouliot. 2018. “Deep Inference: Artificial Intelligence for Structural Estimation.” Working Paper.

Kingma, D.P., and M. Welling. 2019. “An Introduction to Variational Autoencoders.” Foundations and Trends in Machine Learning 12 (4): 307–392.

Krohn, J., G. Beyleveld, and A. Bassens. 2020. Deep Learning Illustrated: A Visual, Interactive Guide to Artificial Intelligence. Addison-Wesley.
