In Figure 7.1, we represented a function by drawing samples from a one-dimensional Gaussian. An alternative is to use a multivariate Gaussian distribution to get a sample vector of length len(x). In fact, you may want to try to generate a figure such as Figure 7.1 by replacing np.random.normal(0, 1, len(x)) with np.random.multivariate_normal(np.zeros_like(x), np.eye(len(x))).
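As a quick check, both statements draw a vector of len(x) independent standard normal values; a minimal sketch (the particular x values are arbitrary, chosen only for illustration):

```python
import numpy as np

x = np.linspace(0, 1, 10)

# Independent draws: one standard normal value per point in x
a = np.random.normal(0, 1, len(x))

# Same distribution, written as a multivariate Gaussian with
# zero mean and identity covariance (unit variances, zero covariances)
b = np.random.multivariate_normal(np.zeros_like(x), np.eye(len(x)))

print(a.shape, b.shape)  # both are vectors of length 10
```

The two draws are different random samples, but they come from the same distribution: 10 independent standard normals.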
You will see that the first statement is equivalent to the second, but now we can use the covariance matrix to encode information about how data points are related to each other. By setting the covariance matrix to np.eye(len(x)), we are saying that each of the 10 points has a variance of 1 and that the covariance between any pair of them is 0 (thus, they are independent). If we replace those zeros with other (positive) numbers, the covariances will tell a different story. Thus, to represent functions probabilistically, we just need a multivariate Gaussian with a suitable covariance matrix, as we will see in the next section.
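Here is a minimal sketch of that idea: filling the off-diagonal with positive values makes nearby points co-vary, so a sampled vector looks smoother than independent noise. The exponential-decay formula used to build the covariance below is just one illustrative choice of how to assign larger covariances to points that are closer together:

```python
import numpy as np

n = 10
x = np.linspace(0, 1, n)

# Identity covariance: unit variances, zero covariances -> independent points
cov_indep = np.eye(n)

# A covariance with positive off-diagonal entries: points close together
# in x get a covariance near 1, distant points a covariance near 0
# (one illustrative way to build such a matrix, not the only one)
cov_related = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 0.2) ** 2)
cov_related += 1e-6 * np.eye(n)  # tiny jitter for numerical stability

# Draw one function-like sample under each covariance
f_indep = np.random.multivariate_normal(np.zeros(n), cov_indep)
f_related = np.random.multivariate_normal(np.zeros(n), cov_related)
```

Plotting f_related against x typically shows a smoother curve than f_indep, because neighboring points are no longer independent.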