Algorithm

Now, we will better understand the Gaussian prototypical network by going through it step by step:

  1. Let's say we have a dataset, D = {(x1, y1), (x2, y2), ..., (xn, yn)}, where x is the feature and y is the label. Let's say we have binary labels, which means we have only two classes, 0 and 1. We will sample data points at random without replacement from each of the classes in our dataset, D, and create our support set, S.
  2. Similarly, we sample data points at random per class and create the query set, Q.
  3. We will pass the support set to our embedding function, f(). The embedding function will generate the embeddings for our support set, along with the covariance matrix.
  4. We calculate the inverse of the covariance matrix.
  5. We compute the prototype of each class in the support set as follows:

$$p^c = \frac{\sum_i s_i^c \circ x_i^c}{\sum_i s_i^c}$$

In this equation, $s_i^c$ is the diagonal of the inverse covariance matrix, $x_i^c$ denotes the embeddings of the support set, and the superscript c denotes the class.
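The variance-weighted prototype computation described above can be sketched in NumPy. The function name and array shapes here are illustrative assumptions, not from the text: each support embedding is weighted per dimension by the diagonal of its inverse covariance matrix, then normalized by the summed weights.

```python
import numpy as np

def class_prototype(embeddings, inv_cov_diag):
    """Variance-weighted prototype for one class (illustrative sketch).

    embeddings:   (n, d) array of support embeddings for the class
    inv_cov_diag: (n, d) array of diagonals of each embedding's
                  inverse covariance matrix (per-dimension confidences)
    """
    # Weight each embedding dimension by its confidence, then normalize
    weighted = inv_cov_diag * embeddings                     # (n, d)
    return weighted.sum(axis=0) / inv_cov_diag.sum(axis=0)   # (d,)

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 4))             # 5 support points, 4-dim embeddings
s = rng.uniform(0.5, 2.0, size=(5, 4))  # positive inverse-variance diagonals
p = class_prototype(x, s)
print(p.shape)  # (4,)
```

Note that when all confidences are equal, this reduces to the plain mean of the embeddings, which is exactly the prototype used by the standard prototypical network.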

  6. After computing the prototype of each class in the support set, we learn the embeddings for the query set, Q. Let's say x' is the embedding of the query point.
  7. We calculate the distance of the query point embeddings to the class prototypes as follows:

$$d_c(x') = (x' - p^c)^T S_c (x' - p^c)$$

Here, $S_c$ is the inverse covariance matrix of class c.

  8. After calculating the distance between the class prototypes and the query set embeddings, we predict the class of the query point as the class that has the minimum distance, as follows:

$$\hat{y} = \operatorname{argmin}_c \, d_c(x')$$
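The distance-and-predict steps above can be sketched as follows. This is a minimal illustration assuming a diagonal inverse covariance per class and prototypes already computed; the names and shapes are hypothetical:

```python
import numpy as np

def predict(query_emb, prototypes, inv_cov_diags):
    """Predict the class whose prototype is nearest to the query.

    query_emb:     (d,)   embedding x' of the query point
    prototypes:    (C, d) one prototype per class
    inv_cov_diags: (C, d) diagonal of each class's inverse covariance
    Returns the index of the minimum-distance class and all distances.
    """
    diff = query_emb - prototypes                        # (C, d)
    # Squared distance weighted by the diagonal inverse covariance
    distances = (diff ** 2 * inv_cov_diags).sum(axis=1)  # (C,)
    return int(np.argmin(distances)), distances

# Two classes in a 2-dim embedding space, identity inverse covariance
protos = np.array([[0.0, 0.0], [3.0, 3.0]])
s_c = np.ones_like(protos)
query = np.array([0.5, 0.2])
label, d = predict(query, protos, s_c)
print(label)  # 0 (the query lies closest to the first prototype)
```

With identity inverse covariance this reduces to squared Euclidean distance; the per-class covariance simply down-weights dimensions where a class's support embeddings are noisy.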
