Algorithm

Now, let's see how entropy-based TAML works step by step:

  1. Let's say we have a model f parameterized by a parameter θ and a distribution over tasks p(T). First, we randomly initialize the model parameter θ.
  2. Sample a batch of tasks Ti from the distribution of tasks—that is, Ti ~ p(T). Say we've sampled three tasks: T = {T1, T2, T3}.
  3. Inner loop: For each task Ti in the batch of tasks T, we sample k data points and prepare our train and test datasets:

Di^train = {(x1, y1), (x2, y2), ..., (xk, yk)}

Di^test = {(x1, y1), (x2, y2), ..., (xk, yk)}

Then, we calculate the loss L_Ti(f_θ) on our training set Di^train, minimize the loss using gradient descent, and get the optimal parameters θ'i:

θ'i = θ − α ∇θ L_Ti(f_θ)

So, for each of the tasks, we sample k data points, prepare the train dataset, minimize the loss, and get the optimal parameters. Since we sampled three tasks, we'll have three optimal parameters: θ'1, θ'2, and θ'3.
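The inner loop above can be sketched in code. This is a minimal illustration, not the book's implementation: it assumes simple 1-D linear regression tasks of the form y = a·x, and the task slopes, sample count k, and learning rate alpha are all made-up values.

```python
import numpy as np

def task_loss(theta, x, y):
    # Mean squared error of the linear model f_theta(x) = theta * x
    return np.mean((theta * x - y) ** 2)

def inner_update(theta, x, y, alpha=0.1):
    # One gradient-descent step: theta'_i = theta - alpha * grad L_Ti(f_theta)
    grad = np.mean(2.0 * (theta * x - y) * x)
    return theta - alpha * grad

rng = np.random.default_rng(0)
theta = rng.normal()            # randomly initialized parameter theta
slopes = [1.0, 2.0, 3.0]        # illustrative tasks T1, T2, T3

adapted = []
for a in slopes:
    x = rng.uniform(-1.0, 1.0, size=10)   # sample k = 10 data points
    y = a * x                             # train set for task Ti
    adapted.append(inner_update(theta, x, y))

print(adapted)  # three adapted parameters theta'_1, theta'_2, theta'_3
```

Each pass through the loop performs one inner-loop adaptation step for one task, so after the batch we hold one adapted parameter per sampled task.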

  4. Outer loop: We perform meta optimization. Here, we try to minimize the loss on our meta training set, Di^test. We minimize the loss by calculating the gradient with respect to our optimal parameters θ'i and update our randomly initialized parameter θ; along with this, we'll add the entropy term. So our final meta objective becomes the following:

min over θ of: Σ Ti~p(T) L_Ti(f_θ'i) − λ [H_Ti(θ) − H_Ti(θ'i)]

  5. We repeat steps 2 to 4 for n number of iterations.
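The entropy-augmented meta objective can be sketched as follows. This is a rough illustration under assumed pieces, not the book's code: a small softmax classifier with logits X @ W stands in for the model, the inner update is a single cross-entropy gradient step, and alpha, lambda, and all shapes are illustrative. Only the value of the objective is computed here; a real outer loop would also differentiate it with respect to θ.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(W, X, y):
    # Task loss L_Ti(f_W)
    p = softmax(X @ W)
    return -np.mean(np.log(p[np.arange(len(y)), y] + 1e-12))

def ce_grad(W, X, y):
    # Gradient of the cross-entropy loss with respect to W
    p = softmax(X @ W)
    p[np.arange(len(y)), y] -= 1.0
    return X.T @ p / len(y)

def prediction_entropy(W, X):
    # Average Shannon entropy H_Ti of the model's predictions on X
    p = softmax(X @ W)
    return -np.mean(np.sum(p * np.log(p + 1e-12), axis=1))

rng = np.random.default_rng(1)
alpha, lam = 0.5, 0.1                    # illustrative step size and entropy weight
W = rng.normal(size=(4, 3)) * 0.1        # randomly initialized theta

meta_obj = 0.0
for _ in range(3):                       # three sampled tasks T1, T2, T3
    X = rng.normal(size=(20, 4))         # illustrative task data
    y = rng.integers(0, 3, size=20)
    W_i = W - alpha * ce_grad(W, X, y)   # inner update: theta'_i
    # L_Ti(f_theta'_i) - lambda * (H_Ti(theta) - H_Ti(theta'_i))
    meta_obj += cross_entropy(W_i, X, y) - lam * (
        prediction_entropy(W, X) - prediction_entropy(W_i, X))

print(float(meta_obj))
```

Minimizing this objective drives the task losses of the adapted models down while the −λ·H_Ti(θ) term pushes the initial model toward high-entropy (unbiased) predictions before adaptation, which is what makes the learned initialization task-agnostic.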