Meta learning loss

We sample a batch of tasks from the task distribution, learn their concepts via the concept generator, perform meta learning on those concepts, and then compute the meta learning loss.
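In one common formulation, with $\theta_G$ denoting the concept generator parameters, $\Theta_T$ the meta learner parameters, and $p(\mathcal{T})$ the task distribution (these symbols are assumptions of this sketch, not fixed notation from the text), the loss over sampled tasks $t_i$ can be written as:

$$J(\theta_G, \Theta_T) = \mathbb{E}_{t_i \sim p(\mathcal{T})}\left[\mathcal{L}^{meta}(\theta_G, \Theta_T; t_i)\right]$$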

The exact form of the meta learning loss depends on which meta learner we use, such as MAML or Reptile.

Our final loss function is a combination of both of these losses: the concept discrimination loss and the meta learning loss.
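Writing $\theta_D$ for the concept discriminator parameters and $J(\theta_G, \theta_D)$ for the concept discrimination loss (again, assumed notation), the combined loss can be sketched as:

$$J(\theta_G, \theta_D, \Theta_T) = J(\theta_G, \Theta_T) + \lambda\, J(\theta_G, \theta_D)$$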

In the previous equation, $\lambda$ is a hyperparameter that balances the meta learning loss and the concept discrimination loss. So, our objective becomes finding the optimal parameters that minimize this loss.
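Under the same assumed notation, the objective reads:

$$(\theta_G^{*}, \theta_D^{*}, \Theta_T^{*}) = \mathop{\arg\min}_{\theta_G,\, \theta_D,\, \Theta_T}\; J(\theta_G, \Theta_T) + \lambda\, J(\theta_G, \theta_D)$$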

We minimize the loss by calculating gradients and updating our model parameters.
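With an assumed learning rate $\alpha$ and $\theta = \{\theta_G, \theta_D, \Theta_T\}$, a plain gradient descent update would be:

$$\theta \leftarrow \theta - \alpha\, \nabla_{\theta}\, J(\theta)$$

To make the step concrete, here is a minimal PyTorch sketch of one joint update; the module shapes and all names (concept_generator, concept_discriminator, meta_learner, joint_training_step) are illustrative assumptions, and plain cross-entropy stands in for the inner-loop loss that a meta learner such as MAML or Reptile would actually compute:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical components and shapes -- placeholders, not the actual architecture.
concept_generator = nn.Linear(784, 64)      # maps raw inputs to the concept space
concept_discriminator = nn.Linear(64, 10)   # supervised classifier over concepts
meta_learner = nn.Linear(64, 5)             # task-specific head trained on concepts

params = (list(concept_generator.parameters())
          + list(concept_discriminator.parameters())
          + list(meta_learner.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-3)
lam = 0.1  # lambda: balances the meta learning and concept discrimination losses

def joint_training_step(task_x, task_y, sup_x, sup_y):
    """One optimization step on the combined loss J_meta + lambda * J_disc."""
    # Meta learning loss on concepts (plain cross-entropy on a task batch,
    # standing in for the loss MAML/Reptile would compute in its inner loop).
    meta_loss = F.cross_entropy(meta_learner(concept_generator(task_x)), task_y)

    # Concept discrimination loss on an external labeled batch.
    disc_loss = F.cross_entropy(concept_discriminator(concept_generator(sup_x)), sup_y)

    loss = meta_loss + lam * disc_loss  # combined objective
    optimizer.zero_grad()
    loss.backward()                     # gradients w.r.t. all parameters
    optimizer.step()                    # theta <- theta - alpha * grad J(theta)
    return loss.item()

# Example call with random data matching the assumed shapes:
# joint_training_step(torch.randn(8, 784), torch.randint(0, 5, (8,)),
#                     torch.randn(8, 784), torch.randint(0, 10, (8,)))
```

In practice, the stand-in meta loss would be replaced by the chosen meta learner's own update (for example, Reptile's averaged inner-loop loss), while the combination with $\lambda$ and the backward pass stay the same.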
