MAML in supervised learning

MAML is pretty good at finding the optimal initial parameter, right? Now, we will see how we can use MAML in the supervised learning setting. Before going ahead, let's quickly define our loss function. The loss function can be any function, depending on the task we are performing.

If we are performing regression, then we can use the mean squared error as our loss function:

L_Ti(fθ) = Σ(x,y)∼Ti ||fθ(x) − y||²

If it is a classification task, then we can use a loss function such as cross-entropy loss:

L_Ti(fθ) = −Σ(x,y)∼Ti y log fθ(x)
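As a quick illustration, both loss functions can be written in a few lines of NumPy (the function names `mse_loss` and `cross_entropy_loss` are my own, not from the text):

```python
import numpy as np

def mse_loss(y_pred, y_true):
    # Mean squared error for regression tasks
    return np.mean((y_pred - y_true) ** 2)

def cross_entropy_loss(probs, y_true):
    # Cross-entropy for classification; probs are the predicted
    # class probabilities and y_true are one-hot labels
    eps = 1e-12  # avoid log(0)
    return -np.mean(np.sum(y_true * np.log(probs + eps), axis=1))
```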

Now let's see step-by-step, exactly how MAML is used in supervised learning:

  1. Let's say we have a model f parameterized by a parameter θ and we have a distribution over tasks p(T). First, we randomly initialize the model parameter θ.
  2. We sample a batch of tasks Ti from the distribution of tasks, that is, Ti ∼ p(T). Let's say we have sampled three tasks, then T = {T1, T2, T3}.
  3. Inner loop: For each task (Ti) in tasks (T), we sample k data points and prepare our train and test datasets:

Wait! What are the train and test sets? We use the train set in the inner loop to find the task-specific optimal parameters, and the test set in the outer loop to find the optimal parameter θ. The test set does not mean that we are checking the model's performance; it basically acts as a train set for the outer loop. We can also call our test set a meta-train set.

Now we apply any supervised learning algorithm on the train set, calculate the loss, and minimize it using gradient descent to get the optimal parameters θ'i:

θ'i = θ − α∇θ L_Ti(fθ)

So, for each of the tasks, we sample k data points, minimize the loss on the train set, and get the optimal parameters θ'i. As we sampled three tasks, we will have three optimal parameters: θ'1, θ'2, and θ'3.

  4. Outer loop: We perform meta optimization on the test set (meta-train set), that is, we try to minimize the loss on the test set. We minimize the loss by calculating the gradient of the test-set loss at the optimal parameters θ'i computed in the previous step, and we update our randomly initialized parameter θ using the test set (meta-train set):

θ = θ − β∇θ Σ_{Ti∼p(T)} L_Ti(fθ'i)

  5. We repeat steps 2 to 4 for n number of iterations. The following diagram gives you an overview of MAML in supervised learning:
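The steps above can be sketched in code. The following is a minimal first-order sketch with NumPy, assuming a linear regression model and a hand-built list of tasks (the names `maml`, `loss`, and `grad`, and the linear task family, are illustrative assumptions, not from the text). Note that full MAML would also differentiate through the inner gradient step; this sketch uses the first-order approximation for brevity:

```python
import numpy as np

def loss(theta, X, y):
    # Mean squared error of the linear model f(x) = X @ theta on one task
    return np.mean((X @ theta - y) ** 2)

def grad(theta, X, y):
    # Gradient of the MSE loss with respect to theta
    return 2 * X.T @ (X @ theta - y) / len(y)

def maml(tasks, theta, alpha=0.01, beta=0.001, iterations=100):
    for _ in range(iterations):                            # step 5: repeat
        meta_grad = np.zeros_like(theta)
        for X_train, y_train, X_test, y_test in tasks:     # steps 2-3
            # Inner loop: one gradient step on the task's train set
            theta_prime = theta - alpha * grad(theta, X_train, y_train)
            # Outer loop: accumulate the gradient of the test-set
            # (meta-train) loss at the adapted parameters theta_prime
            meta_grad += grad(theta_prime, X_test, y_test)
        # Step 4: update the randomly initialized parameter theta
        theta = theta - beta * meta_grad
    return theta
```

Each task here contributes a train split for the inner update and a test (meta-train) split for the outer update, mirroring the two loops described above.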
