ADML

Now that we have seen what adversarial samples are and how to generate them, we will see how to use these adversarial samples in meta learning. We train our meta learning model on both clean and adversarial samples. But why train the model on adversarial samples? It helps us to find a robust model parameter θ. Both the clean and adversarial samples are used in the inner and outer loops of the algorithm, and both contribute to updating the model parameter. ADML uses this interplay between clean and adversarial samples to obtain a better and more robust initialization of the model parameters, so that our parameter becomes robust to adversarial samples and generalizes well to new tasks.

So, when we have a task distribution p(T), we sample a batch of tasks Ti from the task distribution and, for each task, we sample k data points and prepare our train and test sets.

In ADML, we sample clean and adversarial samples for both the train and test sets, giving us $D_i^{train_{clean}}$, $D_i^{train_{adv}}$, $D_i^{test_{clean}}$, and $D_i^{test_{adv}}$.
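As a concrete sketch of how the adversarial halves of these sets could be produced, the snippet below perturbs clean samples with FGSM (covered in the previous section). The linear model and squared-error loss are stand-ins for the real network, so all names here are illustrative, not part of ADML itself:

```python
import numpy as np

def fgsm(x, y, theta, epsilon=0.1):
    """FGSM: x_adv = x + epsilon * sign(dL/dx).

    Toy setting: linear model f(x) = x @ theta with mean squared-error loss.
    """
    pred = x @ theta
    # dL/dx for L = mean((pred - y)^2): 2 * (pred - y) * theta / n
    grad_x = 2.0 * (pred - y)[:, None] * theta[None, :] / len(x)
    return x + epsilon * np.sign(grad_x)

rng = np.random.default_rng(0)
theta = rng.normal(size=3)
x_clean = rng.normal(size=(5, 3))           # clean train samples for one task
y = x_clean @ theta + 0.1 * rng.normal(size=5)
x_adv = fgsm(x_clean, y, theta)             # adversarial counterparts
```

Each adversarial sample stays within an epsilon-ball of its clean counterpart, which is what lets the two sets play complementary roles in the inner and outer loops.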

Now we calculate the loss on our train set, minimize the loss by gradient descent, and find the optimal parameter θ'. Since we have both clean and adversarial train sets, we perform gradient descent on each of them and find the optimal parameters for the clean and adversarial sets, $\theta'_{clean}$ and $\theta'_{adv}$, respectively:

$$\theta'_{clean} = \theta - \alpha \nabla_\theta \mathcal{L}_{T_i}\left(f_\theta; D_i^{train_{clean}}\right)$$

$$\theta'_{adv} = \theta - \alpha \nabla_\theta \mathcal{L}_{T_i}\left(f_\theta; D_i^{train_{adv}}\right)$$
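This inner-loop step can be sketched in a few lines. Again the linear model with squared-error loss is only a stand-in for the real network, and the step size alpha is an assumed hyperparameter:

```python
import numpy as np

def inner_update(theta, x, y, alpha=0.01):
    """One inner-loop gradient step: theta' = theta - alpha * grad(loss).

    Toy model: f(x) = x @ theta with mean squared-error loss.
    """
    grad = 2.0 * x.T @ (x @ theta - y) / len(x)
    return theta - alpha * grad

rng = np.random.default_rng(1)
theta = rng.normal(size=3)
x_clean = rng.normal(size=(4, 3))
x_adv = x_clean + 0.1 * np.sign(rng.normal(size=(4, 3)))
y = rng.normal(size=4)

theta_clean = inner_update(theta, x_clean, y)   # adapted on the clean train set
theta_adv = inner_update(theta, x_adv, y)       # adapted on the adversarial train set
```

Note that the same initialization θ produces two task-specific parameters, one per train set; both are carried into the meta-update below.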

Now, we move to the meta-training phase, where we find the optimal parameter θ by minimizing the loss on the test set, computing the gradient of that loss with respect to the optimal parameter θ' obtained in the previous step.

So, we update our model parameter θ by minimizing the loss on both the clean and adversarial test sets, computing the gradient of the loss with respect to the optimal parameters $\theta'_{adv}$ and $\theta'_{clean}$:

$$\theta = \theta - \beta \nabla_\theta \sum_{T_i \sim p(T)} \mathcal{L}_{T_i}\left(f_{\theta'_{adv}}; D_i^{test_{clean}}\right)$$

$$\theta = \theta - \beta \nabla_\theta \sum_{T_i \sim p(T)} \mathcal{L}_{T_i}\left(f_{\theta'_{clean}}; D_i^{test_{adv}}\right)$$
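Putting the two phases together, a single meta-update can be sketched as follows. This is a first-order approximation (it ignores the gradient through the inner-loop adaptation, as in first-order MAML, rather than the full second-order update), the toy linear model and the task dictionary layout are illustrative assumptions, and alpha/beta are assumed hyperparameters:

```python
import numpy as np

def loss_grad(theta, x, y):
    """Gradient of mean squared error for the toy linear model f(x) = x @ theta."""
    return 2.0 * x.T @ (x @ theta - y) / len(x)

def adml_step(theta, tasks, alpha=0.01, beta=0.01):
    """One ADML meta-update (first-order sketch).

    Inner loop: adapt theta separately on clean and adversarial train sets.
    Outer loop: evaluate the adversarially adapted parameters on the clean
    test set and the cleanly adapted parameters on the adversarial test set,
    then update theta with the averaged test-set gradients.
    """
    meta_grad = np.zeros_like(theta)
    for t in tasks:
        th_clean = theta - alpha * loss_grad(theta, t["xtr_clean"], t["ytr"])
        th_adv = theta - alpha * loss_grad(theta, t["xtr_adv"], t["ytr"])
        meta_grad += loss_grad(th_adv, t["xte_clean"], t["yte"])
        meta_grad += loss_grad(th_clean, t["xte_adv"], t["yte"])
    return theta - beta * meta_grad / len(tasks)

rng = np.random.default_rng(2)

def make_task():
    """Build one toy task with clean/adversarial train and test splits."""
    xtr = rng.normal(size=(4, 3))
    xte = rng.normal(size=(4, 3))
    return {"xtr_clean": xtr, "xtr_adv": xtr + 0.1 * np.sign(rng.normal(size=(4, 3))),
            "ytr": rng.normal(size=4),
            "xte_clean": xte, "xte_adv": xte + 0.1 * np.sign(rng.normal(size=(4, 3))),
            "yte": rng.normal(size=4)}

theta0 = rng.normal(size=3)
tasks = [make_task() for _ in range(3)]
theta1 = adml_step(theta0, tasks)   # one meta-update over the task batch
```

Repeating `adml_step` over fresh task batches drives θ toward an initialization that adapts well from both clean and adversarial samples.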
