Summary

In this chapter, we've learned about the gradient agreement algorithm. We've seen how it uses a weighted gradient to find the better initial model parameter, θ, and how these weights are proportional to the inner product of the gradient of a task and the average of the gradients of all of the tasks in a sampled batch. We also explored how gradient agreement can be plugged into both MAML and the Reptile algorithm. Following this, we saw how to find the optimal parameter in a classification task using the gradient agreement algorithm.
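
To make the weighting step concrete, here is a minimal sketch, assuming each task's meta-gradient is already available as a flat NumPy vector; the function names and the normalization by the sum of absolute scores are illustrative, not the book's exact implementation:

```python
import numpy as np

def gradient_agreement_weights(task_gradients):
    """Compute one weight per task, proportional to the inner product of
    that task's gradient with the average gradient over the batch."""
    g = np.stack(task_gradients)   # shape: (num_tasks, num_params)
    g_avg = g.mean(axis=0)         # average gradient across the batch
    scores = g @ g_avg             # inner product per task
    # Normalize by the sum of absolute scores so the weights are bounded
    # (an assumed normalization; other schemes are possible)
    return scores / np.sum(np.abs(scores))

def weighted_meta_update(theta, task_gradients, beta=0.01):
    """Update the initial parameter theta with the weighted sum of
    task gradients instead of a plain average."""
    w = gradient_agreement_weights(task_gradients)
    weighted_grad = sum(w_i * g_i for w_i, g_i in zip(w, task_gradients))
    return theta - beta * weighted_grad
```

Tasks whose gradients point in roughly the same direction as the batch average receive larger weights, so they contribute more to the meta-update, while disagreeing tasks are down-weighted.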

In the next chapter, we'll learn about some of the recent advancements in meta learning, such as task-agnostic meta learning, learning to learn in the concept space, and meta imitation learning.
