What this book covers

Chapter 1, Introduction to Machine Learning, provides a brief introduction to ML, including an explanation of the core concepts, the types of problems, algorithms, and the general workflow of creating and using ML models. The chapter concludes by exploring some examples where ML is being applied.

Chapter 2, Introduction to Apple Core ML, introduces Core ML, discussing what it is, what it is not, and the general workflow for using it.

Chapter 3, Recognizing Objects in the World, walks through building a Core ML application from start to finish. By the end of the chapter, you will have been through the whole process of obtaining a model, importing it into the project, and making use of it.

Chapter 4, Emotion Detection with CNNs, explores the possibilities of computers understanding us better, specifically our mood. We start by building our intuition of how ML can learn to infer mood, and then put this into practice by building an application that does just that. We also use this as an opportunity to introduce the Vision framework and see how it complements Core ML.

Chapter 5, Locating Objects in the World, goes beyond recognizing a single object to recognizing and locating multiple objects within a single image through object detection. After building our understanding of how it works, we move on to applying it to a visual search application that filters not only by object but also by the composition of objects. In this chapter, we'll also get an opportunity to extend Core ML by implementing custom layers.

Chapter 6, Creating Art with Style Transfer, uncovers the secrets behind the popular photo effects application, Prisma. We start by discussing how a model can be taught to differentiate between the style and content of an image, and then go on to build a version of Prisma that applies the style from one image to another. We wrap up the chapter by looking at ways to optimize the model.

Chapter 7, Assisted Drawing with CNNs, walks through building an application that can recognize a user's sketch using the same concepts introduced in previous chapters. Once what the user is trying to sketch has been recognized, we look at how we can find similar substitutes using the feature vectors from a CNN.

Chapter 8, Assisted Drawing with RNNs, builds on the previous chapter and explores replacing the convolutional neural network (CNN) with a recurrent neural network (RNN) for sketch classification, thus introducing RNNs and showing how they can be applied to images. Along with a discussion on learning sequences, we also delve into the details of how to download and compile Core ML models remotely.

Chapter 9, Object Segmentation Using CNNs, walks through building an ActionShot photography application. In doing so, we introduce another model and its accompanying concepts, and get some hands-on experience of preparing and processing data.

Chapter 10, An Introduction to Create ML, is the last chapter. We introduce Create ML, a framework for creating and training Core ML models within Xcode using Swift. By the end of this chapter, you will know how to quickly create, train, and deploy a custom model.
