What this book covers

Chapter 1, Fun with Filters, explores a number of interesting image filters (such as a black-and-white pencil sketch, warming/cooling filters, and a cartoonizer effect), and we'll apply them to the video stream of a webcam in real time.

Chapter 2, Hand Gesture Recognition Using a Kinect Depth Sensor, helps you develop an app to detect and track simple hand gestures in real time using the output of a depth sensor, such as Microsoft Kinect 3D Sensor or Asus Xtion.

Chapter 3, Finding Objects via Feature Matching and Perspective Transforms, helps you develop an app to detect an arbitrary object of interest in the video stream of a webcam, even if the object is viewed from different angles or distances, or under partial occlusion.

Chapter 4, 3D Scene Reconstruction Using Structure from Motion, shows you how to reconstruct and visualize a scene in 3D by inferring its geometrical features from camera motion.

Chapter 5, Using Computational Photography with OpenCV, helps you develop command-line scripts that take images as input and produce panoramas or High Dynamic Range (HDR) images. The scripts either align the input images so that there is a pixel-to-pixel correspondence, or stitch them into a panorama, which is an interesting application of image alignment. In a panorama, the images capture a 3D scene rather than a flat plane, and aligning views of a 3D scene generally requires depth information. However, when the images are taken by rotating the camera about its optical center (as is the case for panoramas), they are related by a simple projective transform and can be aligned without any depth information.
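
To give a taste of how compact such a stitching script can be, here is a minimal, illustrative sketch (not the chapter's actual code) that uses OpenCV's high-level Stitcher API; the image file names are placeholders:

# Minimal panorama sketch using OpenCV's high-level Stitcher API.
# The image paths are placeholders, not files from the book.
import cv2

# Load a set of overlapping photos taken by rotating the camera in place.
images = [cv2.imread(path) for path in ("left.jpg", "center.jpg", "right.jpg")]

# The Stitcher estimates the pairwise homographies and blends the images.
stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", panorama)
else:
    print(f"Stitching failed with status code {status}")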

Chapter 6, Tracking Visually Salient Objects, helps you develop an app to track multiple visually salient objects in a video sequence (such as all the players on the field during a soccer match) at once.

Chapter 7, Learning to Recognize Traffic Signs, shows you how to train a support vector machine to recognize traffic signs from the German Traffic Sign Recognition Benchmark (GTSRB) dataset.
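
As a hint of what that looks like in code, the following is a minimal, illustrative sketch of training an SVM with OpenCV's ml module; the random arrays merely stand in for GTSRB feature vectors and labels, and this is not the chapter's pipeline:

# Minimal sketch: training an SVM classifier with OpenCV's ml module.
# Assumes features is an (N, D) float32 array of per-image descriptors
# (for example, HOG features) and labels is an (N, 1) int32 array of class IDs.
import numpy as np
import cv2

features = np.random.rand(100, 64).astype(np.float32)              # placeholder data
labels = np.random.randint(0, 43, (100, 1)).astype(np.int32)       # GTSRB has 43 classes

svm = cv2.ml.SVM_create()
svm.setType(cv2.ml.SVM_C_SVC)
svm.setKernel(cv2.ml.SVM_LINEAR)
svm.train(features, cv2.ml.ROW_SAMPLE, labels)

# Predict the class of a new feature vector.
_, predictions = svm.predict(features[:1])
print(predictions)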

Chapter 8, Learning to Recognize Facial Emotions, helps you develop an app that is able to both detect faces and recognize their emotional expressions in the video stream of a webcam in real time.

Chapter 9, Learning to Classify and Localize Objects, walks you through developing an app for real-time object classification with deep convolutional neural networks. You will modify a classifier network to train it on a custom dataset with custom classes, learn how to train a Keras model on that dataset, and see how to serialize and save the model to disk. You will then load the saved model and use it to classify new input images, ending up with a convolutional neural network that classifies your image data with high accuracy.
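
The train/save/load/predict cycle described above can be sketched as follows; this is only a minimal illustration with a toy architecture, random placeholder data, and a placeholder file name, not the model built in the chapter:

# Minimal sketch of the Keras train / save / load / predict cycle.
# The tiny architecture, random dataset, and file name are placeholders.
import numpy as np
from tensorflow import keras

num_classes = 10
x_train = np.random.rand(100, 32, 32, 3).astype("float32")    # placeholder images
y_train = keras.utils.to_categorical(
    np.random.randint(0, num_classes, 100), num_classes)

model = keras.Sequential([
    keras.layers.Conv2D(16, 3, activation="relu", input_shape=(32, 32, 3)),
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, verbose=0)

# Serialize the trained model to disk, then load it back and classify new input.
model.save("classifier.h5")
restored = keras.models.load_model("classifier.h5")
predicted_class = np.argmax(restored.predict(x_train[:1]), axis=1)
print(predicted_class)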

Chapter 10, Learning to Detect and Track Objects, guides you through developing an app for real-time object detection with deep neural networks and connecting it to a tracker. You will learn how object detectors work and how they are trained. You will implement a Kalman filter-based tracker, which uses the object's position and velocity to predict where it is likely to appear next. After completing this chapter, you will be able to build your own real-time object detection and tracking applications.
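
To illustrate the prediction idea, here is a minimal sketch of a constant-velocity Kalman filter built with OpenCV; the state layout, noise values, and detections below are illustrative assumptions rather than the chapter's tuned tracker:

# Minimal sketch: a constant-velocity Kalman filter with OpenCV.
# State is (x, y, vx, vy); measurements are noisy (x, y) detections.
import numpy as np
import cv2

kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], dtype=np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

for detection in [(10, 10), (12, 11), (14, 12)]:    # placeholder detections
    prediction = kf.predict()                       # where the object should be next
    kf.correct(np.array(detection, dtype=np.float32).reshape(2, 1))
    print("predicted position:", prediction[:2].ravel())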

Appendix A, Profiling and Accelerating Your Apps, covers how to find bottlenecks in an app and how to achieve CPU- and CUDA-based GPU acceleration of existing code with Numba.
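
As a flavor of the CPU-side approach, here is a minimal sketch of accelerating a Python loop with Numba's JIT compiler; the function is a toy example rather than code from the appendix (GPU acceleration is handled analogously with Numba's cuda module):

# Minimal sketch: accelerating a CPU-bound loop with Numba's JIT compiler.
import numpy as np
from numba import njit

@njit
def pairwise_sums(values):
    # Plain Python loops like this are slow in CPython but compile
    # to fast machine code under Numba.
    total = 0.0
    for i in range(values.shape[0]):
        for j in range(values.shape[0]):
            total += values[i] * values[j]
    return total

data = np.random.rand(1000)
print(pairwise_sums(data))   # first call compiles, later calls run at native speed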

Appendix B, Setting Up a Docker Container, walks you through replicating the environment that we have used to run the code in this book.
