Chapter 11. Sensing and Tracking Input from the Camera

In this chapter, we will learn how to receive and process data from input devices such as a camera or a Microsoft Kinect sensor.

The following recipes will be covered:

  • Capturing from the camera
  • Tracking an object based on color
  • Tracking motion using optical flow
  • Object tracking
  • Reading QR code
  • Building UI navigation and gesture recognition with Kinect
  • Building an augmented reality with Kinect

Capturing from the camera

In this recipe we will learn how to capture and display frames from a camera.

Getting ready

Include the necessary files to capture images from a camera and draw them to OpenGL textures:

#include "cinder/gl/gl.h"
#include "cinder/gl/Texture.h"
#include "cinder/Capture.h"

Also add the following using statements:

using namespace ci;
using namespace ci::app;
using namespace std;

How to do it…

We will now capture and draw frames from the camera; a complete sketch combining the following steps is shown after the list.

  1. Declare the following members in your application class:
        Capture mCamera;
        gl::Texture mTexture;
  2. In the setup method, we will initialize mCamera:
        try{
            mCamera = Capture( 640, 480 );
            mCamera.start();
        } catch( ... ){
            console() << "Could not initialize the capture" << endl;
        }
  3. In the update method, we will check whether mCamera was successfully initialized and, if a new frame is available, copy the camera's image into mTexture:
        if( mCamera ){
            if( mCamera.checkNewFrame() ){
                mTexture = gl::Texture( mCamera.getSurface() );
            }
        }
  4. In the draw method, we will simply clear the background, check if mTexture has been initialized, and draw its image on the screen:
        gl::clear( Color( 0, 0, 0 ) );
        if( mTexture ){
            gl::draw( mTexture, getWindowBounds() );
        }
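
Putting the steps together, a minimal application sketch could look like the following. This is only a sketch assuming the AppBasic-based application structure Cinder provides; the class name CameraApp is a placeholder:

#include "cinder/app/AppBasic.h"
#include "cinder/gl/gl.h"
#include "cinder/gl/Texture.h"
#include "cinder/Capture.h"

using namespace ci;
using namespace ci::app;
using namespace std;

// Minimal sketch combining the steps above; CameraApp is a hypothetical name
class CameraApp : public AppBasic {
  public:
    void setup();
    void update();
    void draw();

    Capture     mCamera;
    gl::Texture mTexture;
};

void CameraApp::setup(){
    try{
        // Request a 640x480 capture from the default device and start it
        mCamera = Capture( 640, 480 );
        mCamera.start();
    } catch( ... ){
        console() << "Could not initialize the capture" << endl;
    }
}

void CameraApp::update(){
    // Only upload a new texture when the camera delivered a fresh frame
    if( mCamera && mCamera.checkNewFrame() ){
        mTexture = gl::Texture( mCamera.getSurface() );
    }
}

void CameraApp::draw(){
    gl::clear( Color( 0, 0, 0 ) );
    if( mTexture ){
        // Stretch the camera image to fill the window
        gl::draw( mTexture, getWindowBounds() );
    }
}

CINDER_APP_BASIC( CameraApp, RendererGl )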

How it works…

ci::Capture is a class that wraps around QuickTime on Mac OS X, AVFoundation on iOS, and DirectShow on Windows. Under the hood, it uses these lower-level frameworks to access and capture frames from a webcam.

Whenever a new frame arrives, its pixels are copied into a ci::Surface. In the preceding code, we check for a new frame on every call to the update method by calling the ci::Capture::checkNewFrame method, and update our texture with the camera's surface.
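
Because the frame arrives as a ci::Surface, you can also inspect its pixels before uploading them to the texture. The following fragment is a small sketch of what the update method could look like if, purely as an example, we also printed the color of the center pixel:

if( mCamera && mCamera.checkNewFrame() ){
    Surface surface = mCamera.getSurface();
    // Read the color of the pixel at the center of the frame
    ColorA8u center = surface.getPixel( Vec2i( surface.getWidth() / 2,
        surface.getHeight() / 2 ) );
    console() << "Center pixel r/g/b: "
        << (int)center.r << " " << (int)center.g << " " << (int)center.b << endl;
    mTexture = gl::Texture( surface );
}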

There's more…

It is also possible to get a list of available capture devices and choose which one you wish to start with.

To ask for a list of devices and print their information, we could write the following code:

vector<Capture::DeviceRef> devices = Capture::getDevices();
for( vector<Capture::DeviceRef>::iterator it = devices.begin(); it != devices.end(); ++it ){
    Capture::DeviceRef device = *it;
    console() << "Found device:" << device->getName()
        << " with ID:" << device->getUniqueId() << endl;
}

To initialize mCamera using a specific device, you simply pass a ci::Capture::DeviceRef as the third parameter to the constructor.

For example, if you wanted to initialize mCamera with the first device, you would write the following code:

vector<Capture::DeviceRef> devices = Capture::getDevices();
mCamera = Capture( 640, 480, devices[0] );
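
You can also combine the two snippets to pick a device by name. The following is a rough sketch that assumes the desired camera is called "FaceTime HD Camera" (substitute the name printed by the device listing above); if no match is found, it falls back to the default device:

Capture::DeviceRef device;
vector<Capture::DeviceRef> devices = Capture::getDevices();
for( vector<Capture::DeviceRef>::iterator it = devices.begin(); it != devices.end(); ++it ){
    // "FaceTime HD Camera" is only an example name; use your own device's name
    if( (*it)->getName() == "FaceTime HD Camera" ){
        device = *it;
        break;
    }
}
try{
    if( device ){
        mCamera = Capture( 640, 480, device );
    } else {
        mCamera = Capture( 640, 480 ); // fall back to the default device
    }
    mCamera.start();
} catch( ... ){
    console() << "Could not initialize the capture" << endl;
}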