Depth stream

The process, and the related code, for displaying the depth stream data are very similar to those we detailed for the color stream data.

In this section, we will list and document the essential steps for working with the depth stream data. The example attached to this chapter will provide all the additional details.

In order to process the depth stream data produced by the connected KinectSensor, we need to enable the KinectSensor.DepthStream using the KinectSensor.DepthStream.Enable(DepthImageFormat depthImageFormat) API.

KinectSensor.DepthFrameReady is the event that the sensor fires when a new frame of depth stream data is ready. The Kinect sensor streams data out continuously, one frame at a time, until the sensor is stopped or the depth stream is disabled. To stop the sensor, use the KinectSensor.Stop() method; to disable the depth stream, use the KinectSensor.DepthStream.Disable() method.
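Putting these calls together, the typical lifecycle of the depth stream might look like the following sketch. The sensor field, the StartDepthStream/StopDepthStream method names, and the chosen image format are illustrative assumptions; the attached example provides the actual wiring:

```csharp
// A minimal sketch of the depth stream lifecycle (Kinect SDK v1.x).
// The sensor_DepthFrameReady handler is assumed to be defined elsewhere.
private KinectSensor sensor;

private void StartDepthStream()
{
    // Pick the first connected sensor, if any
    this.sensor = KinectSensor.KinectSensors
        .FirstOrDefault(s => s.Status == KinectStatus.Connected);
    if (this.sensor == null) return;

    // Enable the depth stream at 640x480, 30 fps
    this.sensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);

    // Subscribe to the event fired when a new depth frame is ready
    this.sensor.DepthFrameReady += this.sensor_DepthFrameReady;

    this.sensor.Start();
}

private void StopDepthStream()
{
    if (this.sensor == null) return;
    this.sensor.DepthStream.Disable(); // stop the depth stream only
    this.sensor.Stop();                // stop the sensor entirely
}
```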

We can register to the KinectSensor.DepthFrameReady event to process the depth stream data available. The following code snippet defines the details of the sensor_DepthFrameReady event handler:

void sensor_DepthFrameReady(object sender, DepthImageFrameReadyEventArgs e)
{
    using (DepthImageFrame depthFrame = e.OpenDepthImageFrame())
    {
        if (depthFrame != null)
        {
            // Copy the pixel data from the image to the pixels array
            depthFrame.CopyDepthImagePixelDataTo(this.depthPixels);

            // Convert the depth pixels to colored pixels
            ConvertDepthData2RGB(depthFrame.MinDepth, depthFrame.MaxDepth);

            this.depthBitmap.WritePixels(
                new Int32Rect(0, 0, this.depthBitmap.PixelWidth, this.depthBitmap.PixelHeight),
                this.colorDepthPixels,
                this.depthBitmap.PixelWidth * sizeof(int),
                0);
        }
    }
}

The data we obtain from each single DepthImageFrame is copied to the DepthImagePixel[] depthPixels array, which holds the depth value of every single pixel in the depth frame. We convert the depth pixels array to the byte[] colorDepthPixels array using our custom void ConvertDepthData2RGB(int minDepth, int maxDepth) method. We finally display the colorDepthPixels using the depthBitmap.WritePixels method.

Note

The same performance considerations and the same polling technique we developed for the color stream data manipulation can be applied to the depth stream data too. As an alternative to subscribing to the DepthFrameReady event, we can obtain the current depth frame using the public DepthImageFrame OpenNextFrame(int millisecondsWait); method of the DepthImageStream class.
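A minimal sketch of the polling alternative follows; the 100 ms timeout is an arbitrary choice for illustration:

```csharp
// Instead of subscribing to DepthFrameReady, we ask the stream for the next
// frame, waiting at most 100 ms. OpenNextFrame returns null if no frame
// becomes available within the timeout.
using (DepthImageFrame depthFrame = this.sensor.DepthStream.OpenNextFrame(100))
{
    if (depthFrame != null)
    {
        depthFrame.CopyDepthImagePixelDataTo(this.depthPixels);
        // ... process this.depthPixels as in the event-driven version
    }
}
```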

DepthRange – the default and near mode

Using the DepthRange Range property of the DepthImageStream class, we can select between the two available depth range modes:

  • DepthRange.Near: The Kinect sensor captures, with a high level of reliability, depth points within a range of 0.4 to 3 m
  • DepthRange.Default: The Kinect sensor captures, with a high level of reliability, depth points within a range of 0.8 to 4 m

The DepthImageStream.MinDepth and DepthImageStream.MaxDepth properties provide the minimum and maximum reliable depth values according to the DepthRange we select.
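Switching the range can be sketched as follows. Note that near mode is supported only by the Kinect for Windows hardware; on a Kinect for Xbox 360 sensor, setting DepthRange.Near throws an InvalidOperationException, which we use here as a simple fallback mechanism:

```csharp
// Try near mode; fall back to the default range on unsupported hardware.
try
{
    this.sensor.DepthStream.Range = DepthRange.Near;
}
catch (InvalidOperationException)
{
    this.sensor.DepthStream.Range = DepthRange.Default;
}

// MinDepth and MaxDepth now reflect the selected range (in millimeters):
// Near    -> MinDepth = 400, MaxDepth = 3000
// Default -> MinDepth = 800, MaxDepth = 4000
int min = this.sensor.DepthStream.MinDepth;
int max = this.sensor.DepthStream.MaxDepth;
```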

Extended range

The Kinect sensor is able to capture depth points even outside of the depth ranges defined previously. In this case, the level of reliability of the depth data is decreased.

In the following code, we convert the depth pixels to colored pixels, highlighting:

  • Yellow: The points for which no depth information is provided
  • Red: All the points closer than the minimum depth dictated by the current DepthRange
  • Green: All the points farther than the maximum depth dictated by the current DepthRange
  • Blue: All the other points

Using the this.depthPixels.Max(p => p.Depth) statement, we can observe that the Kinect sensor is able to capture points well beyond MaxDepth. In an appropriate environment, this value can easily reach 10 meters.

void ConvertDepthData2RGB(int minDepth, int maxDepth)
{
    int colorPixelIndex = 0;
    for (int i = 0; i < this.depthPixels.Length; ++i)
    {
        // Get the depth for this pixel
        short depth = depthPixels[i].Depth;

        if (depth == 0) // yellow points
        {
            this.colorDepthPixels[colorPixelIndex++] = 0;
            this.colorDepthPixels[colorPixelIndex++] = 255;
            this.colorDepthPixels[colorPixelIndex++] = 255;
        }
        else
        {
            // Write out blue byte
            this.colorDepthPixels[colorPixelIndex++] = (byte)depth;

            // Write out green byte – full green for >= maxDepth
            this.colorDepthPixels[colorPixelIndex++] = (byte)(depth >= maxDepth ? 255 : depth >> 8);

            // Write out red byte – full red for <= minDepth
            this.colorDepthPixels[colorPixelIndex++] = (byte)(depth <= minDepth ? 255 : depth >> 10);
        }

        // If we were outputting BGRA, we would write the alpha byte here.
        ++colorPixelIndex;
    }

    // Establish the effective maximum depth for each single frame
    this.dataContext.MaxDepth = this.depthPixels.Max(p => p.Depth);
}

Note

The complete code for the Depth stream data manipulation is provided in the CODE_02/DepthStream Visual Studio solution.

Mapping from the color frame to the depth frame

In order to map depth coordinate spaces to color coordinate spaces and vice versa, we can utilize three distinct APIs. CoordinateMapper.MapDepthFrameToColorFrame and CoordinateMapper.MapColorFrameToDepthFrame enable us to map an entire image frame. The CoordinateMapper.MapDepthPointToColorPoint API is used for mapping one single point from the depth space to the color space. We suggest referring to MSDN for a detailed explanation of these APIs.
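Mapping one single point can be sketched as follows. The choice of the central pixel of a 640x480 depth frame is an arbitrary assumption for illustration:

```csharp
// Map the central pixel of a 640x480 depth frame to color space.
DepthImagePoint depthPoint = new DepthImagePoint
{
    X = 320,
    Y = 240,
    Depth = depthPixels[240 * 640 + 320].Depth
};

// Returns the coordinates of the corresponding pixel in the color frame
ColorImagePoint colorPoint = this.sensor.CoordinateMapper.MapDepthPointToColorPoint(
    this.sensor.DepthStream.Format, depthPoint,
    this.sensor.ColorStream.Format);
```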

In this section, we will list and document the essential steps for mapping the depth stream to the color stream. The CODE_02/CoordinateMapper example attached to this chapter will provide all the additional details.

KinectSensor.AllFramesReady is the event that the sensor fires when all the new frames for each of the sensor's active streams are ready.

We can register to this event to process the stream data available, attaching the related event handler using the this.sensor.AllFramesReady += this.sensor_AllFramesReady statement. We map the depth data to the color space using the sensor.CoordinateMapper.MapDepthFrameToColorFrame(DepthImageFormat depthImageFormat, DepthImagePixel[] depthPixels, ColorImageFormat colorImageFormat, ColorImagePoint[] colorPoints) API.

The following code snippet defines the details of the sensor_AllFramesReady event handler:

void sensor_AllFramesReady(object sender, AllFramesReadyEventArgs e)
{
    rgbReady = false;
    depthReady = false;

    using (ColorImageFrame colorFrame = e.OpenColorImageFrame())
    {
        if (colorFrame != null)
        {
            // Copy the per-pixel color data to a pre-allocated array
            colorFrame.CopyPixelDataTo(colorPixels);
            rgbReady = true;
        }
    }

    using (DepthImageFrame depthFrame = e.OpenDepthImageFrame())
    {
        if (depthFrame != null)
        {
            // Copy the per-pixel depth data to a pre-allocated array
            depthFrame.CopyDepthImagePixelDataTo(depthPixels);
            mappedDepthLocations = new ColorImagePoint[depthFrame.PixelDataLength];
            depthReady = true;
        }
    }

    if (rgbReady && depthReady)
    {
        // Copy the color image into bitMapBits
        for (int i = 0; i < colorPixels.Length; i += 4)
        {
            bitMapBits[i + 3] = 255;                // ALPHA
            bitMapBits[i + 2] = colorPixels[i + 2]; // RED
            bitMapBits[i + 1] = colorPixels[i + 1]; // GREEN
            bitMapBits[i] = colorPixels[i];         // BLUE
        }

        // Map the entire depth frame to the color space
        this.sensor.CoordinateMapper.MapDepthFrameToColorFrame(
            this.sensor.DepthStream.Format, depthPixels,
            this.sensor.ColorStream.Format, mappedDepthLocations);

        for (int i = 0; i < depthPixels.Length; i++)
        {
            int distance = depthPixels[i].Depth;

            // Overlay if the distance is > MinDepth and < 1200 mm
            if ((distance > sensor.DepthStream.MinDepth) && (distance < 1200))
            {
                ColorImagePoint point = mappedDepthLocations[i];
                int baseIndex = (point.Y * 640 + point.X) * 4;

                /* Points near the edge of the depth frame may correspond to
                   pixels beyond the edge of the color frame. We verify that
                   the point coordinates lie within the color image. */
                if ((point.X >= 0 && point.X < 640) && (point.Y >= 0 && point.Y < 480))
                {
                    // Red overlay + depth image + grid
                    bitMapBits[baseIndex + 2] =
                        (byte)((bitMapBits[baseIndex + 2] + 255) >> 1);
                }
            }
        }

        // Draw the WriteableBitmap
        bitMap.WritePixels(new Int32Rect(0, 0,
            bitMap.PixelWidth, bitMap.PixelHeight),
            bitMapBits, bitMap.PixelWidth * sizeof(int), 0);
        this.mappedImage.Source = bitMap;
    }
}

For a green overlay without a grid, we could use bitMapBits[baseIndex + 1] = (byte)((bitMapBits[baseIndex] + 255) >> 1);. For a simple blue overlay without depth data, we could use bitMapBits[baseIndex] = 255;.

In the following picture, the man is overlaid with a red color and a grid because he is located between 40 cm and 1.2 m from the sensor. We can notice that there is no overlay on a portion of the right hand and forearm: because the depth and color frames come from different sensors, the pixel data in the two frames may not always line up exactly.


Overlapping entities located between 40 cm and 1.2 m with a red grid

With the CoordinateMapper API, we could easily implement a background subtraction technique with full motion tracking. If necessary, we can also map depth data onto frames captured with an external full HD color camera for enhanced green screen movie studio capabilities.
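A background subtraction pass could be sketched by reusing the mapping loop shown earlier: instead of tinting the in-range pixels red, we copy only those pixels into a second output buffer and leave everything else transparent. The foregroundBits array is a hypothetical second pre-allocated BGRA buffer, not part of the attached example:

```csharp
// Inside the loop over depthPixels, after MapDepthFrameToColorFrame:
// keep only the color pixels whose mapped depth lies within the chosen
// range (here MinDepth to 1200 mm); all other pixels stay transparent.
if (distance > sensor.DepthStream.MinDepth && distance < 1200
    && point.X >= 0 && point.X < 640 && point.Y >= 0 && point.Y < 480)
{
    int baseIndex = (point.Y * 640 + point.X) * 4;
    foregroundBits[baseIndex] = bitMapBits[baseIndex];         // BLUE
    foregroundBits[baseIndex + 1] = bitMapBits[baseIndex + 1]; // GREEN
    foregroundBits[baseIndex + 2] = bitMapBits[baseIndex + 2]; // RED
    foregroundBits[baseIndex + 3] = 255;                       // ALPHA (opaque)
}
```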
