Chapter 9. Flying a Mission with Crazyflie

Robots are fun and sometimes frustrating to program. Quadrotors are particularly difficult to control because of the number of flight variables involved and the complexity of the software needed to manage them. Quadrotors are currently being tested as aerial surveillance platforms and as delivery vehicles for packages and fast food. In this chapter, we will explore programming a quadrotor to fly to a specific destination. This application may be handy for delivering coffee and paperwork around the office. We will begin with a barebones quadrotor and an inexpensive depth camera to sense the quadrotor's location.

This chapter will highlight the use of ROS communication to coordinate the locations of the quadrotor and the target. A Kinect sensor will be used to visualize the environment and the position of the quadrotor within it, and to coordinate its landing at a marked location. ROS tf transforms and pose messages will be generated to identify the reference frames and positions of the quadrotor and the target. The transforms enable the control commands to be published to bring the quadrotor to the target location. The navigation, flying, and landing mission brings together a spectrum of ROS components taught in this book, including nodes, topics, messages, services, launch files, tf transforms, and rqt.

We will set up this mission scenario between a Crazyflie 2.0, a Kinect for Windows v2 (called Kinect v2 in this chapter), and a target marker acting as the landing position on top of a TurtleBot. The following picture shows the arrangement of our setup. Feel free to follow these instructions to prepare an arrangement of quadrotor, target, and image sensor with the equipment you have available.


Mission setup

For our mission, the Crazyflie will be controlled to hover, fly, and land. In this chapter, we will address the following tasks to achieve this mission:

  • Detecting the Crazyflie on a Kinect v2 image
  • Establishing a tf framework to support the configuration of our camera and robot
  • Determining the Cartesian coordinates (x, y, z) of the Crazyflie with respect to the image
  • Publishing a tf transform of the coordinates
  • Controlling the Crazyflie to hover at its initial location
  • Locating a target on the video image, determining its coordinates, and publishing its pose
  • Controlling the Crazyflie to take off, fly to the target, and land

Mission components

The components we will use in this mission include a Crazyflie 2.0 quadrotor, a Crazyradio PA, a Kinect for Windows v2 sensor, and a workstation computer. Chapter 7, Making a Robot Fly, describes the Crazyflie and Crazyradio and their operation. Chapter 4, Navigating the World with TurtleBot, is a good introduction to a depth sensor such as the Kinect v2. It is recommended to review these chapters before beginning this mission.

Kinect for Windows v2

Kinect v2 is an infrared time-of-flight depth sensor that operates at a higher resolution than the Kinect for Xbox 360. A modulated infrared beam measures how long the light takes to travel to the object and back, providing a more accurate distance measurement. The sensor has improved performance in dark rooms and in sunny outdoor conditions. With a horizontal field of view (FOV) of 70 degrees and a vertical FOV of 60 degrees, the infrared sensor can accurately detect distances ranging from 0.5 to 4.5 meters (20 inches to 14.75 feet) within this FOV. The image resolution of the depth camera is 512 x 424 pixels at a rate of 30 frames per second. The Kinect v2 must be connected to a USB 3.0 port on the workstation computer in order to provide the image data. External electrical power for the Kinect is also required.

Kinect v2 produces a large amount of image data that can overwhelm the workstation computer if it is not equipped with a separate graphics processing unit (GPU). The libfreenect2 driver and the iai_kinect2 ROS packages were developed to interface with the Kinect v2 for image-processing applications. The iai_kinect2 package provides tools for calibrating the sensor and for interfacing with and viewing its color and depth images. Kinect images are processed with OpenCV tools for object detection. The section OpenCV and ROS provides background information and describes how these two tools are interfaced.

Kinect's color images will be evaluated to locate markers for the Crazyflie and the target positions. These markers enable the position and altitude of the quadrotor and the target to be determined with respect to the image frame. These positions are not expressed in real-world coordinates but are defined relative to the sensor's image frame. A ROS tf transform is published to advertise the location of the Crazyflie.
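
As a rough illustration of this last step, the following sketch broadcasts a detected position as a tf transform. The frame names, the 30 Hz rate, and the placeholder coordinates are assumptions for illustration; this is not the detect_crazyflie.py code itself.

#!/usr/bin/env python
# Minimal sketch: broadcast the Crazyflie's detected position as a tf transform.
# Frame names and the placeholder coordinates are assumptions, not the book's code.
import rospy
import tf

if __name__ == '__main__':
    rospy.init_node('crazyflie_tf_sketch')
    br = tf.TransformBroadcaster()
    rate = rospy.Rate(30)                        # match the Kinect's 30 fps image rate
    while not rospy.is_shutdown():
        # In the real detector, these values come from marker detection in the image.
        x, y, z = 0.25, 0.10, 1.5                # placeholder image-frame position
        br.sendTransform((x, y, z),
                         (0.0, 0.0, 0.0, 1.0),   # identity quaternion; orientation is not tracked
                         rospy.Time.now(),
                         'crazyflie/base_link',          # child frame (assumed name)
                         'kinect2_ir_optical_frame')     # parent frame (assumed name)
        rate.sleep()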

Crazyflie operation

Controlling a quadrotor is the subject of a vast amount of literature. To control the Crazyflie, our plan is to follow the same type of control prepared by Wolfgang Hoenig in his successful crazyflie metapackage (https://github.com/whoenig/crazyflie_ros). This package was developed as part of his research at the ACTLab at the University of Southern California (http://act.usc.edu/). Within his crazyflie_controller package, he created a controller that uses PID control for each of the Crazyflie's four dimensions of control: pitch, roll, thrust, and yaw. Our software design mimics this approach but deviates in key areas where the single image view of the Kinect required changes to the control parameters. We also ported the software to Python. A great deal of testing was required to attain control of the Crazyflie in a hover state. When the hover results were acceptable, testing advanced to the additional challenge of flying to the target. Further testing was required to improve flight control.

The controller software uses the difference between the Crazyflie's current position and the goal position (either hover or target) to send correction commands that fly it closer to the goal. This cycle repeats every 20 milliseconds: a new position for the Crazyflie is detected, and a new correction is computed and sent. This is a closed-loop system that computes the difference between the positions and commands the Crazyflie to fly in the direction of the goal position.
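
The sketch below illustrates this 20-millisecond correction cycle with a small PID controller per axis that turns the position error into pitch, roll, and thrust commands published on cmd_vel. The gains, limits, topic name, and hover thrust offset are illustrative assumptions, not the values used in control_crazyflie.py or pid.py.

#!/usr/bin/env python
# Sketch of the 20 ms closed-loop correction cycle described above: one small PID per
# axis turns the position error into pitch, roll, and thrust commands on cmd_vel.
# Gains, limits, topic name, and the hover thrust offset are illustrative assumptions.
import rospy
from geometry_msgs.msg import Twist

class PID(object):
    """Simple PID controller with output clamping, one instance per controlled axis."""
    def __init__(self, kp, ki, kd, out_min, out_max):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt if dt > 0.0 else 0.0
        self.prev_error = error
        output = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(self.out_min, min(self.out_max, output))

if __name__ == '__main__':
    rospy.init_node('control_loop_sketch')
    pub = rospy.Publisher('crazyflie/cmd_vel', Twist, queue_size=1)

    pid_pitch = PID(30.0, 1.0, 10.0, -20.0, 20.0)                # x error -> pitch (degrees)
    pid_roll = PID(30.0, 1.0, 10.0, -20.0, 20.0)                 # y error -> roll (degrees)
    pid_thrust = PID(8000.0, 3000.0, 4000.0, -15000.0, 15000.0)  # z error -> thrust offset

    goal = (0.0, 0.0, 1.0)          # hover or target position (placeholder values)
    dt = 0.02                       # one correction every 20 milliseconds
    rate = rospy.Rate(1.0 / dt)
    while not rospy.is_shutdown():
        current = (0.05, -0.02, 0.9)   # in practice, looked up from the published tf transform
        cmd = Twist()
        # In the crazyflie_ros driver, cmd_vel carries pitch, roll, yaw rate, and thrust
        cmd.linear.x = pid_pitch.update(goal[0] - current[0], dt)
        cmd.linear.y = pid_roll.update(goal[1] - current[1], dt)
        cmd.linear.z = 45000.0 + pid_thrust.update(goal[2] - current[2], dt)
        cmd.angular.z = 0.0            # yaw is held constant in this sketch
        pub.publish(cmd)
        rate.sleep()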

During testing, the Crazyflie lived up to its name and would arbitrarily fly out of control to various corners of the room. Implementing a new ROS node took care of this unwanted behavior. The crazyflie_window node was designed to be an observer of the Crazyflie's location in the image frame. When the Crazyflie's location came too close to the edge of the image, a service request was sent to the controller, which then published an appropriate command to the Crazyflie. This implementation eliminated the flyaway behavior and saved a number of broken motor mounts.
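
The idea behind this watchdog can be sketched as follows. The image size, edge margin, topic, and service name are assumptions chosen for illustration; watcher.py may use different names and a different recovery action.

#!/usr/bin/env python
# Sketch of the watchdog idea behind crazyflie_window: watch the detected pixel
# position and request action from the controller when the Crazyflie nears the
# image edge. The topic, service name, image size, and margin are assumptions.
import rospy
from geometry_msgs.msg import PointStamped
from std_srvs.srv import Empty

IMAGE_WIDTH, IMAGE_HEIGHT = 960, 540   # detection image size (assumed)
MARGIN = 40                            # pixels from the edge that trigger action

def position_callback(msg):
    x, y = msg.point.x, msg.point.y
    near_edge = (x < MARGIN or x > IMAGE_WIDTH - MARGIN or
                 y < MARGIN or y > IMAGE_HEIGHT - MARGIN)
    if near_edge:
        rospy.logwarn('Crazyflie near image edge; requesting emergency landing')
        try:
            land = rospy.ServiceProxy('crazyflie/land', Empty)  # assumed service name
            land()
        except rospy.ServiceException as exc:
            rospy.logerr('Service call failed: %s', exc)

if __name__ == '__main__':
    rospy.init_node('crazyflie_window_sketch')
    rospy.Subscriber('crazyflie/position', PointStamped, position_callback)  # assumed topic
    rospy.spin()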

Mission software structure

The code developed for this mission is contained in the crazyflie_autonomous package and divided into four different nodes:

  • crazyflie_detector in the detect_crazyflie.py file
  • target_detector in the detect_target.py file
  • crazyflie_controller in the control_crazyflie.py, pid.py, and crazyflie2.yaml files
  • crazyflie_window in the watcher.py file

This mission also relies on a portion of Wolfgang Hoenig's crazyflie metapackage that is described in Chapter 7, Making a Robot Fly. The nodes used are as follows:

  • crazyflie_server (crazyflie_server.cpp from the crazyflie_driver package)
  • crazyflie_add (crazyflie_add.cpp from the crazyflie_driver package); this node runs briefly during Crazyflie startup to set the initial parameters for the Crazyflie
  • joystick_controller (controller.py from the crazyflie_demo package)

A third set of nodes is generated by other packages:

  • baselink (static_transform_publisher from the tf package).
  • joy (the joy package).
  • kinect2_bridge (the iai_kinect2/kinect2_bridge package); this node bridges the Kinect v2 driver (libfreenect2) and ROS, and the ROS image topics are produced by the kinect2 node (see the example commands after this list)
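
These supporting nodes are normally started from the mission launch files, but for reference they can also be run by hand. The commands below are only a hedged illustration; the static transform values and frame names are placeholders that depend on your setup:

$ roslaunch kinect2_bridge kinect2_bridge.launch
$ rosrun joy joy_node
$ rosrun tf static_transform_publisher 0 0 0 0 0 0 parent_frame child_frame 100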

The relationship between these nodes is shown in the following node graph:


Nodes and topics for Crazyflie mission

Note

All of the code for Chapter 9, Flying a Mission with Crazyflie, is available online at the Packt Publishing website. The code is too extensive to include in this chapter. Only the important portions of the code are described in the following sections to aid in learning the techniques used for this mission.

OpenCV and ROS

In earlier chapters (Chapter 4, Navigating the World with TurtleBot, and Chapter 6, Wobbling Robot Arms using Joint Control), we briefly introduced the capabilities of OpenCV. Since this mission relies heavily on the OpenCV library and its interface with ROS, we will provide further background and detail about OpenCV here.

OpenCV is a library of powerful computer vision tools for a vast range of applications. It was originally developed at Intel by Gary Bradski in 1999 as a C library. The upgrade to OpenCV 2.0, with a C++ interface, was released in October 2009. Much of this work was done at Willow Garage, headed by Bradski and Vadim Pisarevsky. It is open source software with a BSD license and is free for both academic and commercial use. OpenCV is available on multiple operating systems, including Windows, Linux, Mac OS X, Android, FreeBSD, OpenBSD, iOS, and more. The primary interface for OpenCV is C++, but bindings exist for Python, Java, and MATLAB/Octave, and wrappers are available for C# and Ruby.

The OpenCV library contains more than 2,500 optimized vision algorithms for a wide range of vision-processing and machine-learning applications. The fundamental objective is to support real-time vision applications, such as tracking moving objects and detecting and recognizing faces for surveillance. Many other algorithms support object identification, tracking of human gestures and facial expressions, production of 3D models of objects, construction of 3D point clouds from stereo camera data, and modeling of scenes from multiple image sources, to name just a few. This extensive library of tools is used throughout industry, academia, and government.

To learn more about OpenCV, visit http://opencv.org/. This website provides access to excellent tutorials, documentation of structures and functions, and an API reference for C++ and, to a lesser extent, Python. The current version of OpenCV is 3.0, but ROS Indigo (and Jade) supports OpenCV2, whose current version is 2.4.12.

ROS provides the vision_opencv stack to integrate the power of the OpenCV library of tools. The wiki website for this interface stack is http://wiki.ros.org/vision_opencv.

The OpenCV software and the ROS vision_opencv stack were installed when you performed the ROS software install of the ros-indigo-desktop-full configuration in Chapter 1, Getting Started with ROS. To install only the OpenCV library with the ROS interface and Python wrapper, use the following command:

$ sudo apt-get install ros-indigo-vision-opencv

The vision_opencv stack currently provides two packages: cv_bridge and image_geometry. The cv_bridge package is the connection between ROS messages and OpenCV, providing the conversion of OpenCV images into ROS images and vice versa. The image_geometry package contains a powerful library of image-processing tools for both Python and C++. Images can be handled with respect to the camera parameters provided in CameraInfo messages, and the package is also used for camera calibration and image rectification.
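
As a brief illustration of cv_bridge, the following sketch subscribes to a Kinect v2 color image and converts it into an OpenCV image for display. The topic name assumes kinect2_bridge's quarter-HD color stream; adjust it for your configuration.

#!/usr/bin/env python
# Minimal cv_bridge sketch: convert a ROS color image from the Kinect v2 into an
# OpenCV image. The topic name assumes kinect2_bridge's quarter-HD color stream.
import rospy
import cv2
from cv_bridge import CvBridge, CvBridgeError
from sensor_msgs.msg import Image

bridge = CvBridge()

def image_callback(msg):
    try:
        # Convert the ROS Image message into an 8-bit BGR OpenCV array
        frame = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
    except CvBridgeError as exc:
        rospy.logerr('cv_bridge conversion failed: %s', exc)
        return
    cv2.imshow('kinect2 color', frame)
    cv2.waitKey(1)

if __name__ == '__main__':
    rospy.init_node('cv_bridge_sketch')
    rospy.Subscriber('/kinect2/qhd/image_color', Image, image_callback)
    rospy.spin()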

For this mission, we will use OpenCV algorithms to analyze the Kinect image and detect the Crazyflie and target within the scene. Using the location of the Crazyflie and target, the Crazyflie will be given commands to fly to the location of the target. This scenario hides the layers of complex computation and understanding required to perform this seemingly simple task.
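
One common OpenCV approach to this kind of detection is to threshold the image in HSV color space and take the centroid of the largest matching blob. The sketch below follows that idea; the HSV bounds are placeholders, and detect_crazyflie.py and detect_target.py may use a different method or different marker colors.

# Sketch of one common OpenCV detection approach: threshold in HSV color space and
# take the centroid of the largest matching blob. The HSV bounds are placeholders;
# the actual markers used for the Crazyflie and the target will need their own values.
import cv2
import numpy as np

def find_marker(frame_bgr, hsv_lower=(50, 100, 100), hsv_upper=(70, 255, 255)):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lower), np.array(hsv_upper))
    mask = cv2.erode(mask, None, iterations=2)      # remove small specks of noise
    mask = cv2.dilate(mask, None, iterations=2)
    # The [-2:] slice keeps this working on both OpenCV 2.4 and 3.x return formats
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2:]
    if not contours:
        return None                                 # marker not visible in this frame
    largest = max(contours, key=cv2.contourArea)
    moments = cv2.moments(largest)
    if moments['m00'] == 0:
        return None
    # Pixel coordinates of the marker's centroid in the image frame
    return int(moments['m10'] / moments['m00']), int(moments['m01'] / moments['m00'])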
