12

Mobile Robot Localization and Mapping

Localization and mapping are two of the basic operations needed for the navigation of a robot, the others being path planning and motion planning, which were studied in Chapter 11. Other fundamental capabilities and functions of an integrated mobile robotic system, from the task/mission specification to the motion/task execution, are cognition of the task specification, perception of the environment, and control of the robot motion. This chapter is devoted to the localization and map building operations.

Specifically, the objectives of the chapter are (i) to provide the background concepts on stochastic processes, Kalman estimation, and Bayesian estimation employed in WMR localization, (ii) to discuss the sensor imperfections that have to be taken into account in their use, (iii) to introduce the relative localization (dead reckoning) concept and present the kinematic analysis of dead reckoning in WMRs, (iv) to study the absolute localization methods (trilateration, triangulation, map matching), (v) to present the application of Kalman filters for mobile robot localization, sensor calibration, and sensor fusion, and (vi) to treat the simultaneous localization and mapping problem via the extended Kalman filter, Bayesian estimation, and particle filter.

Keywords

Mobile robot localization; mobile robot mapping; Kalman filter; extended Kalman filter; Bayesian learning; state prediction; least-squares estimator; sensor imperfections; relative localization; dead reckoning; absolute localization; localization by map matching; resampling; Kalman filter-based localization; Kalman filter-based sensor fusion; simultaneous localization and mapping; Bayesian estimator SLAM; particle filter SLAM

12.1 Introduction

As described in Section 11.3.1, localization and mapping are two of the three basic operations needed for the navigation of a robot. Very broadly, other fundamental capabilities and functions of an integrated robotic system from the task/mission specification to the motion control/task execution (besides path planning) are the following:

• Cognition of the task specification

• Perception of the environment

• Control of the robot motion

A pictorial illustration of the interrelations and organization of the above functions in a working wheeled mobile robot (WMR) is shown in Figure 12.1.

image

Figure 12.1 General organization of the functions of an autonomous mobile robot.

The robot must have the ability to perceive the environment via its sensors in order to create the proper data for finding its location (localization) and determining how it should go to its destination in the produced map (path planning). The desired destination is found by the robot through processing of the desired task/mission command with the help of the cognition process. The path is then provided as input to the robot’s motion controller which drives the actuators such that the robot follows the commanded path. The path planning operation was discussed in Chapter 11. Here we will deal with the localization and map building operations. Specifically, the objectives of this chapter are as follows:

• To provide the background concepts on stochastic processes, Kalman estimation, and Bayesian estimation employed in WMR localization

• To discuss the sensor imperfections that have to be taken into account in their use

• To introduce the relative localization (dead reckoning) concept, and present the kinematic analysis of dead reckoning in WMRs

• To study the absolute localization methods (trilateration, triangulation, map matching)

• To present the application of Kalman filters for mobile robot localization, sensor calibration, and sensor fusion

• To treat the simultaneous localization and mapping (SLAM) problem via the extended Kalman filter (EKF), Bayesian estimation, and particle filter (PF)

12.2 Background Concepts

Stochastic modeling, Kalman filtering, and Bayesian estimation techniques are important tools for robot localization. Here, an introduction to the above concepts and techniques, which will be used in the chapter, is provided [13].

12.2.1 Stochastic Processes

A stochastic process is defined as a collection (or ensemble) of time functions image of random variables corresponding to a countably infinite set of experiments image, with an associated probability description, for example, image (see Figure 12.2). At time image, image is a random variable with probability image. Similarly, image is a random variable with probability image. Thus, image is a random variable over the ensemble at every time instant, and so we can define first- and higher-order statistics for the variables image.

image

Figure 12.2 Ensemble representation of a stochastic process.

First-order statistics is concerned only with a single random variable image and is expressed by a probability distribution image and its density image for the continuous-time case, or its distribution function image for the discrete-time case.

Second-order statistics is concerned with two random variables image and image at two distinct time instances image and image. In the continuous-time case, we have the following probability functions for image and image.

image

image

image

image

Using the above probability functions we get the first- and second-order averages (moments) as follows:

image

image

image

image

The sample statistics (time averages) of order image over the sample functions of the continuous-time process image are defined as:

image

and the time averages of the discrete-time process image as:

image

Stationarity: A stochastic process is said to be stationary if all its marginal and joint density functions do not depend on the choice of the time origin. If this is not so, then the process is called nonstationary.

Ergodicity: A stochastic process is said to be ergodic if its ensemble moments (averages) are equal to its corresponding sample moments, that is, if:

image

Ergodic stochastic processes are always stationary. The converse does not always hold.

Stationarity and Ergodicity in the Wide Sense: Motivated by the normal (Gaussian) density, which is completely described by its mean and variance, in practice we usually employ only first- and second-order moments. If a process is stationary or ergodic up to second-order moments, then it is called a stationary or ergodic process in the wide sense.

Markov Process: A Markov process is a process for which, for image, the following property (the Markov property) holds:

image (12.1)

This means that the p.d.f. of the state variable image at some time instant image depends only on the state image at the immediately previous time instant image (not on earlier times).

12.2.2 Stochastic Dynamic Models

We consider an n-dimensional linear discrete-time dynamic process described by:

image (12.2a)

image (12.2b)

where image, image, image are matrices of proper dimensionality depending on the discrete-time index image, and image, image are stochastic processes (the input disturbance and measurement noise, respectively) with properties:

image (12.3)

where image is the Kronecker delta, equal to 1 when its indices coincide and 0 otherwise.

The initial state image is a random (stochastic) variable, such that:

image (12.4)

The above properties imply that the processes image and image and the random variable image are statistically independent. If they are also Gaussian distributed, then the model is said to be a discrete-time Gauss–Markov model (or Gauss–Markov chain), since as can be easily seen, the process image is Markovian. The continuous-time counterpart of the Gauss–Markov model has an analogous differential equation representation.

12.2.3 Discrete-Time Kalman Filter and Predictor

We consider a discrete-time Gauss–Markov system of the type described by Eqs. (12.2)–(12.4), with available measurements image. Let image and image be the estimate of image with measurements up to image and image, respectively. Then, the discrete-time Kalman filter for the stochastic system (12.2a) with measurement process (12.2b) is described by the following recursive equations:

image (12.5)

image (12.6)

image (12.7)

image (12.8)

where image is the covariance matrix of the estimate image on the basis of data up to time i:

image (12.9a)

image (12.9b)

and the notation image, image, etc. is used. These equations can be represented in the block diagram (Figure 12.3).

image

Figure 12.3 Block diagram representation of the Kalman filter and predictor (image=unit matrix, image is the delay operator image).

An alternative expression for image is obtained by applying the matrix inversion lemma:

image (12.10)

The initial conditions for the state estimate Eq. (12.5) and the covariance equation (12.7) are:

image (12.11)

State Prediction: Using the filtered estimate image of image at the discrete-time instant image, we can compute a predicted estimate image, image, image of the state image on the basis of measurements up to the instant image. This is done by using the equations:

image (12.12a)

image (12.12b)

image (12.12c)

and so on. Therefore:

image (12.13)
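
A minimal Python sketch of the standard discrete-time Kalman filter cycle described in this subsection (time update followed by measurement update) is given below. The matrix names A, C, Q, R and the function name are illustrative and may not coincide exactly with the symbols of Eqs. (12.5)–(12.11); the sketch assumes the Gauss–Markov model of Eqs. (12.2a) and (12.2b) without a deterministic control input.

```python
import numpy as np

def kalman_step(x_est, P_est, z, A, C, Q, R):
    """One Kalman filter cycle: predict x(k+1|k), P(k+1|k), then update with z(k+1)."""
    # Time update (prediction)
    x_pred = A @ x_est                                 # x(k+1|k) = A x(k|k)
    P_pred = A @ P_est @ A.T + Q                       # P(k+1|k) = A P(k|k) A' + Q
    # Measurement update
    S = C @ P_pred @ C.T + R                           # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)                # Kalman gain
    x_new = x_pred + K @ (z - C @ x_pred)              # x(k+1|k+1)
    P_new = (np.eye(len(x_est)) - K @ C) @ P_pred      # P(k+1|k+1)
    return x_new, P_new
```

The m-step prediction of Eq. (12.13) is then obtained by propagating the filtered estimate through the state matrix m times without measurement updates.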

12.2.4 Bayesian Learning

Formally, the probability of event A occurring, given that the event B occurs, is given by:

image

and similarly:

image

For notational simplicity, the above formulas are written as:

image (12.14)

where image is the probability of image and image occurring jointly.

Combining the two formulas (12.14), we get the so-called Bayes formula:

image (12.15a)

Now, if image where image is a hypothesis (about the truth or cause), and image where image represents observed symptoms or measured data or evidence, Eq. (12.15a) gives the so-called Bayes updating (or learning) rule:

image (12.15b)

where image. Here, image is the “prior” probability of the hypothesis image (before any evidence is obtained), image is the probability of the evidence, and image is the probability that the evidence is true when image holds. The term image is the “posterior” probability (i.e., the “updated” probability) of image after the evidence is obtained. Consider now the case in which two pieces of evidence image and image about image are observed one after the other. Then, Eq. (12.15b) gives:

image

Clearly, the above equation shows that image, that is, the order in which the evidence (data) is observed does not influence the resulting posterior probability of image given image and image.
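
The order-independence property can be checked numerically with a small sketch of the Bayes updating rule (12.15b); the prior and likelihood values below are hypothetical, and the two pieces of evidence are assumed conditionally independent given the hypothesis.

```python
def bayes_update(prior, lik_h, lik_not_h):
    """Posterior P(H|E) from the prior P(H) and the likelihoods P(E|H), P(E|not H)."""
    evidence = lik_h * prior + lik_not_h * (1.0 - prior)   # P(E)
    return lik_h * prior / evidence                        # Bayes rule

p_h = 0.30                                                  # prior P(H) (hypothetical)
p_12 = bayes_update(bayes_update(p_h, 0.8, 0.1), 0.6, 0.3)  # evidence E1, then E2
p_21 = bayes_update(bayes_update(p_h, 0.6, 0.3), 0.8, 0.1)  # evidence E2, then E1
print(p_12, p_21)     # identical posteriors: the order of the evidence does not matter
```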

Example 12.1

It is desired to derive the Kalman filter using the least-squares estimation approach.

Solution

The Kalman filter described by Eqs. (12.5)–(12.11) can be derived using several approaches, viz., (i) the orthogonality principle, (ii) the Gaussian Bayesian technique, and (iii) the least-squares approach (in various formulations). Here, the least-squares estimator (3.103), which was derived for the measurement system (3.102), will be used. This estimator uses all available data image at once, and provides an off-line estimate of image. The derivation of the Kalman filter (12.5)–(12.11) will be done by converting Eq. (3.103) to a recursive form that updates (improves) the estimate image (found on the basis of the image measurements image) using the next, image, measurement image. To this end, we define:

image (12.16a)

The matrices image and image are related as follows:

image

Therefore:

image (12.16b)

Now, using Eqs. (3.103) and (12.16a) we get:

image

image

To express image in terms of image we must eliminate image from the above equations. The first of these equations gives image. Thus, the second equation can be written as:

image (12.17a)

Equation (12.17a) provides the desired recursive least-squares estimator. The new estimate image is a function of the old estimate image, the new data pair image, and the matrix image. Actually, the new estimate image is equal to the old estimate image plus a correction term equal to an adaptation (or learning) gain vector image multiplied by the prediction (learning) error or innovation process image. For this reason, Eq. (12.17a) is also called the “least-squares learning equation or rule.” We now have to determine how image can be obtained from image.

From Eq. (12.16b) we obtain:1

image (12.17b)

for image (since image is a scalar). The full recursive least-squares estimation algorithm given by Eqs. (12.17a) and (12.17b) is initiated using selected initial values image and image. One may also start from image, by using the first image measurements to get image and image directly:

image

Now, using in Eqs. (12.17a) and (12.17b) the following variable substitutions:

image

we get the Kalman filter Eqs. (12.5)–(12.11).

The initial conditions for the state estimate Eq. (12.5) and the covariance equation (12.7) are:

image

To see how the Kalman filter works, we consider a scalar time-invariant discrete-time Gauss–Markov system:

image

with image, image, image, and image. The Kalman filter Eqs. (12.5)–(12.11) give:

image

image

image

image

Since image, the second equation tells us that image, that is, the one-step prediction error variance is at least equal to the variance of the input disturbance image. From the third equation, we see that image. The fourth equation implies that image. Thus, image. This means that if image, the use of the first measurement image gives image, that is, it provides a drastic reduction in the estimation error. Using the given values image, image, image, and image, the variance equations give the results shown in Table 12.1.

Table 12.1

Evolution of the Optimal Filter

Image

The steady-state value of image is found by setting image, in which case we obtain image. Since image, the acceptable solution is image. Thus, image, and

image
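
The behavior summarized in Table 12.1 can be reproduced by iterating the scalar covariance equations of this example; since the numerical values of the parameters and the initial variance are not reproduced here, the values in the following sketch are hypothetical placeholders.

```python
a, c = 1.0, 1.0      # scalar state and measurement coefficients (assumed)
Q, R = 1.0, 2.0      # input-disturbance and measurement-noise variances (assumed)
P = 10.0             # initial filtered variance P(0|0) (assumed)
for k in range(10):
    P_pred = a * a * P + Q                                   # P(k+1|k), always >= Q
    P = P_pred - (c * P_pred) ** 2 / (c * c * P_pred + R)    # P(k+1|k+1)
    print(k + 1, round(P_pred, 4), round(P, 4))
# As k grows, P_pred converges to the positive root of the steady-state
# (algebraic Riccati) equation discussed above.
```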


1Using the following matrix inversion lemma: image with image, image, and image.

12.3 Sensor Imperfections

The primary prerequisite for accurate localization is the availability of reliable high-resolution sensors. Unfortunately, the practically available real-life sensors have several imperfections that are due to different causes. As we saw in Chapter 4, the usual sensors employed in robot navigation are range finders based on sonar, laser, and infrared technology, radar, tactile sensors, compasses, and GPS. Sonars have very low spatial bandwidth capabilities and are subject to noise due to wave scattering, and so the use of laser range sensing is preferred. But although laser sensors possess a much wider bandwidth, they are still affected by noise. Furthermore, lasers have a restricted field of view, unless special measures are taken (e.g., incorporation of rotating mirrors in the design).

Since both sonar-based and laser-based navigation have the above drawbacks, robotic scientists have turned their attention to vision sensors and camera-based systems. Vision sensors offer a wide field of view and millisecond sampling rates, and are easily applicable for control purposes. Some of the drawbacks of vision are the lack of depth information, image occlusion, low resolution, and the need for recognition and interpretation. However, weighing these relative advantages and disadvantages, monocular cameras remain a competitive sensor choice for navigation.

In general, sensor imperfections can be grouped into two categories: sensor noise and sensor aliasing [48]. Sensor noise is primarily caused by environmental variations that cannot be captured by the robot. Examples in vision systems are varying illumination conditions, blooming, and blurring. In sonar systems, if the surface receiving the emitted sound is relatively smooth and angled, much of the signal will be reflected away, failing to produce a return echo. Another source of noise in sonar systems is the use of multiple sonar emitters (16–40 emitters), which are subject to echo interference effects. The second imperfection of robotic sensors is aliasing, that is, the fact that sensor readings are not unique. In other words, the mapping from environmental states to the robot’s perceptual inputs is many-to-one (not one-to-one). Sensor aliasing implies that (even if no noise exists) the available amount of information is in most cases not sufficient to identify the robot’s position from a single sensor reading. The above issues suggest that in practice special sensor signal processing/fusion techniques should be employed to minimize the effects of noise and aliasing, and thus obtain an accurate estimate of the robot position over time. These techniques include probabilistic and information theory methods (Bayesian estimators, Kalman filters, EKFs, PFs, Monte Carlo estimators, information filters, fuzzy/neural approximators, etc.).

12.4 Relative Localization

Relative localization is performed by dead reckoning, that is, by the measurement of the movement of a wheeled mobile robot (WMR) between two locations. This is done repeatedly as the robot moves and the movement measurements are added together to form an estimate of the distance traveled from the starting position. Since the individual estimates of the local positions are not exact, the errors are accumulated and the absolute error in the total movement estimate increases with traveled distance. The term dead reckoning comes from the sailing days term “deduced reckoning” [4].

For a WMR, the dead reckoning method is called “odometry,” and is based on data obtained from incremental wheel encoders [9].

The basic assumption of odometry is that wheel revolutions can be transformed into linear displacements relative to the floor. This assumption is rarely ideally valid because of wheel slippage and other causes. The errors in odometric measurement are distinguished in:

• Systematic errors (e.g., due to unequal wheel diameters, misalignment of wheels, an actual wheelbase different from the nominal wheelbase, finite encoder resolution, limited encoder sampling rate).

• Nonsystematic errors (e.g., uneven floors, slippery floors, overacceleration, nonpoint contact with the floor, skidding/fast turning, internal and external forces).

Systematic errors are cumulative and are dominant in indoor environments. Nonsystematic errors are dominant in outdoor environments.

Two simple models for representing the systematic odometric errors are:

Error due to unequal wheel diameter:

image

Error due to uncertainty about the effective wheelbase:

image

where image, image are the actual diameters of the right and left wheels, respectively, and image is the wheelbase. It has been verified that either image or image can cause the same final error in both position and orientation measurement. To estimate the nonsystematic errors, we consider the worst possible case, that is, we estimate the largest possible disturbance. A good measure of nonsystematic errors is provided by the average absolute orientation errors in the clockwise and counterclockwise directions, namely image and image, respectively. Some ways of reducing systematic odometry errors are:

• Addition of auxiliary wheels (a pair of “knife-edge,” nonload-bearing encoder wheels; Figure 12.4).

image

Figure 12.4 Pictorial illustration of the use of encoder wheels for a differential drive WMR.

• Use of a trailer with encoder wheels.

• Careful systematic calibration of the WMR.

Methods for reducing nonsystematic odometry errors include:

• Mutual referencing (use of two robots that measure their positions mutually).

• Internal position error correction (two WMRs mutually correct their odometry errors).

• Use of mechanical or solid-state gyroscopes.

Clearly, more accurate odometry reduces the requirements on absolute position updates, but does not completely eliminate the need for periodic updating of the absolute position.

12.5 Kinematic Analysis of Dead Reckoning

The robot state (position and orientation) at instant image is image. Dead-reckoning (odometry)-based localization of a WMR consists in estimating image at instant image, given the linear and angular position increments image and image, respectively, between the time instants image and image, where image, image, and image. Assuming that image is very small, we have the following approximations:

image

To determine the position and angle increments from the wheel movements (i.e., the encoder values), we must use the kinematic equations of the WMR under consideration. We therefore investigate the localization problem for differential drive, tricycle drive, synchrodrive, Ackerman steering, and omnidirectional drive.

12.5.1 Differential Drive WMR

In this case, two optical encoders are sufficient. They supply the increments image and image of the left and right wheels. If image is the distance between the two wheels, and image the instantaneous curvature radius, we have:

image

and therefore:

image

Now

image (12.18a)

and

image (12.18b)

These formulas give image and image in terms of the encoders’ values image and image.
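
For concreteness, a minimal sketch of the resulting incremental odometry update is given below; it implements the standard differential-drive relations implied by Eqs. (12.18a) and (12.18b) (mean of the two wheel displacements for the linear increment, their difference over the wheel separation for the angular increment), and the variable names are illustrative.

```python
import math

def odometry_update(x, y, phi, d_left, d_right, wheel_base):
    """Update the pose (x, y, phi) from the incremental encoder displacements
    of the left and right wheels and the distance between the two wheels."""
    delta_d = 0.5 * (d_right + d_left)            # linear displacement increment
    delta_phi = (d_right - d_left) / wheel_base   # orientation increment
    # small-increment approximation: advance along the mean heading of the step
    x += delta_d * math.cos(phi + 0.5 * delta_phi)
    y += delta_d * math.sin(phi + 0.5 * delta_phi)
    phi += delta_phi
    return x, y, phi
```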

12.5.2 Ackerman Steering

We refer to the robot’s diagram of Figure 1.28, denoting by image the length of the vehicle (from the rotation axis of the back wheels to the front wheel axis). We have:

image

where image is the actual steering angle of the vehicle (i.e., the angle of the virtual wheel located in the middle of the two front wheels), and image, image are the steering angles of the outer and inner wheel, respectively.

Working as in the differential drive case, we find:

image (12.19a)

image (12.19b)

Here, we also find:

image (12.20)

Two encoders are sufficient to provide image and image using the two relations (12.19a) and (12.19b). To use Eq. (12.20) we need a third encoder, but its reading can be used to decrease the errors of the other two encoders.

12.5.3 Tricycle Drive

Here, the solution is the same as that for the Ackerman steering:

image (12.21)

image (12.22)

12.5.4 Omnidirectional Drive

For the three-wheel case, we have the following relations:

image (12.23a)

image (12.23b)

image (12.23c)

12.6 Absolute Localization

12.6.1 General Issues

Landmarks and beacons form the basis for absolute localization of a WMR [10]. Landmarks may or may not have identities. If the landmarks have identities, no a priori knowledge about the navigating robot’s position is needed. Otherwise, if the landmarks have no identities, some rough knowledge of the position of the robot must be available. If natural landmarks are employed, the presence or absence of identities depends on the environment. For example, in an office building, one of the rooms may have a large three-glass window, which can be used as a landmark with identity.

The landmarks (beacons) are distinguished into:

• active artificial landmarks,

• passive artificial landmarks,

• natural landmarks.

An active landmark emits some kind of signal to enable easy detection. In general, the detection of a passive landmark places greater demands on the sensor. To simplify the detection of passive landmarks, we may use an active sensor together with a passive landmark that responds to it; for example, bar codes placed on walls can be detected with visible light for building topological maps. It is preferable to employ natural landmarks whenever the robustness of the navigation system can be sustained. They provide increased flexibility in adapting to new environments with a minimum of engineering design. For example, natural landmarks in indoor environments can be chairs, desks, and tables.

According to Borenstein [4], localization using active landmarks is divided into:

• trilateration,

• triangulation.

12.6.2 Localization by Trilateration

In trilateration, the location of a WMR is determined using distance measurements to known active beacons image. Typically, three or more transmitters are used and one receiver is mounted on the robot. The locations of the transmitters (beacons) must be known. Alternatively, the robot may have one transmitter onboard and the receivers can be placed at known positions in the environment (e.g., on the walls of a room). Two examples of localization by trilateration are the beacon systems that use ultrasonic sensors (Figure 12.5) and the GPS system (Figure 4.24b).

image

Figure 12.5 Graphical illustration of localization by trilateration. The robot image lies at the intersection of the three circles image centered at the beacons image with radii image.

Ultrasonic-based trilateration localization systems are appropriate for use in relatively small-area environments (because of the short range of ultrasound), where no obstructions that interfere with wave transmission exist. If the environment is large, the installation of multiple networked beacons throughout the operating area is of increased complexity, a fact that reduces the applicability of sonar-based trilateration. The two alternative designs are:

• Use of a single transducer transmitting from the robot with many receivers at fixed positions.

• Use of a single listening receiver on the robot, with multiple fixed transmitters serving as beacons (this is analogous to the GPS concept).

The position image of the robot in the world coordinate system can be found by least-squares estimation as follows. In Figure 12.5, let image be the known world coordinates of the beacons image and image the corresponding robot–beacon distances (radii of the intersecting circles). Then, we have:

image

Expanding and rearranging gives:

image

To eliminate the squares of the unknown variables image and image, we pairwise subtract the above equations, for example, we subtract the kth equation from the ith equation to get:

image

where:

image

Overall, we obtain the following overdetermined system of linear equations with two unknowns:

image

where A is the matrix:

image

The solution of this linear algebraic system is given by (see Eqs. (2.7) and (2.8a)):

image

under the assumption that the matrix image is invertible.
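
A minimal sketch of this linearized trilateration computation is given below: the pairwise-subtracted circle equations form a linear system in the unknown robot position, which is solved in the least-squares sense. The beacon coordinates and range values are hypothetical.

```python
import numpy as np

beacons = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 8.0]])  # known beacon positions (assumed)
ranges = np.array([5.0, 7.0, 6.0])                         # measured robot-beacon distances (assumed)

rows, rhs = [], []
for i in range(len(beacons)):
    for k in range(i + 1, len(beacons)):
        xi, yi = beacons[i]
        xk, yk = beacons[k]
        rows.append([2.0 * (xi - xk), 2.0 * (yi - yk)])
        rhs.append((ranges[k]**2 - ranges[i]**2) + (xi**2 - xk**2) + (yi**2 - yk**2))

A = np.array(rows)
b = np.array(rhs)
p, *_ = np.linalg.lstsq(A, b, rcond=None)   # least-squares solution (A^T A)^(-1) A^T b
print("estimated robot position:", p)
```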

An alternative way of computing image is to apply the general iterative least-squares technique for nonlinear estimation. To this end, we expand the nonlinear function:

image

in Taylor series about an a priori position estimate image, and keep only first-order terms, namely:

image

where:

image

and image is the Jacobian matrix of image evaluated at image:

image

Therefore, we get the following iterative equation for updating the estimate image:

image

where image is known.

The iteration is repeated until image becomes smaller than a predetermined small number image or the number of iterations reaches a maximum number image.

In practice, due to measurement errors in the positions of the beacons and the radii of the circles, the circles do not intersect at a single point but overlap in a small region. The position of the robot is somewhere in this region. Each pair of circles gives two intersection points, and so with three circles (beacons) we have six intersection points. Three of these points are clustered together more closely than the other three. A simple procedure for determining the smallest intersection (clustering) region is as follows (a code sketch of this procedure is given after the list):

• Compute the distance between each pair of circle intersection points.

• Select the two closest intersection points as the initial cluster.

• Compute the centroid of this cluster (the x,y coordinates of the centroid are the averages of the x,y coordinates of the points in the cluster).

• Find the circle intersection point which is closest to the cluster centroid.

• Add this intersection point to the cluster and compute again the cluster centroid.

• Continue in the same manner until N intersection points have been added to the cluster, where N is the number of beacons (circles).

• The location of the robot is given by the centroid of the final cluster.
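
A possible implementation of the listed procedure is sketched below; the list of circle intersection points is assumed to have been computed beforehand, and the function names are illustrative.

```python
import numpy as np

def cluster_position(intersection_points, n_beacons):
    """Grow a cluster from the two closest intersection points and return the
    centroid of the final cluster of n_beacons points (the robot location)."""
    pts = [np.asarray(p, dtype=float) for p in intersection_points]
    # 1. find the two closest intersection points (initial cluster)
    best = None
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            d = np.linalg.norm(pts[i] - pts[j])
            if best is None or d < best[0]:
                best = (d, i, j)
    cluster = [pts[best[1]], pts[best[2]]]
    remaining = [p for k, p in enumerate(pts) if k not in (best[1], best[2])]
    # 2. repeatedly add the intersection point closest to the current centroid
    while len(cluster) < n_beacons and remaining:
        centroid = np.mean(cluster, axis=0)
        idx = min(range(len(remaining)),
                  key=lambda k: np.linalg.norm(remaining[k] - centroid))
        cluster.append(remaining.pop(idx))
    # 3. the robot location is the centroid of the final cluster
    return np.mean(cluster, axis=0)
```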

As an exercise, the reader may consider the case of two beacons (circles) and compute geometrically the positions of their intersection points. Also, the conditions under which the solution is nonsingular must be determined.

12.6.3 Localization by Triangulation

In this method, there are three or more active beacons (transmitters) mounted at known positions in the workspace (Figure 12.6). A rotating sensor mounted on the robot image registers the angles image, image, and image at which the robot “sees” the active beacons relative to the WMR’s longitudinal axis image. Using these three readings, we can compute the position image of the robot, where x stands for X0 and y stands for Y0.

image

Figure 12.6 Triangulation method. A rotating sensor head measures the angles image between the three active beacons image, image, and image and the robot’s longitudinal axis.

The active beacons must be sufficiently powerful in order to be able to transmit the appropriate signal power in all directions. The two design alternatives of triangulation are:

1. Rotating transmitter and stationary receivers

2. Rotating transmitter–receiver and stationary reflectors

It must be noted that when the observed angles are small, or the observation point is very near a circle containing the beacons, the triangulation results are very sensitive to small angular errors. The three-point triangulation method works efficiently only when the robot lies inside the triangle formed by the three beacons. The geometric circle intersection shows large errors when the three beacons and the robot all lie on, or very close to, the same circle. If the Newton–Raphson method is used to determine the robot’s location, no solution can be obtained when the initial guess of the robot’s position/orientation lies outside a certain bound. A variation of the triangulation method creates a virtual beacon as follows: a range or angle obtained from a beacon at time image can be used at time image, provided that the cumulative movement vector, recorded since the reading was obtained, is added to the position vector of the beacon. This generates a new virtual beacon.

Referring to Figure 12.6, the basic trigonometric equation, which relates x,y, and image with the measured angles image of the beacons, is:

image

where image are the coordinates of the beacon image. Three beacons are sufficient for computing image under the condition that the measurements are noise free. Then, we have a nonlinear algebraic system of three equations with three unknowns, which can be solved by approximate numerical techniques such as the Newton–Raphson technique and its variations. But if the measurements are noisy (as typically happens in practice), we need more beacons. In this case, we get an overdetermined noisy nonlinear algebraic system, which can be solved by the iterative least-squares technique, in analogy to the above-mentioned trilateration method [11].
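
A minimal sketch of such an iterative (Gauss–Newton) least-squares solution is given below. It assumes that the bearing of beacon i seen from the pose (x, y, phi) is atan2(yi - y, xi - x) - phi, which is one common way of writing the relation above; the beacon positions and measured bearings are hypothetical (chosen to be roughly consistent with a pose near (4, 3, 0.3 rad)), and the Jacobian is computed numerically.

```python
import numpy as np

beacons = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 9.0]])   # beacon positions (assumed)
measured = np.array([-2.80, -0.76, 1.11])                   # measured bearings in rad (assumed)

def wrap(a):                                                # wrap angles to (-pi, pi]
    return (a + np.pi) % (2.0 * np.pi) - np.pi

def residuals(pose):
    x, y, phi = pose
    predicted = np.arctan2(beacons[:, 1] - y, beacons[:, 0] - x) - phi
    return wrap(predicted - measured)

pose = np.array([3.5, 2.5, 0.2])                            # initial guess (assumed roughly known)
for _ in range(20):
    r = residuals(pose)
    J = np.zeros((len(r), 3))                               # numerical Jacobian of the residuals
    for j in range(3):
        d = np.zeros(3); d[j] = 1e-6
        J[:, j] = (residuals(pose + d) - r) / 1e-6
    step = np.linalg.lstsq(J, -r, rcond=None)[0]            # Gauss-Newton step
    pose = pose + step
    if np.linalg.norm(step) < 1e-9:
        break
print("estimated pose (x, y, phi):", pose)
```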

The results obtained are more accurate if the artificial beacons are optimally placed (e.g., if they are about image apart in the three-beacon case). Otherwise, the robot position and orientation may have large variations with respect to an optimal value. Moreover, if all beacons are identical, it is very difficult to identify which beacon has been detected. Finally, mismatch is likely to occur if the presence of obstacles in the environment obscures one or more beacons. A natural way to overcome the problem of distinguishing one beacon from another is to manually initialize and recalibrate the robot position when the robot is lost. However, such a manual initialization/recalibration is not convenient in actual applications. Therefore, many automated initialization/recalibration techniques have been developed for use in robot localization by triangulation. One of them uses a self-organizing neural network (Kohonen network) which is able to recognize and distinguish the beacons [12] (see also Section 12.7).

12.6.4 Localization by Map Matching

Map-matching-based localization is a method in which a robot uses its sensors to produce a map of its local environment [13]. When we talk about a map in real life, we usually imagine a geometrical map like the map of a country or a town with names of places and streets. To find our way in a map of a city using street names, we walk up to a street corner, read the name (i.e., the landmark), and compare the name with what is written in the map. Now, if we know where we wish to go, we plan a path to this goal. Along the way, we use natural landmarks (e.g., street corners, buildings, streets, blocks) to keep track of where we are along the planned path. If we lose track, we have to revert to reading signs and matching the street names to the map. If a road is closed, a dead end, or a one-way street, then we plan an alternative path. This methodology is in principle used by WMRs in map generation and matching, of course with landmarks suitable for the robot’s environment in each case.

The robot compares its local map to a global map already stored in the computer’s memory. If a match is found, then the robot can determine its actual position and orientation in the environment. The prestored map may be a CAD model of the environment, or it can be created from prior sensor data.

The basic structure of a map-based localization system is shown in Figure 12.7.

image

Figure 12.7 Structure of map-based localization (search can be reduced if the initial position guess is close to the true position).

The benefits of the map-based positioning method are the following:

• No modification of the environment is required (only structured natural landmarks are needed).

• Generation of an updated map of the environment can be made.

• The robot can learn a new environment and enhance positioning accuracy via exploration.

Some drawbacks are as follows:

• A sufficient number of stationary, easily distinguishable, features are needed.

• The accuracy of the map must be sufficient for the tasks at hand.

• A substantial amount of sensing and processing power is required.

Maps are distinguished as follows:

Topological maps: They are suitable for sensors that provide information about the identity of landmarks. A real-life topological map example is the subway map which provides information only on how you get from one station (place) to another (no information about distances is given). To enable easy orientation when changing trains, color codings are provided. Topological maps model the world by a network of arcs and nodes.

Geometrical maps: These maps can be constructed from topological maps if the proper equipment for measuring distance and angle is available. Geometrical maps suit sensors that provide geometrical information (e.g., range), like sonar and laser range finders. In general, it is easier to use geometrical maps in conjunction with sensors of this type, than using topological maps, since less advanced perception techniques are needed. In extreme situations, the sensor measurements can be directly matched to the stored map.

Currently, the maps in use are often CAD drawings or have been measured by hand. The human decides which parts of the environment should be included in the map. It is also up to the human to decide which sensor is most appropriate to use (taking, of course, its price into account) and how it will respond to the environment. Ideally, the WMR should be able to construct maps autonomously by itself. In this direction, a solid methodology called simultaneous localization and mapping has been developed, which will be examined in the following text.

12.7 Kalman Filter-Based Localization and Sensor Calibration and Fusion

The Kalman filter described by Eqs. (12.5)–(12.11) can be used for three purposes, namely:

• Robot localization

• Sensor calibration

• Sensor fusion

12.7.1 Robot Localization

The steps of the Kalman filter-based localization algorithm are the following:

Step 1—One-Step Prediction
The predicted robot position image at time image, given the known filtered position image at time image, is given by Eq. (12.12a), that is:

image (12.24)

where image represents the kinematic equation of the robot. The corresponding position uncertainty is given by the covariance matrix image described by Eq. (12.8), that is:

image (12.25)

with image (a known value). The respective prediction image of the sensors’ measurements is given by:

image

Step 2—Sensor Observation
At this step, a new set of sensor measurements image at time image is taken.

Step 3—Matching
The measurement innovation process:

image (12.26)

image (12.27)

is constructed.

Step 4—Position Estimation
Calculate the Kalman filter gain matrix image using Eq. (12.10), that is:

image (12.28)

and the updated position estimate image is given by Eqs. (12.5)–(12.8), that is:

image (12.29)

image (12.30)

where image and image are given by Eqs. (12.26) and (12.25), respectively. The initial value image of the estimate’s covariance, as well as the values of image and image, need to be determined experimentally (before the application of the method) using the available information about the sensors and their models. Typically, image and image. Taking into account that the measurement (sensor observation) vector image may involve data from several sensors (usually of different types), we can say that the Kalman filter also provides a sensor fusion and integration method (see Example 12.5).

In all cases, it is advised to check if the new measurements are statistically independent from the previous measurements and disregard them if they are statistically dependent, since they do not actually provide new information. This can be done at the matching step on the basis of the innovation process (or residual) image and its covariance image given by Eqs. (12.26) and (12.27). The process image expresses the difference between the new measurement image and its expected value image according to the system model.

To this end, we use the so-called normalized innovation squared (NIS) defined as:

image

The process image follows a chi-square image distribution with image degrees of freedom (more precisely, when the innovation process image is Gaussian). Therefore, we can read from the chi-square table the bounding values (confidence interval) for a image random variable with image degrees of freedom, and discard a measurement if its image falls outside the desired confidence interval. More specifically, if the criterion:

image

is satisfied, then the measurement image is considered to be independent from image and so it is included in the filtering process. Measurements that do not satisfy the above criterion are ignored. The random variable image is defined as follows. Consider two random variables (sensor signals) image and image which are independent if the probability distribution of image is not affected by the presence of image. Then, the chi-square variable is given by:

image

where image is the observed frequency of events belonging to both the ith category of image and the image category of image, and image is the corresponding expected frequency of events if image and image are independent. The use of the chi-square (or contingency table) test is described in books on statistics (see, e.g., http://onlinestatbook.com/stat_sim/chisq_theory/index.html).
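
For the gating described above, a common implementation simply compares the NIS with a chi-square quantile; a minimal sketch (using SciPy for the quantile) is given below, with illustrative names.

```python
import numpy as np
from scipy.stats import chi2

def nis_gate(innovation, S, confidence=0.95):
    """Accept a measurement only if its normalized innovation squared lies
    inside the chi-square confidence region for its dimension."""
    nis = float(innovation.T @ np.linalg.inv(S) @ innovation)
    threshold = chi2.ppf(confidence, df=len(innovation))
    return nis <= threshold

# usage sketch: accept = nis_gate(z - z_predicted, S_innovation)
```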

12.7.2 Sensor Calibration

The Kalman filter also provides a method for sensor calibration, which is based on the minimization of the error between the actual measurements and the simulated measurements of the sensor with the parameters at hand. The parameters that have to be calibrated may be intrinsic or extrinsic. Intrinsic parameters cannot be directly observed, but extrinsic parameters can. For instance, the focal distance, the location of the optical center, and the distortion of a camera are intrinsic parameters, but the position of the camera is an extrinsic parameter.

To perform sensor calibration with the aid of the Kalman filter, we use the parameters under calibration as the state vector of the model, in place of the robot’s position/orientation. The prediction step gives simulated values of the parameters at the present state (i.e., with their present values). The state observation step provides a set of sensor readings taken at different robot positions, not during the motion of the robot (as we did for robot localization, i.e., for position estimation and mapping). The use of Kalman filters for sensor calibration is general and independent of the nature of the sensor (range, vision, etc.). The only things that change are the state vector image and the measurement vector with the associated matrices image, image, and image, and the noises image and image.

12.7.3 Sensor Fusion

Sensor fusion is the process of merging data from multiple sensors so as to reduce the uncertainty involved in robot navigation or task execution. Sensor fusion helps in building a more accurate world model, allowing the robot to navigate and behave more successfully. The three fundamental ways of combining sensor data are the following:

• Redundant sensors: All sensors give the same information for the world.

• Complementary sensors: The sensors provide independent (disjoint) types of information about the world.

• Coordinated sensors: The sensors collect information about the world sequentially.

The three basic sensor communication schemes are [14]:

• Decentralized: No communication exists between the sensor nodes.

• Centralized: All sensors provide measurements to a central node.

• Distributed: The nodes interchange information at a given communication rate (e.g., every five scans, i.e., one-fifth communication rate).

The centralized scheme can be regarded as a special case of the distributed scheme where the sensors communicate to each other every scan. A pictorial representation of the fusion process is given in Figure 12.8.

image

Figure 12.8 Illustration of the sensor fusion process.

Consider, for simplicity, the case of two redundant local sensors (N=2). Each sensor processor image provides its own prior and updated estimates and covariances image, image and image, image, image. Assume that the fusion processor has its own total prior estimate image and image. The fusion problem is to compute the total estimate image and covariance matrix image using only those local estimates and the total prior estimate. The total updated estimate can be obtained using linear operations on the local estimates, as:

image

where:

image

When the local processors and the fusion processor have the same prior estimates, the above fusion equations can be simplified as:

image (12.31a)

image (12.31b)

This means that the common prior estimates (i.e., the redundant information) are subtracted in the linear fusion operation. The above equations constitute the fusion processor shown in Figure 12.8. Typically, the sensor processors image are Kalman filters.

Example 12.2

We will derive the formula for fusing the measurements image provided for a quantity image (e.g., the position of a WMR) by image independent sensors. It is assumed that the measurement image of the kth sensor is normally distributed (Gaussian) with variance image.

We will apply the maximum likelihood estimation method in which the joint probability distribution image of image is maximized with respect to the fused value image and fused variance image. The joint Gaussian distribution image is:

image

The likelihood function image is defined as the logarithm of image, that is:

image

To maximize L, we equate to zero the derivative of image with respect to image, that is:

image

Thus, the fused estimate image of the image sensors is equal to:

image

The variance image of image is given by:

image

The partial derivative image is:

image

Therefore:

image

or

image

To illustrate the meaning of the above formulas for image and image, we consider the case of two sensors image, for example, a laser range finder sensor and an ultrasonic range sensor, namely:

image

image

or

image

We see that the variance of the fused estimate is smaller than all the variances of the individual sensor measurements (similar to the connection of resistances in parallel; see Figure 12.9). The formula for image can be written as:

image

image

Figure 12.9 The variance image of the fused estimate image is smaller than both image and image.

Thus, we get the following sensor estimate updating formula:

image

image

with image and image. The updated variance image of image is found to be:

image

The above sequential equations for image and image represent a discrete Kalman filter (see Section 12.2.3 and Example 12.1) that can be used when the sensors collect and provide their data sequentially.

In the above analysis, the variable image was assumed to have a fixed value (e.g., when the WMR is not moving). If the variable image changes, and the change can be expressed by a dynamic stochastic system, then the dynamic Kalman filter should be used, and image decreases with time (see Table 12.1).
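
A minimal sketch of the fusion formulas derived in this example (inverse-variance weighting of independent scalar readings) is given below; the numerical values are hypothetical.

```python
import numpy as np

def fuse(measurements, variances):
    """Maximum-likelihood fusion of independent scalar readings z_k with variances s_k^2."""
    z = np.asarray(measurements, dtype=float)
    s2 = np.asarray(variances, dtype=float)
    w = 1.0 / s2                            # inverse-variance weights
    x_fused = np.sum(w * z) / np.sum(w)     # fused estimate (weighted average)
    var_fused = 1.0 / np.sum(w)             # smaller than every individual variance
    return x_fused, var_fused

# e.g., a laser range reading and an ultrasonic reading of the same distance
print(fuse([2.05, 1.90], [0.01, 0.04]))
```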

12.8 Simultaneous Localization and Mapping

12.8.1 General Issues

The SLAM problem deals with the question of whether it is possible for a WMR placed at an unknown location in an unknown environment to incrementally build a consistent map of this environment while simultaneously determining its location within this map.

The foundations for the study of SLAM were laid by Durrant-Whyte [15], who established a statistical basis for describing relationships between landmarks and manipulating geometric uncertainty. The key element was the observation that there must be a high degree of correlation between estimates of the locations of different landmarks in a map, and that these correlations are strengthened with successive observations. A complete solution to the SLAM problem needs a joint state, involving the WMR’s pose and every landmark position, to be updated following each landmark observation. Of course, in practice this requires the estimator to employ a very high dimensional state vector (of the order of the number of landmarks maintained in the map), with computational cost that grows with the number of landmarks. A wide range of sensors, such as sonars, cameras, and laser range finders, has been used to sense the environment and achieve SLAM ([58]).

A generic self-explained diagram showing the structure and interconnections of the various functions (operations) for performing SLAM is provided in Figure 12.10.

image

Figure 12.10 Interconnection diagram and flow chart of the SLAM functions.

The three typical methods for implementing SLAM are:

• EKF

• Bayesian estimator

• PF

The EKF is an extension of the Kalman filter to nonlinear stochastic models, which allows one to study the observability, controllability, and stability of the filtered system. Bayesian estimators describe the WMR motion and feature observations directly using the underlying probability density functions and the Bayes theorem for probability updating. The Bayesian approach has shown good success in many challenging environments. The PF (otherwise called the sequential Monte Carlo method) is based on simulation [15]. In the following sections, all these SLAM methods will be outlined.

12.8.2 EKF SLAM

The motion of the WMR and the measurement of the map features are described by the following stochastic nonlinear model [16]:

image (12.32a)

image (12.32b)

where the state vector image contains the m-dimensional position image of the vehicle, and a vector image of n stationary d-dimensional map features, that is, image has dimensionality image:

image (12.33)

The l-dimensional input vector image is the robot’s control command, and the l-dimensional process image is a Gaussian stochastic zero-mean process with constant covariance matrix image. The function image represents the sensor model, and image represents the inaccuracies and noise. It is also assumed that image is a zero-mean Gaussian process.

Given a set of measurements image and the current map estimate image, the expression:

image (12.34a)

gives an a priori noise-free estimate of the new locations of the robot and map features after the application of the control input image. In the same way:

image (12.34b)

provides a noise-free a priori estimate of the sensor measurements.

If image and image are linear, then the Kalman filter, Eqs. (12.5)–(12.10), can be directly applied. But, in general, image and image are nonlinear, in which case we use the following linearizations (first-order Taylor approximations):

image (12.35a)

image (12.35b)

where image, image, and image are the Jacobian matrices:

image

Given that the landmarks are assumed stationary, their a priori estimate is:

image

Thus, the overall state model of the vehicle and map dynamics is:

image

image

or

image (12.36)

image (12.37)

where:

image (12.38a)

image (12.38b)

Using the augmented linearized model for the dynamics of the position and the environmental landmarks, we can directly apply the same steps as those given in Section 12.7 for the linear localization problem.

For convenience, we repeat here the sequence of the covariances:

image (12.39)

Under the condition that the pair image is completely observable, the filter covariance tends to a constant matrix image which is given by the following steady-state (algebraic) Riccati equation:

image (12.40)

It is noted that in SLAM the complete observability condition is not satisfied in all cases. Clearly, the steady-state covariance matrix image depends on the values of image, image and image, as well as on the total number image of landmarks. The solution of (12.40) can be found by standard computer packages (e.g., Matlab).
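
Besides Matlab, the steady-state Riccati equation can be solved, for instance, with SciPy; the sketch below uses the standard filtering/control duality (passing the transposed state and measurement matrices) to obtain the steady-state prediction covariance. The small matrices are hypothetical, and the result corresponds to the standard filter Riccati equation, which may differ in detail from the particular form of Eq. (12.40).

```python
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])        # state transition matrix (assumed)
C = np.array([[1.0, 0.0]])        # measurement matrix (assumed, observable pair)
Q = 0.01 * np.eye(2)              # process noise covariance (assumed)
R = np.array([[0.25]])            # measurement noise covariance (assumed)

# Steady-state prediction covariance of the filter (duality: pass A^T and C^T)
P_inf = solve_discrete_are(A.T, C.T, Q, R)
print(P_inf)
```

As noted above, a bounded steady-state solution exists only when the observability (detectability) condition is satisfied.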

The EKF-SLAM solution is well developed and possesses many of the benefits of applying the EKF technique to navigation or tracking control problems. However, since EKF-SLAM uses linearized models of nonlinear dynamics and observation models, it sometimes leads to inevitable and critical inconsistencies. Convergence and consistency can only be assured in the linear case, as illustrated in the following example.

Example 12.3

Consider a one-dimensional WMR (monorobot) with state image, and a one-dimensional landmark image as shown in Figure 12.11 [16].

image

Figure 12.11 A single-dimensional robot with a one-dimensional landmark.

The WMR’s position error dynamics is:

image (12.41)

and the landmark dynamics is:

image (12.42)

The map generated by this system is a single static landmark image. The observation (measurement) model for this landmark is:

image (12.43)

where image is the landmark’s measurement error. Equations (12.41)–(12.43) can be written in the standard form of Eqs. (12.2a) and (12.2b), that is:

image (12.44)

image (12.45)

where:

image

For the filtering problem, the term image is just an additive term, and has no influence on the filtered estimate. The filtered estimate is given by:

image

Here, the matrix image is equal to:

image (12.46)

which has the eigenvalues image and image. We see that image is always equal to 1, regardless of the filter gain matrix image. Therefore, the filter in this case is only marginally stable (i.e., the estimation error exhibits constant-amplitude oscillations). This can easily be verified via simulation. To overcome this problem, which is caused by a partially observable system, we can use several techniques to assure complete observability of the pair image, that is, the matrix:

image (12.47)

is of full rank image, in which case it is invertible. These methods include [17]:

• use of anchors or markers,

• use of fixed global references,

• use of an external sensor,

• use of the relative position of the landmark with respect to the position of the robot instead of global positioning.

Therefore, let us assume that we add an anchor in the measurement process of Eq. (12.43) with state image and noise image. Then, the measurement model of Eq. (12.45) has:

image (12.48)

Now, the gain matrix image becomes:

image (12.49)

and the matrix image is:

image (12.50)

The matrix gain image is given by Eqs. (12.10), (12.7), and (12.8). If image and image are the standard deviations of image and image, respectively, then using the present form of the matrices image and image (see Eq. (12.48)), we can verify that:

image (12.51)

where:

image (12.52)

Now, we can see that the eigenvalues of image given by Eq. (12.50) always lie inside the unit circle of the complex image plane, and so, by adding the “anchor,” the filter becomes stable.
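
The effect of the anchor can be checked numerically. The sketch below uses a common formulation of this monorobot/landmark system (static landmark, relative-position measurement, plus an absolute "anchor" measurement of the robot); the specific matrices and noise values are assumptions and may differ in detail from the elided Eqs. (12.44)–(12.52).

```python
import numpy as np

A = np.eye(2)                          # robot position and one static landmark
C_rel = np.array([[-1.0, 1.0]])        # relative measurement only: z = m - x + v
C_anc = np.array([[-1.0, 1.0],         # relative measurement
                  [ 1.0, 0.0]])        # plus an absolute "anchor" measurement of the robot
Q = np.diag([0.01, 0.0])               # only the robot state is disturbed (assumed)
P0 = np.eye(2)                         # initial covariance (assumed)

def closed_loop_eigs(C, R, steps=200):
    """Iterate the covariance recursion and return the eigenvalues of (I - K C) A."""
    P = P0.copy()
    for _ in range(steps):
        P_pred = A @ P @ A.T + Q
        K = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + R)
        P = (np.eye(2) - K @ C) @ P_pred
    return np.linalg.eigvals((np.eye(2) - K @ C) @ A)

print(closed_loop_eigs(C_rel, R=np.array([[0.04]])))  # one eigenvalue remains exactly 1
print(closed_loop_eigs(C_anc, R=0.04 * np.eye(2)))    # all eigenvalues strictly inside the unit circle
```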

12.8.3 Bayesian Estimator SLAM

In probabilistic (Bayesian) form, the SLAM problem is to compute the conditional probability [18–21]:

image (12.53)

where:

image

image

image

represent the history of landmark observations image and the history of control inputs image, respectively. This probability distribution represents the joint posterior density of the landmark locations and WMR state (at time image) given the recorded observations and control inputs up to and including the time image together with the initial state image of the robot. The Bayesian SLAM is based on Bayesian learning, discussed in Section 12.2.4. Starting with an initial estimate for the distribution:

image (12.54)

at time image, the joint posterior probability following a control image and an observation image is computed by the Bayes learning (updating) formula (12.15b). The steps of the algorithm are as follows:

Step 1—Observation Model
Determine the observation model that describes the probability of making an observation image when the WMR location and landmark locations are known. In general, this model has the form:

image (12.55)


Of course, we tacitly assume that once the robot’s location and the map are defined, the observations are conditionally independent given the map and the current WMR state.

Step 2—WMR Motion Model
Determine the motion model of the vehicle which is described by the Markovian conditional probability:

image (12.56)


This probability indicates that the state image at time image depends only on the state image at time image and the exerted control image, and does not depend on the observations and the map.

Step 3—Time Update
We have:

image (12.57)

Step 4—Measurement Update
According to Bayes update law:

image (12.58)


Equations (12.57) and (12.58) provide a recursive algorithm for calculating the joint posterior probability image for the WMR state image and map image at time image using all observations image and all control inputs image up to and including time image. This recursion uses the WMR model image and the observation model image. We remark that here the map building can be formulated as computing the conditional density image. This needs the location image of the WMR to be known at all times, under the condition that the initial location is known. The map is then produced through the fusion of observations from different positions. On the other hand, the localization problem can be formulated as the problem of computing the probability distribution image. This requires the landmark locations to be known with certainty. The goal is to compute an estimate of the WMR location with respect to these landmarks.
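
To make the time-update/measurement-update recursion concrete, the following minimal sketch implements Eqs. (12.57) and (12.58) for a one-dimensional grid (histogram) filter over the robot position alone, with the map assumed known; the motion kernel and measurement likelihood are hypothetical.

```python
import numpy as np

n_cells = 100
belief = np.ones(n_cells) / n_cells              # initial belief P(x_0)

def time_update(belief, motion_kernel):
    """Eq. (12.57): propagate the belief through the motion model."""
    return np.convolve(belief, motion_kernel, mode="same")

def measurement_update(belief, likelihood):
    """Eq. (12.58): multiply by the observation likelihood and normalize (Bayes rule)."""
    posterior = belief * likelihood
    return posterior / np.sum(posterior)

motion_kernel = np.array([0.1, 0.8, 0.1])               # motion uncertainty (assumed)
cells = np.arange(n_cells)
likelihood = np.exp(-0.5 * ((cells - 40) / 3.0) ** 2)   # sensor indicates "near cell 40" (assumed)

belief = measurement_update(time_update(belief, motion_kernel), likelihood)
print(int(np.argmax(belief)))                    # most probable cell after one cycle
```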

The above formulation can be simplified by dropping the conditioning on historical variables in Eq. (12.54) and writing the joint posterior probability as image. Similarly, the observation model image makes explicit the dependence of the observations on both the vehicle and landmark locations. However, the joint posterior probability cannot be partitioned in the standard way, that is, we have:

image (12.59)

Thus, care should be taken not to use the partition shown in Eq. (12.59) because this could lead to inconsistencies.

However, the SLAM problem has more intrinsic structure than is visible from the above discussion. The most important issue is that the errors in the landmark location estimates are highly correlated; for example, the joint probability density of a pair of landmarks, image, is highly peaked even when the marginal densities image may be very dispersed. Practically, this means that the relative location image of any two landmarks image may be estimated much more accurately than their individual positions, where image may be quite uncertain. In other words, the estimate of the relative location of landmarks always improves and never diverges, regardless of the WMR’s motion. Probabilistically, this implies that the joint probability density of all landmarks image becomes monotonically more peaked as more observations are made.

Example 12.4

We consider a differential drive WMR mapping two-dimensional landmarks image, image (Figure 12.12) [16]. The state space of the robot is three dimensional, where the state vector is:

image (12.60)

image

Figure 12.12 Differential drive WMR on the plane image.

The robot is controlled by a linear velocity image and an angular velocity image.

Let image be the distance from the center of the wheel axis to the location of the center of projection for any given sensor, and image the discrete-time step. The dynamic model of the trajectory of the center of projection of the sensor, which includes the noises image and image, is:

image (12.61)

or in detailed form:

image (12.62)

Differentiating Eq. (12.62) with respect to image and image, we find the Jacobian matrices:

image

image (12.63)

The measurement model of the sensor (here a laser range scanner) is:

image (12.64)

where image and image are the range and bearing of an observed point landmark with respect to the laser center of projection. The position of the ith landmark is image and the measurement noises are image and image. The Jacobian matrix of this nonlinear model is:

image (12.65a)

where:

image (12.65b)

The measurement model of a global reference, fixed at the origin, for the nonlinear vehicle is:

image

which has the Jacobian matrix:

image (12.66)

where image. Thus, the overall measurement matrix image is:

image (12.67)

It can be verified that with the addition of the global reference at the origin, the observability matrix image given by Eq. (12.47) is of full rank and so the EKF is stable and provides a steady-state covariance matrix image (see Eq. (12.40)). This has also been verified experimentally [16].
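As a rough illustration of the measurement side of this example, the following Python sketch evaluates a range/bearing observation of a landmark and its Jacobian with respect to the WMR pose. Since Eqs. (12.64) and (12.65a) appear here only as figures, the sketch assumes the standard range/bearing form r = sqrt(dx^2 + dy^2), phi = atan2(dy, dx) - theta; the numerical pose and landmark values are arbitrary.

import numpy as np

def range_bearing(pose, landmark):
    # Assumed standard form of the laser measurement model (cf. Eq. (12.64))
    x, y, theta = pose
    dx, dy = landmark[0] - x, landmark[1] - y
    r = np.hypot(dx, dy)
    phi = np.arctan2(dy, dx) - theta
    return np.array([r, phi])

def jacobian_wrt_pose(pose, landmark):
    # Jacobian of [r, phi] with respect to (x, y, theta),
    # the structure assumed for Eq. (12.65a)
    x, y, theta = pose
    dx, dy = landmark[0] - x, landmark[1] - y
    q = dx**2 + dy**2
    r = np.sqrt(q)
    return np.array([[-dx / r, -dy / r,  0.0],
                     [ dy / q, -dx / q, -1.0]])

pose = np.array([1.0, 2.0, 0.3])        # assumed WMR pose (x, y, theta)
lm = np.array([4.0, 6.0])               # assumed landmark position
print(range_bearing(pose, lm))
print(jacobian_wrt_pose(pose, lm))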

12.8.4 PF SLAM

The purpose of the PF is to estimate the robot’s position and map parameters image (see Eq. (12.53)) for image on the basis of the data image, image. The Bayes method computes the image using the posterior probability image. The Markov sequential Monte Carlo (MSMC) method (PF) is based on the total probability distribution image [18,22,23].

Here, the Markov stochastic model given by Eqs. (12.32a) and (12.32b) of the system is described in a probabilistic way as follows [18]:

1. image is a first-order Markov process such that:

image corresponds to image with initial distribution image.

2. Assuming that image are known, the observations image are conditionally independent, that is, image is described by image.

Particle methods belong to the class of statistical sampling methods which generate a set of samples that approximate the filtering probability distribution image.2 Therefore, with image samples, the expectation with respect to the filtering distribution is approximately given by:

image

where image can give, by the Monte Carlo technique, all moments of the distribution up to a desired degree. The most widely used particle method is the so-called sequential importance resampling (SIR) method proposed by Gordon and colleagues [18]. In this method, the filtering distribution image is approximated by a set of image particles (multiple copies) of the variable of interest:

image

where image are weights that signify the relative quality of the particles, that is, they are approximations to the relative posterior probabilities (densities) of the particles such that:

image

SIR is a recursive (sequential, iterative) form of importance sampling where the expectation of a function image is approximated by a weighted average:

image

One of the problems encountered in PFs is the depletion of the particle population in some regions of the state space after a number of iterations. Once most of the particles have drifted far from the region of high posterior probability, their weights become very small (close to zero), and they no longer contribute to the estimate of image (i.e., they can be omitted).

The samples of image are drawn using a proposal (importance) probability distribution image. Usually, the transition prior distribution image is selected as the proposal distribution, which facilitates the computations.

The PF algorithm consists of several steps (an illustrative code sketch is given after the list). At each step, for image, we do the following:

1. Draw samples image from the proposal distribution image:

image

2. Update the importance (or quality) weights up to a normalizing constant:

image (12.68a)


If the transition prior distribution image is used as the proposal, that is, image, the above expression for image reduces to:

image (12.68b)

3. Compute the normalized weights as:

image (12.69)

4. Compute an estimate of the effective sample size (effective number of particles) ESS as:

image (12.70)

5. If image, where image is a threshold number of particles, then perform population resampling as described below.
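The sketch announced above collects steps 1–4 for a generic scalar state; the resampling of step 5 is sketched separately below, after Figure 12.13. The motion and measurement functions f and h and the noise levels are assumed placeholders (they stand in for the models of Eqs. (12.32a) and (12.32b)); the weight update follows Eq. (12.68b) with a Gaussian likelihood, and the ESS follows Eq. (12.70).

import numpy as np

rng = np.random.default_rng(0)
N = 500                                   # number of particles

def sir_step(particles, weights, u, z, f, h, sigma_w, sigma_v):
    # Step 1: draw from the proposal = transition prior p(x_k | x_{k-1}, u_k)
    particles = f(particles, u) + rng.normal(0.0, sigma_w, size=particles.shape)
    # Step 2: Eq. (12.68b) -- weights proportional to the likelihood p(z_k | x_k)
    likelihood = np.exp(-0.5 * ((z - h(particles)) / sigma_v) ** 2)
    weights = weights * likelihood
    # Step 3: Eq. (12.69) -- normalize the weights
    weights = weights / weights.sum()
    # Step 4: Eq. (12.70) -- effective sample size
    ess = 1.0 / np.sum(weights ** 2)
    return particles, weights, ess

# Assumed scalar models for illustration only
f = lambda x, u: x + u                    # motion model
h = lambda x: x                           # measurement model
particles = rng.normal(0.0, 1.0, N)
weights = np.full(N, 1.0 / N)
particles, weights, ess = sir_step(particles, weights, u=0.5, z=0.7,
                                   f=f, h=h, sigma_w=0.2, sigma_v=0.3)
print(ess)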

The PF method can be applied to the mobile robot localization problem.

Actually, we have three phases:

1. Prediction—use a model for the simulation of the effect that a control action has on the set of particles with the noise added (see Eq. (12.32a)).

2. Update—use the information obtained from the sensors to update the weights so as to improve the approximation of the probability distribution of the WMR’s motion (see Eq. (12.32b)).

3. Resampling—draw image particles from the current set of particles with probabilities proportional to their weights and replace the current set of particles with this set selecting the weights as image.

Note 1: Resampling is performed if ESS in Eq. (12.70) is image (see Section 13.13, Phase 3).

Note 2: A Matlab code for EKF- and PF-based SLAM can be found in:

http://www.frc.ri.cmu.edu/projects/emergencyresponse/radioPos/index.html.

A schematic representation of the PF loop is as shown in Figure 12.13:

image

Figure 12.13 PF loop structure.

We recall that after a certain number of iterations, image, most weights become nearly zero, and therefore the corresponding particles have negligible importance. This problem is addressed by the resampling process, which replaces these low-importance particles with particles of higher importance.
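The resampling operation itself can be implemented in several ways; a common choice is systematic resampling, sketched below under the assumption that the particles are stored in a NumPy array. After resampling, the weights are reset to equal values (1/N), as stated in phase 3 above.

import numpy as np

def systematic_resample(particles, weights, rng=None):
    # Draw N particles with probability proportional to their weights
    # using one random offset and N equally spaced pointers.
    rng = np.random.default_rng() if rng is None else rng
    N = len(weights)
    positions = (rng.random() + np.arange(N)) / N
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0                      # guard against round-off
    indices = np.searchsorted(cumulative, positions)
    new_particles = particles[indices]
    new_weights = np.full(N, 1.0 / N)         # reset weights to 1/N
    return new_particles, new_weights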

12.8.5 Omnidirectional Vision-Based SLAM

The main issue in SLAM is for the robot to build a map of the environment and localize its position/posture in this map on the basis of noisy measurements of characteristic points (landmarks) extracted from the images. The measurements of a catadioptric camera are particularly suited for use in conjunction with the EKF, in which every state variable or output variable is represented by its mean and covariance. The motion of the WMR and the measurements are described by Eqs. (12.32a) and (12.32b), where image and image are nonlinear functions of their arguments. The state vector image at time image image contains the position vector image of the vehicle, and the vector image of the map features. The disturbances/noises image and image are assumed Gaussian with zero means and known covariances. The EKF equations involve the Jacobian matrices of image and image. For the catadioptric camera system, these matrices have been derived in Example 9.4, and can be used to formulate the EKF equations in a straightforward way. Two examples of this application are provided in Refs. [24,25].

Example 12.5

In this example, we will develop the full EKF equations for the localization of a unicycle-type WMR using (fusing) two kinds of sensors [26]:

• Encoders on the powered wheels that provide a measure of the incremental rotation angles over a possibly varying sampling period image.

• A set of sonar sensors mounted on the platform of the WMR.

The WMR kinematic equations image, image, and image are discretized using the first-order approximation image, giving the discrete-time model:

image (12.71)

where image (or image for simplicity) and image (constant). This model is nonlinear in the state vector:

image (12.72a)

The control vector is:

image (12.72b)

The disturbance inputs image are zero-mean Gaussian white noises with known identical variances image. The model in Eq. (12.71) can be written in the compact form of Eq. (12.32a):

image (12.73)

where:

image (12.74)

with additive disturbance image. The linear approximation of Eq. (12.73) is (see Section 12.8.2):

image (12.75)

where image is the current estimate of image (based on measurements up to time image), and:

image (12.76a)

image (12.76b)

image (12.76c)
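Since Eqs. (12.71)–(12.76c) appear here only as figures, the following sketch assumes the standard first-order discretization of the unicycle kinematics, written in terms of the linear and angular velocities rather than the encoder increments of Eq. (12.72b), and computes the corresponding state and input Jacobians at the current estimate.

import numpy as np

def predict_state(state, u, T):
    # Assumed discrete unicycle model (first-order approximation, cf. Eq. (12.71)):
    # x+ = x + v T cos(theta), y+ = y + v T sin(theta), theta+ = theta + omega T
    x, y, theta = state
    v, omega = u
    return np.array([x + v * T * np.cos(theta),
                     y + v * T * np.sin(theta),
                     theta + omega * T])

def jacobians(state, u, T):
    # F = df/dx and G = df/du evaluated at the current estimate
    # (the role played by Eqs. (12.76a)-(12.76c))
    _, _, theta = state
    v, _ = u
    F = np.array([[1.0, 0.0, -v * T * np.sin(theta)],
                  [0.0, 1.0,  v * T * np.cos(theta)],
                  [0.0, 0.0,  1.0]])
    G = np.array([[T * np.cos(theta), 0.0],
                  [T * np.sin(theta), 0.0],
                  [0.0,               T  ]])
    return F, G

x_next = predict_state(np.array([0.0, 0.0, 0.0]), (1.0, 0.2), T=0.1)  # assumed values
print(x_next)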

Let image be the coordinates of the ith sonar sensor in the vehicle’s coordinate frame image, and image the orientation angle of the ith sonar in image, as shown in Figure 12.14.

image

Figure 12.14 WMR with incremental encoders on the wheels and sonar sensors on its platform.

The discrete-time kinematic equations in Oxy of the ith sonar are:

image (12.77)

Now, consider a plane (surface) image as shown in Figure 12.15, and a sonar image with beam width image (all sonars are assumed to have the same beam width image). Each plane image can be represented in Oxy by image and image, where:

• image is the (normal) distance of image from the origin image of the world coordinate frame.

• image is the angle between the normal line to the plane image and the Ox direction.

image

Figure 12.15 Geometry of sonar image.

The distance image of sonar image from the plane image is (see Figure 12.15):

image (12.78a)

for

image (12.78b)

The measurement vector image contains the encoder and sonar measurements and has the form:

image (12.79)

where image is a Gaussian zero-mean white measurement noise with covariance matrix image, and:

image (12.80a)

image (12.80b)

with image (image is the number of sonars), image the distance measurement with respect to the plane image provided by the ith sonar, image (image is the number of planes), and image

For simplicity, we assume (without loss of generality) that only one sensor and one plane are used (i.e., image and image), in which case:

image (12.81)
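For the sonar component of this measurement model, the sketch below assumes the plane parameterization described above (normal distance from the origin and angle of the normal with the Ox direction): the sonar position in the world frame is obtained from the vehicle pose and the (assumed) mounting offsets, and its normal distance to the plane is then computed as in Eq. (12.78a). The mounting values and pose used in the call are arbitrary.

import numpy as np

def sonar_position(pose, mount_xy, mount_angle):
    # Sonar position/orientation in the world frame from the vehicle pose
    # and the assumed mounting offsets in the vehicle frame (cf. Eq. (12.77))
    x, y, theta = pose
    xs = x + mount_xy[0] * np.cos(theta) - mount_xy[1] * np.sin(theta)
    ys = y + mount_xy[0] * np.sin(theta) + mount_xy[1] * np.cos(theta)
    return xs, ys, theta + mount_angle

def sonar_plane_distance(pose, mount_xy, mount_angle, p_r, p_n):
    # Assumed form of Eq. (12.78a): normal distance of the sonar from the plane,
    # valid only while the plane's normal falls inside the sonar beam (Eq. (12.78b))
    xs, ys, _ = sonar_position(pose, mount_xy, mount_angle)
    return p_r - xs * np.cos(p_n) - ys * np.sin(p_n)

print(sonar_plane_distance((1.0, 0.5, 0.1), (0.2, 0.0), 0.0, p_r=3.0, p_n=0.0))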

Linearizing the measurement Eq. (12.79) about image, where image is the estimate of image using measurement data up to time image, and noting that image does not depend on image, we get:

image (12.82)

where:

image (12.83a)

image (12.83b)

Now, having available the linearized model of Eqs. (12.75) and (12.82), we can directly use the linear Kalman filter Eqs. (12.5)–(12.12). For the robot localization we use the steps described in Section 12.7.1.

Therefore:

Step 1: One-Step Prediction

image (12.84)

Step 2: Sensor Observation
A new set of sensor measurements image is taken at time image.

Step 3: Matching
The measurement innovation process:

image (12.85)

is constructed.

Step 4: Position Estimation

image (12.86a)

image (12.86b)

image (12.86c)

with image a given symmetric positive definite matrix.
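A compact way to code steps 1–4 is sketched below for generic linearized matrices F and H and noise covariances Q and R; the functions f and h stand for the nonlinear motion and measurement models of Eqs. (12.73) and (12.81) and are assumed to be supplied by the user.

import numpy as np

def ekf_cycle(x_est, P, u, z, f, h, F, H, Q, R):
    # Step 1: one-step prediction (cf. Eq. (12.84))
    x_pred = f(x_est, u)
    P_pred = F @ P @ F.T + Q
    # Steps 2-3: sensor observation and matching -- innovation (cf. Eq. (12.85))
    innovation = z - h(x_pred)
    S = H @ P_pred @ H.T + R
    # Step 4: position (state) estimation (cf. Eqs. (12.86a)-(12.86c))
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ innovation
    P_new = (np.eye(len(x_est)) - K @ H) @ P_pred
    return x_new, P_new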
Any valid controller that guarantees the stability of the closed-loop system can be used. Here, a dynamic state feedback linearizing and decoupling controller derived by the method of Example 6.7 will be used [26,27]. We select the output vector:

image (12.87a)

and differentiate it to obtain:

image (12.87b)


Clearly, image does not influence image, and so we introduce the dynamic compensator:

image (12.88)

where image is the linear WMR acceleration. Therefore, Eq. (12.87a) takes the form:

image (12.89)


Differentiating Eq. (12.89) we get:

image (12.90)

where image is the nonsingular decoupling matrix:

image (12.91)

with image.
Thus, defining new inputs image and image such that:

image (12.92)

and solving Eq. (12.90) for image we get:

image

which is the desired dynamic state feedback linearizing and decoupling controller:

image (12.93a)

image (12.93b)

image (12.93c)

with:

image (12.93d)

Numerical Results—We consider the following values of the system parameters and initial conditions:

• Initial WMR position: image

• Sonar position in image: image

• Plane position: image

• Disturbance/noise:

image

The desired trajectory starts at image, image and is a straight line that forms an angle of image with the world coordinate image axis as shown in Figure 12.16.

image

Figure 12.16 Desired trajectory of the WMR.

The new feedback controller image is designed, as usual, using the linear PD algorithm:

image

image
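The controller can be sketched in code as follows, assuming the standard unicycle form of the decoupling matrix (Eq. (12.91)) and generic PD gains; the actual gains and trajectory used in [26] are not reproduced here. The compensator state is represented by the variable xi and must be integrated externally (xi_dot = a); note that the decoupling matrix is singular when xi = 0, so xi should be initialized away from zero.

import numpy as np

def dfl_controller(state, xi, yd, yd_dot, yd_ddot, Kp, Kd):
    # state = (x, y, theta); xi is the compensator state (linear velocity)
    x, y, theta = state
    y_out = np.array([x, y])
    y_dot = np.array([xi * np.cos(theta), xi * np.sin(theta)])
    # Outer-loop PD law for the new inputs r1, r2 (gains assumed)
    r = yd_ddot + Kd * (yd_dot - y_dot) + Kp * (yd - y_out)
    # Decoupling matrix G (assumed standard unicycle form of Eq. (12.91));
    # singular for xi = 0
    G = np.array([[np.cos(theta), -xi * np.sin(theta)],
                  [np.sin(theta),  xi * np.cos(theta)]])
    a, omega = np.linalg.solve(G, r)     # [a, omega] = G^{-1} r (cf. Eq. (12.93))
    return a, omega                      # integrate xi_dot = a to obtain the linear velocity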

Figure 12.17A shows the trajectory obtained using odometric and sonar measurements and the desired trajectory. Figure 12.17B shows the desired orientation image, and the real orientation image obtained using the EKF fusion. As we see, the performance of the EKF fusion is very satisfactory.

image

Figure 12.17 (A) Actual trajectory obtained using both kinds of sensors versus the desired trajectory and (B) corresponding curves for the orientations image and image.

However, the experiment showed that the trajectories obtained from pure odometric measurements and from the combined odometric and sonar data are similar, which is probably due to the modeling assumptions made. Here, the case of using both kinds of sensors was treated jointly by a single EKF. A better picture of the improvement can be obtained by using the sensor fusion scheme of Figure 12.8, that is, by first computing separately the (local) estimates provided by each individual kind of sensor and then finding the combined estimate according to Eqs. (12.31a) and (12.31b), which eliminates the effect of any redundant information.

Further improvement in the accuracy of the state vector estimation can be obtained using the PF approach. This is because the EKF assumes Gaussian disturbances and noise processes, which in general is not the case, whereas the PF makes no assumptions about the probability distributions of the stochastic disturbances and noises. In the PF, a set of weighted particles (state vector estimates) is employed, which evolve in parallel. Each iteration of the PF involves particle updating and weight updating, and convergence is assured through resampling, by which particles with low weights are replaced by particles of high weights. Indeed, simulation experiments using the PF for the above sensor fusion example, with number of particles image, have shown that the PF is superior to the EKF. As the number of particles was increased, better estimates of the WMR state vector were obtained, at the cost of higher computational effort.

Example 12.6

It is desired to give a solution to the problem of estimating the leader’s velocity image in a leader–follower vision-based control system (Figure 12.18).

image

Figure 12.18 Leader–follower system with leader pose image and follower pose image.

Solution

The image processing algorithms give the range and bearing data [28], that is:

image (12.94a)

image (12.94b)

Differentiating image and image we get:

image (12.95a)

image (12.95b)

where:

image (12.95c)

Defining the angle image as:

image (12.96)

and assuming that image and image, we find that the state vector under estimation is described by the nonlinear model:

image (12.97a)

where image is a Gaussian stochastic disturbance input process with zero-mean and known covariance matrix and:

image (12.97b)

The measured output image is:

image (12.97c)

where image is the zero-mean sensor measurement Gaussian noise with known covariance matrix.

Discretizing Eqs. (12.97a)–(12.97c) with a first-order approximation and sampling period image gives the nonlinear discrete-time model:

image (12.98a)

image (12.98b)

where:

image (12.98c)

We will apply the EKF technique to the nonlinear model given by Eqs. (12.98a)–(12.98c), which is linearized, as usual, about the current estimate image to give (see Eqs. (12.32a)–(12.35b)):

image (12.99a)

where the estimate image is based on measurements image, and:

image (12.99b)

image (12.99c)

image (12.99d)

The measurement Eq. (12.97c) is already linear and can be written as:

image (12.100a)

where:

image (12.100b)

The linear state-space and measurement model given by Eqs. (12.99a)–(12.99d), (12.100a), and (12.100b) has the standard form of Eqs. (12.2a) and (12.2b) or Eqs. (12.75) and (12.82), and so the formulation of the EKF equations is straightforward. As an exercise, the reader can write down these equations and implement them in a computer program with proper values of the parameters.
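When coding this exercise, the analytic Jacobian of Eq. (12.99b) can be tedious to derive by hand; one practical alternative (an assumption of this sketch, not a method from the text) is to approximate it by finite differences and plug the result into the EKF cycle sketched after Eq. (12.86c).

import numpy as np

def numerical_jacobian(f, x, u, eps=1e-6):
    # Finite-difference approximation of df/dx at the current estimate,
    # usable in place of the analytic state Jacobian when coding the exercise.
    n = len(x)
    fx = f(x, u)
    J = np.zeros((len(fx), n))
    for i in range(n):
        dx = np.zeros(n)
        dx[i] = eps
        J[:, i] = (f(x + dx, u) - fx) / eps
    return J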

Example 12.7

Given that a sensor is available to measure the range and the bearing of observed landmarks, it is desired to outline an algorithm that updates the pose (position and orientation) of a car-like WMR.

Solution

The algorithm is similar to the velocity estimation algorithm of the previous example. Here, the kinematic model of the car-like WMR is:

image (12.101)

where “image” is the decay parameter of the steering angle image, which is assumed to be constrained as image, and image is an input gain.

The control vector is:

image

To use the EKF for updating the position and orientation of the WMR, we first discretize (as usual) this model and get (see Eqs. (12.98a) and (12.98b)):

image (12.102a)

image (12.102b)

where image and image are zero-mean Gaussian white processes with known covariance matrices image and image, respectively, and image is the position of the ith landmark:

image (12.102c)

The output function image is:

image (12.102d)

The function image is given by:

image (12.103a)

where:

image (12.103b)

The linearized state and measurement model is found to be:

image (12.104a)

image (12.104b)

where:

image

image

image

image

image

image

image

where image, image, and image.

Again, the model given by Eqs. (12.104a) and (12.104b) is a standard linear time-varying stochastic model to which the EKF is applied directly. The four steps:

• One-step prediction

• Sensor observation

• Matching

• Pose estimation

provide the solution (see Section 12.7.1). In the matching step, we can use the image criterion to validate the matching (independence) of each landmark (sensor) measurement. Measurements that do not satisfy this criterion are ignored. It is remarked that the control input image in the term image of Eq. (12.104a) is assumed to be known (since our aim here is purely the estimation of image). This input can be selected by a proper method (Chapters 5–9) for achieving a desired control objective, as was done in Example 12.5.
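The validation test used in the matching step is commonly implemented as a gate on the normalized innovation (Mahalanobis distance); the sketch below uses the 95% chi-squared threshold for a two-dimensional range/bearing innovation, which is an assumed typical choice rather than a value taken from the text.

import numpy as np

def innovation_gate(z, z_pred, H, P_pred, R, gate=5.99):
    # Normalized innovation squared (Mahalanobis distance); for a 2-D
    # range/bearing innovation, 5.99 is the 95% chi-squared threshold (2 d.o.f.).
    nu = z - z_pred
    S = H @ P_pred @ H.T + R
    d2 = nu @ np.linalg.solve(S, nu)
    return d2 <= gate                     # accept the landmark match or ignore it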

References

1. Papoulis A. Probability, random variables and stochastic processes. New York, NY: McGraw-Hill; 1965.

2. Meditch JS. Stochastic optimal linear estimation and control. New York, NY: McGraw-Hill; 1969.

3. Anderson BDO, Moore JB. Optimal filtering. Englewood Cliffs, NJ: Prentice Hall; 1979.

4. Borenstein J, Everett HR, Feng L. Navigating mobile robots: sensors and techniques Wellesley, MA: A.K. Peters Ltd; 1999.

5. Adams MD. Sensor modeling, design and data processing for autonomous navigation. Singapore: World Scientific; 1999.

6. Davies ER. Machine vision: theory, algorithms, practicalities San Francisco, CA: Morgan Kaufmann; 2005.

7. Bishop RH. Mechatronic systems, sensors and actuators: fundamentals and modeling Boca Raton, FL: CRC Press; 2007.

8. Leonard JL. Directed sonar sensing for mobile robot navigation Berlin: Springer; 1992.

9. Kleeman, L. Advanced sonar and odometry error modeling for simultaneous localization and map building. In: Proceedings of the 2004 IEEE/RSJ international conference on intelligent robots and systems, Sendai, Japan, 2004, p. 1866–71.

10. Betke M, Gurvits L. Mobile robot localization using landmarks. IEEE Trans Rob Autom. 1997;13(2):251–263.

11. Andersen CS, Concalves JGM. Determining the pose of a mobile robot using triangulation: a vision based approach. Technical Report No I. 195-159, European Union Joint Research Center, December 1995.

12. Hu H, Gu D. Landmark-based navigation of industrial mobile robots. Int J Ind Rob. 2000;27(6):458–467.

13. Castellanos JA, Tardos JD. Mobile robot localization and map building: a multisensor fusion approach Berlin: Springer; 1999.

14. Chang KC, Chong CY, Bar-Shalom Y. Joint probabilistic data association in distributed sensor networks. IEEE Trans Autom Control. 1986;31:889.

15. Durrant-Whyte HF. Uncertainty geometry in robotics. IEEE Trans Rob Autom. 1988;4(1):23–31.

16. Vidal Calleja TA. Visual navigation in unknown environments. Ph.D. Thesis, IRI, Univ. Polit. de Catalunya, Barcelona, 2007.

17. Guivant JE, Nebot EM. Optimization of the simultaneous localization and map-building algorithm for real-time implementation. IEEE Trans Rob Autom. 2001;17(3):242–257.

18. Gordon NJ, Salmond DJ, Smith AFM. Novel approach to nonlinear/non-Gaussian Bayesian state estimation. IEE Proc F Radar Signal Process. 1993;140(2):107–113.

19. Rekleitis I, Dudek G, Milios E. Probabilistic cooperative localization and mapping in practice. Proc IEEE Rob Autom Conf. 2003;2:1907–1912.

20. Rekleitis I, Dudek G, Milios E. Multirobot collaboration for robust exploration. Ann Math Artif Intell. 2001;31(1–4):7–40.

21. Bailey T, Durrant–Whyte H. Simultaneous localization and mapping (SLAM), Part I. IEEE Rob Autom Mag. 2006;13(2):99–110 Part II, ibid, (3):108–17.

22. Doucet A, De Freitas N, Gordon NJ. Sequential Monte Carlo methods in practice Berlin: Springer; 2001.

23. Crisan D, Doucet A. A survey of convergence results on particle filtering methods for practitioners. IEEE Trans Signal Process. 2002;50(3):736–746.

24. Rituerto A, Puig L, Guerrero JJ. Visual SLAM with an omnidirectional camera. In: Proceedings of twentieth international conference on pattern recognition (ICPR), Istanbul, Turkey, 23–26 August, 2010, p. 348–51.

25. Kim JM, Chung MJ. SLAM with omnidirectional stereo vision sensor. In: Proceedings of 2003 IEEE/RSJ international conference on intelligent robots and systems, Las Vegas, NV, October, 2003, p. 442–47.

26. Rigatos GG, Tzafestas SG. Extended Kalman filtering for fuzzy modeling and multi-sensor fusion. Math Comput Model Dyn Sys. 2007;13(3):251–266.

27. Oriolo G, De Luca A, Vendittelli M. WMR control via dynamic feedback linearization: design, implementation and experimental validation. IEEE Trans Control Syst Technol. 2002;10(6):835–852.

28. Das AK, Fierro R, Kumar V, Southall B, Spletzer J, Taylor CJ. Real-time mobile robot. In: Proceedings of 2001 international conference on robotics and automation, Seoul, Korea, 2001, p. 1714–19.


2For convenience, we drop the dependence on the inputs image.
