$$
P_{k|k} = P_{k|k-1} - K_k C P_{k|k-1} - P_{k|k-1} C^T K_k^T + K_k S_k K_k^T = P_{k|k-1} - K_k C P_{k|k-1} - P_{k|k-1} C^T K_k^T + P_{k|k-1} C^T K_k^T = P_{k|k-1} - K_k C P_{k|k-1}
$$

Example 6.19

There is a mobile robot that drives around on a plane and measures its position with a GPS sensor 10 times per second (sampling time Ts = 0.1 s). The position measurement is disturbed with zero-mean Gaussian noise with variance 10 m². The mobile robot drives with a speed of 1 m/s in the x direction and 0 m/s in the y direction. The variance of the Gaussian noise of the velocity is 0.1 m²/s². At the beginning of observation the mobile robot is situated at the origin, $x = [0, 0]^T$, but our initial estimate of the mobile robot's position is $\hat{x} = [3, 3]^T$ with initial variance

$$
P_0 = \begin{bmatrix} 10 & 0 \\ 0 & 10 \end{bmatrix}
$$

What is the time evolution of the position estimate and the variance of the estimate?

Solution

The solution of the problem can be obtained with the aid of simulation tools provided in Matlab. Let us determine the robot's movement model, where the state estimate represents the robot's position in a plane, $\hat{x}_{k|k-1} = [x_k, y_k]^T$, and the input $u = [v_x, v_y]^T$ represents the robot's speed in the x and y directions, respectively. The model for system state prediction is therefore

$$
\hat{x}_{k|k-1} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \hat{x}_{k-1|k-1} + \begin{bmatrix} T_s & 0 \\ 0 & T_s \end{bmatrix} u
$$

The position measurement model of a GPS sensor is

$$
\hat{z}_k = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \hat{x}_{k|k-1}
$$

Listing 6.10 provides an implementation of the solution in Matlab. The simulation results are shown graphically in Figs. 6.20–6.22. The simulation results confirm that the algorithm provides a position estimate that converges to the true robot position from the false initial estimate. This is expected since the mean of the measurement noise is zero. Moreover, the estimate variance decreases with time and falls well below the measurement variance, since the filter also relies on the trusted internal kinematic model and not only on the measurements. The Kalman filter enables optimum fusion of data from different sources (internal kinematic model and external measurement model) if the variances of these sources are known. The reader should feel free to experiment with the system parameters and observe the evolution of the estimate variance and the convergence rate.

Listing 6.10

Implementation of the solution of Example 6.19

% Linear model of the system in state-space
Ts = 0.1; % Sampling time
A = [1 0; 0 1];
B = [Ts 0; 0 Ts];
C = [1 0; 0 1];
F = B; % Noise is added directly to the input.

xTrue = [0; 0]; % True initial state
x = [3; 3]; % Initial state estimate
P = diag([10 10]); % Variance of the initial state estimate
Q = diag([1 1]/10); % Noise variance of the movement actuator
R = diag([10 10]); % Noise variance of the GPS measurements

figures_kf1_init;

% Loop
N = 150;
for k = 1:N
    u = [1; 0]; % Movement command

    % Simulation of the true mobile robot position and measurement
    xTrue = A*xTrue + B*u + F*sqrt(Q)*randn(2,1);
    zTrue = C*xTrue + sqrt(R)*randn(2,1);

    % Position estimate based on known inputs and measurements
    %%% Prediction
    xPred = A*x + B*u;
    PPred = A*P*A.' + F*Q*F.';

    %%% Correction
    K = PPred*C.'/(C*PPred*C.' + R);
    x = xPred + K*(zTrue - C*xPred);
    P = PPred - K*C*PPred;

    figures_kf1_update;
end

Fig. 6.20 True (dashed line) and estimated (solid line) trajectory with measurements (dotted line) from Example 6.19. The final position of the robot is marked with a circle.
Fig. 6.21 True position (dashed line) and position estimate (solid line) of the mobile robot based on measurements (dotted line) from Example 6.19.
Fig. 6.22 Time evolution of mobile robot position variance from Example 6.19.

6.5.2 Extended Kalman Filter

The Kalman filter was developed for linear systems, and it is normally assumed that all disturbances and noises can be described with a zero-mean Gaussian distribution. Gaussian noise is invariant to linear transformations: if Gaussian noise is transformed by a linear function, it remains Gaussian; only its parameters change, and they can be calculated explicitly given the known linear function. This is the reason behind the computational efficiency of the Kalman filter. If the input Gaussian noise is transformed by a nonlinear function, the output noise is no longer Gaussian, but it can normally still be approximated with a Gaussian distribution.
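Concretely, if $x \sim \mathcal{N}(\mu, \Sigma)$ and the transformation is linear, $y = Ax + b$, then the transformed variable is again Gaussian and its parameters follow directly from the transformation:

$$
y \sim \mathcal{N}\left(A\mu + b,\; A \Sigma A^T\right)
$$

This is exactly the kind of covariance propagation used in the prediction step of the Kalman filter, where the state covariance is mapped through the system matrices.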

If any of the state-transition or output equations of the system is nonlinear, the conventional Kalman filter may no longer produce an optimum state estimate. To overcome the problem of nonlinearities, the extended Kalman filter (EKF) was developed, in which the system nonlinearities are approximated with local linear models. A local linear model is obtained from the nonlinear system by linearization (first-order Taylor series expansion) around the current state estimate. The linearization yields the sensitivity matrices (Jacobian matrices) for the current values of the estimated states and measurements. The obtained linear model enables the noise, which is not necessarily Gaussian, to be approximated with a Gaussian noise distribution.

The use of linearization in the process of noise modeling enables a computationally efficient implementation of the EKF algorithm; hence, the EKF is commonly used in practice. The accuracy of the linear approximation depends on the noise variance (large uncertainties or noise amplitudes may lead to a poor linear approximation, since the signal may leave the linear region) and on the level of nonlinearity. Due to the error caused by linearization, the convergence of the filter can deteriorate, or the estimate may not even converge to the true solution.

A nonlinear system may be written in a general form:

$$
\begin{aligned}
x_k &= f(x_{k-1}, u_{k-1}, w_{k-1}) \\
z_k &= h(x_k) + v_k
\end{aligned}
\qquad (6.37)
$$

where the noise wk can occur at the system input or influence the states directly.

The EKF for a nonlinear system (6.37) has a prediction part:

$$
\begin{aligned}
\hat{x}_{k|k-1} &= f(\hat{x}_{k-1|k-1}, u_{k-1}) \\
P_{k|k-1} &= A P_{k-1|k-1} A^T + F Q_{k-1} F^T
\end{aligned}
\qquad (6.38)
$$

and a correction part:

$$
\begin{aligned}
K_k &= P_{k|k-1} C^T \left(C P_{k|k-1} C^T + R_k\right)^{-1} \\
\hat{x}_{k|k} &= \hat{x}_{k|k-1} + K_k (z_k - \hat{z}_k) \\
P_{k|k} &= P_{k|k-1} - K_k C P_{k|k-1}
\end{aligned}
\qquad (6.39)
$$

In the prediction part (6.38) the nonlinear state-transition function is used to calculate the state prediction estimate. To calculate the noise covariance matrix, the Jacobian matrix A, which describes the noise propagation from the previous states to the current states, and the Jacobian matrix F, which describes the noise propagation from the inputs to the states, need to be determined from the state model (6.37):

$$
A = \left.\frac{\partial f}{\partial x}\right|_{\hat{x}_{k-1|k-1},\, u_{k-1}}
\qquad (6.40)
$$

$$
F = \left.\frac{\partial f}{\partial w}\right|_{\hat{x}_{k-1|k-1},\, u_{k-1}}
\qquad (6.41)
$$

In the correction part (6.39) the measurement estimate is made based on the predicted state estimate, $\hat{z}_k = h(\hat{x}_{k|k-1})$. The Jacobian matrix C that describes the noise propagation from the states to the outputs also needs to be determined:

$$
C = \left.\frac{\partial h}{\partial x}\right|_{\hat{x}_{k|k-1}}
\qquad (6.42)
$$

The noise covariance matrices are $Q_k = E\{w_k w_k^T\}$ and $R_k = E\{v_k v_k^T\}$. Among many applications, the EKF has been successfully applied for solving the wheeled mobile robot localization problem [7, 8] and map building [9].
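The equations above translate directly into code. The following Matlab sketch shows one EKF iteration for a generic model given by function handles f(x, u, w) and h(x); the numerical finite-difference Jacobians and all names used here are illustrative assumptions, not part of the examples that follow (which use analytically derived Jacobians).

function [x, P] = ekf_step(x, P, u, z, f, h, Q, R)
    % One EKF iteration for x_k = f(x_{k-1}, u_{k-1}, w_{k-1}) and z_k = h(x_k) + v_k,
    % with the noise entering through the input (Eqs. 6.38 and 6.39).
    q = size(Q, 1);                                   % dimension of the noise w
    A = numJacobian(@(s) f(s, u, zeros(q, 1)), x);    % df/dx, Eq. (6.40)
    F = numJacobian(@(w) f(x, u, w), zeros(q, 1));    % df/dw, Eq. (6.41)
    % Prediction, Eq. (6.38)
    xPred = f(x, u, zeros(q, 1));
    PPred = A*P*A.' + F*Q*F.';
    % Correction, Eq. (6.39)
    C = numJacobian(h, xPred);                        % dh/dx, Eq. (6.42)
    K = PPred*C.'/(C*PPred*C.' + R);
    x = xPred + K*(z - h(xPred));
    P = PPred - K*C*PPred;
end

function J = numJacobian(fun, x0)
    % Forward-difference approximation of the Jacobian of fun at x0
    f0 = fun(x0);
    J = zeros(numel(f0), numel(x0));
    delta = 1e-6;
    for i = 1:numel(x0)
        xi = x0;
        xi(i) = xi(i) + delta;
        J(:, i) = (fun(xi) - f0)/delta;
    end
end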

Example 6.20

A wheeled mobile robot with a differential drive moves on a plane. The robot input commands are the translational velocity $v_k$ and the angular velocity $\omega_k$, which are both disturbed independently by Gaussian noise with zero mean and variances $\mathrm{var}(v_k) = 0.1\ \mathrm{m}^2/\mathrm{s}^2$ and $\mathrm{var}(\omega_k) = 0.1\ \mathrm{rad}^2/\mathrm{s}^2$, respectively.

The mobile robot has a sensor that enables measurement of the distance to the marker that is positioned in the origin of the global coordinate frame. The robot is also equipped with a compass that enables measurement of the mobile robot's orientation (heading). The considered setup is presented in Fig. 6.23. The distance measurement is disturbed by Gaussian noise with zero mean and variance 0.5 m², and the angle measurement is disturbed by Gaussian noise with zero mean and variance 0.3 rad².

Fig. 6.23 Setup considered in Example 6.20. The mobile robot has a sensor for measuring the distance to marker M (positioned in the origin of the global frame) and a compass to determine the robot orientation in the global frame (heading).

At the beginning of observation the true initial pose of the robot is $x_0 = [1, 2, \pi/6]^T$; however, the initial estimated pose is $\hat{x}_0 = [3, 0, 0]^T$ with initial variance:

$$
P_0 = \begin{bmatrix} 9 & 0 & 0 \\ 0 & 9 & 0 \\ 0 & 0 & 0.6 \end{bmatrix}
$$

What is the time evolution of the mobile robot pose estimate ($\hat{x}_{k|k-1} = [x_k, y_k, \varphi_k]^T$) and the variance of the estimate if the command $u = [v_k, \omega_k]^T = [0.5, 0.5]^T$ is sent to the robot in every sampling time step $T_s = 0.1$ s?

Solution

The solution can be obtained with the aid of simulation tools provided in Matlab. Let us determine the kinematic motion model of the mobile robot. In this case the kinematic model is nonlinear:

$$
\hat{x}_{k|k-1} = \hat{x}_{k-1|k-1} + \begin{bmatrix} T_s v_{k-1} \cos(\varphi_{k-1}) \\ T_s v_{k-1} \sin(\varphi_{k-1}) \\ T_s \omega_{k-1} \end{bmatrix}
$$

The distance and angle measurement model is

$$
\hat{z}_k = \begin{bmatrix} \sqrt{x_k^2 + y_k^2} \\ \varphi_k \end{bmatrix}
$$
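Linearizing these two models around the current estimates, as required by Eqs. (6.40)–(6.42), gives the Jacobian matrices used in Listing 6.11 (written out here for reference, with $d = \sqrt{x_k^2 + y_k^2}$ evaluated at the predicted estimate):

$$
A = \begin{bmatrix} 1 & 0 & -T_s v_{k-1} \sin(\varphi_{k-1}) \\ 0 & 1 & T_s v_{k-1} \cos(\varphi_{k-1}) \\ 0 & 0 & 1 \end{bmatrix}, \qquad
F = \begin{bmatrix} T_s \cos(\varphi_{k-1}) & 0 \\ T_s \sin(\varphi_{k-1}) & 0 \\ 0 & T_s \end{bmatrix}, \qquad
C = \begin{bmatrix} \dfrac{x_k}{d} & \dfrac{y_k}{d} & 0 \\ 0 & 0 & 1 \end{bmatrix}
$$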

Listing 6.11 provides an implementation of the solution in Matlab. The simulation results are shown in Figs. 6.24–6.28. The simulation results confirm that the algorithm provides a pose estimate that converges to the true pose of the mobile robot, although the initial estimate was biased. The innovation terms settle around zero.

Listing 6.11

Implementation of the solution of Example 6.20

Ts = 0.1; % Sampling time
xTrue = [1; 2; pi/6]; % True initial pose
x = [3; 0; 0]; % Initial pose estimate
P = diag([9 9 0.6]); % Initial covariance matrix of the pose estimate
Q = diag([0.1 0.1]); % Noise covariance matrix of movement actuator
R = diag([0.5 0.3]); % Noise covariance matrix of distance and
                     % angle measurement
enableNoise = 1; % Enable noise: 0 or 1
N = 300; % Number of simulation sample times
figures_ekf1_init;

% Loop
for k = 1:N
    u = [0.5; 0.5]; % Movement command (translational and angular velocity)
    uNoisy = u + sqrt(Q)*randn(2,1)*enableNoise;

    % Simulation of the true mobile robot state (pose)
    xTrue = xTrue + Ts*[uNoisy(1)*cos(xTrue(3)); ...
                        uNoisy(1)*sin(xTrue(3)); ...
                        uNoisy(2)];
    xTrue(3) = wrapToPi(xTrue(3));

    % Simulation of the true noisy measurements
    zTrue = [sqrt(xTrue(1)^2 + xTrue(2)^2); ...
             xTrue(3)] + sqrt(R)*randn(2,1)*enableNoise;
    zTrue(1) = abs(zTrue(1));
    zTrue(2) = wrapToPi(zTrue(2));

    %%% Prediction (pose and speed estimation based on known inputs)
    xPred = x + Ts*[u(1)*cos(x(3)); ...
                    u(1)*sin(x(3)); ...
                    u(2)];
    xPred(3) = wrapToPi(xPred(3));

    % Jacobian matrices
    A = [1 0 -Ts*u(1)*sin(x(3)); ...
         0 1  Ts*u(1)*cos(x(3)); ...
         0 0  1];
    F = [Ts*cos(x(3)) 0; ...
         Ts*sin(x(3)) 0; ...
         0 Ts];
    PPred = A*P*A.' + F*Q*F.';

    % Estimated measurements
    z = [sqrt(xPred(1)^2 + xPred(2)^2); ...
         xPred(3)];

    %%% Correction
    d = sqrt(xPred(1)^2 + xPred(2)^2);
    C = [xPred(1)/d xPred(2)/d 0; ...
         0 0 1];
    K = PPred*C.'/(C*PPred*C.' + R);
    inov = zTrue - z;
    inov(2) = wrapToPi(inov(2));
    x = xPred + K*inov;
    P = PPred - K*C*PPred;

    figures_ekf1_update;
end

Fig. 6.24 True (dashed line) and estimated (solid line) trajectory from Example 6.20.
Fig. 6.25 Pose estimate (solid line) and true state (dashed line) of the mobile robot with a nonzero initial estimation error from Example 6.20.
Fig. 6.26 Distance and angle measurements from Example 6.20.
Fig. 6.27 Robot pose estimate variances from Example 6.20.
Fig. 6.28 Time evolution of innovation from Example 6.20.

When the states of the system are not measurable directly (as in this example), the question of system state observability arises. Observability analysis approaches can normally provide only sufficient conditions for system observability. However, observability analysis should be performed before designing a state estimator, since it can aid the designer in choosing the appropriate set of measurement signals that make the estimation feasible. Although advanced mathematical tools could be used to check system observability, the observability analysis can also be performed with a simple graphical check that is based on the definition of indistinguishable states (see Section 6.3.3). The observability analysis of the system from this example is shown in Fig. 6.29. The results in Fig. 6.29 demonstrate that the states of the system are normally distinguishable, except for some special cases (Fig. 6.29D). However, if the system states are observed long enough and the mobile robot control inputs are adequately excited, the system is observable.

Fig. 6.29 Observability analysis of the system from Example 6.20. (A) Three particular initial robot poses that all have the same heading and distance from the marker. (B) Updated situation from Fig. (A) after the robots have traveled the same distance in a forward direction. From the measurement point of view, the robot poses are distinguishable. (C) Three particular initial robot poses that all have the same heading and distance from the marker: a special case. (D) Updated situation from Fig. (C) after the robots have traveled the same distance in a forward direction. From the measurement point of view, the robot in pose 1 is indistinguishable from the robot in pose 2, but the robot in pose 3 is distinguishable from the other two poses. (E) Updated situation from Fig. (C) after the robots have traveled the same not-straight path. From the measurement point of view, the robot poses are all distinguishable.

Example 6.21

Let us use the same mobile robot as in Example 6.20, but with a slightly different sensor. Assume that the mobile robot has a sensor for measuring the distance and angle to the marker that is in the origin of the global coordinate frame (Fig. 6.30). The angle measurements are in the range α ∈ [−π, π]. The distance measurement is disturbed by Gaussian noise with zero mean and variance 0.5 m², and the angle measurement is disturbed by Gaussian noise with zero mean and variance 0.3 rad².

Fig. 6.30 Setup considered in Example 6.21. The mobile robot has a sensor for measuring the distance and angle to marker M (positioned in the origin of the global frame).

What is the time evolution of the robot pose estimate ($\hat{x}_{k|k-1} = [x_k, y_k, \varphi_k]^T$) and the estimate variance if the input commands $u = [v_k, \omega_k]^T = [0.5, 0.5]^T$ are sent to the robot in every sampling time step $T_s = 0.1$ s?

Solution

The solution can be obtained with the aid of simulation in Matlab. Let us determine the mobile robot motion model. The kinematic model of the wheeled mobile robot is the same as in Example 6.20:

$$
\hat{x}_{k|k-1} = \hat{x}_{k-1|k-1} + \begin{bmatrix} T_s v_{k-1} \cos(\varphi_{k-1}) \\ T_s v_{k-1} \sin(\varphi_{k-1}) \\ T_s \omega_{k-1} \end{bmatrix}
$$

In this case, the distance and angle measurement model is

$$
\hat{z}_k = \begin{bmatrix} \sqrt{x_k^2 + y_k^2} \\ \operatorname{atan2}(0 - y_k,\, 0 - x_k) - \varphi_k \end{bmatrix}
$$

where $\operatorname{atan2}(y, x)$ is a four-quadrant extension of $\arctan\frac{y}{x}$:

$$
\operatorname{atan2}(y, x) =
\begin{cases}
\arctan\frac{y}{x} & \text{if } x > 0 \\
\arctan\frac{y}{x} + \pi & \text{if } x < 0 \text{ and } y \geq 0 \\
\arctan\frac{y}{x} - \pi & \text{if } x < 0 \text{ and } y < 0 \\
\frac{\pi}{2} & \text{if } x = 0 \text{ and } y > 0 \\
-\frac{\pi}{2} & \text{if } x = 0 \text{ and } y < 0 \\
\text{undefined} & \text{if } x = 0 \text{ and } y = 0
\end{cases}
\qquad (6.43)
$$
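For this measurement model the output Jacobian (Eq. 6.42), evaluated at the predicted estimate with $d = \sqrt{x_k^2 + y_k^2}$, gains an additional row for the measured angle; this is the matrix C constructed in Listing 6.12:

$$
C = \begin{bmatrix} \dfrac{x_k}{d} & \dfrac{y_k}{d} & 0 \\ -\dfrac{y_k}{d^2} & \dfrac{x_k}{d^2} & -1 \end{bmatrix}
$$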

Listing 6.12 provides an implementation of the solution in Matlab. The simulation results in Figs. 6.31–6.35 show that the estimated states converge to the true robot pose. Although this example is similar to Example 6.20, the estimate may not converge to the true robot pose under only slightly different environmental conditions. A case of convergence to a wrong solution is shown in Figs. 6.36–6.40. Since both sensor outputs (distance and angle) are relative measurements, the estimated states may not converge to the true solution, even though the innovation (difference between the measurement and the measurement prediction) is in the vicinity of zero (Fig. 6.40).

Fig. 6.31 True (dashed line) and estimated (solid line) trajectory from Example 6.21.
Fig. 6.32 Pose estimate (solid line) and true state (dashed line) of the mobile robot with a nonzero initial estimation error from Example 6.21.
Fig. 6.33 Distance and angle measurements from Example 6.21.
Fig. 6.34 Robot pose estimate variance from Example 6.21.
Fig. 6.35 Time evolution of innovation from Example 6.21.
Fig. 6.36 True (dashed line) and estimated (solid line) trajectory from Example 6.21 (representative case).
Fig. 6.37 Pose estimate (solid line) and true state (dashed line) of the mobile robot with a nonzero initial estimation error from Example 6.21 (representative case).
Fig. 6.38 Distance and angle measurements from Example 6.21 (representative case).
Fig. 6.39 Robot pose estimate variance from Example 6.21 (representative case).
Fig. 6.40 Time evolution of innovation from Example 6.21 (representative case).

Listing 6.12

Implementation of the solution of Example 6.21

Ts = 0.1; % Sampling time
xTrue = [1; 2; pi/6]; % True initial pose
x = [3; 0; 0]; % Initial pose estimate
P = diag([9 9 0.6]); % Initial covariance matrix of the pose estimate
Q = diag([0.1 0.1]); % Noise covariance matrix of movement actuator
R = diag([0.5 0.3]); % Noise covariance matrix of distance and
                     % angle measurement
enableNoise = 1; % Enable noise: 0 or 1
N = 300; % Number of simulation sample times
figures_ekf2_init;

% Loop
for k = 1:N
    u = [0.5; 0.5]; % Movement command (translational and angular velocity)
    uNoisy = u + sqrt(Q)*randn(2,1)*enableNoise;

    % Simulation of the true mobile robot state (pose)
    xTrue = xTrue + Ts*[uNoisy(1)*cos(xTrue(3)); ...
                        uNoisy(1)*sin(xTrue(3)); ...
                        uNoisy(2)];
    xTrue(3) = wrapToPi(xTrue(3));

    % Simulation of the true noisy measurements (distance and angle)
    zTrue = [sqrt(xTrue(1)^2 + xTrue(2)^2); ...
             atan2(0 - xTrue(2), 0 - xTrue(1)) - xTrue(3)] + ...
            sqrt(R)*randn(2,1)*enableNoise;
    zTrue(1) = abs(zTrue(1));
    zTrue(2) = wrapToPi(zTrue(2));

    %%% Prediction (pose and speed estimation based on known inputs)
    xPred = x + Ts*[u(1)*cos(x(3)); ...
                    u(1)*sin(x(3)); ...
                    u(2)];
    xPred(3) = wrapToPi(xPred(3));

    % Jacobian matrices
    A = [1 0 -Ts*u(1)*sin(x(3)); ...
         0 1  Ts*u(1)*cos(x(3)); ...
         0 0  1];
    F = [Ts*cos(x(3)) 0; ...
         Ts*sin(x(3)) 0; ...
         0 Ts];
    PPred = A*P*A.' + F*Q*F.';

    % Estimated measurements
    z = [sqrt(xPred(1)^2 + xPred(2)^2); ...
         atan2(0 - xPred(2), 0 - xPred(1)) - xPred(3)];
    z(2) = wrapToPi(z(2));

    %%% Correction
    d = sqrt(xPred(1)^2 + xPred(2)^2);
    C = [ xPred(1)/d    xPred(2)/d    0; ...
         -xPred(2)/d^2  xPred(1)/d^2 -1];
    K = PPred*C.'/(C*PPred*C.' + R);
    inov = zTrue - z;
    % Selection of appropriate innovation due to noise and angle wrapping
    inov(2) = wrapToPi(inov(2));
    x = xPred + K*inov;
    P = PPred - K*C*PPred;

    figures_ekf2_update;
end

A graphical observability check, made in Fig. 6.41, can provide additional insight into the reasons for the estimation bias. The analysis shows that there are states that are clearly indistinguishable from the measurement point of view, regardless of the control inputs. Moreover, for every measurement there is an infinite number of states that agree with the measurement.

Fig. 6.41 Observability analysis of the system from Example 6.21. (A) Three particular initial robot poses with same distance from the marker. (B) Updated situation from Fig. (A) after the robots have traveled the same distance in a forward direction. From the measurement point of view, all three robot poses are distinguishable. (C) Three particular initial robot poses that all have the same angle to and distance from the marker: a special case. (D) Updated situation from Fig. (C) after the robots have traveled the same distance in a forward direction. From the measurement point of view, all three robot poses are indistinguishable. (E) Updated situation from Fig. (C) after the robots have traveled the same not-straight path. From the measurement point of view, all three robot poses are indistinguishable.

Therefore, the system is not observable. The estimation bias can be eliminated if multiple markers are observed simultaneously, since this provides enough information to select the appropriate solution, as shown in Example 6.22.

Example 6.22

This example upgrades Example 6.21 in that two spatially separated markers, instead of only one, are now observed simultaneously. In this case the sensors measure the distances and angles to two markers (with respect to the mobile robot pose), which are positioned at $x_{M1} = 0$, $y_{M1} = 0$ and at $x_{M2} = 5$, $y_{M2} = 5$ in the plane (Fig. 6.42). Otherwise, all the data are identical to Example 6.21.

Fig. 6.42 Setup considered in Example 6.22. The mobile robot has a sensor for measuring the distance and angle to markers M1 and M2.

Solution

In this case only the correction part of the algorithm needs to be modified. The measurement vector now contains four elements: the distance and angle to the first marker, and the distance and angle to the second marker:

$$
\hat{z}_k = \begin{bmatrix}
\sqrt{(x_{M1} - x_k)^2 + (y_{M1} - y_k)^2} \\
\operatorname{atan2}(y_{M1} - y_k,\, x_{M1} - x_k) - \varphi_k \\
\sqrt{(x_{M2} - x_k)^2 + (y_{M2} - y_k)^2} \\
\operatorname{atan2}(y_{M2} - y_k,\, x_{M2} - x_k) - \varphi_k
\end{bmatrix}
$$

Let us determine the output matrix C (see Eq. 6.42) with linearization around the current predicted state estimate ($x_k$, $y_k$):

$$
C = \begin{bmatrix}
\dfrac{x_k}{d_1} & \dfrac{y_k}{d_1} & 0 \\
-\dfrac{y_k}{d_1^2} & \dfrac{x_k}{d_1^2} & -1 \\
\dfrac{x_k}{d_2} & \dfrac{y_k}{d_2} & 0 \\
-\dfrac{y_k}{d_2^2} & \dfrac{x_k}{d_2^2} & -1
\end{bmatrix}
$$

where $d_1 = \sqrt{(x_{M1} - x_k)^2 + (y_{M1} - y_k)^2}$ and $d_2 = \sqrt{(x_{M2} - x_k)^2 + (y_{M2} - y_k)^2}$. Let us also determine the measurement noise covariance matrix, which is

$$
R = \begin{bmatrix}
0.5 & 0 & 0 & 0 \\
0 & 0.3 & 0 & 0 \\
0 & 0 & 0.5 & 0 \\
0 & 0 & 0 & 0.3
\end{bmatrix}
$$

Listing 6.13 provides an implementation of the solution in Matlab. The simulation results are shown in Figs. 6.43–6.47. The obtained results confirm that the estimated states converge to the true robot pose.

Fig. 6.43 True (dashed line) and estimated (solid line) trajectory from Example 6.22.
Fig. 6.44 Pose estimate (solid line) and true state (dashed line) of the mobile robot with a nonzero initial estimation error from Example 6.22.
Fig. 6.45 Distance and angle measurements from Example 6.22.
Fig. 6.46 Robot pose estimate variance from Example 6.22.
Fig. 6.47 Time evolution of innovation from Example 6.22.

Listing 6.13

Implementation of the solution of Example 6.22

Ts = 0.1; % Sampling time
xTrue = [1; 2; pi/6]; % True initial pose
x = [3; 0; 0]; % Initial pose estimate
P = diag([9 9 0.6]); % Initial covariance matrix of the pose estimate
Q = diag([0.1 0.1]); % Noise covariance matrix of movement actuator
R = diag([0.5 0.3]); % Noise covariance matrix of distance and
                     % angle measurement
enableNoise = 1; % Enable noise: 0 or 1
N = 300; % Number of simulation sample times
marker = [0 0; 5 5]; % Positions of markers

figures_ekf3_init;

% Loop
for k = 1:N
    u = [0.5; 0.5]; % Movement command (translational and angular velocity)
    uNoisy = u + sqrt(Q)*randn(2,1)*enableNoise;

    % Simulation of the true mobile robot state (pose)
    xTrue = xTrue + Ts*[uNoisy(1)*cos(xTrue(3)); ...
                        uNoisy(1)*sin(xTrue(3)); ...
                        uNoisy(2)];
    xTrue(3) = wrapToPi(xTrue(3));

    % Simulation of the true noisy measurements (distance and angle)
    zTrue = [];
    for m = 1:size(marker, 1)
        dist = sqrt((marker(m,1) - xTrue(1))^2 + (marker(m,2) - xTrue(2))^2);
        alpha = atan2(marker(m,2) - xTrue(2), marker(m,1) - xTrue(1)) - xTrue(3);
        zz = [dist; alpha] + sqrt(R)*randn(2,1)*enableNoise;
        zz(1) = abs(zz(1));
        zz(2) = wrapToPi(zz(2));
        zTrue = [zTrue; zz];
    end

    %%% Prediction (pose and speed estimation based on known inputs)
    xPred = x + Ts*[u(1)*cos(x(3)); ...
                    u(1)*sin(x(3)); ...
                    u(2)];
    xPred(3) = wrapToPi(xPred(3));

    % Jacobian matrices
    A = [1 0 -Ts*u(1)*sin(x(3)); ...
         0 1  Ts*u(1)*cos(x(3)); ...
         0 0  1];
    F = [Ts*cos(x(3)) 0; ...
         Ts*sin(x(3)) 0; ...
         0 Ts];
    PPred = A*P*A.' + F*Q*F.';

    %%% Correction, measurement estimation
    z = [];
    C = [];
    for m = 1:size(marker, 1)
        dist = sqrt((marker(m,1) - xPred(1))^2 + (marker(m,2) - xPred(2))^2);
        alpha = atan2(marker(m,2) - xPred(2), marker(m,1) - xPred(1)) - xPred(3);
        zz = [dist; alpha];
        zz(2) = wrapToPi(zz(2));
        z = [z; zz];

        % Create matrix C for correction
        c = [ xPred(1)/dist    xPred(2)/dist    0; ...
             -xPred(2)/dist^2  xPred(1)/dist^2 -1];
        C = [C; c];
    end

    % Measurement covariance matrix
    RR = diag(repmat([R(1,1) R(2,2)], 1, size(marker, 1)));
    K = PPred*C.'/(C*PPred*C.' + RR);

    inov = zTrue - z;
    % Selection of appropriate innovation due to noise and angle wrapping
    for m = 1:size(marker, 1)
        inov(2*m) = wrapToPi(inov(2*m));
    end

    x = xPred + K*inov;
    P = PPred - K*C*PPred;

    figures_ekf3_update;
end

Again, a simple graphical observability check can be performed. In this case one particular situation with seemingly indistinguishable states can again be found, as shown in Fig. 6.48. At first glance the considered states are indistinguishable from the measurement point of view, but since we assume that the measurement data also contain the marker IDs, the states in this particular special situation are distinguishable, and therefore the system is observable.

Fig. 6.48 Observability analysis of the system from Example 6.22. (A) Two particular poses that are symmetrical from the measurement point of view. (B) Updated situation from Fig. (A) after the robots have traveled the same relative path. From the measurement point of view, both robot poses are distinguishable if the measurement data contains the IDs of the markers.

6.5.3 Kalman Filter Derivatives

Alongside the Kalman filter for linear systems and the EKF for nonlinear systems [10], many other extensions of the above filters have been proposed. Let us take a look at some of the most common and important approaches.

The unscented Kalman filter [11] is normally used in systems with significant nonlinearities, where the EKF may not provide satisfactory results. In this case the covariance matrices are estimated statistically from a small set of points that are transformed through the nonlinear function; the transformed points are then used to estimate the mean value and the covariance matrix. These points, known as sigma points, are spread deterministically around the estimated value (normally 2n + 1 points are used for n state dimensions).
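One common choice of the sigma points (a standard formulation; the scaling parameter $\kappa$ and the associated weights vary between implementations) is

$$
\mathcal{X}^{(0)} = \hat{x}, \qquad
\mathcal{X}^{(i)} = \hat{x} + \left(\sqrt{(n+\kappa)P}\right)_i, \qquad
\mathcal{X}^{(n+i)} = \hat{x} - \left(\sqrt{(n+\kappa)P}\right)_i, \quad i = 1, \ldots, n
$$

where $\left(\sqrt{(n+\kappa)P}\right)_i$ denotes the $i$th column of a matrix square root of $(n+\kappa)P$. Each sigma point is propagated through the nonlinear function, and the predicted mean and covariance are then computed as weighted sums over the transformed points.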

The information filter uses the information matrix and the information vector instead of the covariance matrix and the state estimate. The information matrix is the inverse of the covariance matrix, and the information vector is the product of the information matrix and the estimated state vector. The information filter is dual to the Kalman filter: the correction step becomes computationally much simpler to evaluate (it reduces to simple additions), while the prediction step becomes more computationally demanding.
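As a sketch of these standard relations (not derived in this chapter), let $Y_{k|k} = P_{k|k}^{-1}$ be the information matrix and $\hat{y}_{k|k} = Y_{k|k}\hat{x}_{k|k}$ the information vector; for a linear measurement $z_k = C x_k + v_k$ the correction step is then additive:

$$
Y_{k|k} = Y_{k|k-1} + C^T R_k^{-1} C, \qquad
\hat{y}_{k|k} = \hat{y}_{k|k-1} + C^T R_k^{-1} z_k
$$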

The Kalman-Bucy filter is a form of the Kalman filter for continuous-time systems.

6.6 Particle Filter

Up until now the Bayesian filter was considered for discrete state-space systems with a finite number of values that the state variables can take. If we want to apply the Bayesian filter to continuous variables, these variables can be quantized to a finite number of values. This kind of Bayesian filter implementation for continuous state variables is commonly known as the histogram filter.
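To make the idea concrete, the following Matlab sketch implements a one-dimensional histogram filter on a grid of cells; the simple motion model (the robot advances one cell per step with some slip) and the measurement model are assumptions chosen only for illustration and are not part of the examples in this chapter.

% 1D histogram filter on a grid of Nc cells (illustrative assumptions)
Nc = 100;
belief = ones(1, Nc)/Nc;            % uniform initial belief (unknown position)
motionKernel = [0.1 0.8 0.1];       % probability of slipping one cell back/forward
sigmaZ = 3;                         % measurement standard deviation (in cells)
cells = 1:Nc;
for k = 1:50
    % Prediction: shift the belief one cell forward and convolve with the motion model
    belief = conv([0, belief(1:end-1)], motionKernel, 'same');
    belief = belief/sum(belief);
    % Correction: multiply by the measurement likelihood and normalize
    zCell = min(1 + k, Nc);         % simulated position measurement (in cells)
    likelihood = exp(-0.5*((cells - zCell)/sigmaZ).^2);
    belief = belief.*likelihood;
    belief = belief/sum(belief);
end
[~, mostProbableCell] = max(belief); % most probable robot cell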

For continuous state-space variables an explicit solution of the Bayesian filter (6.29) would need to be determined:

$$
p(x_k | z_{1:k}, u_{0:k-1}) = \frac{p(z_k | x_k)}{p(z_k | z_{1:k-1}, u_{0:k-1})} \int p(x_k | x_{k-1}, u_{k-1})\, p(x_{k-1} | z_{1:k-1}, u_{0:k-2})\, dx_{k-1}
\qquad (6.44)
$$

where we took into account the complete state assumption and that the system is a Markov process (see Section 6.2). The current state probability distribution (6.44) is needed in calculation of the most probable state estimate (mathematical expectation):

$$
E\left\{\hat{x}_{k|k}\right\} = \int x_{k|k}\, p(x_k | z_{1:k}, u_{0:k-1})\, dx_{k|k}
$$

An explicit solution of Eq. (6.44) is possible only for a limited class of problems where Gaussian noise distribution and system linearity are assumed; the result is a Kalman filter. In the case of nonlinear systems, the system nonlinearity (in the motion model, actuator model, and/or sensor model) can be linearized, and this leads to the EKF.

A particle filter is a more general approach in which the noise distribution need not be Gaussian and the system may be nonlinear. The basic idea behind the particle filter is that the a posteriori state estimate probability distribution (6.29), after the measurement has been made, is approximated with a set of N particles. Every particle in the set represents a value of the estimated state $x_k^i$ that is randomly sampled from the probability distribution (Monte Carlo simulation approach). Every particle therefore represents a hypothesis about the system state. The representation of the probability distribution with a set of randomly generated particles is a nonparametric description of the probability distribution, which is therefore not limited to Gaussian distributions. The description of the probability distribution with particles enables modeling of nonlinear transformations of noise (actuator and/or sensor model). The particles can therefore be used to describe the noise distributions that are transformed over nonlinear functions from the inputs or outputs to the system states.
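In other words, the posterior is approximated by a weighted set of samples (a standard Monte Carlo approximation, written out here for clarity):

$$
p(x_k | z_{1:k}, u_{0:k-1}) \approx \sum_{i=1}^{N} w_k^i\, \delta\!\left(x_k - x_k^i\right), \qquad \sum_{i=1}^{N} w_k^i = 1
$$

where $\delta(\cdot)$ is the Dirac delta function and $w_k^i$ is the weight of the $i$th particle.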

Algorithm 5 presents the basic implementation of the particle filter.

Algorithm 5

Particle Filter Algorithm. The Particle_filter Function Is Called in Every Time Step With Current Values of Inputs and Measurement and the Previous State Estimate

function Particle_filter($\hat{x}_{k-1|k-1}$, $u_{k-1}$, $z_k$)

Initialization:

if k > 0 then

 Initialize a set of N particles $x_k^i$ based on a random sampling of the probability distribution $p(x_0)$.

end if

Prediction:

Transform every particle $\hat{x}_{k-1|k-1}^i$ based on the known movement model and the known input $u_{k-1}$, and add a randomly generated value based on the noise properties that are part of the movement model. The movement model is given by $p(x_k | x_{k-1}, u_{k-1})$. The obtained prediction is given as a set of particles $\hat{x}_{k|k-1}^i$.

Correction:

For every particle $\hat{x}_{k|k-1}^i$ estimate the measurement output that would be obtained if the system state corresponded to the particle state.

Based on the obtained measurement and its comparison against the estimated particle measurements, evaluate the particle importance.

This is followed by importance sampling: the particles are randomly selected in such a way that the selection probability of a particle is proportional to its importance (i.e., $p(z_k | \hat{x}_{k|k-1}^i)$). More important particles should therefore be selected more times, and less important particles fewer times.

The filtered state estimate $\hat{x}_{k|k}$ can be selected as the average value of all the particles.

end function

In the initial step of the particle filter an initial population of particles, $\hat{x}_0^i$ for i ∈ 1, …, N, needs to be selected; it is distributed according to the belief (probability distribution) $p(x_0)$ of the system states at the beginning of observation. In the case that the initial state is not known, a uniform distribution should be used; the particles should therefore be spread uniformly over the entire state space.

In the prediction part a new state for each particle is evaluated based on the known system input. To the state estimate of every particle, randomly generated noise is added, corresponding to the noise expected at the system input. In this way state predictions for every particle are obtained, $\hat{x}_{k|k-1}^i$. The added noise keeps the particles scattered, which enables estimation of the true value in the presence of different disturbances.

In the correction part of the particle filter the importance of the particles is evaluated: the difference between the obtained measurement $z_k$ and each estimated particle measurement $\hat{z}_k^i$, calculated from the estimated particle state, is determined. The difference between the measurement and the estimated measurement is commonly known as the innovation or measurement residual, and it can be evaluated for every particle as

$$
\text{innov}_k^i = z_k - \hat{z}_k^i
$$

which is small for the most probable particles.

Based on the measurement residual, the importance of every particle, that is, the probability $p(z_k | \hat{x}_{k|k-1}^i)$, can be determined; it represents the weight $w_k^i$ of particle i. Assuming a Gaussian measurement probability distribution, the weight can be determined as

$$
w_k^i = \det\left(2\pi R\right)^{-\frac{1}{2}} e^{-\frac{1}{2}\left(\text{innov}_k^i\right)^T R^{-1} \left(\text{innov}_k^i\right)}
$$

where R is the measurement covariance matrix.

A very important step in the particle filter is importance sampling (resampling) based on the weights $w_k^i$. The set of N particles is randomly sampled in such a way that the probability of selecting a particular particle from the set is proportional to its weight $w_k^i$. Therefore, the particles with larger weights should be selected more times than the particles with lower weights; the particles with the smallest weights may not be selected at all. A new set of particles can be generated in various ways; one possible approach is given below:

• The particle weights $w_k^i$ are normalized with the sum of all the weights $\sum_{i=1}^{N} w_k^i$; in this way the normalized weights $w_{n,k}^i = \frac{w_k^i}{\sum_{j=1}^{N} w_k^j}$ are obtained.

• The cumulative sum of the normalized weights is calculated in order to obtain the cumulative weights $w_{c,k}^i = \sum_{j=1}^{i} w_{n,k}^j$, as shown in Fig. 6.49.

Fig. 6.49 Importance sampling in the correction part of the particle filter. The particle weights are arranged in a sequence one after another and normalized so that their sum is one. More probable particles take more space on the unit interval, and vice versa. The new population of particles is determined by randomly selecting N numbers from the unit interval (with uniform distribution); the particles that correspond to these numbers are selected.

• N numbers between 0 and 1 are selected randomly (from a uniform distribution), and it is then determined to which weights these random numbers correspond by comparing the cumulative weights $w_{c,k}^i$ with the randomly generated numbers. As seen from Fig. 6.49, the particles with larger weights are more likely to be selected (their weights occupy more space on the unit interval). A minimal sketch of this selection step is given after this list.

• The selected particles are used in the correction part to determine the a posteriori state estimate $\hat{x}_{k|k} = \sum_{i=1}^{N} w_k^i \hat{x}_{k|k-1}^i$.
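The following Matlab sketch implements the selection step described in the list above with an explicit loop over the cumulative weights; Listing 6.15 implements the same idea with interp1. The variable names W, nParticles, and xP follow the listings below.

% Resampling by inversion of the cumulative weight distribution
wn = W/sum(W);                          % normalized weights
wc = cumsum(wn);                        % cumulative weights on the unit interval
idx = zeros(nParticles, 1);
for i = 1:nParticles
    r = rand;                           % uniform random number on [0, 1]
    idx(i) = find(wc >= r, 1, 'first'); % particle whose segment contains r
end
xP = xP(:, idx);                        % new generation of particles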

In the case that the system is not moving (the current state is equal to the previous state), it is recommended not to perform importance sampling; instead, the previous particles are kept as the current ones and only their weights are modified, as shown in [12].

The application of the particle filter is shown in Example 6.23.

Example 6.23

Use a particle filter for the estimation of the most likely state in Example 6.22. In the implementation of the particle filter use N = 300 particles. All the other data are the same as in Example 6.22.

Solution

Listing 6.14 provides an implementation of the solution in Matlab. The simulation results are shown in Figs. 6.50 and 6.51.

Fig. 6.50 True (dashed line), estimated (solid line) trajectory, and generated particles in the final step of simulation from Example 6.23.
Fig. 6.51 Pose estimate (solid line) and true state (dashed line) of the mobile robot with a nonzero initial estimation error from Example 6.23.

Listing 6.14

Implementation of the solution of Example 6.23

Ts = 0.1; % Sampling time
xTrue = [1; 2; pi/6]; % True initial pose
x = [3; 0; 0]; % Initial pose estimate
P = diag([9 9 0.6]); % Initial covariance matrix of the pose estimate
Q = diag([0.1 0.1]); % Noise covariance matrix of movement actuator
R = diag([0.5 0.3]); % Noise covariance matrix of distance and
                     % angle measurement
enableNoise = 1; % Enable noise: 0 or 1
N = 300; % Number of simulation sample times
marker = [0 0; 5 5]; % Positions of markers

% Particle initialization
nParticles = 300;
xP = repmat(xTrue, 1, nParticles) + diag([4 4 1])*randn(3, nParticles);
W = ones(nParticles, 1)/nParticles; % All particles have equal probability

figures_pf1_init;

% Loop
for k = 1:N
    u = [0.5; 0.5]; % Movement command (translational and angular velocity)
    u_sum = u + sqrt(Q)*randn(2,1)*enableNoise;

    % Simulation of the true mobile robot state (pose)
    xTrue = xTrue + Ts*[u_sum(1)*cos(xTrue(3)); ...
                        u_sum(1)*sin(xTrue(3)); ...
                        u_sum(2)];
    xTrue(3) = wrapToPi(xTrue(3));

    % Simulation of the true noisy measurements (distance and angle)
    zTrue = [];
    for m = 1:size(marker, 1)
        dist = sqrt((marker(m,1) - xTrue(1))^2 + (marker(m,2) - xTrue(2))^2);
        alpha = atan2(marker(m,2) - xTrue(2), marker(m,1) - xTrue(1)) - xTrue(3);
        zz = [dist; alpha] + sqrt(R)*randn(2,1)*enableNoise;
        zz(1) = abs(zz(1));
        zz(2) = wrapToPi(zz(2));
        zTrue = [zTrue; zz];
    end

    % Prediction
    for p = 1:nParticles
        % Particles are moved according to the noise model
        un = u + sqrt(Q)*randn(2,1)*1;
        xP(:,p) = xP(:,p) + Ts*[un(1)*cos(xP(3,p)); ...
                                un(1)*sin(xP(3,p)); ...
                                un(2)];
        xP(3,p) = wrapToPi(xP(3,p));
    end

    % Correction
    for p = 1:nParticles
        % Estimated measurement for every particle
        z = [];
        for m = 1:size(marker, 1)
            dist = sqrt((marker(m,1) - xP(1,p))^2 + (marker(m,2) - xP(2,p))^2);
            alpha = atan2(marker(m,2) - xP(2,p), marker(m,1) - xP(1,p)) - xP(3,p);
            zz = [dist; alpha];
            zz(1) = abs(zz(1));
            zz(2) = wrapToPi(zz(2));
            z = [z; zz];
        end

        Innov = zTrue - z; % Determine innovation

        % Selection of appropriate innovation due to
        % noise and angle wrapping
        for m = 1:size(marker, 1)
            iii = zTrue(2*m) - (z(2*m) + [0; 2*pi; -2*pi]);
            [tmp, index] = min(abs(iii));
            Innov(2*m) = iii(index);
        end

        % Determine particle weights (particle probability)
        % Measurement covariance matrix
        RR = diag(repmat(diag(R), size(marker, 1), 1));
        W(p) = exp(-0.5*Innov.'*inv(RR)*Innov) + 0.0001;
    end

    iNextGeneration = obtainNextGenerationOfParticles(W, nParticles);
    xP = xP(:, iNextGeneration);

    % The new state estimate is the average of all the particles
    x = mean(xP, 2);
    x(3) = wrapToPi(x(3));
    % For robot orientation use the most likely particle instead of the
    % average angle of all the particles.
    [gg, ggi] = max(W);
    x(3) = xP(3, ggi);

    figures_pf1_update;
end

Listing 6.15

Function used in Listings 6.14 and 6.16

function iNextGeneration = obtainNextGenerationOfParticles(W, nParticles)
    % Selection based on the particle weights
    CDF = cumsum(W)/sum(W);
    iSelect = rand(nParticles, 1); % Random numbers
    % Indices of the new particles
    CDFg = [0; CDF];
    indg = [1; (1:nParticles).'];
    iNextGeneration_float = interp1(CDFg, indg, iSelect, 'linear');
    iNextGeneration = round(iNextGeneration_float + 0.5); % Round the indices
end

In Example 6.23 the distances and angles to the markers are measured, so the angle wrapping must be taken into account. The example of localization where only distances to the markers are measured in order to estimate the mobile robot pose is shown in Example 6.24.

Example 6.24

Use a particle filter for estimation of the most likely state in Example 6.23, but consider only measurement of the distance to the markers. In the implementation of the particle filter use N = 500 particles. All the other data are the same as in Example 6.23.

Solution

Listing 6.16 provides an implementation of the solution in Matlab. The simulation results are shown in Figs. 6.52 and 6.53, where it is clear that the estimated value converges to the true value, as in Example 6.23.

Fig. 6.52 True (dashed line), estimated (solid line) trajectory, and generated particles in the final step of simulation from Example 6.24.
Fig. 6.53 Pose estimate (solid line) and true state (dashed line) of the mobile robot with a nonzero initial estimation error from Example 6.24.

Listing 6.16

Implementation of the solution of Example 6.24

Ts = 0.1; % Sampling time
xTrue = [1; 2; pi/6]; % True initial pose
x = [3; 0; 0]; % Initial pose estimate
P = diag([9 9 0.6]); % Initial covariance matrix of the pose estimate
Q = diag([0.1 0.1]); % Noise covariance matrix of movement actuator
R = diag([0.5 0.3]); % Noise covariance matrix of distance and
                     % angle measurement
enableNoise = 1; % Enable noise: 0 or 1
N = 300; % Number of simulation sample times
marker = [0 0; 5 5]; % Positions of markers
R = R(1,1); % Only distance measurement

% Particle initialization
nParticles = 500;
xP = repmat(xTrue, 1, nParticles) + diag([4 4 1])*randn(3, nParticles);
W = ones(nParticles, 1)/nParticles; % All particles have equal probability

figures_pf2_init;

% Loop
for k = 1:N
    u = [0.5; 0.5]; % Movement command (translational and angular velocity)
    u_sum = u + sqrt(Q)*randn(2,1)*enableNoise;

    % Simulation of the true mobile robot state (pose)
    xTrue = xTrue + Ts*[u_sum(1)*cos(xTrue(3)); ...
                        u_sum(1)*sin(xTrue(3)); ...
                        u_sum(2)];
    xTrue(3) = wrapToPi(xTrue(3));

    % Simulation of the true noisy measurements (distance)
    zTrue = [];
    for m = 1:size(marker, 1)
        dist = sqrt((marker(m,1) - xTrue(1))^2 + (marker(m,2) - xTrue(2))^2);
        zz = [dist] + sqrt(R)*randn(1,1)*enableNoise;
        zz(1) = abs(zz(1));
        zTrue = [zTrue; zz];
    end

    % Prediction
    for p = 1:nParticles
        % Particles are moved according to the noise model
        un = u + sqrt(Q)*randn(2,1)*1;
        xP(:,p) = xP(:,p) + Ts*[un(1)*cos(xP(3,p)); ...
                                un(1)*sin(xP(3,p)); ...
                                un(2)];
        xP(3,p) = wrapToPi(xP(3,p));
    end

    % Correction
    for p = 1:nParticles
        % Estimated measurement for every particle
        z = [];
        for m = 1:size(marker, 1)
            dist = sqrt((marker(m,1) - xP(1,p))^2 + (marker(m,2) - xP(2,p))^2);
            zz = [dist];
            zz(1) = abs(zz(1));
            z = [z; zz];
        end

        Innov = zTrue - z; % Determine innovation

        % Determine particle weights (particle probability)
        % Measurement covariance matrix
        RR = diag(repmat(diag(R), size(marker, 1), 1));
        W(p) = exp(-0.5*Innov.'*inv(RR)*Innov) + 0.0001;
    end

    iNextGeneration = obtainNextGenerationOfParticles(W, nParticles);
    xP = xP(:, iNextGeneration);

    % The new state estimate is the average of all the particles
    x = mean(xP, 2);
    x(3) = wrapToPi(x(3));
    % For robot orientation use the most likely particle instead of the
    % average angle of all the particles.
    [gg, ggi] = max(W);
    x(3) = xP(3, ggi);

    figures_pf2_update;
end

A particle filter is an implementation of the Bayesian filter for continuous systems (continuous state space); it enables description of nonlinear systems and can take into account an arbitrary distribution of noise. The particle filter is computationally intensive for high-dimensional systems, since many particles are required for the filter to converge; the number of required particles increases with the state-space dimensionality.

The particle filter is robust and is able to solve the problem of global localization and the problem of robot kidnapping. In the problem of global localization the initial pose (the value of the states) is not known. Therefore, the mobile robot can be anywhere in the space. The kidnapping problem occurs when the mobile robot is moved (kidnapped) to an arbitrary new location. Robust localization algorithms must be able to solve these problems.

References

[1] Montgomery D.C., Runger G.C. Applied Statistics and Probability for Engineers. Englewood Cliffs, NJ: John Wiley & Sons; 2003.

[2] Hermann R., Krener A.J. Nonlinear controllability and observability. IEEE Trans. Autom. Control. 1977;22(5):728–740.

[3] Respondek W. Introduction to geometric nonlinear control; linearization, observability, decoupling. In: ICTP; 2002:173–222. vol. 8 of ICTP Lecture Notes Series.

[4] Martinelli A. Nonlinear unknown input observability: analytical expression of the observable codistribution in the case of a single unknown input. In: 2015 Proceedings of the Conference on Control and its Applications. 2015:9–15.

[5] Fox D., Hightower J., Liao L., Schulz D., Borriello G. Bayesian filtering for location estimation. IEEE Pervasive Comput. 2003;2:24–33.

[6] Kalman R.E. A new approach to linear filtering and prediction problems. Trans. ASME J. Basic Eng. 1960;82(Series D):35–45.

[7] Teslić L., Škrjanc I., Klančar G. Using a LRF sensor in the Kalman-filtering-based localization of a mobile robot. ISA Trans. 2010;49:145–153.

[8] Teslić L., Škrjanc I., Klančar G. EKF-based localization of a wheeled mobile robot in structured environments. J. Intell. Robot. Syst. 2010;62:187–203.

[9] Klančar G., Teslić L., Škrjanc I. Mobile-robot pose estimation and environment mapping using an extended Kalman filter. Int. J. Syst. Sci. 2013;1–16.

[10] Anderson B.D.O., Moore J.B. Optimal Filtering. Englewood Cliffs, NJ: Prentice-Hall; 1979.

[11] S.J. Julier, Process models for the navigation of high-speed land vehicles, PhD thesis, University of Oxford, 1997.

[12] Thrun S., Burgard W., Fox D. Probabilistic Robotics. Cambridge, MA: MIT Press; 2005.
