Chapter 4

PD Control with Neural Compensation

Abstract

In this chapter, a neural network is used to estimate the nonlinear terms of the exoskeleton robot dynamics, such as friction, gravity forces, and unmodeled dynamics, so that stability of the closed-loop system can be guaranteed with a simple PD control law. This approach reduces computation time because the neural network (NN) is simple. The learning rules obtained for the NN are very close to the backpropagation rules [64]. The NN does not require offline pretraining, and its initial parameters are independent of the robot dynamics.

The other part of the chapter uses a modified algorithm to overcome two drawbacks of PD control at the same time. First, a high-gain observer is combined with the normal PD control, which achieves stability when the friction and gravity are known. We use both the singular perturbation method and Lyapunov analysis to prove stability, which reveals the relation between the observer error and the observer gain.

Second, an RBF neural network is used to estimate the nonlinear friction and gravity terms. The learning rules are obtained from the tracking error analysis. We show that the closed-loop system with the high-gain observer and the neural compensator is stable provided the weights follow certain learning rules and the observer is fast enough. Experimental tests are carried out to validate the modified PD control.

Keywords

Neural networks; PD control; High-gain observer

4.1 PD control with high gain observer

The dynamics of a serial n-link rigid robot manipulator can be written as [131]

\[
M(q)\ddot{q} + C(q,\dot{q})\dot{q} + G(q) + F\dot{q} = \tau
\tag{4.1}
\]

where $q \in \mathbb{R}^n$ denotes the link positions, $\dot{q} \in \mathbb{R}^n$ denotes the link velocities, $M(q) \in \mathbb{R}^{n \times n}$ is the inertia matrix, $C(q,\dot{q}) \in \mathbb{R}^{n \times n}$ is the centripetal and Coriolis matrix, $G(q) \in \mathbb{R}^n$ is the gravity vector, $F \in \mathbb{R}^{n \times n}$ is a positive definite diagonal matrix of frictional terms (Coulomb friction), and $\tau \in \mathbb{R}^n$ is the control input vector.

The following properties [131] of the robot's model (4.1) are used.

Property 4.1. The centripetal and Coriolis matrix can be written as

\[
C(q,\dot{q}) = \sum_{k=1}^{n} C_k(q)\,\dot{q}_k
\]

where $C_{k,ij}(q) = \dfrac{1}{2}\left(\dfrac{\partial M_{ij}}{\partial q_k} + \dfrac{\partial M_{ik}}{\partial q_j} - \dfrac{\partial M_{jk}}{\partial q_i}\right)$.

Property 4.2. The centripetal and Coriolis matrix satisfies the following relationship:

\[
\|C(q,\dot{q})\| \le K_c\,\|\dot{q}\|
\]

where $K_c = \dfrac{1}{2}\max_{q \in \mathbb{R}^n}\sum_{k=1}^{n}\|C_k(q)\|$.

Property 4.3. The centripetal and Coriolis matrix is bounded with respect to $q$.

4.1.1 Singular perturbation method

A nonlinear system is said to be singularly perturbed if it has the following form:

\[
\begin{aligned}
\dot{x} &= f(x,z) \\
\varepsilon\dot{z} &= g(x,z,\varepsilon)
\end{aligned}
\tag{4.2}
\]

where $x \in \mathbb{R}^n$, $z \in \mathbb{R}^m$, and $\varepsilon > 0$ is a small constant parameter.

It is assumed that (4.2) has a unique solution with a unique equilibrium point $(x,z) = (0,0)$, and that $0 = g(x,z,0)$ has a unique root $z = \phi(x)$; then (4.2) is in the standard form. It is then possible to express the slow subsystem:

\[
\dot{x} = f(x,\phi(x))
\tag{4.3}
\]

Since the second equation of (4.2) can be written as $\dot{z} = \frac{1}{\varepsilon}g(x,z,\varepsilon)$, its velocity is very fast when $\varepsilon \to 0$. Setting $\varepsilon = 0$ in (4.2) gives

\[
\begin{aligned}
\dot{x} &= f(x,z) \\
0 &= g(x,z,0)
\end{aligned}
\tag{4.4}
\]

The fast subsystem is

\[
\frac{dz}{d\tau} = g(x, z(\tau), 0)
\tag{4.5}
\]

where $\tau = t/\varepsilon$ and $x$ is treated as a fixed unknown parameter. The slow subsystem is called the quasisteady-state system and the fast subsystem is called the boundary-layer system. For the singular perturbation analysis the following two assumptions should be satisfied:

  1.  The equilibrium point $z = 0$ of (4.5) is asymptotically stable uniformly in $x$ and $t_0$ (the initial time).
  2.  The eigenvalues of $\partial g/\partial z$ evaluated along $x(t)$ of (4.3) and $z = \phi(x)$ have real parts smaller than a fixed negative number, i.e.,
\[
\operatorname{Re}\left[\lambda\!\left(\frac{\partial g}{\partial z}\right)\right] \le -c < 0
\]
      where $c$ is a positive constant.

With these assumptions, the following theorem, in Vasil'eva's form, is established [74].

Theorem 4.1

If the previous two assumptions are satisfied for (4.2), then the solutions of (4.3) and (4.5) approach the solutions of (4.2) on the intervals $[t_0, T]$ and $[t_1, T]$, respectively, where $0 \le t_0 \le t_1 < T$.
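To make the two-time-scale decomposition concrete, the sketch below simulates a scalar singularly perturbed system of the form (4.2) and compares the full solution with its quasisteady-state model (4.3). The system $\dot{x} = -x + z$, $\varepsilon\dot{z} = -2z + x$ is a toy example chosen only for illustration; it does not come from the chapter.

```python
import numpy as np

# Toy singularly perturbed system (not from the chapter):
#   x_dot = -x + z,   eps * z_dot = -2 z + x
# The root of 0 = -2 z + x is z = phi(x) = x / 2, so the slow model (4.3) is
#   x_dot = -x + x/2 = -x/2.

def simulate_full(eps, x0=1.0, z0=0.0, dt=1e-4, t_end=5.0):
    x, z = x0, z0
    xs = []
    for _ in range(int(t_end / dt)):
        x_dot = -x + z
        z_dot = (-2.0 * z + x) / eps      # fast dynamics, gain 1/eps
        x, z = x + dt * x_dot, z + dt * z_dot
        xs.append(x)
    return np.array(xs)

def simulate_slow(x0=1.0, dt=1e-4, t_end=5.0):
    t = np.arange(int(t_end / dt)) * dt
    return x0 * np.exp(-t / 2.0)          # closed form of x_dot = -x/2

x_full = simulate_full(eps=0.01)
x_slow = simulate_slow()
print("max |x_full - x_slow| =", np.max(np.abs(x_full - x_slow)))
```

As $\varepsilon$ decreases, the full trajectory approaches the quasisteady-state solution after a short boundary layer, which is the content of Theorem 4.1.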

The motion equations of the serial n-link rigid robot manipulator (4.1) can be rewritten in the state space form [100]:

\[
\begin{aligned}
\dot{x}_1 &= x_2 \\
\dot{x}_2 &= f(x_1,x_2) + g(x_1)u \\
y &= x_1
\end{aligned}
\tag{4.6}
\]

where $x_1 = q \in \mathbb{R}^n$ is the vector of joint positions, $x_2 = \dot{q} \in \mathbb{R}^n$ is the vector of joint velocities, and $y \in \mathbb{R}^n$ is the measurable position vector.

\[
\begin{aligned}
f(x_1,x_2) &= M(x_1)^{-1}\left[-C(x_1,x_2)x_2 - G(x_1) - Fx_2\right] \\
g(x_1) &= M(x_1)^{-1}
\end{aligned}
\tag{4.7}
\]

The high-gain observer has the following form [100]:

\[
\begin{aligned}
\dot{\hat{x}}_1 &= \hat{x}_2 + \frac{1}{\varepsilon}H_p(y - \hat{x}_1) \\
\dot{\hat{x}}_2 &= \frac{1}{\varepsilon^2}H_v(y - \hat{x}_1)
\end{aligned}
\tag{4.8}
\]

where $\hat{x}_1 \in \mathbb{R}^n$, $\hat{x}_2 \in \mathbb{R}^n$ denote the estimated values of $x_1$, $x_2$, respectively; $\varepsilon$ is a small positive parameter; and $H_p$, $H_v$ are positive definite matrices chosen such that $\begin{bmatrix} -H_p & I \\ -H_v & 0 \end{bmatrix}$ is a Hurwitz matrix. Let us define the observer error as

\[
\tilde{x}_1 = x_1 - \hat{x}_1, \qquad \tilde{x}_2 = x_2 - \hat{x}_2
\]

From (4.6) and (4.8), the dynamics of the observer error are
\[
\begin{aligned}
\dot{\tilde{x}}_1 &= \tilde{x}_2 - \frac{1}{\varepsilon}H_p\tilde{x}_1 \\
\dot{\tilde{x}}_2 &= -\frac{1}{\varepsilon^2}H_v\tilde{x}_1 + f(x_1,x_2) + g(x_1)u
\end{aligned}
\tag{4.9}
\]

If we define a new pair of variables,

\[
\tilde{z}_1 = \tilde{x}_1, \qquad \tilde{z}_2 = \varepsilon\tilde{x}_2
\]

(4.9) can be rewritten as

\[
\begin{aligned}
\varepsilon\dot{\tilde{z}}_1 &= \tilde{z}_2 - H_p\tilde{z}_1 \\
\varepsilon\dot{\tilde{z}}_2 &= -H_v\tilde{z}_1 + \varepsilon^2\left[f(x_1,x_2) + g(x_1)u\right]
\end{aligned}
\tag{4.10}
\]

or in matrix form:

\[
\varepsilon\dot{\tilde{z}} = A\tilde{z} + \varepsilon^2 B\left[f(x_1,x_2) + g(x_1)u\right]
\tag{4.11}
\]

where $A = \begin{bmatrix} -H_p & I \\ -H_v & 0 \end{bmatrix}$, $B = \begin{bmatrix} 0 \\ I \end{bmatrix}$.

The PD control based on a high-gain observer is

\[
u = M(x_1)\dot{x}_{2d} + C(x_1,\hat{x}_2)x_{2d} + F\hat{x}_2 + G(x_1) - K_p(x_1 - x_{1d}) - K_d(\hat{x}_2 - x_{2d})
\tag{4.12}
\]

where $x_{1d} \in \mathbb{R}^n$ is the desired position, $x_{2d}$ is the desired velocity, and $K_p, K_d \in \mathbb{R}^{n \times n}$ are constant positive definite matrices. We assume that the desired trajectory and its first two derivatives are bounded. The compensation terms in (4.12) are $M(x_1)\dot{x}_{2d} + C(x_1,\hat{x}_2)x_{2d} + F\hat{x}_2 + G(x_1)$.
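One way to see how the observer (4.8) and the control law (4.12) fit together is the following single-joint sketch with explicit Euler integration. The pendulum model, gains, time step, and desired trajectory are illustrative placeholders, not the chapter's exoskeleton parameters; for one joint the Coriolis term is zero, so the $C(x_1,\hat{x}_2)x_{2d}$ term drops out.

```python
import numpy as np

# Single-joint illustration of the high-gain observer (4.8) and the PD law (4.12).
# The plant below (a point-mass pendulum) and all gains are placeholder values.
m, l, g_acc, f = 1.0, 1.0, 9.8, 0.3        # mass, length, gravity, viscous friction
M = m * l**2                               # inertia (constant for one joint), C = 0
G = lambda q: m * g_acc * l * np.cos(q)    # gravity torque

Kp, Kd = 30.0, 10.0                        # PD gains
Hp, Hv, eps = 2.0, 1.0, 0.01               # observer gains and small parameter

dt, T = 1e-4, 5.0
q, dq = 0.5, 0.0                           # true state x1, x2
q_hat, dq_hat = 0.0, 0.0                   # observer state x1_hat, x2_hat

for k in range(int(T / dt)):
    t = k * dt
    q_d, dq_d, ddq_d = np.sin(t), np.cos(t), -np.sin(t)   # desired trajectory

    # PD control with compensation (4.12), using the estimated velocity
    u = M * ddq_d + f * dq_hat + G(q) - Kp * (q - q_d) - Kd * (dq_hat - dq_d)

    # High-gain observer (4.8): only the position y = q is measured
    y_err = q - q_hat
    q_hat += dt * (dq_hat + Hp / eps * y_err)
    dq_hat += dt * (Hv / eps**2 * y_err)

    # Robot dynamics (4.1) for one joint: M*ddq + G + f*dq = u
    ddq = (u - G(q) - f * dq) / M
    q, dq = q + dt * dq, dq + dt * ddq

print("final tracking error:", q - np.sin(T))
print("final velocity estimation error:", dq - dq_hat)
```

The observer states converge much faster than the tracking error, which is exactly the two-time-scale behavior analyzed below.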

The tracking errors of PD control are

\[
\bar{x}_1 = x_1 - x_{1d}, \qquad \bar{x}_2 = x_2 - x_{2d}
\]

From (4.6) and (4.12), the tracking error dynamics are
\[
\begin{aligned}
\dot{\bar{x}}_1 &= \dot{x}_1 - \dot{x}_{1d} = \bar{x}_2 \\
\dot{\bar{x}}_2 &= \dot{x}_2 - \dot{x}_{2d} = f(x_1,x_2) + g(x_1)\,u(x_1,x_{1d},\hat{x}_2,x_{2d},\dot{x}_{2d}) - \dot{x}_{2d}
\end{aligned}
\]

or

\[
\begin{aligned}
\dot{\bar{x}}_1 &= \bar{x}_2 \\
\dot{\bar{x}}_2 &= H(\bar{x}_1, \bar{x}_2, x_{1d}, x_{2d}, \tilde{x}_2)
\end{aligned}
\tag{4.13}
\]

where

\[
H = M(\bar{x}_1 + x_{1d})^{-1}\left[-F\tilde{x}_2 - K_p\bar{x}_1 + K_d\tilde{x}_2 - K_d\bar{x}_2 - C(\bar{x}_1 + x_{1d}, x_{2d})\tilde{x}_2 - C(\bar{x}_1 + x_{1d}, \bar{x}_2 + x_{2d})\bar{x}_2\right]
\]

Substituting the control (4.12) into (4.10) gives
\[
\begin{aligned}
\varepsilon\dot{\tilde{z}}_1 &= \tilde{z}_2 - H_p\tilde{z}_1 \\
\varepsilon\dot{\tilde{z}}_2 &= -H_v\tilde{z}_1 + \varepsilon^2 H
\end{aligned}
\]

The closed loop is the combination of (4.6), (4.12), and (4.8):
\[
\begin{aligned}
\dot{\bar{x}}_1 &= \bar{x}_2 \\
\dot{\bar{x}}_2 &= H \\
\varepsilon\dot{\tilde{z}}_1 &= \tilde{z}_2 - H_p\tilde{z}_1 \\
\varepsilon\dot{\tilde{z}}_2 &= -H_v\tilde{z}_1 + \varepsilon^2 H
\end{aligned}
\tag{4.14}
\]

with the equilibrium point $(\bar{x}_1, \bar{x}_2, \tilde{z}_1, \tilde{z}_2) = (0,0,0,0)$. Clearly, (4.14) has the singularly perturbed form (4.2).

Let $\varepsilon = 0$:
\[
\begin{aligned}
0 &= \tilde{z}_2 - H_p\tilde{z}_1 \\
0 &= -H_v\tilde{z}_1
\end{aligned}
\]

which implies that
\[
\tilde{z}_1 = 0 = x_1 - \hat{x}_1, \qquad \tilde{z}_2 = 0 = \varepsilon(x_2 - \hat{x}_2)
\]

which has the unique root $(\hat{x}_1, \hat{x}_2) = (x_1, x_2)$. The system (4.14) is therefore in the standard form. Substituting this root into the first two equations of (4.14), we obtain the quasisteady-state model:
\[
\begin{aligned}
\dot{\bar{x}}_1 &= \bar{x}_2 \\
\dot{\bar{x}}_2 &= H(\bar{x}_1, \bar{x}_2, x_{1d}, x_{2d}, 0) = M(\bar{x}_1 + x_{1d})^{-1}\left[-K_p\bar{x}_1 - K_d\bar{x}_2 - C(\bar{x}_1 + x_{1d}, \bar{x}_2 + x_{2d})\bar{x}_2\right]
\end{aligned}
\tag{4.15}
\]

The boundary layer system of (4.14) is

\[
\begin{aligned}
\frac{d}{d\tau}\tilde{z}_1(\tau) &= \tilde{z}_2(\tau) - H_p\tilde{z}_1(\tau) \\
\frac{d}{d\tau}\tilde{z}_2(\tau) &= -H_v\tilde{z}_1(\tau)
\end{aligned}
\tag{4.16}
\]

where $\tau = t/\varepsilon$. (4.16) can be written as
\[
\frac{d}{d\tau}\tilde{z}(\tau) = A\tilde{z}(\tau)
\tag{4.17}
\]

where $\tilde{z}(\tau) = [\tilde{z}_1^T(\tau), \tilde{z}_2^T(\tau)]^T$ and $A$ is defined as in (4.11). The following theorem shows the stability properties of the equilibrium points $(\bar{x}_1, \bar{x}_2) = (0,0)$ and $(\tilde{z}_1, \tilde{z}_2) = (0,0)$ of (4.15) and (4.17), respectively.

Theorem 4.2

The equilibrium point $(\bar{x}_1, \bar{x}_2) = (0,0)$ of (4.15) and the equilibrium point $(\tilde{z}_1, \tilde{z}_2) = (0,0)$ of (4.17) are asymptotically stable.

Proof

For the first system, consider the following candidate Lyapunov function:

\[
V_1(\bar{x}_1, \bar{x}_2) = \frac{1}{2}\bar{x}_2^T M(\bar{x}_1 + x_{1d})\bar{x}_2 + \frac{1}{2}\bar{x}_1^T K_p\bar{x}_1
\tag{4.18}
\]

Its derivative with respect to time along (4.15) is
\[
\begin{aligned}
\dot{V}_1 &= \bar{x}_2^T\left[-K_p\bar{x}_1 - K_d\bar{x}_2 - C(\bar{x}_1 + x_{1d}, \bar{x}_2 + x_{2d})\bar{x}_2\right] + \frac{1}{2}\bar{x}_2^T\dot{M}(\bar{x}_1 + x_{1d})\bar{x}_2 + \bar{x}_1^T K_p\bar{x}_2 \\
&= \bar{x}_2^T\left[\frac{1}{2}\dot{M}(\bar{x}_1 + x_{1d}) - C(\bar{x}_1 + x_{1d}, \bar{x}_2 + x_{2d})\right]\bar{x}_2 - \bar{x}_2^T K_d\bar{x}_2 \\
&= -\bar{x}_2^T K_d\bar{x}_2 \le 0
\end{aligned}
\]

Applying LaSalle's theorem, the only solutions of (4.15) evolving in the set

\[
\Omega_{V_1} = \left\{(\bar{x}_1, \bar{x}_2)\;\middle|\;\bar{x}_2 = 0\right\}
\]

are

\[
\begin{aligned}
\dot{\bar{x}}_1 &= 0 \\
0 &= -M(\bar{x}_1 + x_{1d})^{-1}K_p\bar{x}_1
\end{aligned}
\]

which implies that $\bar{x}_1 = 0$, since $M(\cdot)^{-1}K_p$ is nonsingular.

For the second system, since A is a Hurwitz matrix, there exists a positive definite matrix P such that

\[
A^T P + PA = -Q
\]

where Q is a positive definite matrix.

Consider the candidate Lyapunov function:

\[
V_2(\tilde{z}_1, \tilde{z}_2) = \tilde{z}^T P\tilde{z}
\]

Its derivative with respect to time along (4.17) is
\[
\dot{V}_2 = \tilde{z}^T(A^T P + PA)\tilde{z} = -\tilde{z}^T Q\tilde{z} < 0
\]

 □

The advantage of this approach is that the singular perturbation analysis divides the original problem into two systems: the slow subsystem, or quasisteady-state system, and the fast subsystem, or boundary-layer system. The two systems can then be studied independently, provided the boundary-layer system is fast enough compared with the slow subsystem. These two subsystems explain why the dynamics of the high-gain observer must be faster than the dynamics of the robot and the PD control. If the slow subsystem is considered static, the high-gain observer in the fast time scale $\tau$ makes the estimated states $(\hat{x}_1, \hat{x}_2)$ converge to the real states $(x_1, x_2)$. In the slow time scale $t$, the slow subsystem composed of the robot and the PD control can then use the estimated states $(\hat{x}_1, \hat{x}_2)$.

Remark 4.1

From the point of view of the singular perturbation analysis, one can see that the high-gain observer (4.8) has faster dynamics than the robot (4.6) and the PD control (4.12). Under the assumption $\varepsilon = 0$, the observer error and the tracking error of the PD control are asymptotically stable if the joint velocities are measurable.

Since it is impossible for the high-gain observer (4.8) to have $\varepsilon = 0$, it is necessary to find a positive value of $\varepsilon$ for which the stability properties remain valid. For this purpose, we propose a modified version of the result in [120].

Theorem 4.3

If there exists a continuous interval $\Gamma = (0, \varepsilon^\ast)$ such that, for all $\varepsilon \in \Gamma$,
\[
\left(\frac{d^2 K_2^2}{2}\right)\varepsilon^4 + (1-d)\,d\,\beta_1 K_2\,\varepsilon^2 + (1-d)\,\alpha_1 K_1\,\varepsilon + \frac{(1-d)^2\beta_1^2}{2} - (1-d)\,d\,\alpha_1\alpha_2 < 0
\tag{4.19}
\]

where $\alpha_1$, $\alpha_2$, $\beta_1$, $K_1$, $K_2$ are nonnegative constants, $0 < d < 1$, and $\varepsilon^\ast$ is the positive root of (4.19), then the origin $(\bar{x}, \tilde{z}) = (0,0)$ of (4.14) is asymptotically stable.

Proof

Let us select the Lyapunov function for (4.14) as
\[
V_{cl}(\bar{x}, \tilde{z}) = (1-d)\,V_1(\bar{x}_1, \bar{x}_2) + d\,V_2(\tilde{z}_1, \tilde{z}_2)
\tag{4.20}
\]

whose derivative with respect to time along the trajectories of (4.14) is
\[
\begin{aligned}
\dot{V}_{cl} ={}& (1-d)\left[\bar{x}_2^T M(\bar{x}_1+x_{1d})\dot{\bar{x}}_2 + \tfrac{1}{2}\bar{x}_2^T\dot{M}(\bar{x}_1+x_{1d})\bar{x}_2 + \bar{x}_1^T K_p\bar{x}_2\right] + \frac{d}{\varepsilon}\left[(\varepsilon\dot{\tilde{z}})^T P\tilde{z} + \tilde{z}^T P(\varepsilon\dot{\tilde{z}})\right] \\
={}& (1-d)\left[\bar{x}_2^T M(\bar{x}_1+x_{1d})H(\bar{x}_1,\bar{x}_2,x_{1d},x_{2d},0) + \tfrac{1}{2}\bar{x}_2^T\dot{M}(\bar{x}_1+x_{1d})\bar{x}_2 + \bar{x}_1^T K_p\bar{x}_2\right] \\
&+ (1-d)\,\bar{x}_2^T M(\bar{x}_1+x_{1d})\left[H(\bar{x}_1,\bar{x}_2,x_{1d},x_{2d},\varepsilon) - H(\bar{x}_1,\bar{x}_2,x_{1d},x_{2d},0)\right] \\
&+ \frac{d}{\varepsilon}\,\tilde{z}^T(PA + A^T P)\tilde{z} + \frac{d}{\varepsilon}\,2\tilde{z}^T P\varepsilon^2 B\,H(\bar{x}_1,\bar{x}_2,x_{1d},x_{2d},\varepsilon)
\end{aligned}
\]

From Theorem 4.2, we have

\[
\begin{aligned}
\dot{V}_{cl} ={}& -(1-d)\,\bar{x}^T\begin{bmatrix} 0 & 0 \\ 0 & K_d \end{bmatrix}\bar{x} + (1-d)\,\bar{x}_2^T M(\bar{x}_1+x_{1d})\left[H(\bar{x}_1,\bar{x}_2,x_{1d},x_{2d},\varepsilon) - H(\bar{x}_1,\bar{x}_2,x_{1d},x_{2d},0)\right] \\
&- \frac{d}{\varepsilon}\,\tilde{z}^T Q\tilde{z} + \frac{d}{\varepsilon}\,2\tilde{z}^T P\varepsilon^2 B\,H(\bar{x}_1,\bar{x}_2,x_{1d},x_{2d},\varepsilon)
\end{aligned}
\tag{4.21}
\]

At this point, we need to evaluate
\[
\begin{aligned}
\bar{x}_2^T M(\bar{x}_1+x_{1d})\left[H(\bar{x}_1,\bar{x}_2,x_{1d},x_{2d},\varepsilon) - H(\bar{x}_1,\bar{x}_2,x_{1d},x_{2d},0)\right]
&= \bar{x}_2^T\left[-F\tilde{x}_2 + K_d\tilde{x}_2 - C(\bar{x}_1+x_{1d},x_{2d})\tilde{x}_2\right] \\
&= \bar{x}^T\frac{1}{\varepsilon}\begin{bmatrix} 0 & 0 \\ 0 & -F + K_d - C(\bar{x}_1+x_{1d},x_{2d}) \end{bmatrix}\tilde{z}
\end{aligned}
\tag{4.22}
\]

Notice that this matrix is bounded, because $F$ and $K_d$ are constant matrices and, by Property 4.2, $C(\bar{x}_1+x_{1d},x_{2d})$ is bounded since the desired velocity $x_{2d}$ is bounded.

Using Properties 4.1 and 4.2 of the robot dynamics together with (4.22), we can conclude that

\[
\begin{aligned}
2\tilde{z}^T P\varepsilon^2 B\,H(\bar{x}_1,\bar{x}_2,x_{1d},x_{2d},\varepsilon) \le{}& \tilde{z}^T P\varepsilon\begin{bmatrix} 0 & 0 \\ 0 & 2M(\bar{x}_1+x_{1d})^{-1}\left(F - K_d + C(\bar{x}_1+x_{1d},x_{2d})\right) \end{bmatrix}\tilde{z} \\
&+ \tilde{z}^T P\varepsilon^2\begin{bmatrix} 0 & 0 \\ -M(\bar{x}_1+x_{1d})^{-1}K_p & -M(\bar{x}_1+x_{1d})^{-1}\left(K_d + C(\bar{x}_1+x_{1d},x_{2d})\right) \end{bmatrix}\bar{x}
\end{aligned}
\tag{4.23}
\]

Applying (4.22) and (4.23) to (4.21),

\[
\begin{aligned}
\dot{V}_{cl} ={}& -(1-d)\,\bar{x}^T\begin{bmatrix} 0 & 0 \\ 0 & K_d \end{bmatrix}\bar{x} + (1-d)\frac{1}{\varepsilon}\,\bar{x}^T\begin{bmatrix} 0 & 0 \\ 0 & -F + K_d - C(\bar{x}_1+x_{1d},x_{2d}) \end{bmatrix}\tilde{z} \\
&- d\,\tilde{z}^T\left(\frac{1}{\varepsilon}Q - P\begin{bmatrix} 0 & 0 \\ 0 & 2M(\bar{x}_1+x_{1d})^{-1}\left(F - K_d + C(\bar{x}_1+x_{1d},x_{2d})\right) \end{bmatrix}\right)\tilde{z} \\
&+ d\,\varepsilon\,\tilde{z}^T P\begin{bmatrix} 0 & 0 \\ -M(\bar{x}_1+x_{1d})^{-1}K_p & -M(\bar{x}_1+x_{1d})^{-1}\left(K_d + C(\bar{x}_1+x_{1d},x_{2d})\right) \end{bmatrix}\bar{x} \\
\le{}& -(1-d)\,\alpha_1\psi^2(\bar{x}) + (1-d)\frac{1}{\varepsilon}\,\beta_1\psi(\bar{x})\phi(\tilde{z}) - d\left[\frac{\alpha_2}{\varepsilon} - K_1\right]\phi^2(\tilde{z}) + d\,\varepsilon\,K_2\,\psi(\bar{x})\phi(\tilde{z})
\end{aligned}
\tag{4.24}
\]

where $\alpha_1 = \left\|\begin{bmatrix} 0 & 0 \\ 0 & K_d \end{bmatrix}\right\|$, $\alpha_2 = \|Q\|$, $\beta_1 = \left\|\begin{bmatrix} 0 & 0 \\ 0 & -F + K_d - C(\bar{x}_1+x_{1d},x_{2d}) \end{bmatrix}\right\|$, $\psi(\bar{x}) = \|\bar{x}\|$, $\phi(\tilde{z}) = \|\tilde{z}\|$, and
\[
\begin{aligned}
K_1 &= \left\|P\begin{bmatrix} 0 & 0 \\ 0 & 2M(\bar{x}_1+x_{1d})^{-1}\left(F - K_d + C(\bar{x}_1+x_{1d},x_{2d})\right) \end{bmatrix}\right\| \\
K_2 &= \left\|P\begin{bmatrix} 0 & 0 \\ -M(\bar{x}_1+x_{1d})^{-1}K_p & -M(\bar{x}_1+x_{1d})^{-1}\left(K_d + C(\bar{x}_1+x_{1d},x_{2d})\right) \end{bmatrix}\right\|
\end{aligned}
\]

(4.24) can be written in matrix form as
\[
\dot{V}_{cl} \le -\begin{bmatrix} \psi(\bar{x}) \\ \phi(\tilde{z}) \end{bmatrix}^T T\begin{bmatrix} \psi(\bar{x}) \\ \phi(\tilde{z}) \end{bmatrix}
\]

where $T = \begin{bmatrix} (1-d)\alpha_1 & -\frac{(1-d)\beta_1}{2\varepsilon} - \frac{\varepsilon d K_2}{2} \\ -\frac{(1-d)\beta_1}{2\varepsilon} - \frac{\varepsilon d K_2}{2} & d\left[\frac{\alpha_2}{\varepsilon} - K_1\right] \end{bmatrix}$. $T$ has to be positive definite, and $T$ is positive definite if there exists a continuous interval $\Gamma = (0, \varepsilon^\ast)$ such that, for all $\varepsilon \in \Gamma$,
\[
\left(\frac{d^2 K_2^2}{2}\right)\varepsilon^4 + (1-d)\,d\,\beta_1 K_2\,\varepsilon^2 + (1-d)\,\alpha_1 K_1\,\varepsilon + \frac{(1-d)^2\beta_1^2}{2} - (1-d)\,d\,\alpha_1\alpha_2 < 0
\]

Then $\varepsilon^\ast$ is the upper bound of $\varepsilon$. □

Remark 4.2

Since (4.19) has four possible solutions, the theorem is valid if there exists a positive real root $\varepsilon^\ast$ of (4.19) such that (4.19) is negative on the interval $(0, \varepsilon^\ast)$.
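In practice, the admissible interval $\Gamma$ can be located numerically by evaluating the left-hand side of (4.19) on a grid of $\varepsilon$ values. The sketch below does this; the constants passed in the example call are rounded placeholders, and the actual values of $\alpha_1$, $\alpha_2$, $\beta_1$, $K_1$, $K_2$, and $d$ must be computed from the robot and observer data, as done in the simulation section.

```python
import numpy as np

def lhs_419(eps, d, a1, a2, b1, K1, K2):
    """Left-hand side of inequality (4.19)."""
    return ((d**2 * K2**2 / 2.0) * eps**4
            + (1 - d) * d * b1 * K2 * eps**2
            + (1 - d) * a1 * K1 * eps
            + (1 - d)**2 * b1**2 / 2.0
            - (1 - d) * d * a1 * a2)

def find_gamma(d, a1, a2, b1, K1, K2, eps_max=1.0, n=100000):
    """Return an estimate of eps_star such that (4.19) holds on (0, eps_star)."""
    grid = np.linspace(1e-6, eps_max, n)
    neg = lhs_419(grid, d, a1, a2, b1, K1, K2) < 0.0
    if not neg[0]:
        return 0.0                       # inequality already fails near eps = 0
    # first grid point at which the inequality stops holding
    idx = np.argmax(~neg) if (~neg).any() else n - 1
    return grid[idx]

# Rounded placeholder constants for illustration only; see Section 4.4.
print(find_gamma(d=0.5, a1=80.0, a2=45.0, b1=80.0, K1=42.7, K2=84.3))
```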

4.1.2 Lyapunov method

The motion equations of the serial $n$-link rigid robot manipulator (4.1) can be rewritten in the state-space form (4.6). We assume $u$ is a bounded feedback control, $u = \eta(x)$, $\sup\|u\| = \bar{u}$, so (4.6) can be rewritten as
\[
\begin{aligned}
\dot{x}_i &= x_{i+1}, \quad i = 1, \ldots, n-1 \\
\dot{x}_n &= \Phi(x) \\
y &= x_1
\end{aligned}
\tag{4.25}
\]

where $\Phi(x) = f(x) + g(x)\eta(x)$. If the solution of (4.25) exists for any $t \in [0,\infty)$, it is reasonable to make the following assumption.

A4.1: The function $\Phi(x)$ is bounded over $[0,\infty)$.

Let us construct the high-gain observer as

\[
\begin{aligned}
\dot{\hat{x}}_i &= \hat{x}_{i+1} + \frac{\alpha_i}{\varepsilon^i}(y - \hat{y}), \quad i = 1, \ldots, n-1 \\
\dot{\hat{x}}_n &= \frac{\alpha_n}{\varepsilon^n}(y - \hat{y}) \\
\hat{y} &= \hat{x}_1
\end{aligned}
\tag{4.26}
\]

where $\varepsilon$ is a small positive parameter and the positive constants $\alpha_i$ are chosen such that the roots of
\[
s^n + \alpha_1 s^{n-1} + \cdots + \alpha_{n-1}s + \alpha_n = 0
\tag{4.27}
\]

have negative real parts. The observer error is defined as

\[
\tilde{x} = x - \hat{x}, \qquad x = [x_1, \ldots, x_n]^T
\]

Theorem 4.4

Under assumption A4.1, the observer error between the high-gain observer (4.26) and the nonlinear system (4.25) is asymptotically stable:
\[
\lim_{t\to\infty}\tilde{x}(t) = 0
\tag{4.28}
\]

Proof

The error dynamics corresponding to (4.25) and (4.26) are

\[
\begin{aligned}
\dot{\tilde{x}}_i &= \tilde{x}_{i+1} - \frac{\alpha_i}{\varepsilon^i}\tilde{x}_1, \quad i = 1, \ldots, n-1 \\
\dot{\tilde{x}}_n &= -\frac{\alpha_n}{\varepsilon^n}\tilde{x}_1 + \Phi(x)
\end{aligned}
\tag{4.29}
\]

If we define $\xi = [\tilde{x}_1, \varepsilon\tilde{x}_2, \ldots, \varepsilon^{n-1}\tilde{x}_n]^T$, the error equation (4.29) becomes
\[
\varepsilon\dot{\xi} = A\xi + \varepsilon^n b\,\Phi(x)
\tag{4.30}
\]

where $A = \begin{bmatrix} \begin{matrix} -\alpha_1 \\ \vdots \\ -\alpha_{n-1} \end{matrix} & I_{n-1} \\ -\alpha_n & 0 \end{bmatrix}$ and $b = [0, \ldots, 0, 1]^T$. Because the $\alpha_i$ satisfy (4.27), $A$ is Hurwitz and there exists a positive definite matrix $P$ such that
\[
A^T P + PA = -I
\]

Now let us define a Lyapunov function as $V = \xi^T P\xi$. Considering (4.30), we have
\[
\begin{aligned}
\dot{V} &= \frac{1}{\varepsilon}\xi^T(A^T P + PA)\xi + 2\varepsilon^{n-1}\Phi^T(x)\,b^T P\,\xi \\
&= -\frac{1}{\varepsilon}\xi^T\xi + 2\varepsilon^{n-1}\Phi^T(x)\,b^T P\,\xi \\
&\le -\frac{1}{\varepsilon}\|\xi\|^2 + 2\varepsilon^{n-1}\|Pb\,\Phi(x)\|\,\|\xi\|
\end{aligned}
\tag{4.31}
\]

Under assumption A4.1, we may conclude that $\|Pb\,\Phi(x)\|$ is bounded:
\[
c_T := \sup_{t\in[0,T]}\|Pb\,\Phi(x)\|
\]

So (4.31) is

\[
\dot{V} \le -\frac{1}{\varepsilon}\|\xi\|^2 + \frac{1}{\varepsilon}K_s\|\xi\|, \qquad K_s = 2\varepsilon^n c_T
\tag{4.32}
\]

It is noted that if

\[
K_s < \|\xi\|
\tag{4.33}
\]

then $\dot{V} < 0$ for $t \in [0,T]$, and $\xi$ is bounded. It is easy to see that, for $\varepsilon < 1$, $\|\xi_i\| \le \|\tilde{x}_i\|$, $i = 1, \ldots, n$, so condition (4.33) means $K_s < \|\tilde{x}\|$. As $K_s = 2\varepsilon^n c_T$, for any finite $T$, $K_s$ can be made arbitrarily small for values of $\varepsilon < 1$. Furthermore, if $K_s$ can be made as small as $K_s < \alpha\|\xi\|$, $0 < \alpha < 1$, and $\Phi(x)$ is bounded over $[0,\infty)$ (A4.1), then

\[
\dot{V} \le -\frac{1}{\varepsilon}\|\xi\|^2 + \frac{1}{\varepsilon}\alpha\|\xi\|^2 = -\frac{1}{\varepsilon}(1-\alpha)\|\xi\|^2 < 0
\]

So

\[
\frac{1}{\varepsilon}(1-\alpha)\int_0^{\infty}\|\xi\|^2\,dt \le V(0) - V(\infty) < \infty
\]

This means $\xi \in L_2 \cap L_\infty$, and from the error equation (4.30), $\dot{\xi} \in L_\infty$. Using Barbalat's lemma, we have (4.28). □

Remark 4.3

The high-gain observer (4.26) does not depend on the nonlinear plant (4.25) or (4.1); only the output $y$ is needed. The observer (4.26) can also be regarded as an estimator of the derivatives of the output; see (4.1).
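As the remark notes, (4.26) can be used purely as an output differentiator. The sketch below runs a second-order version ($n = 2$) of the observer on a known signal and recovers its derivative; the signal, gains, and $\varepsilon$ are arbitrary choices for illustration.

```python
import numpy as np

# Second-order high-gain observer (4.26) used as a differentiator.
# alpha1, alpha2 make s^2 + alpha1*s + alpha2 Hurwitz; eps sets the bandwidth.
alpha1, alpha2, eps = 2.0, 1.0, 0.01
dt, T = 1e-4, 2.0

x1_hat, x2_hat = 0.0, 0.0
for k in range(int(T / dt)):
    t = k * dt
    y = np.sin(3.0 * t)                      # measured output
    e = y - x1_hat
    x1_hat += dt * (x2_hat + (alpha1 / eps) * e)
    x2_hat += dt * ((alpha2 / eps**2) * e)

print("estimated derivative:", x2_hat)
print("true derivative:     ", 3.0 * np.cos(3.0 * T))
```

Smaller $\varepsilon$ speeds up convergence of the estimate but amplifies measurement noise, which is the usual trade-off for high-gain observers.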

4.2 PD control with neural compensator

It is well known that PD control with friction and gravity compensation can achieve asymptotic stability [71]. The use of neural networks to compensate the nonlinearity of the robot dynamics may be found in [85] and [78]. In [85] the authors used neural networks to approximate the whole nonlinearity of the robot dynamics. With this neural feedforward compensator and a PD control, they can guarantee good tracking performance.

4.2.1 PD control with single layer neural compensation

If G and F are unknown, a neural network can be used to approximate them as

\[
G + F\dot{q} + f = \hat{W}_t\sigma(x) + \eta
\tag{4.34}
\]

where $\eta$ is a bounded modeling error, $\eta^T\Lambda_1\eta \le \bar{\eta}$, $\Lambda_1$ is any matrix such that $\Lambda_1 = \Lambda_1^T > 0$, $\sigma(x)$ is the activation function, and $x$ is the input to the NN,

\[
x = [q, \dot{q}, q_d, \dot{q}_d]^T
\tag{4.35}
\]

and $f$ represents the other uncertainties.

The neural PD control is

\[
\tau = -K_v r + \hat{W}_t\sigma(x)
\tag{4.36}
\]

Let us define the tracking error as

\[
\bar{x}_1 = x_1 - x_{1d}, \qquad \bar{x}_2 = x_2 - x_{2d}
\tag{4.37}
\]

where $x_1 = q$, $x_2 = \dot{q}$, $x_{1d} = q_d$, $x_{2d} = \dot{q}_d$, and the auxiliary error is defined as

\[
r = \bar{x}_2 + \Lambda\bar{x}_1, \qquad \Lambda = \Lambda^T > 0
\tag{4.38}
\]

If the Lyapunov function candidate is

\[
V = \frac{1}{2}r^T M r + \frac{1}{2}\operatorname{tr}\!\left(\tilde{W}_t^T K_w^{-1}\tilde{W}_t\right)
\]

and the updating law is $\dot{\hat{W}}_t = -K_w\sigma(x)r^T$, then using
\[
r^T\eta \le r^T\Lambda_1^{-1}r + \eta^T\Lambda_1\eta \le r^T\Lambda_1^{-1}r + \bar{\eta}
\]

So

\[
\dot{V} \le -r^T\left(K_v - \Lambda_1^{-1}\right)r + \bar{\eta}
\tag{4.39}
\]

where $\bar{\eta}$ is the upper bound of $\eta^T\Lambda_1\eta$. To guarantee stability, we need $K_v > \Lambda_1^{-1}$. The tracking error is bounded, and $\|r\|_{K_v - \Lambda_1^{-1}}^2$ converges to $\bar{\eta}$.

Alternatively, as in [86], $-r^T(K_v - \Lambda_1^{-1})r + \bar{\eta}$ can be bounded as
\[
\dot{V} \le -K_v\|r\|^2 + \alpha\|r\|
\]

where $\alpha$ is a positive constant. In order to ensure $\dot{V} \le 0$, we need $\|r\| > \alpha/K_v$. This means
\[
\|r\| \to 0 \quad \text{when} \quad K_v \to \infty
\]
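A minimal sketch of the single-layer compensator (4.36) with the updating law stated after (4.38) is given below for a 1-DOF plant. The plant, the Gaussian feature map, and all gains are illustrative assumptions, not the chapter's experimental values; the sign and shape conventions follow the reconstruction above and are chosen so that the toy example is stable.

```python
import numpy as np

# 1-DOF sketch of the neural PD control (4.36) with the weight update after (4.38).
# Plant, feature map, and gains are illustrative assumptions.
M, g_acc, f = 1.0, 9.8, 0.3
G = lambda q: g_acc * np.cos(q)                  # unknown gravity torque (to be learned)

p = 25                                           # number of features
rng = np.random.default_rng(0)
centers = rng.uniform(-2.0, 2.0, (p, 4))         # feature centers over x = [q, dq, qd, dqd]
sigma = lambda x: np.exp(-np.sum((x - centers) ** 2, axis=1) / 4.0)   # Gaussian activations

Kv, Lam, Kw = 20.0, 5.0, 50.0
W_hat = np.zeros(p)

dt, T = 1e-3, 20.0
q, dq = 0.0, 0.0
for k in range(int(T / dt)):
    t = k * dt
    qd, dqd = np.sin(t), np.cos(t)
    e, de = q - qd, dq - dqd
    r = de + Lam * e                             # auxiliary error (4.38)
    x = np.array([q, dq, qd, dqd])
    s = sigma(x)

    tau = -Kv * r + W_hat @ s                    # control (4.36)
    W_hat += dt * (-Kw * s * r)                  # update law; sign chosen for error descent

    ddq = (tau - G(q) - f * dq) / M              # plant (4.1), one joint
    q, dq = q + dt * dq, dq + dt * ddq

print("final tracking error:", q - np.sin(T))
```

Increasing Kv shrinks the residual tracking error, in line with the limit statement above.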

4.2.2 PD control with a multilayer feedforward neural compensator

The friction and gravity in (4.1) can also be approximated by a multilayer neural network:
\[
P(q,\dot{q}) = W\Phi(Vx) + \tilde{P}
\tag{4.40}
\]

where $P(q,\dot{q}) = G(q) + F\dot{q}$, $W$ and $V$ are fixed bounded weights, and $\tilde{P}$ is the approximation error, whose magnitude depends on the values of $W$ and $V$.

The estimate of $P(q,\dot{q})$ is defined as $\hat{P}(q,\dot{q})$:
\[
\hat{P}(q,\dot{q}) = \hat{W}\Phi(\hat{V}x)
\tag{4.41}
\]

In order to implement the neural networks, the following assumption on $\tilde{P}$ in (4.40) is needed:
\[
\tilde{P}^T\Lambda_1\tilde{P} \le \bar{\eta}, \qquad \bar{\eta} > 0
\tag{4.42}
\]

It is clear that Φ satisfies the Lipschitz condition:

\[
\tilde{\Phi}_t := \Phi(V^T x) - \Phi(\hat{V}_t^T x) = D_\sigma\tilde{V}_t^T x + \nu_\sigma, \qquad
D_\sigma = \left.\frac{\partial\Phi^T(Z)}{\partial Z}\right|_{Z=\hat{V}_t^T x}, \qquad
\|\nu_\sigma\|_{\Lambda_2} \le l\,\|\tilde{V}_t^T x\|_{\Lambda_2}
\tag{4.43}
\]

where $\Lambda_2$ is a positive definite matrix, $l > 0$, and

\[
\tilde{W} = W - \hat{W}, \qquad \tilde{V} = V - \hat{V}
\tag{4.44}
\]

Remark 4.4

One can see that this condition is similar to the one in [85] (Taylor series). The upper bound found for $\nu_\sigma$ is essential for proving stability of the PD control with a high-gain observer and neural compensator.

We first assume the velocities are measurable, so the PD control is

\[
u = -K_p(x_1 - x_{1d}) - K_d(x_2 - x_{2d}) + \hat{W}\Phi(\hat{V}x)
\tag{4.45}
\]

where $x_{1d} \in \mathbb{R}^n$ is the desired position and $x_{2d}$ is the desired velocity, as defined in (4.37). We assume the references are bounded as
\[
\|x_{1d}\| \le \bar{d}_1, \qquad \|x_{2d}\| \le \bar{d}_2
\]

The control input vector is $\tau = u$. $K_p$ and $K_d$ are positive definite matrices corresponding to the proportional and derivative gains.

Theorem 4.5

If the following learning laws for the weights of neural networks (4.41) are used

\[
\begin{aligned}
\dot{\hat{W}}_t &= d_t K_w\,\sigma(\hat{V}_t^T x)\,\bar{x}_2^T - d_t K_w D_\sigma\tilde{V}_t^T x\,\bar{x}_2^T \\
\dot{\hat{V}}_t &= d_t K_v\,\bar{x}_2^T D_\sigma\hat{W}_t^T x + d_t\,l\,K_v\,x^T\tilde{V}_t x\,\Lambda_3
\end{aligned}
\tag{4.46}
\]

where $0 < \Lambda_1 = \Lambda_1^T \in \mathbb{R}^{n\times n}$,
\[
d_t = \begin{cases} 0 & \text{if } \|\bar{x}_2\|_R^2 \le \dfrac{\bar{\chi}^2}{4\lambda_{\min}(\Gamma)} \\[2mm] 1 & \text{if } \|\bar{x}_2\|_R^2 > \dfrac{\bar{\chi}^2}{4\lambda_{\min}(\Gamma)} \end{cases}
\qquad
\Gamma = K_d - \frac{1}{4}\Lambda_1^{-1} - k_c\bar{d}_2 I - R, \quad R = R^T > 0, \qquad
\bar{\chi} = \bar{\eta} + k_c\bar{d}_2^2 + \lambda_{\max}(M)\bar{d}_2
\]

then:

(I) The weights of the neural networks $\hat{W}_t$, $\hat{V}_t$ and the tracking error $\bar{x}_2$ are bounded.

(II) For any $T \in (0,\infty)$ the tracking error $\bar{x}_2$ converges to the residual set
\[
D_{\bar{x}_2} = \left\{\bar{x}_2\;\middle|\;\|\bar{x}_2\|_R^2 \le \frac{\bar{\chi}^2}{4\lambda_{\min}(\Gamma)}\right\}
\tag{4.47}
\]

Proof

From (4.45) the closed-loop system is

\[
M(x_1)\dot{x}_2 + C(x_1,x_2)x_2 + K_p\bar{x}_1 + K_d\bar{x}_2 - \hat{W}_t\Phi(\hat{V}_t x) + W\Phi(Vx) + \tilde{P} = 0
\tag{4.48}
\]

The proposed candidate Lyapunov function is

\[
V_1 = \frac{1}{2}\bar{x}_2^T M\bar{x}_2 + \frac{1}{2}\bar{x}_1^T K_p\bar{x}_1 + \frac{1}{2}\operatorname{tr}\!\left(\tilde{W}_t^T K_w^{-1}\tilde{W}_t\right) + \frac{1}{2}\operatorname{tr}\!\left(\tilde{V}_t^T K_v^{-1}\tilde{V}_t\right)
\tag{4.49}
\]

where $K_w$ and $K_v$ are any positive definite constant matrices. The derivative of (4.49) is
\[
\dot{V}_1 = \bar{x}_2^T M(x_1)\dot{\bar{x}}_2 + \frac{1}{2}\bar{x}_2^T\dot{M}(x_1)\bar{x}_2 + \bar{x}_2^T K_p\bar{x}_1 + \operatorname{tr}\!\left(\tilde{W}_t^T K_w^{-1}\dot{\tilde{W}}_t\right) + \operatorname{tr}\!\left(\tilde{V}_t^T K_v^{-1}\dot{\tilde{V}}_t\right)
\tag{4.50}
\]

Using (4.48),

\[
\begin{aligned}
\bar{x}_2^T M\dot{\bar{x}}_2 &= -\bar{x}_2^T M\dot{x}_{2d} - \bar{x}_2^T\left[Cx_2 + K_p\bar{x}_1 + K_d\bar{x}_2 - \hat{W}_t\Phi(\hat{V}_t x) + W\Phi(Vx) + \tilde{P}\right] \\
&= -\bar{x}_2^T M\dot{x}_{2d} - \bar{x}_2^T C\bar{x}_2 - \bar{x}_2^T Cx_{2d} - \bar{x}_2^T K_p\bar{x}_1 - \bar{x}_2^T K_d\bar{x}_2 - \bar{x}_2^T\left[-\hat{W}_t\Phi(\hat{V}_t x) + W\Phi(Vx) + \tilde{P}\right]
\end{aligned}
\tag{4.51}
\]

Using Property 4.2 and (4.51), (4.50) becomes

\[
\dot{V}_1 = -\bar{x}_2^T M\dot{x}_{2d} - \bar{x}_2^T Cx_{2d} - \bar{x}_2^T K_d\bar{x}_2 - \bar{x}_2^T\left[-\hat{W}_t\Phi(\hat{V}_t x) + W\Phi(Vx) + \tilde{P}\right] + \operatorname{tr}\!\left(\tilde{W}_t^T K_w^{-1}\dot{\tilde{W}}_t\right) + \operatorname{tr}\!\left(\tilde{V}_t^T K_v^{-1}\dot{\tilde{V}}_t\right)
\tag{4.52}
\]

The term $-\hat{W}_t\Phi(\hat{V}_t x) + W\Phi(Vx)$ can be expressed as
\[
\begin{aligned}
W\Phi(Vx) - W\Phi(\hat{V}_t x) + W\Phi(\hat{V}_t x) - \hat{W}_t\Phi(\hat{V}_t x)
&= \tilde{W}_t^T\Phi(\hat{V}_t^T x) + W^T D_\sigma\tilde{V}_t^T x + W^T\nu_\sigma \\
&= \tilde{W}_t^T\Phi(\hat{V}_t^T x) + \hat{W}_t^T D_\sigma\tilde{V}_t^T x + \tilde{W}_t^T D_\sigma\tilde{V}_t^T x + W^T\nu_\sigma
\end{aligned}
\tag{4.53}
\]

In view of the matrix inequality
\[
X^T Y + (X^T Y)^T \le X^T\Lambda^{-1}X + Y^T\Lambda Y
\tag{4.54}
\]

which is valid for any $X, Y \in \mathbb{R}^{n\times k}$ and any positive definite matrix $0 < \Lambda = \Lambda^T \in \mathbb{R}^{n\times n}$ [153],

$\bar{x}_2^T\left[\tilde{P} + W^T\nu_\sigma\right]$ can be estimated as
\[
\bar{x}_2^T\left[\tilde{P} + W^T\nu_\sigma\right] \le \|\bar{x}_2\|\|\tilde{P}\| + \frac{1}{4}\bar{x}_2^T\Lambda_1^{-1}\bar{x}_2 + \left[W^T\nu_\sigma\right]^T\Lambda_1\left[W^T\nu_\sigma\right] \le \|\bar{x}_2\|\bar{\eta} + \frac{1}{4}\bar{x}_2^T\Lambda_1^{-1}\bar{x}_2 + l\,\|\tilde{V}_t^T x\|_{\Lambda_3}^2
\tag{4.55}
\]

where $\Lambda_3 := W\Lambda_1 W^T$. Using
\[
\begin{aligned}
-\bar{x}_2^T C(x_1,x_2)x_{2d} &\le \|\bar{x}_2\|\,\|C(x_1,x_2)\|\,\|x_{2d}\| \le \bar{d}_2\,\bar{x}_2^T k_c I\,\bar{x}_2 + k_c\,\bar{d}_2^2\,\|\bar{x}_2\| \\
-\bar{x}_2^T M(x_1)\dot{x}_{2d} &\le \lambda_{\max}(M)\,\bar{d}_2\,\|\bar{x}_2\|
\end{aligned}
\tag{4.56}
\]

So

\[
\begin{aligned}
\dot{V}_1 &\le -\bar{x}_2^T\Gamma\bar{x}_2 + \bar{\chi}\|\bar{x}_2\| + L_w + L_v - \bar{x}_2^T R\bar{x}_2 \\
&\le -\lambda_{\min}(\Gamma)\left(\|\bar{x}_2\| - \frac{\bar{\chi}}{2\lambda_{\min}(\Gamma)}\right)^2 + \frac{\bar{\chi}^2}{4\lambda_{\min}(\Gamma)} - \bar{x}_2^T R\bar{x}_2 + L_w + L_v \\
&\le \frac{\bar{\chi}^2}{4\lambda_{\min}(\Gamma)} - \bar{x}_2^T R\bar{x}_2 + L_w + L_v
\end{aligned}
\tag{4.57}
\]

where

\[
\begin{aligned}
L_w &= \operatorname{tr}\!\left[\left(K_w^{-1}\dot{\tilde{W}}_t - \Phi(\hat{V}_t^T x)\bar{x}_2^T - D_\sigma\tilde{V}_t^T x\,\bar{x}_2^T\right)\tilde{W}_t^T\right] \\
L_v &= \operatorname{tr}\!\left[\left(K_v^{-1}\dot{\tilde{V}}_t - \hat{W}_t^T D_\sigma^T x\,\bar{x}_2^T - l\,K_v\,x^T\tilde{V}_t x\,\Lambda_3\right)\tilde{V}_t^T\right]
\end{aligned}
\]

If $\bar{x}_2^T R\bar{x}_2 \ge \frac{\bar{\chi}^2}{4\lambda_{\min}(\Gamma)}$, then using the adaptive law (4.46), which makes $L_w = L_v = 0$,
\[
\dot{V}_1 \le \frac{\bar{\chi}^2}{4\lambda_{\min}(\Gamma)} - \bar{x}_2^T R\bar{x}_2 \le 0
\]

If $\bar{x}_2^T R\bar{x}_2 < \frac{\bar{\chi}^2}{4\lambda_{\min}(\Gamma)}$, the dead zone keeps the weights constant and $V_1$ remains bounded; this establishes (I). So the total time during which $\|\bar{x}_2\|_R^2 > \frac{\bar{\chi}^2}{4\lambda_{\min}(\Gamma)}$ is finite. Let $T_k$ denote the time intervals during which $\|\bar{x}_2\|_R^2 > \frac{\bar{\chi}^2}{4\lambda_{\min}(\Gamma)}$:

  •  If $\|\bar{x}_2\|_R^2$ leaves the ball of radius $\frac{\bar{\chi}^2}{4\lambda_{\min}(\Gamma)}$ only a finite number of times (and then reenters), it will eventually stay inside this ball.
  •  If $\|\bar{x}_2\|_R^2$ leaves the ball an infinite number of times, then, since the total time spent outside the ball is finite,
\[
\sum_{k=1}^{\infty}T_k < \infty, \qquad \lim_{k\to\infty}T_k = 0
\]

So $\|\bar{x}_2\|_R^2$ is bounded via an invariant set argument. From (4.48), $\dot{\bar{x}}_2$ is also bounded. Let $\|\bar{x}_{2,k}\|_R^2$ denote the largest tracking error during the interval $T_k$. The boundedness of $\dot{\bar{x}}_2$ implies that
\[
\lim_{k\to\infty}\left[\|\bar{x}_{2,k}\|_R^2 - \frac{\bar{\chi}^2}{4\lambda_{\min}(\Gamma)}\right] = 0
\]

So $\|\bar{x}_{2,k}\|_R^2$ converges to $\frac{\bar{\chi}^2}{4\lambda_{\min}(\Gamma)}$, and (II) is achieved. □

The neural network compensation does not require structural information about the uncertainties. RBF neural networks are more powerful [50], so we can also use an RBF neural network to approximate the friction and gravity in (4.1). The RBF neural network is

\[
y_j = \sum_{i=1}^{N}w_{i,j}\,\phi_i(Vx) + b
\tag{4.58}
\]

where $N$ is the number of hidden nodes, $w_{i,j}$ is the weight connecting the hidden layer and the output layer, $x \in \mathbb{R}^m$ is the input vector ($m$ is the number of input nodes), $V \in \mathbb{R}^{N\times m}$ is the weight matrix of the hidden layer, and $\phi_i(Vx)$ is the radial basis function, which we select as the Gaussian function $\phi_i(Vx) = \exp\left\{-\frac{\|Vx - c_i\|^2}{\sigma_i^2}\right\}$. Here $c_i$ and $\sigma_i^2$ represent the center and spread of the basis function. $b$ is the threshold; its significance is that the output values can have a nonzero mean. It can be combined with the first term as $w_{0,j} = b$, $\phi_0(Vx) = 1$, so $y_j = \sum_{i=0}^{N}w_{i,j}\phi_i(Vx)$. The friction and gravity of (4.1) can be approximated by an RBF neural network (4.58) as

\[
P(q,\dot{q}) = W\Phi(Vx) + \tilde{P}
\tag{4.59}
\]

where $W \in \mathbb{R}^{M\times N}$, $\Phi(\cdot) \in \mathbb{R}^{N}$, $P(q,\dot{q}) = G(q) + F\dot{q}$, and $\tilde{P}$ is the approximation error. $W$ and $V$ are the unknown optimal matrices that minimize $\tilde{P}$. The estimate of $P(q,\dot{q})$ is defined as $\hat{P}(q,\dot{q})$:

\[
\hat{P}(q,\dot{q}) = \hat{W}_t\Phi(\hat{V}_t x)
\tag{4.60}
\]

In order to implement the neural networks, a boundedness assumption on the approximation error $\tilde{P}$, as in (4.42), is necessary.
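The forward pass of the RBF approximator (4.58)/(4.60) can be written compactly as below; the dimensions, centers, and spreads are placeholders, and the per-node scalar Gaussian is one possible reading of (4.58).

```python
import numpy as np

def rbf_forward(x, W_hat, V_hat, centers, spreads):
    """Evaluate P_hat = W_hat * Phi(V_hat x), as in (4.60).

    x       : (m,)   network input, e.g. [q, dq, qd, dqd]
    V_hat   : (N, m) hidden-layer weights
    W_hat   : (n, N) output weights
    centers : (N,)   Gaussian centers c_i
    spreads : (N,)   Gaussian spreads sigma_i**2
    """
    z = V_hat @ x                                   # hidden-layer preactivation
    phi = np.exp(-((z - centers) ** 2) / spreads)   # Gaussian basis functions (4.58)
    return W_hat @ phi

# Placeholder dimensions: n = 2 joints, m = 4 inputs, N = 10 hidden nodes.
rng = np.random.default_rng(0)
N, m, n = 10, 4, 2
V_hat = 0.7 * np.ones((N, m))                       # initial weights as in Section 4.4
W_hat = 0.7 * np.ones((n, N))
centers = rng.uniform(0.0, 1.0, N)
spreads = 100.0 * np.ones(N)                        # denominator used in Section 4.4

x = np.array([0.1, 0.0, 0.2, 0.1])
print(rbf_forward(x, W_hat, V_hat, centers, spreads))
```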

It is clear that the Gaussian functions used in the RBF neural network satisfy the Lipschitz condition, so we may conclude that
\[
\tilde{\Phi}_t = \Phi(V^T x) - \Phi(\hat{V}_t^T x) = D_\sigma\tilde{V}_t^T x + \nu_\sigma
\tag{4.61}
\]

where $\|\nu_\sigma\|_{\Lambda_2} \le l\,\|\tilde{V}_t^T x\|_{\Lambda_2}$, $D_\sigma = \left.\frac{\partial\Phi^T(Z)}{\partial Z}\right|_{Z=\hat{V}_t^T x}$, $\Lambda_2$ is a positive definite matrix, $l$ is a positive constant, $\tilde{W}_t = \hat{W}_t - W$, and $\tilde{V}_t = \hat{V}_t - V$.

The learning rules of $\hat{W}_t$ and $\hat{V}_t$ are derived from the stability analysis of the closed-loop system when the neural compensator (4.60) is combined with the normal PD control as
\[
\tau = -K_p(x_1 - x_{1d}) - K_d(x_2 - x_{2d}) + \hat{W}_t\Phi(\hat{V}_t x)
\tag{4.62}
\]

where $K_p$ and $K_d$ are positive definite matrices corresponding to the proportional and derivative gains, $x_{1d} \in \mathbb{R}^n$ is the desired position, and $x_{2d}$ is the desired velocity.

Remark 4.5

From the definition of the Lyapunov function (4.49), we may see that the learning rules (4.46) minimize the tracking error $\bar{x}_2$. This structure is different from normal neural networks, which are used to approximate nonlinear functions. The first terms $K_w\bar{x}_2\Phi^T(\hat{V}_t x)$ and $K_v D_\sigma^T\hat{W}_t^T\bar{x}_2$ correspond exactly to the backpropagation scheme. The second terms $K_w x^T\hat{V}_t^T D_\sigma$ and $K_v l\Lambda_3\hat{V}_t x x^T$ are additional ones, which assure stable learning. Deriving an update law with guaranteed stability is a very effective technique; some similar analysis may be found in [85].
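A structural sketch of a dead-zone-gated update with the backpropagation-plus-stabilizing-term structure described in this remark is shown below. Because the printed law (4.46) mixes estimated and ideal weights, the sketch uses only the available estimates; the gains, the threshold, and the sign convention (chosen for error descent) are assumptions, not the chapter's exact law.

```python
import numpy as np

def dead_zone(x2_bar, R, threshold):
    """d_t in Theorem 4.5: freeze adaptation when the weighted error is small."""
    return 1.0 if x2_bar @ R @ x2_bar > threshold else 0.0

def update_weights(W_hat, V_hat, x, x2_bar, centers, spreads, Kw, Kv, R, threshold, dt):
    """One Euler step of a dead-zone-gated, backpropagation-like update in the
    spirit of (4.46), using only the available estimates W_hat, V_hat."""
    d_t = dead_zone(x2_bar, R, threshold)
    z = V_hat @ x
    phi = np.exp(-((z - centers) ** 2) / spreads)              # hidden outputs
    D_sigma = np.diag(-2.0 * (z - centers) / spreads * phi)    # dPhi/dz at V_hat x

    # Backpropagation-like terms driven by the tracking error x2_bar.
    dW = -d_t * Kw * np.outer(x2_bar, phi)
    dV = -d_t * Kv * np.outer(D_sigma.T @ W_hat.T @ x2_bar, x)
    return W_hat + dt * dW, V_hat + dt * dV
```

In a control loop, update_weights would be called once per step after computing the tracking error $\bar{x}_2 = x_2 - x_{2d}$; the dead zone simply stops adaptation inside the residual set, which is what guarantees bounded weights in the theorem.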

4.3 PD control with velocity estimation and neural compensator

If neither the velocity $x_2$ nor the friction and gravity are known, the normal PD control should be combined with velocity estimation and neural compensation:
\[
\tau = -K_p(x_1 - x_{1d}) - K_d(\hat{x}_2 - x_{2d}) + \hat{W}\Phi(\hat{V}s)
\tag{4.63}
\]

where $s = (x_1^T, \hat{x}_2^T)^T$. The structure of the new PD controller is shown in Fig. 4.1.

Figure 4.1 Neural PD control with high-gain observer.

The dynamics of the tracking error follow from (4.6) and (4.63):
\[
\begin{aligned}
\dot{\bar{x}}_1 &= \bar{x}_2 \\
\dot{\bar{x}}_2 &= \dot{x}_2 - \dot{x}_{2d} = H_2 - \dot{x}_{2d}
\end{aligned}
\tag{4.64}
\]

where

\[
H_2 = M(\bar{x}_1 + x_{1d})^{-1}\left[-K_p\bar{x}_1 + K_d\tilde{x}_2 - K_d\bar{x}_2 + \hat{W}\Phi(\hat{V}s) - W\Phi(Vs) - \tilde{P} - C\,(\bar{x}_2 + x_{2d})\right]
\tag{4.65}
\]

The closed-loop system is

\[
\begin{aligned}
\dot{\bar{x}}_1 &= \bar{x}_2, \qquad \dot{\bar{x}}_2 = H_2 - \dot{x}_{2d} \\
\varepsilon\dot{\tilde{z}}_1 &= \tilde{z}_2 - K_1\tilde{z}_1, \qquad \varepsilon\dot{\tilde{z}}_2 = -K_2\tilde{z}_1 + \varepsilon^2 H_2
\end{aligned}
\tag{4.66}
\]

The new updating law, which is obtained in the next theorem, is
\[
\begin{aligned}
\dot{\hat{W}}_t &= d_t K_w\,\Psi\left[\Phi^T(\hat{V}_t s) + s^T\hat{V}_t^T D_\sigma\right] \\
\dot{\hat{V}}_t &= d_t K_v\left[D_\sigma^T\hat{W}_t^T\Psi + l\,\Lambda_3\hat{V}_t s\right]s^T
\end{aligned}
\tag{4.67}
\]

where $\Psi = 2d\varepsilon\,\eta_M(x_1 - \hat{x}_1)^T P_{12}M^{-1} - 2d\varepsilon^2\,\eta_M\,\hat{x}_2^T P_{22}M^{-1} - (1-d)\,x_{1d}$,

\[
d_t = \begin{cases} 0 & \text{if } \|y\|_{R_1}^2 \le a + \dfrac{\lambda_{\min}^2(K_d)}{4b^2} \\[2mm] 1 & \text{if } \|y\|_{R_1}^2 > a + \dfrac{\lambda_{\min}^2(K_d)}{4b^2} \end{cases}
\tag{4.68}
\]

where $y = [\bar{x}, \tilde{z}]^T$, $a = \|\Psi\|\bar{\eta} + \frac{\lambda_{\max}(\Lambda_1^{-1})}{4}\|\Psi\|^2$, and $b = k_c^2\bar{d}_1 + \lambda_{\max}(M)\bar{d}_2$. We can see that another condition is needed: $M$ must be known. This requirement is necessary if both the velocity and the friction are unknown (see [78]). When we implement the high-gain observer (4.8), the observer error is less than $2\varepsilon^2 C_{i,T}$. Can we find the largest value of $\varepsilon$ that assures the closed-loop system is stable? The following theorem gives the answer to this question.

Theorem 4.6

If (a) the learning laws for the weights of neural networks (4.60) are used as in (4.67), (b) the gain of the observer satisfies

\[
\varepsilon < \frac{1}{2\,\eta_M\left\|P_1\begin{bmatrix} 0 & 0 \\ 0 & K_d \end{bmatrix}\right\|}
\tag{4.69}
\]

where $P_1$ is the solution of the following Lyapunov equation:
\[
A^T P_1 + P_1 A = -\delta I, \qquad \delta > 0
\tag{4.70}
\]

then we have:

(I) The weights of the neural networks $\hat{W}_t$, $\hat{V}_t$ and the observer and tracking errors $\tilde{z}$ and $\bar{x}$ are bounded.

(II) For any $T \in (0,\infty)$ the tracking and observer error $y = [\bar{x}, \tilde{z}]^T$ converges to the residual set
\[
D_y = \left\{y\;\middle|\;\|y\|_{R_1}^2 \le a + \frac{\lambda_{\min}^2(K_d)}{4b^2}\right\}
\]

where $a$ and $b$ are defined in (4.68), and $R_1$ is a positive definite matrix of the form $R_1 = \begin{bmatrix} R_{11} & R_{12} \\ R_{21} & R_{22} \end{bmatrix}$, where the $R_{ij}$ are positive constants.

Proof

Let us select the following candidate Lyapunov function for (4.66):

\[
V_2 = (1-d)V_1 + d\,\tilde{z}^T P_1\tilde{z}
\tag{4.71}
\]

where $0 < d < 1$ and $V_1$ is the Lyapunov function (4.49) used in Theorem 4.5. Since the PD control (4.63) is different from (4.62), we cannot apply $\dot{V}_1$ from Theorem 4.5 directly. From (4.66), we know $\dot{\tilde{z}} = \frac{1}{\varepsilon}A\tilde{z} + \varepsilon BH_2$. Because
\[
H_2 - H_2\big|_{\tilde{x}_2=0} = \frac{1}{\varepsilon}M^{-1}K_d\tilde{z}_2, \qquad
\dot{\bar{x}}_2 = \left[H_2\big|_{\tilde{x}_2=0} - \dot{x}_{2d}\right] + \left[H_2 - H_2\big|_{\tilde{x}_2=0}\right]
\]

the derivative of (4.71) is
\[
\begin{aligned}
\dot{V}_2 ={}& (1-d)\left[\bar{x}_2^T M\,H_2\big|_{\tilde{x}_2=0} + \tfrac{1}{2}\bar{x}_2^T\dot{M}\bar{x}_2 + \bar{x}_2^T K_p\bar{x}_1 + \operatorname{tr}\!\left(\tilde{W}_t^T K_w^{-1}\dot{\tilde{W}}_t\right) + \operatorname{tr}\!\left(\tilde{V}_t^T K_v^{-1}\dot{\tilde{V}}_t\right)\right] \\
&+ (1-d)\,\bar{x}_2^T K_d\tilde{x}_2 - \frac{d}{\varepsilon}\tilde{z}^T\tilde{z} + 2d\varepsilon\,\tilde{z}^T P_1 B H_2
\end{aligned}
\tag{4.72}
\]

Comparing (4.72) with (4.48), the first term on the right-hand side is the same as in (4.50), so
\[
\dot{V}_2 = (1-d)\left(\dot{V}_1 + \bar{x}_2^T K_d\tilde{x}_2\right) - \frac{d}{\varepsilon}\tilde{z}^T\tilde{z} + 2d\varepsilon\,\tilde{z}^T P_1 B H_2
\tag{4.73}
\]

From (4.65),

\[
2d\varepsilon\,\tilde{z}^T P_1 B H_2 = -2d\varepsilon\,\tilde{z}^T P_1 B M^{-1}\left(Cx_2 + K_p\bar{x}_1 + K_d\bar{x}_2 - K_d\tilde{x}_2\right) + 2d\varepsilon\,\tilde{z}^T P_1 B M^{-1}\left(\hat{W}\Phi(\hat{V}s) - W\Phi(Vs) - \tilde{P}\right)
\tag{4.74}
\]

The term $-\bar{x}_2^T M\dot{x}_{2d} - \bar{x}_2^T Cx_{2d}$ can be estimated as
\[
-\bar{x}_2^T M\dot{x}_{2d} - \bar{x}_2^T Cx_{2d} \le \bar{d}_2\,\bar{x}_2^T k_c\,\bar{x}_2 + k_c\,\bar{d}_2^2\,\|\bar{x}_2\| + \lambda_{\max}(M)\,\bar{d}_2\,\|\bar{x}_2\|
\]

$\Psi^T(W\nu_\sigma - \tilde{P})$ may be estimated as
\[
\Psi^T(W\nu_\sigma - \tilde{P}) \le \|\Psi\|\bar{\eta} + \frac{1}{4}\Psi^T\Lambda_1^{-1}\Psi + l\,\|\tilde{V}_t^T x\|_{\Lambda_3}^2
\tag{4.75}
\]

The term $d\varepsilon\,\tilde{z}^T P_1\left[C(x_1,x_2)\bar{x}_2\right]$ can be bounded as
\[
d\varepsilon\,\tilde{z}^T P_1\left[C(x_1,x_2)\bar{x}_2\right] \le 2d\varepsilon\,\tilde{z}^T P_1 C_0(x_1)\,\bar{x}_2
\]

So

\[
-2d\varepsilon\,\tilde{z}^T P_1\left[C(x_1,x_2)\bar{x}_2 + K_p\bar{x}_1 + K_d\bar{x}_2 - K_d\tilde{x}_2\right]
\le -2d\varepsilon\,\tilde{z}^T P_1\begin{bmatrix} 0 & 0 \\ K_p & K_d + 2C_0 \end{bmatrix}\bar{x} - d\,\tilde{z}^T P_1\begin{bmatrix} 0 & 0 \\ 0 & K_d \end{bmatrix}\tilde{z}
\]

(4.73) becomes

\[
\dot{V}_2 \le -y^T T y - (1-d)\,\|\bar{x}_2\|\left(\lambda_{\min}(K_d)\|\bar{x}_2\| - b\right) + a - y^T R_1 y + L_{w1} + L_{v1}
\tag{4.76}
\]

We want $T$ to be a positive definite matrix, i.e.,
\[
0 < R_{11} < (1-d)\alpha, \qquad R_{21} > \frac{(1-d)\beta}{2\varepsilon} + \frac{\varepsilon d k_2}{2}, \qquad
0 < R_{22} < d\left[\frac{1}{\varepsilon} - k_1\right], \qquad \frac{1}{\varepsilon} > k_1
\tag{4.77}
\]

Since $\frac{(1-d)\beta}{2\varepsilon} - \frac{\varepsilon dk_2}{2} - R_{12} < 0$, $R_{12}$ can be any positive constant. So if condition (4.69) is satisfied and $R_1$ is selected as in (4.77), then $y^T T y > 0$, and

\[
\dot{V}_2 \le \left(a + \frac{\lambda_{\min}^2(K_d)}{4b^2}\right) - y^T R_1 y + L_{w1} + L_{v1}
\tag{4.78}
\]

Since (4.78) has the same structure as in Theorem 4.5, (I) and (II) may be established by a similar proof. □

Remark 4.6

Since $y = [\bar{x}, \tilde{z}]^T$ is not measurable, the dead zone $d_t$ in (4.68) cannot be realized directly. We will use the available data $y' = \hat{x} - x_d$ to determine the dead zone. The dead zone can be represented as

\[
\|y\|_{R_1}^2 > a + \frac{\lambda_{\min}^2(K_d)}{4b^2} + \|y\|_{R_1}^2 - \|y'\|_{R_1}^2
\]

Because the difference between $y$ and $y'$ is due only to the observer error,
\[
\|y\|_{R_1}^2 - \|y'\|_{R_1}^2 \le \|\bar{x}\|_{R_1}^2 + \|\tilde{x}\|_{R_1}^2 - \|\bar{x}\|_{R_1}^2 + \|\tilde{z}\|_{R_1}^2 = \|\tilde{x}\|_{R_1}^2 + \|\tilde{z}\|_{R_1}^2 \le 4\varepsilon^2\lambda_{\max}(R_1)\,C_{i,T}
\]

So the dead zone is modified as
\[
d_t = \begin{cases} 0 & \text{if } \|y'\|_{R_1}^2 \le a + \dfrac{\lambda_{\min}^2(K_d)}{4b^2} + 4\varepsilon^2\lambda_{\max}(R_1)\,C_{i,T} \\[2mm] 1 & \text{if } \|y'\|_{R_1}^2 > a + \dfrac{\lambda_{\min}^2(K_d)}{4b^2} + 4\varepsilon^2\lambda_{\max}(R_1)\,C_{i,T} \end{cases}
\]

4.4 Simulation

To develop the simulations, a two-link planar robot manipulator is considered. It is assumed that each link has its mass concentrated as a point at its end. The manipulator operates in the vertical plane, with gravity and friction. The robot parameters are $m_1 = m_2 = 1$, $l_1 = 1$, $l_2 = 2$, the two friction coefficients are 0.3, and the gravitational acceleration is 9.8. So the real matrices of (4.1) are
\[
\begin{aligned}
M &= \begin{bmatrix} 4\cos(q_2) + 6 & 2\cos(q_2) + 4 \\ 2\cos(q_2) + 4 & 4 \end{bmatrix}, \qquad
C = \begin{bmatrix} -2\sin(q_2)\,\dot{q}_2 & -2\sin(q_2)(\dot{q}_1 + \dot{q}_2) \\ 2\sin(q_2)\,\dot{q}_1 & 0 \end{bmatrix}, \\
G &= \begin{bmatrix} 19.6\cos(q_1 + q_2) + 19.6\cos q_1 \\ 19.6\cos(q_1 + q_2) \end{bmatrix}, \qquad
F = \begin{bmatrix} 0.3 & 0 \\ 0 & 0.3 \end{bmatrix}
\end{aligned}
\]

where $q = [q_1, q_2]^T$. All data are given in the appropriate units.
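For reference, the two-link model above can be coded directly. The C matrix below follows the Christoffel-symbol construction of Property 4.1 for the given M, since the printed entries of C are not fully legible; everything else is taken from the matrices above.

```python
import numpy as np

# Two-link planar arm of Section 4.4: m1 = m2 = 1, l1 = 1, l2 = 2, g = 9.8,
# viscous friction 0.3 on both joints.
def M_mat(q):
    c2 = np.cos(q[1])
    return np.array([[4 * c2 + 6, 2 * c2 + 4],
                     [2 * c2 + 4, 4.0]])

def C_mat(q, dq):
    # Coriolis/centripetal matrix consistent with M via the Christoffel symbols.
    s2 = np.sin(q[1])
    return np.array([[-2 * s2 * dq[1], -2 * s2 * (dq[0] + dq[1])],
                     [ 2 * s2 * dq[0], 0.0]])

def G_vec(q):
    return np.array([19.6 * np.cos(q[0] + q[1]) + 19.6 * np.cos(q[0]),
                     19.6 * np.cos(q[0] + q[1])])

F = 0.3 * np.eye(2)

def forward_dynamics(q, dq, tau):
    """ddq from (4.1): M ddq = tau - C dq - G - F dq."""
    rhs = tau - C_mat(q, dq) @ dq - G_vec(q) - F @ dq
    return np.linalg.solve(M_mat(q), rhs)
```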

The following PD coefficients are chosen:

\[
K_p = \begin{bmatrix} 31 & 0 \\ 0 & 45 \end{bmatrix}, \qquad K_d = \begin{bmatrix} 60 & 0 \\ 0 & 80 \end{bmatrix}
\]

The matrices P and Q are selected as

\[
P = \begin{bmatrix} 5 & 0 & 0.5 & 0 \\ 0 & 5 & 0 & 0.5 \\ 0.5 & 0 & 1 & 0 \\ 0 & 0.5 & 0 & 1 \end{bmatrix}, \qquad
Q = \begin{bmatrix} 45 & 0 & 0 & 0 \\ 0 & 45 & 0 & 0 \\ 45 & 0 & 5.5 & 0 \\ 0 & 45 & 0 & 5.5 \end{bmatrix}
\]

Let us calculate the constants in Theorem 4.3: $\alpha_1 = \left\|\begin{bmatrix} 0 & 0 \\ 0 & K_d \end{bmatrix}\right\| = 80$, $\alpha_2 = \|Q\| = 45$,
\[
\begin{aligned}
K_1 &= \left\|P\begin{bmatrix} 0 & 0 \\ 0 & 2M(\bar{x}_1+x_{1d})^{-1}\left(F - K_d + C(\bar{x}_1+x_{1d},x_{2d})\right) \end{bmatrix}\right\| = 42.71 \\
K_2 &= \left\|P\begin{bmatrix} 0 & 0 \\ -M(\bar{x}_1+x_{1d})^{-1}K_p & -M(\bar{x}_1+x_{1d})^{-1}\left(K_d + C(\bar{x}_1+x_{1d},x_{2d})\right) \end{bmatrix}\right\| = 84.3242 \\
\beta_1 &= \left\|\begin{bmatrix} 0 & 0 \\ 0 & -F + K_d - C(\bar{x}_1+x_{1d},x_{2d}) \end{bmatrix}\right\| = 80.0288
\end{aligned}
\]

where $\|A\|$ denotes the absolute value of the real part of the largest eigenvalue of the matrix $A$. Then (4.19) becomes
\[
f(\varepsilon) = 3555.3\,d^2\varepsilon^4 + 6748.4\,(1-d)d\,\varepsilon^2 + 6745.9\,(1-d)\,\varepsilon + 3202.3\,(1-d)^2 - 3600\,(1-d)d
\tag{4.79}
\]

With d=0.5Image the polynomial (4.79) is shown in Fig. 4.2.

Figure 4.2 Polynomial of ε.

One can see that (4.79) is negative for $0 < \varepsilon < 0.0558309561787$. So $\varepsilon^\ast = 0.056$, i.e., $\Gamma = (0, 0.056)$. The high-gain observer (4.8) is implemented with $\varepsilon = 0.003$.

Fig. 4.3 and Fig. 4.4 show rapid convergence of the observer to the link velocities. The observer error is almost zero after this short interval. Fig. 4.3 shows that, with a velocity observer (high-gain observer), PD control can make the links follow the desired trajectory.

Figure 4.3 The velocities of the links.
Figure 4.4 The positions of the links.

The asymptotic stability assures that the desired positions are reached with zero error as $t \to \infty$. Rapid and accurate convergence of the observer is essential for the PD controller, because the observer is in the feedback loop. Since the high-gain observer (4.26) has these properties, we can use the simple controller (4.12) to compensate for the uncertainties in the observer response.

Fig. 4.5 and Fig. 4.6 show the performance of the high-gain observer when a perturbation arises at $t = 3$. The perturbation is a 20% increment of the second link's mass, intended to test the robustness of the observer-based PD control. Fig. 4.7 shows the tracking achieved by both links of the robot with the same perturbation at $t = 3$.

Figure 4.5 Links velocities with perturbation.
Figure 4.6 Links velocities with perturbation.
Figure 4.7 PD control for Link-1.

Since the observer and controller are independent of the robot dynamics, the influence of the perturbation is very small.

For the neural compensation experiment, the same PD coefficients are used:
\[
K_p = \begin{bmatrix} 31 & 0 \\ 0 & 45 \end{bmatrix}, \qquad K_d = \begin{bmatrix} 60 & 0 \\ 0 & 80 \end{bmatrix}
\]

Friction and gravity can be uniformly approximated by a radial basis function network as in (4.60) with $N = 10$. The Gaussian function is
\[
\phi_j(Vx) = \exp\left\{-\sum_{i=1}^{n}\frac{(Vx_i - c_i)^2}{100}\right\}
\]

where the spread σlImage was set to 50Image and the center ciImage is a random number between 0 and 1. The control law applied is

u = K p ( x 1 x 1 d ) K d ( x 2 x 2 d ) + W ˆ Φ ( V ˆ x )

Image

starting with all entries of $\hat{W}$ and $\hat{V}$ equal to 0.7 as initial values. Even though some initial weights are needed for the controller to work, no special values are required, nor is a previous investigation of the robot dynamics needed to implement this control. Fig. 4.7 and Fig. 4.8 compare the performance of the PD controller with and without neural compensation. The continuous line is the exact position, the dashed line is the general PD control without friction and gravity compensation, and the dash-dotted line is the PD control with neural network compensation.
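Putting the pieces together, a closed-loop run of this experiment can be sketched as below. The model definitions repeat the earlier dynamics sketch so the code stands alone; the desired trajectory, adaptation gain, and the gradient-style weight update are illustrative stand-ins rather than the chapter's exact law (4.46).

```python
import numpy as np

# Closed-loop sketch of PD control with RBF compensation for the Section 4.4 robot.
def M_mat(q):
    c2 = np.cos(q[1])
    return np.array([[4 * c2 + 6, 2 * c2 + 4], [2 * c2 + 4, 4.0]])

def C_mat(q, dq):
    s2 = np.sin(q[1])
    return np.array([[-2 * s2 * dq[1], -2 * s2 * (dq[0] + dq[1])],
                     [ 2 * s2 * dq[0], 0.0]])

def G_vec(q):
    return np.array([19.6 * np.cos(q[0] + q[1]) + 19.6 * np.cos(q[0]),
                     19.6 * np.cos(q[0] + q[1])])

F = 0.3 * np.eye(2)
Kp, Kd = np.diag([31.0, 45.0]), np.diag([60.0, 80.0])

rng = np.random.default_rng(0)
N = 10
W_hat, V_hat = 0.7 * np.ones((2, N)), 0.7 * np.ones((N, 4))   # initial weights 0.7
centers, spreads = rng.uniform(0.0, 1.0, N), 100.0 * np.ones(N)
Kw = 5.0                                                      # adaptation gain (placeholder)

dt, T = 1e-3, 10.0
q, dq = np.zeros(2), np.zeros(2)
for k in range(int(T / dt)):
    t = k * dt
    qd = np.array([np.sin(t), np.cos(t)])                     # placeholder desired trajectory
    dqd = np.array([np.cos(t), -np.sin(t)])

    x = np.concatenate([q, dq])                               # network input (one possible choice)
    phi = np.exp(-((V_hat @ x - centers) ** 2) / spreads)     # Gaussian hidden layer
    tau = -Kp @ (q - qd) - Kd @ (dq - dqd) + W_hat @ phi      # PD + RBF compensation
    W_hat += dt * (-Kw * np.outer(dq - dqd, phi))             # gradient-style stand-in for (4.46)

    ddq = np.linalg.solve(M_mat(q), tau - C_mat(q, dq) @ dq - G_vec(q) - F @ dq)
    q, dq = q + dt * dq, dq + dt * ddq

print("final position error:", q - np.array([np.sin(T), np.cos(T)]))
```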

Figure 4.8 PD control for Link-2.

We can see that PD control with the RBF network compensator improves the control performance.

4.5 Conclusions

The disadvantages of the popular PD control are overcome in the following two ways: (I) a high-gain observer is proposed to estimate the joint velocities; (II) an RBF neural network is used to compensate gravity and friction. Using singular perturbation and Lyapunov-like analyses, we prove the convergence of the high-gain observer and the stability of the closed-loop system with the velocity estimation and the RBF compensator.

Bibliography

[50] S. Haykin, Neural Networks – A Comprehensive Foundation. New York: Macmillan College Publ. Co.; 1994.

[64] S. Jagannathan, F.L. Lewis, Identification of nonlinear dynamical systems using multilayered neural networks, Automatica 1996;32(12):1707–1712.

[71] R. Kelly, Global positioning of robot manipulators via PD control plus a class of nonlinear integral actions, IEEE Trans. Autom. Control 1998;43(7):934–938.

[74] H.K. Khalil, Adaptive output feedback control of nonlinear systems represented by input–output models, IEEE Trans. Autom. Control 1996;41(2):177–188.

[78] Y.H. Kim, F.L. Lewis, Neural network output feedback control of overhead crane manipulator, IEEE Trans. Neural Netw. 1999;15:301–309.

[85] F.L. Lewis, K. Liu, A. Yesildirek, Neural net robot controller with guaranteed tracking performance, IEEE Trans. Neural Netw. 1995;6(3):703–715.

[86] F.L. Lewis, D.M. Dawson, C.T. Abdallah, Robot Manipulator Control: Theory and Practice. New York, NY 10016: Marcel Dekker, Inc.; 2004.

[100] S. Nicosia, A. Tornambe, High-gain observers in the state and parameter estimation of robots having elastic joints, Syst. Control Lett. 1989;13:331–337.

[120] Ali Saberi, Hassan Khalil, Quadratic-type Lyapunov functions for singularly perturbed systems, IEEE Trans. Autom. Control June 1984;AC-29(6).

[131] M.W. Spong, M. Vidyasagar, Robot Dynamics and Control. Canada: John Wiley & Sons Inc.; 1989.

[153] W. Yu, J. Rosen, A novel linear PID controller for an upper limb exoskeleton, 49th IEEE Conference on Decision and Control, CDC'10. Atlanta, USA. 2010:3548–3553.
