6

Fundamental approaches to control system analysis

Abstract

This chapter surveys the fundamental approaches to control system analysis, including the polynomial matrix description theory of linear multivariable control systems, behavioral approach in systems theory, and the chain-scattering representations.

Keywords

Control system analysis; Polynomial matrix description; Linear multivariable control systems; Behavioral approach; Chain-scattering representations

The main purpose of this introductory chapter is to familiarize the reader with such basic theories of control system structure and behavior as the polynomial matrix description (PMD) theory, behavioral theory, and the chain-scattering representation (CSR) approach. This chapter also serves to prepare for the presentation of our results in the succeeding chapters.

6.1 PMD theory of linear multivariable control systems

The main aim of this section is to briefly introduce the background and preliminary results in PMD theory that are needed in the sequel of this book. Regarding the related issues of the determination of the finite and infinite frequency structure of a rational matrix and of the resolvent decomposition and solution of regular PMDs, this book will present its contributions in Chapters 7–10. The main references for the following introduction are Rosenbrock [5], Vardulakis [13], and Kailath [32].

The initial aim [5] of PMD theory was to translate various time domain results of Kalman’s state space theory into a powerful algebraic language by using the existing results in matrix theory. This led to a better understanding of the mathematical structure of linear multivariable systems by generalizing the classical single-input single-output transfer function approach to the multivariable case. It finally resulted in various synthesis techniques for multivariable feedback systems. PMD theory has by now been established as a very successful and still developing area of linear multivariable control system theory.

Throughout this book, $\mathbb{R}[s]^{p\times q}$ denotes the set of $p\times q$ polynomial matrices, while $\mathbb{R}(s)^{p\times q}$ denotes the set of $p\times q$ rational matrices. Regular PMDs, or linear nonhomogeneous matrix differential equations (LNHMDEs), are described by

$$\Sigma:\quad A(\rho)\beta(t)=B(\rho)u(t),\qquad y(t)=C(\rho)\beta(t)+D(\rho)u(t),\qquad t\ge 0,\tag{6.1}$$

where $\rho:=d/dt$ is the differential operator, $A(\rho)\in\mathbb{R}[\rho]^{r\times r}$ with $\operatorname{rank}_{\mathbb{R}[\rho]}A(\rho)=r$, $B(\rho)\in\mathbb{R}[\rho]^{r\times m}$, $C(\rho)\in\mathbb{R}[\rho]^{p\times r}$, $D(\rho)\in\mathbb{R}[\rho]^{p\times m}$; $\beta(t):[0,+\infty)\to\mathbb{R}^r$ is the pseudo-state of the PMD, $u(t):[0,+\infty)\to\mathbb{R}^m$ is a $p$ times piecewise continuously differentiable function called the input, and $y(t):[0,+\infty)\to\mathbb{R}^p$ is the output vector of the PMD.

Taking the Laplace transform of Eq. (6.1) and assuming “zero initial conditions,” i.e., that

$$u^{(i)}(0)=0,\qquad \beta^{(i)}(0)=0,\qquad y^{(i)}(0)=0,\qquad i=0,1,2,\ldots,$$

Eq. (6.1) can be written as

$$A(s)\hat{\beta}(s)=B(s)\hat{u}(s),\qquad \hat{y}(s)=C(s)\hat{\beta}(s)+D(s)\hat{u}(s),\tag{6.2}$$

where $\hat{\beta}(s):=\mathcal{L}\{\beta(t)\}$, $\hat{u}(s):=\mathcal{L}\{u(t)\}$, and $\hat{y}(s):=\mathcal{L}\{y(t)\}$ are the Laplace transforms of $\beta(t)$, $u(t)$, and $y(t)$, respectively. From Eq. (6.2) we have

$$\hat{y}(s)=\left[C(s)A(s)^{-1}B(s)+D(s)\right]\hat{u}(s).\tag{6.3}$$

Definition 6.1.1

The rational matrix

$$H(s):=C(s)A(s)^{-1}B(s)+D(s)\in\mathbb{R}(s)^{p\times m}$$

is called the transfer function matrix of the system $\Sigma$.
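As a concrete illustration of Definition 6.1.1, the transfer function matrix of a small PMD can be computed symbolically. The following sketch uses SymPy; the matrices $A(s)$, $B(s)$, $C(s)$, $D(s)$ are an illustrative choice of our own, not taken from the text.

```python
import sympy as sp

s = sp.symbols('s')

# Illustrative PMD data (our own toy example):
# A(s) beta = B(s) u,  y = C(s) beta + D(s) u
A = sp.Matrix([[s + 1, 0], [1, s + 2]])
B = sp.Matrix([[1], [s]])
C = sp.Matrix([[1, 1]])
D = sp.Matrix([[0]])

# Transfer function matrix H(s) = C(s) A(s)^{-1} B(s) + D(s)
H = sp.simplify(C * A.inv() * B + D)
print(H)  # Matrix([[(s + 1)/(s + 2)]])
```

Note that the common factor $(s+1)$ cancels in the product, so the transfer function can hide part of the internal (pseudo-state) structure of the PMD, which is one motivation for working with the PMD itself.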

A polynomial matrix $T(s)\in\mathbb{R}[s]^{p\times p}$ is called $\mathbb{R}[s]$-unimodular, or simply unimodular, if there exists a matrix $\hat{T}(s)\in\mathbb{R}[s]^{p\times p}$ such that $T(s)\hat{T}(s)=I_p$, where $I_p$ denotes the $p\times p$ identity matrix; equivalently, if $\det T(s)=c\in\mathbb{R}$, $c\neq 0$.

Definition 6.1.2

The degree of a polynomial matrix $T(s)\in\mathbb{R}[s]^{p\times m}$, denoted by $\deg T(s)$, is defined as the maximum degree among the degrees of all its maximum order (nonzero) minors.

Corollary 6.1.1

If T(s) is a nonsingular square polynomial matrix, then

$$\deg T(s)=\deg(\det(T(s))),$$

where $\det(\cdot)$ denotes the determinant of the indicated matrix.

Corollary 6.1.2

$T(s)\in\mathbb{R}[s]^{p\times p}$ is unimodular if and only if $\deg T(s)=0$.

Two polynomial matrices $T_1(s),T_2(s)\in\mathbb{R}[s]^{p\times m}$ are called equivalent if there exist unimodular matrices $T_L(s)\in\mathbb{R}[s]^{p\times p}$, $T_R(s)\in\mathbb{R}[s]^{m\times m}$ such that

$$T_L(s)T_1(s)T_R(s)=T_2(s).$$

Any rational matrix is equivalent to its Smith-McMillan form, which is a canonical form of a matrix.

Theorem 6.1.1

([5] Smith-McMillan form of a rational matrix in $\mathbb{C}$)

Let $T(s)\in\mathbb{R}(s)^{p\times m}$ with $\operatorname{rank}_{\mathbb{R}(s)}T(s)=r$, $r\le\min\{p,m\}$. Then $T(s)$ is equivalent to a diagonal matrix $S^{\mathbb{C}}_{T(s)}(s)$ having the form

$$S^{\mathbb{C}}_{T(s)}(s):=\operatorname{block\,diag}\left[\frac{\varepsilon_1(s)}{\psi_1(s)},\frac{\varepsilon_2(s)}{\psi_2(s)},\ldots,\frac{\varepsilon_r(s)}{\psi_r(s)},0_{p-r,m-r}\right],\tag{6.4}$$

where $\varepsilon_i(s),\psi_i(s)\in\mathbb{R}[s]$ are monic and coprime and such that $\varepsilon_i(s)$ divides $\varepsilon_{i+1}(s)$ and $\psi_{i+1}(s)$ divides $\psi_i(s)$, $i=1,2,\ldots,r-1$.

If $T(s)\in\mathbb{R}[s]^{p\times m}$, then $\psi_i(s)=1$, $\forall i\le r$; that is, $S^{\mathbb{C}}_{T(s)}(s)$ is also a polynomial matrix, and it is called the Smith form of $T(s)$. Otherwise, i.e., if $T(s)$ is nonpolynomial, $\psi_i(s)$ is nonconstant for some $i$; then $S^{\mathbb{C}}_{T(s)}(s)$ is also nonpolynomial, and it is called the Smith-McMillan form of $T(s)$.

The zeros of $T(s)$ are defined as the zeros of the polynomials $\varepsilon_i(s)$, $i\le r$. The poles of $T(s)$ are defined as the zeros of the polynomials $\psi_i(s)$, $i\le r$.
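For a polynomial matrix, the invariant polynomials $\varepsilon_i(s)$ of the Smith form can be computed directly from the classical gcd-of-minors characterization $\varepsilon_i=D_i/D_{i-1}$, where $D_i$ is the monic gcd of all $i\times i$ minors ($D_0:=1$). The following is a minimal, unoptimized SymPy sketch; the helper name and the example matrix are our own.

```python
import sympy as sp
from itertools import combinations

s = sp.symbols('s')

def invariant_factors(T):
    """eps_i(s) = D_i(s)/D_{i-1}(s), where D_i(s) is the monic gcd of
    all i x i minors of the polynomial matrix T(s), with D_0 := 1."""
    r = T.rank()
    eps, D_prev = [], sp.Integer(1)
    for i in range(1, r + 1):
        Di = sp.Integer(0)
        for rows in combinations(range(T.rows), i):
            for cols in combinations(range(T.cols), i):
                Di = sp.gcd(Di, T.extract(list(rows), list(cols)).det())
        Di = sp.monic(Di, s)            # normalize the gcd to be monic
        eps.append(sp.cancel(Di / D_prev))
        D_prev = Di
    return eps

# Example: eps_1 = s divides eps_2 = s*(s + 1), as Theorem 6.1.1 requires
T = sp.Matrix([[s, s], [0, s*(s + 1)]])
print(invariant_factors(T))
```

Enumerating all minors is exponential in the matrix size; it is shown here only because it follows the definition literally.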

Vardulakis et al. [87] introduced the concept of the Smith-McMillan form at infinity of a rational matrix. The main definitions are briefly presented here.

Definition 6.1.3

$T(s)\in\mathbb{R}(s)^{p\times m}$ will be called proper if $\lim_{s\to\infty}T(s)$ exists. If the limit is zero, then $T(s)$ will be called strictly proper, while if this limit is nonzero, $T(s)$ will be called exactly proper.

Let $\mathbb{R}_{pr}(s)$ denote the ring of proper rational functions.
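Definition 6.1.3 can be checked mechanically with symbolic limits. A small sketch follows; the classifier and the example matrices are our own.

```python
import sympy as sp

s = sp.symbols('s')

def classify_at_infinity(T):
    """Classify a rational matrix as 'strictly proper', 'exactly proper',
    or 'improper' from L = lim_{s->oo} T(s) (Definition 6.1.3)."""
    L = T.applyfunc(lambda e: sp.limit(e, s, sp.oo))
    if any(v in (sp.oo, -sp.oo, sp.zoo) for v in L):
        return 'improper'                       # the limit does not exist
    return 'strictly proper' if L.is_zero_matrix else 'exactly proper'

print(classify_at_infinity(sp.Matrix([[1/(s + 1)]])))        # strictly proper
print(classify_at_infinity(sp.Matrix([[(s + 1)/(s + 2)]])))  # exactly proper
print(classify_at_infinity(sp.Matrix([[s**2/(s + 1)]])))     # improper
```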

Definition 6.1.4

The $m\times m$ rational matrix $W(s)\in\mathbb{R}_{pr}(s)^{m\times m}$ is said to be biproper if and only if

1. $\lim_{s\to\infty}W(s)=W\in\mathbb{R}^{m\times m}$ exists, and

2. $\det W\neq 0$.

The $p\times m$ rational matrices $T_1(s)$ and $T_2(s)$ are said to be equivalent at infinity if there exist biproper matrices $W(s)\in\mathbb{R}_{pr}(s)^{p\times p}$, $V(s)\in\mathbb{R}_{pr}(s)^{m\times m}$ such that

$$W(s)T_1(s)V(s)=T_2(s).$$

Since $W(s)$ and $V(s)$ are biproper, it can be seen from Definition 6.1.4 that they possess neither poles nor zeros at infinity. It therefore follows that $T_1(s)$ and $T_2(s)$ have an identical pole-zero structure at infinity. A canonical form for a rational matrix under equivalence at infinity is its Smith-McMillan form at infinity, $S^{\infty}_{T(s)}(s)$.

Theorem 6.1.2

(Smith-McMillan form of a rational matrix at $s=\infty$)

Let $T(s)\in\mathbb{R}(s)^{p\times m}$ with $\operatorname{rank}_{\mathbb{R}(s)}T(s)=r$. Then $T(s)$ is equivalent at $s=\infty$ to a diagonal matrix having the form

$$S^{\infty}_{T(s)}(s)=\operatorname{block\,diag}\left[s^{q_1},s^{q_2},\ldots,s^{q_k},\frac{1}{s^{\hat{q}_{k+1}}},\ldots,\frac{1}{s^{\hat{q}_r}},0_{p-r,m-r}\right],\tag{6.5}$$

where $1\le k\le r$ and

$$q_1\ge q_2\ge\cdots\ge q_k\ge 0,$$

$$\hat{q}_r\ge\hat{q}_{r-1}\ge\cdots\ge\hat{q}_{k+1}>0$$

are respectively the orders of its poles and zeros at $s=\infty$.

A reduction approach for computing the Smith-McMillan form at $s=\infty$ of a rational matrix is suggested in [87].

Any rational function can be represented as a ratio of coprime polynomials; this representation can be generalized to the matrix case.

Definition 6.1.5

Let $T(s)\in\mathbb{R}(s)^{p\times m}$ with $\operatorname{rank}_{\mathbb{R}(s)}T(s)=r$, $1\le r\le\min\{p,m\}$. Then there exist nonunique pairs $A_1(s)\in\mathbb{R}[s]^{p\times p}$, $B_1(s)\in\mathbb{R}[s]^{p\times m}$ left coprime, and $B_2(s)\in\mathbb{R}[s]^{p\times m}$, $A_2(s)\in\mathbb{R}[s]^{m\times m}$ right coprime, such that

$$T(s)=A_1(s)^{-1}B_1(s)=B_2(s)A_2(s)^{-1}.\tag{6.6}$$

A representation of a rational matrix T(s) given by Eq. (6.6) is called a left (right) coprime polynomial matrix fraction description (MFD) of T(s).

Let $A(s)\in\mathbb{R}[s]^{r\times r}$, $\operatorname{rank}_{\mathbb{R}(s)}A(s)=r$. Then by Theorem 6.1.2, $A(s)$ is equivalent at $s=\infty$ to its Smith-McMillan form $S^{\infty}_{A(s)}(s)$ having the form

$$S^{\infty}_{A(s)}(s)=\operatorname{diag}\left[s^{q_1},s^{q_2},\ldots,s^{q_k},\frac{1}{s^{\hat{q}_{k+1}}},\ldots,\frac{1}{s^{\hat{q}_r}}\right],\tag{6.7}$$

where $1\le k\le r$ and

$$q_1\ge q_2\ge\cdots\ge q_k\ge 0,\qquad \hat{q}_r\ge\hat{q}_{r-1}\ge\cdots\ge\hat{q}_{k+1}>0.$$

If $A(s)$ has at least one zero at $s=\infty$, let

$$A(s)^{-1}=H_ks^k+H_{k-1}s^{k-1}+\cdots+H_1s+H_0+H_{-1}s^{-1}+H_{-2}s^{-2}+\cdots\tag{6.8}$$

be the Laurent expansion of $A(s)^{-1}$ at $s=\infty$, where $k>0$, $H_k\neq 0$. Then one has

$$k=\hat{q}_r.\tag{6.9}$$
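For a quick illustration of Eqs. (6.8), (6.9), take the unimodular matrix $A(s)=\begin{bmatrix}1&s\\0&1\end{bmatrix}$ (an example of our own): since $\det A(s)=1$, its pole of order 1 at $s=\infty$ is matched by a zero of order $\hat{q}=1$ there, and the Laurent expansion of $A(s)^{-1}$ at infinity indeed terminates at the first power of $s$.

```python
import sympy as sp

s = sp.symbols('s')

# Unimodular A(s) with one zero of order 1 at s = infinity (our example)
A = sp.Matrix([[1, s], [0, 1]])
Ainv = A.inv()                     # [[1, -s], [0, 1]] = H1*s + H0

# k in Eq. (6.8) is the highest positive power of s in A(s)^{-1}
k = max(sp.degree(e, s) for e in Ainv if e != 0)
print(k)  # 1, consistent with k = q_hat_r in Eq. (6.9)
```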

Let

$$A(s)=A_0+A_1s+\cdots+A_{q_1}s^{q_1}\in\mathbb{R}^{r\times r}[s],\tag{6.10}$$

let $n:=\deg\det(A(s))$, and let $\lambda_0\in\mathbb{C}$ be such that $\det(A(\lambda_0))=0$. The sequence of $r$-dimensional vectors $x_0,x_1,\ldots,x_m$ $(x_0\neq 0)$ for which the following equalities hold

$$\sum_{i=0}^{q}\frac{1}{i!}A^{(i)}(\lambda_0)x_{q-i}=0,\qquad q=0,1,2,\ldots,m,$$

is called a Jordan Chain of length (m + 1) for A(s) corresponding to λ0. Now let

$$S^{\mathbb{C}}_{A(s)}(s)=\operatorname{diag}\left[1,1,\ldots,1,f_k(s),f_{k+1}(s),\ldots,f_r(s)\right],$$

$1\le k\le r$, be the Smith form of $A(s)$, where

$$f_j(s)\ \text{divides}\ f_{j+1}(s),\qquad j=k,k+1,\ldots,r-1.$$

Let

$$f_j(s)=\prod_{i=1}^{l}(s-\lambda_i)^{\sigma_{ij}},\qquad j=k,k+1,\ldots,r,$$

be the decomposition of the invariant polynomials $f_j(s)$ into irreducible elementary divisors over $\mathbb{C}$, i.e., assume that

$$\det(A(s))=0$$

has $l$ distinct zeros

$$\lambda_1,\lambda_2,\ldots,\lambda_l\in\mathbb{C},$$

where

$$0\le\sigma_{ik}\le\sigma_{i,k+1}\le\cdots\le\sigma_{ir},\qquad i\in I:=\{1,2,\ldots,l\},$$

are the partial multiplicities of the eigenvalues λi. Let

$$x^{i}_{j0},x^{i}_{j1},\ldots,x^{i}_{j,\sigma_{ij}-1}\in\mathbb{R}^{r}\quad(x^{i}_{j0}\neq 0),$$

$$i\in I,\qquad j=k,k+1,\ldots,r,$$

be the Jordan chains of lengths $\sigma_{ij}$ corresponding to the eigenvalue $\lambda_i$ of $A(s)$, and consider the matrices

$$C_i=\left[x^{i}_{k,0},\ldots,x^{i}_{k,\sigma_{ik}-1},\ldots,x^{i}_{r,0},\ldots,x^{i}_{r,\sigma_{ir}-1}\right]\in\mathbb{R}^{r\times m_i},$$

where $m_i=\sum_{j=k}^{r}\sigma_{ij}$, $i\in I$, and

$$J_i=\operatorname{block\,diag}\left[J_{ik},J_{i,k+1},\ldots,J_{ir}\right]\in\mathbb{R}^{m_i\times m_i},\qquad i\in I,$$

where

$$J_{ij}=\begin{bmatrix}\lambda_i&1&0&\cdots&0\\0&\lambda_i&1&\cdots&0\\\vdots&&\ddots&\ddots&\vdots\\0&0&\cdots&\lambda_i&1\\0&0&\cdots&0&\lambda_i\end{bmatrix}\in\mathbb{R}^{\sigma_{ij}\times\sigma_{ij}},\qquad i\in I,\quad j=k,k+1,\ldots,r.$$

Definition 6.1.6

The matrix pair (C, J) with

$$C=[C_1,C_2,\ldots,C_l]\in\mathbb{R}^{r\times n},$$

$$J=\operatorname{block\,diag}\left[J_1,J_2,\ldots,J_l\right]\in\mathbb{R}^{n\times n},$$

where $n:=m_1+m_2+\cdots+m_l=\deg\det(A(s))$, is called the finite Jordan pair of $A(s)$.

Proposition 6.1.1

([17])

The matrices C, J satisfy

$$A_{q_1}CJ^{q_1}+A_{q_1-1}CJ^{q_1-1}+\cdots+A_0C=0,$$

$$Q_n=\begin{bmatrix}C\\CJ\\\vdots\\CJ^{n-1}\end{bmatrix}\in\mathbb{R}^{rn\times n},\qquad \operatorname{rank}Q_n=n.$$
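Proposition 6.1.1 is easy to verify on a small example. For $A(s)=A_0+A_1s$ with $A_0=\begin{bmatrix}0&0\\-1&0\end{bmatrix}$, $A_1=I$ (so $\det A(s)=s^2$), the eigenvalue $\lambda=0$ has the single Jordan chain $x_0=(0,1)$, $x_1=(1,0)$; the example and the chain computation are our own.

```python
import sympy as sp

# A(s) = A0 + A1*s with det A(s) = s^2 (illustrative example, ours)
A0 = sp.Matrix([[0, 0], [-1, 0]])
A1 = sp.eye(2)

# Finite Jordan pair for lambda = 0: one chain x0 = (0,1), x1 = (1,0)
C = sp.Matrix([[0, 1], [1, 0]])    # columns are the chain vectors
J = sp.Matrix([[0, 1], [0, 0]])    # single 2x2 Jordan block for lambda = 0

# Proposition 6.1.1: A1*C*J + A0*C = 0 and Q_n = [C; C*J] has rank n = 2
residual = A1 * C * J + A0 * C
Qn = sp.Matrix.vstack(C, C * J)
print(residual, Qn.rank())
```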

Definition 6.1.7

Let $T(s)\in\mathbb{R}_{pr}(s)^{p\times m}$; then a quadruple of matrices $(A,B,C,E)$, with $A\in\mathbb{R}^{n\times n}$, such that

$$T(s)=C(sI_n-A)^{-1}B+E$$

is called a realization of the proper rational matrix T(s).

Definition 6.1.8

A realization $(A,B,C,E)$ of $T(s)\in\mathbb{R}_{pr}(s)^{p\times m}$ is called a minimal realization of $T(s)$ if

$$n=\delta_M(T(s)),$$

where δM(T(s)) is the McMillan degree of T(s).

Definition 6.1.9

Let $T(s)\in\mathbb{R}[s]^{p\times m}$. Then a triple of matrices $C\in\mathbb{R}^{p\times\mu}$, $A\in\mathbb{R}^{\mu\times\mu}$, $B\in\mathbb{R}^{\mu\times m}$ such that

$$T(s)=C(I_\mu-sA)^{-1}B\tag{6.11}$$

is called a realization of the polynomial matrix $T(s)$.

A realization of $T(s)\in\mathbb{R}[s]^{p\times m}$ can always be obtained from a realization of the strictly proper rational matrix $\tilde{T}(w):=(1/w)T(1/w)\in\mathbb{R}_{pr}(w)^{p\times m}$. Indeed, if $C\in\mathbb{R}^{p\times\mu}$, $A\in\mathbb{R}^{\mu\times\mu}$, $B\in\mathbb{R}^{\mu\times m}$ is a realization of $\tilde{T}(w)$, i.e., if

$$\frac{1}{w}T\!\left(\frac{1}{w}\right)=C(wI_\mu-A)^{-1}B,\tag{6.12}$$

then Eq. (6.12) under the substitution $1/w=s$ gives Eq. (6.11).
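The passage between Eqs. (6.11) and (6.12) can be checked symbolically on the scalar example $T(s)=1+2s$ (our own choice), using the nilpotent $A$ of a realization of the dual $\tilde{T}(w)$:

```python
import sympy as sp

s, w = sp.symbols('s w')

# Realize the polynomial matrix T(s) = 1 + 2s (scalar, our example)
# via its strictly proper dual T~(w) = (1/w) T(1/w) = 1/w + 2/w**2.
A = sp.Matrix([[0, 1], [0, 0]])   # nilpotent
B = sp.Matrix([[1], [2]])
C = sp.Matrix([[1, 0]])

# Check the dual realization (6.12): (1/w) T(1/w) = C (w I - A)^{-1} B
lhs = sp.simplify((1/w) * (1 + 2*(1/w)))
rhs = sp.simplify((C * (w*sp.eye(2) - A).inv() * B)[0, 0])
print(sp.simplify(lhs - rhs))  # 0

# Substituting 1/w = s gives the polynomial realization (6.11)
T = sp.expand((C * (sp.eye(2) - s*A).inv() * B)[0, 0])
print(T)  # 2*s + 1
```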

Proposition 6.1.2

(Vardulakis [13])

Let

$$T(s)=T_0+T_1s+\cdots+T_{q_1}s^{q_1}\in\mathbb{R}^{p\times m}[s],\qquad \operatorname{rank}_{\mathbb{R}(s)}T(s)=r,$$

with Smith-McMillan form at infinity

$$S^{\infty}_{T(s)}(s)=\operatorname{block\,diag}\left[s^{q_1},s^{q_2},\ldots,s^{q_k},\frac{1}{s^{\hat{q}_{k+1}}},\ldots,\frac{1}{s^{\hat{q}_r}},0_{p-r,m-r}\right],$$

where $q_1\ge\cdots\ge q_k\ge 0$ and $\hat{q}_r\ge\cdots\ge\hat{q}_{k+1}>0$ are respectively the orders of its poles and zeros at $s=\infty$. Then

1. The McMillan degree of $\tilde{T}(w):=(1/w)T(1/w)\in\mathbb{R}_{pr}(w)^{p\times m}$ is given by

$$\mu=\delta_M\!\left(\frac{1}{w}T\!\left(\frac{1}{w}\right)\right)=\sum_{i=1}^{k}(q_i+1)=q_1+q_2+\cdots+q_k+k.$$

2. If $(C,J,B)$ is a minimal realization of $T(s)$ with $J\in\mathbb{R}^{\mu\times\mu}$ in Jordan form, then

$$J=\operatorname{block\,diag}\left[J_1,J_2,\ldots,J_k\right]\in\mathbb{R}^{\mu\times\mu},$$

where

$$J_i=\begin{bmatrix}0&1&0&\cdots&0\\0&0&1&\cdots&0\\\vdots&&\ddots&\ddots&\vdots\\0&0&\cdots&0&1\\0&0&\cdots&0&0\end{bmatrix}\in\mathbb{R}^{(q_i+1)\times(q_i+1)},\qquad i=1,2,\ldots,k.$$

Crucial to the issue of the solution of regular PMDs, which we will discuss later, is the concept of an infinite Jordan pair of a regular polynomial matrix, i.e., a Jordan pair $C_\infty$, $J_\infty$ which corresponds to its zeros at $s=\infty$.

Definition 6.1.10

([17])

Let $A(s)=A_0+A_1s+\cdots+A_{q_1}s^{q_1}\in\mathbb{R}[s]^{r\times r}$, $\operatorname{rank}_{\mathbb{R}(s)}A(s)=r$, $q_1\ge 1$. Then a pair

$$C_\infty\in\mathbb{R}^{r\times v},\qquad J_\infty=\operatorname{block\,diag}\left[J_{\infty 1},J_{\infty 2},\ldots,J_{\infty\zeta}\right]\in\mathbb{R}^{v\times v},$$

$$J_{\infty i}=\begin{bmatrix}0&1&0&\cdots&0\\0&0&1&\cdots&0\\\vdots&&\ddots&\ddots&\vdots\\0&0&\cdots&0&1\\0&0&\cdots&0&0\end{bmatrix}\in\mathbb{R}^{v_i\times v_i},\qquad i=1,2,\ldots,\zeta,$$

$v:=\sum_{i=1}^{\zeta}v_i$, $v_i,\zeta\in\mathbb{Z}^{+}$, is called an infinite Jordan pair of $A(s)$ if it is a (finite) Jordan pair of the “dual” polynomial matrix

$$\tilde{A}(w):=w^{q_1}A\!\left(\frac{1}{w}\right)=A_0w^{q_1}+A_1w^{q_1-1}+\cdots+A_{q_1}\in\mathbb{R}[w]^{r\times r}$$

corresponding to its zero at w = 0, or equivalently if and only if (by Proposition 6.1.1)

$$A_0C_\infty J_\infty^{q_1}+A_1C_\infty J_\infty^{q_1-1}+\cdots+A_{q_1}C_\infty=0,$$

and

$$\operatorname{rank}Q_v=\operatorname{rank}\begin{bmatrix}C_\infty\\C_\infty J_\infty\\\vdots\\C_\infty J_\infty^{v-1}\end{bmatrix}=v.$$

Regarding the resolvent decomposition of a regular polynomial matrix, which is closely related to the solution of regular PMDs, Vardulakis [13] gave the following result.

Theorem 6.1.3

Let $A(s)=A_0+A_1s+\cdots+A_{q_1}s^{q_1}\in\mathbb{R}[s]^{r\times r}$, $\operatorname{rank}_{\mathbb{R}(s)}A(s)=r$, $q_1\ge 1$, with Smith-McMillan form at $s=\infty$, $S^{\infty}_{A(s)}(s)$, given by Eq. (6.7). Write

$$A^{-1}(s)=H_{pol}(s)+H_{sp}(s),$$

where $H_{pol}(s)\in\mathbb{R}[s]^{r\times r}$ and $H_{sp}(s)\in\mathbb{R}_{pr}(s)^{r\times r}$ is strictly proper.

Let $n=\deg(\det(A(s)))=\delta_M(H_{sp}(s))$ and $v=\sum_{i=k+1}^{r}(\hat{q}_i+1)$. Let $C\in\mathbb{R}^{r\times n}$, $J\in\mathbb{R}^{n\times n}$, $B\in\mathbb{R}^{n\times r}$ be a minimal realization of $H_{sp}(s)$ and $C_\infty\in\mathbb{R}^{r\times v}$, $J_\infty\in\mathbb{R}^{v\times v}$, $B_\infty\in\mathbb{R}^{v\times r}$ be a minimal realization of $H_{pol}(s)$. Then $(C,J)$ is a finite Jordan pair of $A(s)$ and $(C_\infty,J_\infty)$ is an infinite Jordan pair of $A(s)$. Furthermore, $A(s)^{-1}$ can be written as

$$A(s)^{-1}=C(sI_n-J)^{-1}B+C_\infty(I_v-sJ_\infty)^{-1}B_\infty.\tag{6.13}$$
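Theorem 6.1.3 can be verified on a toy example. For $A(s)=\begin{bmatrix}s&1\\0&1\end{bmatrix}$ we have $A(s)^{-1}=H_{sp}(s)+H_{pol}(s)$ with $H_{sp}=\begin{bmatrix}1/s&-1/s\\0&0\end{bmatrix}$ and $H_{pol}=\begin{bmatrix}0&0\\0&1\end{bmatrix}$; realizing each part as in Definitions 6.1.7 and 6.1.9 and summing recovers $A(s)^{-1}$. The example and the chosen realizations are our own.

```python
import sympy as sp

s = sp.symbols('s')

# Illustrative regular A(s) (our example)
A = sp.Matrix([[s, 1], [0, 1]])

# Finite part: minimal realization (C, J, B) of the strictly proper H_sp
C, J, B = sp.Matrix([[1], [0]]), sp.Matrix([[0]]), sp.Matrix([[1, -1]])
# Infinite part: minimal realization (Coo, Joo, Boo) of the polynomial H_pol
Coo, Joo, Boo = sp.Matrix([[0], [1]]), sp.Matrix([[0]]), sp.Matrix([[0, 1]])

# Resolvent decomposition:
# A(s)^{-1} = C (s I - J)^{-1} B + Coo (I - s Joo)^{-1} Boo
lhs = A.inv()
rhs = C * (s*sp.eye(1) - J).inv() * B + Coo * (sp.eye(1) - s*Joo).inv() * Boo
print(sp.simplify(lhs - rhs))  # zero matrix
```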

6.2 Behavioral approach in systems theory

The purpose of this section is to briefly introduce behavioral theory. Based on these preliminary results, we will present a new approach, realization of behavior, in Chapter 12. The main references for the following introduction are [26, 27, 88, 89].

Both the transfer function and the state space approaches view a system as a signal processor that accepts inputs and transforms them into outputs. In the transfer function approach, this processor is described through the way in which exponential inputs are transformed into exponential outputs. In the state space approach, this processor involves the state as an intermediate variable, but the ultimate aim remains to describe how inputs lead to outputs. This input-output point of view has played an important role in system control theory. However, the starting point of behavioral theory is fundamentally different. As claimed in [27, 28], this starting point is better suited to modeling and more suitable for actual applications in certain circumstances.

In the behavioral approach a mathematical model is viewed as a subset of a universum of possibilities. Before one accepts a mathematical model as a description of reality, all outcomes in the universum are in principle possible. After one accepts the mathematical model as a convenient description, one can declare that only outcomes in a certain subset are possible. This subset is called the behavior of the mathematical model. Starting from this perspective, one arrives at the notion of a dynamical system as simply a subset of time-trajectories, as a family of time signals taking on values in a suitable signal space.

It is in terms of the time trajectories of a specific system that all the concepts in behavioral theory are put forward. Linear time-invariant differential systems have such a nice structure that they fall immediately within the scope of the behavioral approach. When one has a set of variables that can be described by such a system, there is a transparent way of describing how trajectories in the behavior are generated. Some of the variables, it turns out, are free, i.e., unconstrained. They can thus be viewed as unexplained by the model and imposed on the system by the environment. These variables are called inputs. Once these variables are determined, the remaining variables, called outputs, are not yet completely specified, because the possible trajectories depend on the past history of the system. This means that the outputs are still dependent on the initial conditions of the system. To formulate this relationship between the outputs, the inputs, and the initial conditions of the system, one thus has to use the concept of state.

When one models an interconnected physical system, auxiliary variables, in addition to the variables modeled, will unavoidably appear in the model. In order to distinguish them from the manifest variables, which are the variables whose behavior the model aims to describe, these auxiliary variables are called latent variables. The interaction between manifest and latent variables is one of the themes in this book, and a new approach, realization of behavior, will be presented to expose this interaction. In [27] it was shown how to eliminate latent variables and how to introduce state variables. Thus a system of linear differential equations containing latent variables can be transformed into an equivalent system in which these latent variables have been eliminated.

The basic idea in our approach of realization of behavior is, however, to find an ARMA representation for a given frequency behavior description such that the known frequency behavior is completely recovered as the corresponding dynamical behavior. From this point of view, realization of behavior is a converse procedure to the above latent variable elimination process. Such a realization approach is believed to be highly significant in modeling dynamical systems in cases where the system behavior is conveniently described in the frequency domain. Since no numerical computation is required by the procedure, the realization of behavior is believed to be particularly suitable for situations in which the coefficients are symbolic rather than numerical.

Definition 6.2.1

([89])

Let $\mathbb{U}$ be a universum, $\mathbb{E}$ a set, and $f_1,f_2:\mathbb{U}\to\mathbb{E}$. The mathematical model $(\mathbb{U},\mathcal{B})$ with $\mathcal{B}=\{u\in\mathbb{U}\mid f_1(u)=f_2(u)\}$ is said to be described by behavioral equations and is denoted by $(\mathbb{U},\mathbb{E},f_1,f_2)$. The set $\mathbb{E}$ is called the equating space. We also call $(\mathbb{U},\mathbb{E},f_1,f_2)$ a behavioral equation representation of $(\mathbb{U},\mathcal{B})$.

Latent variables appear frequently in system modeling practice, for which examples abound and are provided in [27, 89]. The need to use latent variables is recognized from the situations where for mathematical reasons they are unavoidably involved in expressing the basic laws in the modeling process. For example, state variables are needed in system theory in order to express the memory of a dynamical system, internal voltages and currents are needed in electrical circuits in order to express the external port behavior, momentum is needed in Hamiltonian mechanics in order to describe the evolution of the position, prices are needed in economics in order to explain the production and exchange of economic goods, etc.

Definition 6.2.2

([89])

A mathematical model with latent variables is defined as a triple $(\mathbb{U},\mathbb{U}_l,\mathcal{B}_f)$ with $\mathbb{U}$ the universum of manifest variables, $\mathbb{U}_l$ the universum of latent variables, and $\mathcal{B}_f\subseteq\mathbb{U}\times\mathbb{U}_l$ the full behavior. It defines the manifest mathematical model $(\mathbb{U},\mathcal{B})$ with $\mathcal{B}:=\{u\in\mathbb{U}\mid \exists\, l\in\mathbb{U}_l\ \text{such that}\ (u,l)\in\mathcal{B}_f\}$; $\mathcal{B}$ is called the manifest behavior (or the external behavior) or simply the behavior. We call $(\mathbb{U},\mathbb{U}_l,\mathcal{B}_f)$ a latent variable representation of $(\mathbb{U},\mathcal{B})$.

Now, by applying the above ideas, the following basic description of dynamical systems can be set up in the language of behavioral theory.

Definition 6.2.3

([27])

A dynamical system Σ is defined as a triple

$$\Sigma=(\mathbb{T},\mathbb{W},\mathcal{B}),$$

with $\mathbb{T}$ a subset of $\mathbb{R}$, called the time axis, $\mathbb{W}$ a set called the signal space, and $\mathcal{B}$ a subset of $\mathbb{W}^{\mathbb{T}}$ called the behavior.

Now consider the following class of dynamical systems with latent variables

$$R\!\left(\frac{d}{dt}\right)w=M\!\left(\frac{d}{dt}\right)l,\tag{6.14}$$

where $w:\mathbb{R}\to\mathbb{R}^q$ is the trajectory of the manifest variables, whereas $l:\mathbb{R}\to\mathbb{R}^d$ is the trajectory of the latent variables. The equating space is $\mathbb{R}^g$, and the behavioral equations are parameterized by the two polynomial matrices $R(\xi)\in\mathbb{R}^{g\times q}[\xi]$ and $M(\xi)\in\mathbb{R}^{g\times d}[\xi]$.

The question of the elimination of latent variables is: what sort of behavioral equation does Eq. (6.14) imply about the manifest variable w alone? In particular, can the relations imposed on the manifest variable w by the full behavioral equations (6.14) themselves be written as a system of differential equations? In other words, the question is whether or not the set

$$\mathcal{B}=\left\{w\in L_1^{loc}(\mathbb{R},\mathbb{R}^q)\ \middle|\ \exists\, l\in L_1^{loc}(\mathbb{R},\mathbb{R}^d)\ \text{such that}\ R\!\left(\frac{d}{dt}\right)w=M\!\left(\frac{d}{dt}\right)l\ \text{weakly}\right\}\tag{6.15}$$

can be written as the (weak) solution set of a system of linear differential equations. This question is very important in situations where one has to introduce additional variables in order to obtain a model of the relation between certain variables in a system of some complexity; after one proposes this model, those auxiliary variables in which one is not interested may be eliminated by manipulating the equations (model).

An appealing and insightful answer to the above question is the following latent variable elimination procedure [27].

Theorem 6.2.1

Denote the full behavior of Eq. (6.14) by

$$\mathcal{B}_f=\left\{(w,l)\in L_1^{loc}(\mathbb{R},\mathbb{R}^q\times\mathbb{R}^d)\ \middle|\ R\!\left(\frac{d}{dt}\right)w=M\!\left(\frac{d}{dt}\right)l\ \text{weakly}\right\}.\tag{6.16}$$

Let the unimodular matrix $U(\xi)\in\mathbb{R}^{g\times g}[\xi]$ be such that

$$U(\xi)M(\xi)=\begin{bmatrix}M''(\xi)\\0\end{bmatrix},\qquad U(\xi)R(\xi)=\begin{bmatrix}R'(\xi)\\R''(\xi)\end{bmatrix},\tag{6.17}$$

with $M''(\xi)$ of full row rank. Then the $C^\infty$ part of the manifest behavior $\mathcal{B}$, denoted by $\mathcal{B}\cap C^\infty(\mathbb{R},\mathbb{R}^q)$, with $\mathcal{B}$ given by Eq. (6.15), consists of the $C^\infty$ solutions of

$$R''\!\left(\frac{d}{dt}\right)w=0.\tag{6.18}$$
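A minimal instance of Theorem 6.2.1 (our own toy model): take $w_1=l$, $w_2=dl/dt$, i.e. $R(\xi)=I_2$ and $M(\xi)=[1,\ \xi]^T$. Eliminating the latent variable $l$ should leave the single manifest law $w_2=dw_1/dt$.

```python
import sympy as sp

xi = sp.symbols('xi')

# Full behavior (our toy model): w1 = l, w2 = dl/dt, i.e. R(d/dt)w = M(d/dt)l
R = sp.eye(2)
M = sp.Matrix([[1], [xi]])

# Unimodular U(xi) bringing M to [M''; 0] with M'' = [1] of full row rank
U = sp.Matrix([[1, 0], [-xi, 1]])
assert U * M == sp.Matrix([[1], [0]])

# The rows of U*R below M'' give R''(xi); here R'' = [-xi, 1], so the
# manifest behavior is -dw1/dt + w2 = 0, i.e. w2 = dw1/dt.
Rpp = (U * R)[1:, :]
print(Rpp)  # Matrix([[-xi, 1]])
```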

6.3 Chain-scattering representations

The main aim of this section is to briefly introduce some background and preliminary results on the CSR, which has been widely used in circuit theory, signal processing, and $H_\infty$ control theory. In Chapters 11 and 12 we will present some generalizations of these results. The main references for the following introduction are [21, 22, 90].

There are two basic interconnections widely used in circuit theory and signal processing: one is the series-parallel interconnection, which is shown in Fig. 6.1; the other is the cascade connection, which is shown in Fig. 6.2.

Consider two well-defined n-ports in Fig. 6.1, both having the same numbers of series ports on the one hand, and of shunt ports on the other, and designate by H1 and H2 their hybrid matrices. It has been proved [21] that if the shunt ports are paralleled (port by port) and if the series ports are connected in series, the resulting n-port has the hybrid matrix

Fig. 6.1 Parallel connections.

$$H=H_1+H_2.\tag{6.19}$$

Generally one comes to the conclusion that impedance matrices (when they exist) add up for series connections at all ports, and admittance matrices add up for parallel connections.

In addition to the above series-parallel interconnection, it is often convenient to consider a cascade (or chain) connection of two subnetworks which is shown in Fig. 6.2, where the output ports of the first subnetwork are identical to the input ports of the second subnetwork. Let xa, xb, and xc be the electrical variables at the input ports of the first subnetwork, at the interconnected ports, and at the output ports of the second subnetwork, respectively. If the equations of the first subnetwork can be written into

Fig. 6.2 Cascade interconnection.

$$x_a=K_1x_b,\tag{6.20}$$

and the equations of the second subnetwork into a similar form

$$x_b=K_2x_c,\tag{6.21}$$

the elimination of internal variables in the interconnected subnetwork is immediate, and the final equations are obtained as

$$x_a=K_1K_2x_c.\tag{6.22}$$

The above equations thus describe the input-output relationship in the cascade connection.

If one writes Eq. (6.20) explicitly as

$$\begin{bmatrix}v_a\\i_a\end{bmatrix}=\begin{bmatrix}A&B\\C&D\end{bmatrix}\begin{bmatrix}v_b\\-i_b\end{bmatrix}:=K\begin{bmatrix}v_b\\-i_b\end{bmatrix},\tag{6.23}$$

the output vector xb (with a sign change in ib) of Eq. (6.23) is the input vector to a second subnetwork in cascade with the first. The matrix K appearing in Eq. (6.23) is naturally called the chain matrix of the 2n-port, and one has the following theorem.

Theorem 6.3.1

([21])

The chain matrix of a cascade connection of 2n-ports is the product of the individual chain matrices in the order of connection.

The above cascade structure is the most salient characteristic of the chain matrix. In terms of control, the chain matrix represents feedback simply as a multiplication of matrices. This property makes the analysis of closed-loop systems very simple and makes the role of factorization clearer. Based on this, Kimura [22] brought forward the use of the chain-scattering matrix in control system design. Another remarkable property of the CSR is the symmetry (duality) between the CSR and its inverse. This property is also regarded as quite relevant to control system design.

A serious disadvantage of the CSR is, however, that it only exists for the special plants that satisfy certain full rank conditions. In order to obtain the CSR of general plants that do not satisfy the full rank conditions, one is forced to augment the plants. Such augmentation is, however, irrelevant to and has no implication for control system design.

This book will present a generalization of the CSR to the case of general plants. Through the notion of input-output consistency, the conditions under which the generalized CSR and the dual generalized CSR exist will be proposed. The generalized chain-scattering matrices will be formulated into a general parameterized form by using the generalized inverse of matrices. The algebraic system properties such as the cascade structure and the symmetry (duality) property of this approach will be exploited completely.

Consider a plant P (Fig. 6.3) with two kinds of inputs (w, u) and two kinds of outputs (z, y) represented by

Fig. 6.3 Input-output representation.

$$\begin{bmatrix}z\\y\end{bmatrix}=P\begin{bmatrix}w\\u\end{bmatrix}=\begin{bmatrix}P_{11}&P_{12}\\P_{21}&P_{22}\end{bmatrix}\begin{bmatrix}w\\u\end{bmatrix},\tag{6.24}$$

where the $P_{ij}$ are rational matrices with dimensions $m_i\times k_j$ $(i,j=1,2)$.

If P21 is invertible, then one has the CSR of P as

$$\begin{bmatrix}z\\w\end{bmatrix}=\mathrm{CHAIN}(P)\begin{bmatrix}u\\y\end{bmatrix},\tag{6.25}$$

where

$$\mathrm{CHAIN}(P)=\begin{bmatrix}P_{12}-P_{11}P_{21}^{-1}P_{22}&P_{11}P_{21}^{-1}\\-P_{21}^{-1}P_{22}&P_{21}^{-1}\end{bmatrix}.\tag{6.26}$$
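Eq. (6.26) translates directly into a small numerical helper; the sanity check below (with random data of our own) confirms that CHAIN(P) maps (u, y) to (z, w) consistently with the plant equations (6.24).

```python
import numpy as np

def chain(P11, P12, P21, P22):
    """CHAIN(P) of Eq. (6.26); requires P21 square and invertible."""
    P21inv = np.linalg.inv(P21)
    return np.block([[P12 - P11 @ P21inv @ P22, P11 @ P21inv],
                     [-P21inv @ P22,            P21inv]])

# Sanity check (our own): CHAIN(P) maps (u, y) to (z, w)
rng = np.random.default_rng(0)
P11, P12, P21, P22 = (rng.standard_normal((2, 2)) for _ in range(4))
G = chain(P11, P12, P21, P22)

w, u = rng.standard_normal(2), rng.standard_normal(2)
z = P11 @ w + P12 @ u          # plant equations (6.24)
y = P21 @ w + P22 @ u
zw = G @ np.concatenate([u, y])
print(np.allclose(zw, np.concatenate([z, w])))  # True
```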

If P represents a usual input-output relation of a system, CHAIN(P) represents the characteristics of power ports, which in turn reflect the physical structure of the plant. The CSR describes the plant as a wave scatterer between the (u, z)-wave and the (w, y)-wave, which travel oppositely to each other (Fig. 6.4).

Fig. 6.4 Chain-scattering representation.

The main reason for using the CSR lies in its ability to represent a feedback connection as a cascade one. The cascade connection of two CSRs, G1 and G2, is actually a feedback connection because loops exist across the two systems. The resulting CSR is simply the product G1G2 of the two representations. This property thus greatly simplifies the analysis and synthesis of feedback connections. This can be seen by eliminating the intermediate variables (z1, w1) from the following relations:

$$\begin{bmatrix}z\\w\end{bmatrix}=G_1\begin{bmatrix}z_1\\w_1\end{bmatrix},\qquad \begin{bmatrix}z_1\\w_1\end{bmatrix}=G_2\begin{bmatrix}z_2\\w_2\end{bmatrix}.\tag{6.27}$$

If Gi = CHAIN(Pi), i = 1, 2, the cascade connection represents the feedback connection represented in Fig. 6.5. This connection is also termed a star product in Redheffer [91]. The use of CSR simply represents this connection by the product of the two individual representations.

Fig. 6.5 Cascade property of chain-scattering representation.

Another interesting property of the CSR is that its inverse (if it exists) is dually represented as

$$H=(\mathrm{CHAIN}(P))^{-1}=\begin{bmatrix}P_{12}^{-1}&-P_{12}^{-1}P_{11}\\P_{22}P_{12}^{-1}&P_{21}-P_{22}P_{12}^{-1}P_{11}\end{bmatrix}.\tag{6.28}$$

The representation (6.28) exists if P12 is invertible. It is called the dual CSR of P and is denoted by

$$\mathrm{DCHAIN}(P)=\begin{bmatrix}P_{12}^{-1}&-P_{12}^{-1}P_{11}\\P_{22}P_{12}^{-1}&P_{21}-P_{22}P_{12}^{-1}P_{11}\end{bmatrix}.\tag{6.29}$$
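The inverse relation in Eq. (6.28) can be confirmed numerically: when both P12 and P21 are invertible, DCHAIN(P) CHAIN(P) = I. A sketch with random data of our own:

```python
import numpy as np

def chain(P11, P12, P21, P22):
    """CHAIN(P) of Eq. (6.26); requires P21 invertible."""
    i21 = np.linalg.inv(P21)
    return np.block([[P12 - P11 @ i21 @ P22, P11 @ i21],
                     [-i21 @ P22,            i21]])

def dchain(P11, P12, P21, P22):
    """DCHAIN(P) of Eq. (6.29); requires P12 invertible."""
    i12 = np.linalg.inv(P12)
    return np.block([[i12,       -i12 @ P11],
                     [P22 @ i12, P21 - P22 @ i12 @ P11]])

# When both P12 and P21 are invertible, DCHAIN(P) = CHAIN(P)^{-1}
rng = np.random.default_rng(1)
P = [rng.standard_normal((2, 2)) for _ in range(4)]
print(np.allclose(dchain(*P) @ chain(*P), np.eye(4)))  # True
```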

The duality between CHAIN(P) and DCHAIN(P) is expressed in the following identity:

$$\begin{bmatrix}0&I\\-I&0\end{bmatrix}\mathrm{CHAIN}(P^{T})\begin{bmatrix}0&-I\\I&0\end{bmatrix}=\left[\mathrm{DCHAIN}(P)\right]^{T}.\tag{6.30}$$

Now, we look at the realizations of the CSR and the dual CSRs. Let

$$P=\left[\begin{array}{c|cc}A&B_1&B_2\\\hline C_1&D_{11}&D_{12}\\C_2&D_{21}&D_{22}\end{array}\right]\tag{6.31}$$

be a state-space realization of the plant P. In order to obtain the realization of the CSR and the dual CSRs, one may write out the following state equation of the plant P explicitly

$$\dot{x}=Ax+B_1w+B_2u,\tag{6.32}$$

$$z=C_1x+D_{11}w+D_{12}u,\tag{6.33}$$

$$y=C_2x+D_{21}w+D_{22}u.\tag{6.34}$$

In order that state space representations of CHAIN(P) and DCHAIN(P) exist, one must assume that $D_{21}^{-1}$ and $D_{12}^{-1}$ exist. In that case, one can solve Eq. (6.34) for w, yielding

$$w=D_{21}^{-1}(-C_2x-D_{22}u+y).\tag{6.35}$$

By substituting this relation into Eqs. (6.32), (6.33), one has a realization of CHAIN(P) as

$$\mathrm{CHAIN}(P)=\left[\begin{array}{c|cc}A-B_1D_{21}^{-1}C_2&B_2-B_1D_{21}^{-1}D_{22}&B_1D_{21}^{-1}\\\hline C_1-D_{11}D_{21}^{-1}C_2&D_{12}-D_{11}D_{21}^{-1}D_{22}&D_{11}D_{21}^{-1}\\-D_{21}^{-1}C_2&-D_{21}^{-1}D_{22}&D_{21}^{-1}\end{array}\right].\tag{6.36}$$

Similarly, DCHAIN(P) is given by

$$\mathrm{DCHAIN}(P)=\left[\begin{array}{c|cc}A-B_2D_{12}^{-1}C_1&B_2D_{12}^{-1}&B_1-B_2D_{12}^{-1}D_{11}\\\hline -D_{12}^{-1}C_1&D_{12}^{-1}&-D_{12}^{-1}D_{11}\\C_2-D_{22}D_{12}^{-1}C_1&D_{22}D_{12}^{-1}&D_{21}-D_{22}D_{12}^{-1}D_{11}\end{array}\right].\tag{6.37}$$
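The realization of CHAIN(P) obtained by substituting Eq. (6.35) into Eqs. (6.32), (6.33) can be sanity-checked numerically: evaluated at a test frequency, it must agree with CHAIN applied to the plant transfer matrix P(s0) built from Eq. (6.31). Dimensions and data below are our own toy choices.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3                                   # state dimension (toy size, ours)
A  = rng.standard_normal((n, n))
B1 = rng.standard_normal((n, 1)); B2 = rng.standard_normal((n, 1))
C1 = rng.standard_normal((1, n)); C2 = rng.standard_normal((1, n))
D11, D12, D21, D22 = (rng.standard_normal((1, 1)) for _ in range(4))

i21 = np.linalg.inv(D21)
# Realization of CHAIN(P): substitute w = D21^{-1}(-C2 x - D22 u + y)
# into the state and z equations (6.32)-(6.33).
Ak = A - B1 @ i21 @ C2
Bk = np.hstack([B2 - B1 @ i21 @ D22, B1 @ i21])
Ck = np.vstack([C1 - D11 @ i21 @ C2, -i21 @ C2])
Dk = np.block([[D12 - D11 @ i21 @ D22, D11 @ i21],
               [-i21 @ D22,            i21]])

def tf(A, B, C, D, s):
    return D + C @ np.linalg.inv(s*np.eye(len(A)) - A) @ B

# Compare with CHAIN applied to P(s0) at a test frequency s0
s0 = 1.7
P = tf(A, np.hstack([B1, B2]), np.vstack([C1, C2]),
       np.block([[D11, D12], [D21, D22]]), s0)
P11, P12, P21, P22 = P[:1, :1], P[:1, 1:], P[1:, :1], P[1:, 1:]
p21 = np.linalg.inv(P21)
G_direct = np.block([[P12 - P11 @ p21 @ P22, P11 @ p21],
                     [-p21 @ P22,            p21]])
print(np.allclose(tf(Ak, Bk, Ck, Dk, s0), G_direct))  # True
```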

6.4 Conclusions

This chapter has briefly introduced certain fundamental approaches to control system analysis, specifically the PMD theory, the behavioral approach, and the CSR. Based on these, this book will present its main contributions in the following chapters. In PMD theory, our interest will be focused on such fundamental issues as the determination of the finite and infinite frequency structure of a rational matrix, the analysis of the resolvent decomposition of a regular polynomial matrix, and formulations of the solution of a regular PMD.

Concerning the chain-scattering representation, this book will provide a generalization of the known approach. Such a generalized CSR is believed to be useful in circuit theory and $H_\infty$ control theory. Related to behavioral theory, this book will present a new notion: realization of behavior. Realization of behavior is a converse procedure to the latent variable elimination theorem [27]. Finally, two key notions in control system analysis, well-posedness and internal stability, will be discussed.

References

[5] Rosenbrock H.H. State Space and Multivariable Theory. London: Nelson; 1970.

[13] Vardulakis A.I.G. Linear Multivariable Control: Algebraic Analysis and Synthesis Methods. New York: Wiley; 1991.

[17] Gohberg I., Lancaster P., Rodman L. Matrix Polynomials. New York: Academic Press; 1982.

[21] Belevitch V. Classical Network Theory. San Francisco: Holden-Day; 1968.

[22] Kimura H. Chain-scattering representation, J-lossless factorization and $H_\infty$ control. J. Math. Syst. Estimation Control. 1995;5:203–255.

[26] Willems J.C. From time series to linear system: Part 1. Finite dimensional linear time invariant systems: Part 2. Exact modeling: Part 3: Approximate modelling. Automatica. 1986;22:561–580.

[27] Willems J.C. Paradigms and puzzles in the theory of dynamical systems. IEEE Trans. Autom. Control. 1991;36(3):259–294.

[28] Antoulas A.C., Willems J.C. A behavioural approach to linear exact modeling. IEEE Trans. Autom. Control. 1993;38(12):1776–1800.

[32] Kailath T. Linear Systems. Englewood Cliffs, NJ: Prentice-Hall; 1980.

[87] Vardulakis A.I.G., Limebeer D., Karcanias N. Structure and Smith-McMillan form of a rational matrix at infinity. Int. J. Control. 1982;35(4):701.

[88] Willems J.C. On interconnections, control, and feedback. IEEE Trans. Autom. Control. 1997;42(3):326–339.

[89] Polderman J.W., Willems J.C. Introduction to Mathematical Systems Theory: A Behavioural Approach. Texts in Applied Mathematics, vol. 26. New York: Springer; 1997.

[90] Kimura H. Chain-scattering Approach to H-infinity Control. Boston: Birkhäuser; 1997.

[91] Redheffer R.M. On a certain linear fractional transformation. J. Math. Phys. 1960;39(1):269–286.

