4.6 Poles and zeros

The connections between a coprime polynomial fraction description for a strictly proper rational transfer function G(s) and minimal realizations of G(s) can be used to define notions of poles and zeros of G(s) that generalize the familiar notions for scalar transfer functions. In addition, we characterize these concepts in terms of response properties of a minimal realization of G(s). For the corresponding discrete-time results, some translation and modification are required.

Given coprime polynomial fraction descriptions

G(s) = N(s)D^{-1}(s) = D_L^{-1}(s)N_L(s),    (4.68)

it follows from Theorem 4.3.5 that the polynomials det D(s) and det D_L(s) have the same roots. Furthermore, from Theorem 4.2.4 it is clear that these roots are the same for every coprime polynomial fraction description. This permits the introduction of terminology in terms of either a right or left polynomial fraction description, though we adhere to convention and use right fractions.

Definition 4.6.1

Suppose G(s) is a strictly proper rational transfer function. A complex number s0 is called a pole of G(s) if det D(s0) = 0, where N(s)D−1(s) is a coprime right polynomial fraction description for G(s). The multiplicity of a pole s0 is the multiplicity of s0 as a root of the polynomial det D(s).

This terminology is compatible with customary usage in the m = p = 1 case. Specifically, if s0 is a pole of G(s), then some entry Gij(s) is such that |Gij(s)| → ∞ as s → s0. Conversely, if some entry of G(s) has infinite magnitude when evaluated at the complex number s0, then s0 is a pole of G(s). A linear state equation with transfer function G(s) is uniformly bounded-input, bounded-output stable if and only if all poles of G(s) have negative real parts, that is, all roots of det D(s) have negative real parts.
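For a concrete check of Definition 4.6.1 and the stability condition above, the following sketch computes the poles of a hypothetical 2 × 2 right fraction as the roots of det D(s). It assumes NumPy; the matrix D(s) is made-up illustrative data, not an example from the text.

```python
import numpy as np

# Hypothetical coprime right fraction G(s) = N(s) D^{-1}(s) with
#   D(s) = [[ s + 3 ,   1   ],
#           [   0   , s + 2 ]]
# Polynomials are NumPy coefficient arrays, highest power first.
D = [[np.array([1.0, 3.0]), np.array([1.0])],
     [np.array([0.0]),      np.array([1.0, 2.0])]]

# For the 2 x 2 case, det D(s) = D11(s) D22(s) - D12(s) D21(s).
det_D = np.polysub(np.polymul(D[0][0], D[1][1]),
                   np.polymul(D[0][1], D[1][0]))

poles = np.roots(det_D)                       # poles of G(s), with multiplicity
print("poles:", poles)                        # here: -3 and -2
print("BIBO stable:", bool(np.all(poles.real < 0)))
```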

The relation between eigenvalues of A in the linear state equation (4.43) and poles of the corresponding transfer function

G(s) = C(sI - A)^{-1}B

is a crucial feature in some of our arguments. Writing G(s) in terms of a coprime right polynomial fraction description gives

N(s) adj D(s) / det D(s) = C[adj(sI - A)]B / det(sI - A).    (4.69)

Using Lemma 4.5.1, Eq. (4.69) reveals that if s0 is a pole of G(s) with multiplicity σ0, then s0 is an eigenvalue of A with multiplicity at least σ0. But simple single-input, single-output examples confirm that multiplicities can be different, and in particular an eigenvalue of A might not be a pole of G(s). The remedy for this displeasing situation is to assume Eq. (4.43) is controllable and observable. Then Eq. (4.69) shows that, since the denominator polynomials are identical up to a constant multiplier, the set of poles of G(s) is identical to the set of eigenvalues of a minimal realization of G(s).
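The eigenvalue/pole discrepancy for a nonminimal realization is easy to exhibit numerically. The sketch below assumes SciPy and uses hypothetical state-space data in which the second state is neither driven by the input nor seen at the output; scipy.signal.ss2tf then displays the common factor that cancels, so one eigenvalue of A is not a pole of G(s).

```python
import numpy as np
from scipy import signal

# Nonminimal single-input, single-output realization: the second state is
# uncontrollable and unobservable, so its eigenvalue cannot appear as a pole.
A = np.array([[-1.0,  0.0],
              [ 0.0, -2.0]])
B = np.array([[1.0],
              [0.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

num, den = signal.ss2tf(A, B, C, D)     # C adj(sI - A) B and det(sI - A)
print("eigenvalues of A:", np.linalg.eigvals(A))   # -1 and -2
print("numerator coefficients:  ", num[0])         # s + 2
print("denominator coefficients:", den)            # (s + 1)(s + 2)
# The factor (s + 2) cancels, so G(s) = 1/(s + 1): the eigenvalue -2 of A
# is not a pole of G(s).
```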

This discussion leads to an interpretation of a pole of a transfer function in terms of zero-input response properties of a minimal realization of the transfer function.

Theorem 4.6.1

Suppose the linear state equation (4.43) is controllable and observable. Then the complex number s0 is a pole of

G(s) = C(sI - A)^{-1}B

if and only if there exists a complex n × 1 vector x0 and a complex p × 1 vector y0≠0 such that

Ce^{At}x_0 = y_0 e^{s_0 t},  t ≥ 0.    (4.70)

Proof

If s0 is a pole of G(s), then s0 is an eigenvalue of A. With x0 an eigenvector of A corresponding to the eigenvalue s0, we have

e^{At}x_0 = e^{s_0 t}x_0.

This easily gives Eq. (4.70), where y0 = Cx0 is nonzero by the observability of Eq. (4.43).

On the other hand if Eq. (4.70) holds, then taking Laplace transforms gives

C(sI - A)^{-1}x_0 = y_0(s - s_0)^{-1}

or,

(s - s_0)C[adj(sI - A)]x_0 = y_0 det(sI - A).    (4.71)

Evaluating this at s = s0 shows that, since y0 ≠ 0, det(s0I - A) = 0. Therefore s0 is an eigenvalue of A and, by minimality of the state equation, a pole of G(s).

Of course if s0 is a real pole of G(s), then Eq. (4.70) directly gives a corresponding zero-input response property of minimal realizations of G(s). If s0 is complex, then the real initial state x0 + x̄0 gives an easily computed real response that can be written as a product of an exponential with exponent (Re [s0])t and a sinusoid with frequency Im [s0].
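Equation (4.70) is also easy to confirm numerically for a real pole: with x0 an eigenvector of A for the eigenvalue s0 and y0 = Cx0, the zero-input response Ce^{At}x0 should equal y0 e^{s0 t}. The sketch below assumes NumPy and SciPy; A and C are hypothetical data chosen so that (A, C) is observable, which makes y0 nonzero.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical observable pair (A, C); the eigenvalues of A are -1 and -2 (real).
A = np.array([[ 0.0,  1.0],
              [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])

evals, evecs = np.linalg.eig(A)
s0, x0 = evals[0], evecs[:, 0]          # a real eigenvalue and its eigenvector
y0 = C @ x0                             # nonzero by observability

for t in np.linspace(0.0, 2.0, 5):
    lhs = C @ expm(A * t) @ x0          # zero-input response from x(0) = x0
    rhs = y0 * np.exp(s0 * t)           # y0 e^{s0 t}
    assert np.allclose(lhs, rhs)
print("Eq. (4.70) holds at the sampled times")
```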

The concept of a zero of a transfer function is more delicate. For a scalar function G(s) with coprime numerator and denominator polynomials, a zero is a complex number s0 such that G(s0) = 0. Evaluations of a scalar G(s) at particular complex numbers can result in a zero or nonzero complex value, or can be undefined (at a pole). These possibilities multiply for multiinput, multioutput systems, where a corresponding notion of a zero is a complex s0 where the matrix G(s0) “loses rank.”

To carefully define the concept of a zero, the underlying assumption we make is that rank G(s) = min[m, p] for almost all complex values of s. (By “almost all” we mean “all but a finite number.”) In particular, at poles of G(s) at least one entry of G(s) is ill-defined, and so poles are among those values of s ignored when checking rank. (Another phrasing of this assumption is that G(s) is assumed to have rank min[m, p] over the field of rational functions, a more sophisticated terminology that we do not further employ.) Now consider coprime polynomial fraction descriptions

G(s) = N(s)D^{-1}(s) = D_L^{-1}(s)N_L(s)    (4.72)

for G(s). Since both D(s) and D_L(s) are nonsingular polynomial matrices, assuming rank G(s) = min[m, p] for almost all complex values of s is equivalent to assuming rank N(s) = min[m, p] for almost all complex values of s, and also equivalent to assuming rank N_L(s) = min[m, p] for almost all complex values of s. The agreeable feature of polynomial fraction descriptions is that N(s) and N_L(s) are well-defined for all values of s. Either right or left polynomial fractions can be adopted as the basis for defining transfer-function zeros.

Definition 4.6.2

Suppose G(s) is a strictly proper rational transfer function with rank G(s) = min[m, p] for almost all complex numbers s. A complex number s0 is called a transmission zero of G(s) if rank N(s0) < min[m, p], where N(s)D−1(s) is any coprime right polynomial fraction description for G(s).

This reduces to the customary definition in the single-input, single-output case. But a look at multiinput, multioutput examples reveals subtleties in the concept of transmission zero.
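As one concrete computation in the square case m = p, where rank N(s0) < m exactly when det N(s0) = 0, the sketch below finds the transmission zeros as the roots of det N(s) and confirms the rank drop at each. NumPy is assumed, and the 2 × 2 numerator N(s) is made-up illustrative data.

```python
import numpy as np

# Hypothetical 2 x 2 numerator from a coprime right fraction N(s) D^{-1}(s):
#   N(s) = [[ s + 1 ,   1   ],
#           [  -2   , s + 4 ]]
N = [[np.array([1.0, 1.0]), np.array([1.0])],
     [np.array([-2.0]),     np.array([1.0, 4.0])]]

det_N = np.polysub(np.polymul(N[0][0], N[1][1]),
                   np.polymul(N[0][1], N[1][0]))
zeros = np.roots(det_N)                       # transmission zeros: -3 and -2
print("transmission zeros:", zeros)

def N_at(s):
    """Evaluate the polynomial matrix N at the complex number s."""
    return np.array([[np.polyval(p, s) for p in row] for row in N])

for s0 in zeros:
    # Loose tolerance, since the computed roots carry floating-point error.
    print(s0, "rank N(s0) =", np.linalg.matrix_rank(N_at(s0), tol=1e-8))
```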

Another complication arises as we develop a characterization of transmission zeros in terms of identically zero response of a minimal realization of G(s) to a particular initial state and particular input signal. Namely, with m ≥ 2 there can exist a nonzero m × 1 vector U(s) of strictly proper rational functions such that G(s)U(s) = 0. In this situation multiplying all the denominators in U(s) by the same nonzero polynomial in s generates whole families of inputs for which the zero-state response is identically zero. This inconvenience always occurs when m > p. Here we add an assumption that forces m ≤ p.
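To make the m > p complication concrete, here is a minimal sketch (NumPy assumed, hypothetical 1 × 2 transfer function) of a nonzero strictly proper U(s) with G(s)U(s) = 0 for every s, so the zero-state response to the corresponding input is identically zero.

```python
import numpy as np

# With m = 2 inputs and p = 1 output, G(s) = [1/(s+1)  1/(s+1)] annihilates
# the strictly proper direction U(s) = [1, -1]^T / (s + 2).
def G(s):
    return np.array([[1.0 / (s + 1.0), 1.0 / (s + 1.0)]])

def U(s):
    return np.array([[1.0], [-1.0]]) / (s + 2.0)

for s in (0.5, 1.0 + 2.0j, -3.0):          # sample points away from the poles
    assert np.allclose(G(s) @ U(s), 0.0)
print("G(s) U(s) = 0 at the sampled points")
```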

The basic idea is to devise an input U(s) such that the zero-state response component contains exponential terms due solely to poles of the transfer function, and such that these exponential terms can be canceled by terms in the zero-input response component.

Theorem 4.6.2

Suppose the linear state equation (4.43) is controllable and observable, and

G(s) = C(sI - A)^{-1}B    (4.73)

has rank m for almost all complex numbers s. If the complex number s0 is not a pole of G(s), then it is a transmission zero of G(s) if and only if there is a nonzero, complex m × 1 vector u0 and a complex n × 1 vector x0 such that

Ce^{At}x_0 + ∫_0^t Ce^{A(t-σ)}Bu_0 e^{s_0 σ} dσ = 0,  t ≥ 0.    (4.74)

Proof

Suppose N(s)D−1(s) is a coprime right polynomial fraction description for Eq. (4.73). If s0 is not a pole of G(s), then D(s0) is invertible and s0 is not an eigenvalue of A. If x0 and u0 ≠ 0 are such that Eq. (4.74) holds, then the Laplace transform of Eq. (4.74) gives

C(sI - A)^{-1}x_0 + N(s)D^{-1}(s)u_0(s - s_0)^{-1} = 0

or

(s - s_0)C(sI - A)^{-1}x_0 + N(s)D^{-1}(s)u_0 = 0.

Evaluating this expression at s = s0 yields

N(s_0)D^{-1}(s_0)u_0 = 0

and this implies that rank N(s0) < m. That is, s0 is a transmission zero of G(s).

On the other hand suppose s0 is not a pole of G(s). Using the easily verified identity

(s_0I - A)^{-1}(s - s_0)^{-1} = (sI - A)^{-1}(s_0I - A)^{-1} + (sI - A)^{-1}(s - s_0)^{-1},    (4.75)

we can write, for any m × 1 complex vector u0 and corresponding n × 1 complex vector x0 = (s0I - A)^{-1}Bu0, the Laplace transform expression

L{Ce^{At}x_0 + ∫_0^t Ce^{A(t-σ)}Bu_0 e^{s_0 σ} dσ}
    = C(sI - A)^{-1}x_0 + C(sI - A)^{-1}Bu_0(s - s_0)^{-1}
    = C[(sI - A)^{-1}(s_0I - A)^{-1} + (sI - A)^{-1}(s - s_0)^{-1}]Bu_0
    = G(s_0)u_0(s - s_0)^{-1}
    = N(s_0)D^{-1}(s_0)u_0(s - s_0)^{-1}.    (4.76)

Taking the inverse Laplace transform gives, for the particular choice of x0 above,

Ce^{At}x_0 + ∫_0^t Ce^{A(t-σ)}Bu_0 e^{s_0 σ} dσ = N(s_0)D^{-1}(s_0)u_0 e^{s_0 t},  t ≥ 0.    (4.77)

Clearly the m × 1 vector u0 can be chosen so that this expression is zero for t ≥ 0 if rank N(s0) < m, that is, if s0 is a transmission zero of G(s).

Of course if a transmission zero s0 is real and not a pole, then we can take u0 real, and the corresponding x0 = (s0I - A)^{-1}Bu0 is real. Then Eq. (4.74) shows that the complete response for x(0) = x0 and u(t) = u0 e^{s0 t} is identically zero. If s0 is a complex transmission zero, then specification of a real input and real initial state that provides identically zero response is left as a mild exercise.
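For a real transmission zero the construction in the proof can be checked by simulation. The sketch below assumes SciPy and a hypothetical minimal realization of G(s) = (s + 2)/((s + 1)(s + 3)), whose transmission zero s0 = -2 is not a pole; with x0 = (s0I - A)^{-1}Bu0 and u(t) = u0 e^{s0 t}, the simulated complete response is zero up to the simulation's input-interpolation error, as Eq. (4.74) predicts.

```python
import numpy as np
from scipy import signal

# Hypothetical minimal realization of G(s) = (s + 2)/((s + 1)(s + 3)).
A = np.array([[ 0.0,  1.0],
              [-3.0, -4.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[2.0, 1.0]])
D = np.array([[0.0]])

s0, u0 = -2.0, 1.0                                  # transmission zero, input direction
x0 = np.linalg.solve(s0 * np.eye(2) - A, B * u0)    # x0 = (s0 I - A)^{-1} B u0

t = np.linspace(0.0, 5.0, 2001)
u = u0 * np.exp(s0 * t)                             # u(t) = u0 e^{s0 t}
_, y, _ = signal.lsim(signal.StateSpace(A, B, C, D), u, t, X0=x0.ravel())
print("max |y(t)| =", np.max(np.abs(y)))            # near zero (interpolation error only)
```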

4.7 State feedback

Properties of linear state feedback

u(t) = Kx(t) + Mr(t)

applied to a linear state equation (4.43) are considered next. As noted, a direct approach to relating the closed-loop and plant transfer functions is unpromising in the case of state feedback. However, polynomial fraction descriptions and an adroit formulation lead to a way around the difficulty.

We assume that a strictly proper rational transfer function for the plant is given as a coprime right polynomial fraction G(s) = N(s)D−1(s) with D(s) column reduced. To represent linear state feedback, it is convenient to write the input-output description

Y(s) = N(s)D^{-1}(s)U(s)    (4.78)

as a pair of equations with polynomial matrix coefficients

D(s)ξ(s) = U(s),  Y(s) = N(s)ξ(s).    (4.79)

The m × 1 vector ξ(s) is called the pseudo-state of the plant. This terminology can be motivated by considering a minimal realization of the form (4.55) for G(s). From Eq. (4.57) we write

ψ(s)ξ(s) = ψ(s)D^{-1}(s)U(s) = (sI - A_0 + B_0 D_hc^{-1} D_l)^{-1} B_0 D_hc^{-1} U(s)    (4.80)

or

sψ(s)ξ(s) = (A_0 - B_0 D_hc^{-1} D_l)ψ(s)ξ(s) + B_0 D_hc^{-1} U(s).    (4.81)

Defining the n × 1 vector x(t) as the inverse Laplace transform

x(t) = L^{-1}[ψ(s)ξ(s)],

we see that Eq. (4.81) is the Laplace transform representation of the linear state equation (4.55) with zero initial state. Beyond motivation for terminology, this development shows that linear state feedback for a linear state equation corresponds to feedback of ψ(s)ξ(s) in the associated pseudo-state representation (4.79).

Now, as illustrated in Fig. 4.1, consider linear state feedback for Eq. (4.79) represented by

Fig. 4.1 Transfer function diagram for state feedback.

U(s) = Kψ(s)ξ(s) + MR(s),    (4.82)

where K and M are real matrices of dimensions m × n and m × m, respectively. We assume that M is invertible. To develop a polynomial fraction description for the resulting closed-loop transfer function, substitute Eq. (4.82) into Eq. (4.79) to obtain

[D(s) - Kψ(s)]ξ(s) = MR(s),  Y(s) = N(s)ξ(s).    (4.83)

Nonsingularity of the polynomial matrix D(s) - Kψ(s) is assured, since its column degree coefficient matrix is the same as the assumed-invertible column degree coefficient matrix for D(s). Therefore we can write

ξ(s) = [D(s) - Kψ(s)]^{-1}MR(s),  Y(s) = N(s)ξ(s).    (4.84)

Since M is invertible Eq. (4.84) gives a right polynomial fraction description for the closed-loop transfer function

N(s)D̂^{-1}(s) = N(s)[M^{-1}D(s) - M^{-1}Kψ(s)]^{-1}.    (4.85)

This description is not necessarily coprime, though D̂(s) is column reduced.

Reflection on Eq. (4.85) reveals that choices of K and invertible M provide complete freedom to specify the coefficients of D̂(s). In detail, suppose

D(s) = D_hc Δ(s) + D_l ψ(s)

and suppose the desired D̂(s) is

D̂(s) = D̂_hc Δ(s) + D̂_l ψ(s).

Then the feedback gains

M = D_hc D̂_hc^{-1},  K = -M D̂_l + D_l

accomplish the task. Although the choices of K and M do not directly affect N(s), there is an indirect effect in that Eq. (4.85) might not be coprime. This occurs in a more obvious fashion in the single-input, single-output case when linear state feedback places a root of the denominator polynomial coincident with a root of the numerator polynomial.
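The gain formulas above reduce to a few lines of linear algebra. The following sketch (NumPy assumed) uses made-up coefficient matrices Dhc, Dl and desired Dhat_hc, Dhat_l for m = 2 inputs with column degrees {1, 2}, computes M and K, and verifies that M^{-1}D(s) - M^{-1}Kψ(s) has the desired coefficient matrices.

```python
import numpy as np

# Hypothetical coefficient matrices: D(s) = Dhc Delta(s) + Dl psi(s), with
# m = 2 and column degrees {1, 2}, so psi(s) has n = 3 rows and Dl is 2 x 3.
Dhc = np.array([[1.0, 0.0],
                [1.0, 2.0]])
Dl  = np.array([[3.0, 1.0, 0.0],
                [0.0, 2.0, 1.0]])

# Desired closed-loop Dhat(s) = Dhat_hc Delta(s) + Dhat_l psi(s).
Dhat_hc = np.eye(2)
Dhat_l  = np.array([[2.0, 0.0, 0.0],
                    [0.0, 3.0, 2.0]])

M = Dhc @ np.linalg.inv(Dhat_hc)       # M = Dhc Dhat_hc^{-1}
K = -M @ Dhat_l + Dl                   # K = -M Dhat_l + Dl

# Check: M^{-1} Dhc = Dhat_hc and M^{-1}(Dl - K) = Dhat_l, so the closed-loop
# denominator M^{-1} D(s) - M^{-1} K psi(s) has the desired coefficient matrices.
assert np.allclose(np.linalg.solve(M, Dhc), Dhat_hc)
assert np.allclose(np.linalg.solve(M, Dl - K), Dhat_l)
print("M =\n", M, "\nK =\n", K)
```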


