Time-Varying System Theory for Computational Networks

Alle-Jan van der Veen and Patrick Dewilde

Many computational schemes in Linear Algebra can be studied from the point of view of Time-Varying Linear Systems theory. This approach not only puts a variety of results in a unified framework, but also generates new and unexpected results such as strong approximations of operators or matrices by computational networks of low complexity, and the embedding of contractive operations in orthogonal computations. In the present paper we develop the required Time-Varying System Theory in a systematic way, and derive a Kronecker or Ho-Kalman type realization method.

1. INTRODUCTION

Consider the computations schematically represented in figure 1. The unfolded or expanded version is shown in fig. 1(a). At each instant of time the computation takes in some input data from an input sequence U and computes new output data which is part of the output sequence Y generated by the processor. To execute the computation, the processor will use some remainder of its past history, known as the state, which it had temporarily stored in registers indicated by the symbol Z. We shall limit ourselves to the case where the computations are indeed linear, although it will become clear that the theory cries for extensions.

Characteristic of the computations of a real-life system is that only a finite amount of data can be used at any given time in its history. This then easily leads to a recursive model, which is obtained by folding the computation and feeding back the state. This computational model is captured mathematically by introducing a state sequence X with entries X_i, which are vector quantities (the dimension of each vector equals the number of state variables and need not be constant in time). At time instant i, the mapping from the current input and state to the output

Department of Electrical Engineering, Delft University of Technology, Delft, The Netherlands.



Figure 1. Time-varying state space representations: (a) fully expanded network, (b) time-varying network, (c) compact representation with diagonal operators.

and next state is a linear “memoryless” mapping, T_i say, such that

    [ X_{i+1}  Y_i ] = [ X_i  U_i ] T_i ,

or written more explicitly,

    X_{i+1} = X_i A_i + U_i B_i
    Y_i     = X_i C_i + U_i D_i ,        T_i = [ A_i  C_i ; B_i  D_i ] .
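As an aside, the recursion is easy to exercise numerically. The following sketch (with made-up dimensions and random parameters of our own choosing; the paper fixes no such data) runs the time-varying state recursion with a state dimension that grows and shrinks over time:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
dims = [0, 1, 2, 2, 1, 0]          # dim of X_i for i = 0..5; zero outside the window
A = [rng.standard_normal((dims[i], dims[i + 1])) for i in range(n)]
B = [rng.standard_normal((1, dims[i + 1])) for i in range(n)]
C = [rng.standard_normal((dims[i], 1)) for i in range(n)]
D = [rng.standard_normal((1, 1)) for i in range(n)]

u = rng.standard_normal(n)         # scalar input sequence
x = np.zeros((1, dims[0]))         # initial (empty) state
y = np.zeros(n)
for i in range(n):
    y[i] = (x @ C[i] + u[i] * D[i])[0, 0]   # Y_i = X_i C_i + U_i D_i
    x = x @ A[i] + u[i] * B[i]              # X_{i+1} = X_i A_i + U_i B_i
```

Note that each matrix multiplication silently changes the width of the state row vector, which is exactly the role of the time-varying dimensions N_i.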

It is thus possible to associate with the fully expanded network in figure 1(a) a time-varying network, depicted in fig. 1(b), in which the parameters of the mapping can be changed at each time instant. This is the conventional way in which time-varying networks are treated. We will, instead, pursue what we will call a network description corresponding to fig. 1(c), in which the network operates on the full sequences U and X, and produces the output Y and next state XZ^{−1}, where

    X = [ ··· X_{−1} (X_0) X_1 X_2 ··· ]

    XZ^{−1} = [ ··· X_0 (X_1) X_2 X_3 ··· ] .

This defines Z as the right-shift operator for sequences. It is thus possible to write the state space description as

    XZ^{−1} = XA + UB
    Y = XC + UD ,        T = [ A  C ; B  D ] ,

such that A = diag[ ··· A_{−1} (A_0) A_1 ··· ] is a diagonal that operates on the sequence X as a direct (“instantaneous”) multiplicator. B, C, D are defined likewise. In effect, figure 1(c) provides a compact operator notation for the same computations as in figure 1(a/b), and uses only multiplications by diagonals and the shift operator.


Viewed as a whole, the computation is a transformation on the input sequence U with the output sequence Y as the result. Because of causality (we assume that series are represented by rows and that operators act from left to right), such a transformation will be represented by what we call an “upper” operator T. An attractive (and physical) mathematical framework is obtained if the input sequences are constrained to have finite energy, and T to be a bounded operator on such sequences. Hence we focus on bounded upper operators mapping ℓ₂-sequences U to ℓ₂-sequences Y via

Y = UT .

With U = [ ··· U_{−1} (U_0) U_1 U_2 ··· ], and Y likewise, we will identify T with its (doubly-infinite) matrix representation

    T = [ ⋱     ⋮         ⋮        ⋮        ⋮
          ⋯  T_{−1,−1}  T_{−1,0}  T_{−1,1}  T_{−1,2}  ⋯
          ⋯     0       (T_{00})  T_{01}   T_{02}   ⋯
          ⋯     0          0      T_{11}   T_{12}   ⋯
          ⋯     0          0        0      T_{22}   ⋱ ] .

(The parenthesized entry identifies the 00-th entry of the matrix.) If T is viewed as the transfer operator of a non-stationary causal linear system with input U and corresponding output Y, then the i-th row of T corresponds to the impulse response of the system when excited at time instant i. For time-invariant systems, all elements on the diagonals of T are the same, and T is said to have a Toeplitz structure. In the time-invariant case, often more is known about T than just an operator representation, namely a description as a rational transfer function or an equivalent state space description. This then makes it possible to apply T using a finite number of operations. It is the purpose of this paper to study these kinds of representations for time-varying systems.
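To make the Toeplitz remark concrete, here is a small sketch that builds the matrix T of a first-order time-invariant system from assumed scalars a, b, c, d (an illustrative example of our own, not data from the paper); every row is a shifted copy of the same impulse response, so every diagonal of T is constant:

```python
import numpy as np

# assumed single-state LTI system: x_{i+1} = x_i*a + u_i*b, y_i = x_i*c + u_i*d
a, b, c, d = 0.5, 1.0, 1.0, 0.2
n = 6
T = np.zeros((n, n))
for i in range(n):                 # row i = response to an impulse applied at time i
    T[i, i] = d
    for j in range(i + 1, n):
        T[i, j] = b * a ** (j - i - 1) * c
# T is upper triangular with constant diagonals: a Toeplitz matrix
```

In the time-varying case the same construction goes through with time-indexed a_i, b_i, c_i, d_i, and the Toeplitz structure is lost.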

The connection between the I/O operator T : Y = UT and the state space description follows from the expansion

    Y = UD + UB·ZC + UB(ZA)ZC + UB(ZA)²ZC + ⋯

and can be written as

    T = D + B(I − ZA)^{−1}ZC ,

provided the inverse in the formula is meaningful. As in the time-invariant case, one might conversely wonder whether there exists, for a given transfer operator T, a state space realization T that realizes the same transfer, yet has the advantage of being finitely computable. This question is known as the identification problem, and is the topic of this paper. A related problem is the model reduction problem, in which an approximating state space description of low complexity is sought for a given T. Since the number of state variables need not be constant in time, but can grow and shrink, it is seen that in this respect the time-varying realization theory is much richer, and that the approximation accuracy can be varied in time


[Figure 2 depicts the chain: transfer operator T → state space realization T = { A, B, C, D } via identification (model reduction) → lossless embedding Σ → factorization into a cascade of lossless Θ sections.]

Figure 2. Overview of the cascade synthesis problem

at will. It is also possible to choose the number of state variables to be zero outside a region of interest in time, and to incorporate in this way upper triangular matrices of finite size into the time-varying context.
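For upper triangular matrices of finite size, the operator ZA is nilpotent, so the inverse in T = D + B(I − ZA)^{−1}ZC is a finite sum and the formula can be verified directly. A sketch, with an assumed constant state dimension d and random illustrative data (all names and sizes are our own choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 5, 2                                   # assumed: 5 steps, constant state dim 2
A = [rng.standard_normal((d, d)) for _ in range(n)]
B = [rng.standard_normal((1, d)) for _ in range(n)]
C = [rng.standard_normal((d, 1)) for _ in range(n)]
D = [rng.standard_normal((1, 1)) for _ in range(n)]

def blkdiag(blocks):
    """Stack matrices on the (block) main diagonal."""
    rows = sum(b.shape[0] for b in blocks)
    cols = sum(b.shape[1] for b in blocks)
    M = np.zeros((rows, cols)); i = j = 0
    for b in blocks:
        M[i:i + b.shape[0], j:j + b.shape[1]] = b
        i += b.shape[0]; j += b.shape[1]
    return M

Ad, Bd, Cd, Dd = blkdiag(A), blkdiag(B), blkdiag(C), blkdiag(D)
Z = np.kron(np.eye(n, k=1), np.eye(d))        # identity blocks on the first superdiagonal
# ZA is nilpotent here, so (I - ZA)^{-1} = I + ZA + (ZA)^2 + ... is a finite sum
T_formula = Dd + Bd @ np.linalg.inv(np.eye(n * d) - Z @ Ad) @ Z @ Cd

T_direct = np.zeros((n, n))                   # entry (i,j) = B_i A_{i+1} ... A_{j-1} C_j
for i in range(n):
    T_direct[i, i] = D[i][0, 0]
    prod = B[i]
    for j in range(i + 1, n):
        T_direct[i, j] = (prod @ C[j])[0, 0]
        prod = prod @ A[j]
```

Both constructions produce the same upper triangular transfer matrix, which is the finite-size content of the operator formula.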

Context

The present paper is based on the research published in [1, 2, 3, 4], in which the theory of a generalization of the z-transform for upper non-commutative operators, called the W-transform, was developed and the interpolating properties of lossless time-varying (or non-stationary) systems represented by these operators were investigated. Most notation in the present paper is adopted from these papers, although it is generalized to accommodate time-varying state dimensions.

Starting in the 1950s (or even earlier), time-varying network and state space theory and extensions of important system theoretic notions to the time-varying case have been discussed by many authors. While most of the early work is on time-continuous linear systems and differential equations with varying coefficients (see e.g., [5] for a 1960 survey), time-discrete systems have gradually come into favor. Some important more recent approaches that parallel the state space realization part of the present paper are the monograph by Feintuch/Saeks [6], in which a Hilbert resolution space setting is taken, and recent work by Kamen et al. [7, 8], where time-varying systems are put into an algebraic framework of polynomial rings. However, many results, in particular on controllability, detectability, stabilizability etc., have been discussed by many authors without using these specialized mathematical means (see e.g., Anderson/Moore [9] and references therein), by time-indexing the state space matrices { A, B, C, D } and deriving expressions (iterations) in terms of these matrices. There is usually a one-to-one correspondence between these expressions and their equivalent in our notation.

A Tour of the Results

The results obtained so far are depicted in Fig. 2 and summarized below. The present paper deals with item 1. Item 2 has been published in [10], while item 3 will be the subject of a separate treatment still to be published.



Figure 3. Cascade realization of the lossless embedding.

1. Identification. Proper definitions of shift-invariant input and output state spaces of the system T are possible. By selecting a (strong) basis in either of these spaces, minimal { A, B, C, D }-realizations can be computed. In addition, these spaces define a Hankel operator that maps the input state space to the output state space. We shall prove a Kronecker [11] or Ho-Kalman [12] type theorem which shows that the system order is equal to the rank of the Hankel operator. Moreover, a diagonal expansion of the Hankel operator reveals its relation to the given data in T, which will in turn lead to an identification scheme that has a close resemblance to subspace identification techniques for time-invariant systems, and that can also be used to find solutions to model reduction problems at a later stage.

2. Embedding. If T corresponds to a system that is inner (respectively, J-inner), then selecting a (J-)orthogonal basis in either the input or output state space will yield an orthogonal (or lossless) realization. If, on the other hand, the given T is not inner but contractive, we show that a realization of T can be extended (by adding an extra input and output, and supplementing states where needed) to yield an orthogonal realization that “embeds” the given system, in the sense that T will be the transfer operator from one input to one output when the other inputs are set to zero.

3. Factorization. Finally, it is possible to factor an orthogonal multiport realization matrix into a minimal number of elementary (2×2) orthogonal operations. Corresponding to this factorization is a network structure that consists of a cascade of elementary lossless sections, as in figure 3. In this figure, the embedded transfer operator T is the transfer from input U1 to output Y2, when the extra input U2 is zero.

2. NOTATION, SETTING AND MATHEMATICAL PRELIMINARIES

Spaces

We consider a generalization of ℓ₂ sequences

    X = [ ··· X_{−1} (X_0) X_1 ··· ] ,

in which each of the entries X_i is an element of a (row) vector space ℂ^{N_i}, with varying dimensions N_i ∈ ℕ, and such that the total energy ‖X‖₂² = Σ_{i=−∞}^{∞} ‖X_i‖₂² is bounded. In the above expression, the parenthesized entry identifies the position of the 0-th entry. We denote the set (ℤ → ℕ) of index sequences by 𝔑, and with N ∈ 𝔑 say that the above X is an element of ℓ₂(ℂ^N), or ℂ₂^N for brevity. We adopt the shorthand “•n” for the index sequence N with all N_i equal to n. Hence, e.g., ℂ₂^{•1} is the set of the usual ℓ₂ sequences.

Let N, P ∈ 𝔑. Following [1], we denote by 𝒳(ℂ^N, ℂ^P) the class of bounded operators (ℂ₂^N → ℂ₂^P). E.g., a system transfer operator with n₁ input ports and n₀ output ports is an operator in 𝒳(ℳ^{•1}, 𝒩^{•1}), with ℳ = ℂ^{n₁} and 𝒩 = ℂ^{n₀}. An operator A ∈ 𝒳(ℂ^N, ℂ^P) may be represented by a doubly infinite matrix with entries A_{ij} : ℂ^{N_i} → ℂ^{P_j}, and may as well be represented by the shorthand A_{N×P}. For example, with N = [ ··· 1 (3) 2 ··· ] and P = [ ··· 2 (1) 3 ··· ], X ∈ ℂ₂^N has entries of dimensions ··· , 1, 3, 2, ··· , and A ∈ 𝒳(ℂ^N, ℂ^P) is a doubly infinite matrix built from blocks A_{ij} of size N_i × P_j. We think of A as acting on row inputs and producing row outputs. The i-th row of A will be in ℓ₂(ℂ^{N_i}, ℂ^P) = (ℂ₂^P)^{N_i} and will be bounded by ‖A‖. The converse is certainly not true, as can be seen when A is Toeplitz and upper triangular, for in that case A will correspond to a classical ℓ₂ system, and ℓ₂-boundedness of the impulse response is known not to be a sufficient condition for boundedness of the system transfer operator.

Shifts and Constructors

For every index sequence N ∈ 𝔑, N = [ ··· N_{−1} (N_0) N_1 ··· ], we indicate the k-th shift by N^{(k)} = [ ··· N_{−k−1} (N_{−k}) N_{−k+1} ··· ], i.e., N^{(k)}_i = N_{i−k}. We will use the shorthand N^+ for N^{(1)}, and likewise N^− = N^{(−1)}.

We define the shift operator Z : ℂ₂^N → ℂ₂^{N^+} as

    (XZ)_i = X_{i−1} .

The shift operator is of course bounded. It is even unitary, meaning that Z_{N×N^+}(Z^∗)_{N^+×N} = I_{N×N} and (Z^∗)_{N^+×N}Z_{N×N^+} = I_{N^+×N^+}. We denote by Z^{[k]} the k-times repeated application of Z:

    Z^{[k]} = Z_{N×N^{(1)}} Z_{N^{(1)}×N^{(2)}} ··· Z_{N^{(k−1)}×N^{(k)}} .


(Note that formally Z^k is not well defined because dimensions do not match. Nonetheless, we will in the next sections usually suppress dimension information in formulas and just write Z^k when we mean Z^{[k]}.) The entries Z_{ij} of Z satisfy

    Z_{ij} = I_{N_i×N_i} ,  j = i + 1 ,
    Z_{ij} = 0 otherwise,

and hence Z can be pictured as the infinite size matrix with identity blocks on the first superdiagonal:

    Z = [ ⋱  ⋱
             0  I_{N_{−1}}
               (0)  I_{N_0}
                    0   I_{N_1}
                        ⋱   ⋱ ] ,

where the parenthesized entry is at position (0, 0).

Following [1], we define the operators

    π : u ∈ ℂ^{N_0} → f ∈ ℂ₂^N :  f_0 = u ,  f_i = 0 (i ≠ 0) ,

with adjoint

    π^∗ : f ∈ ℂ₂^N → u ∈ ℂ^{N_0} :  u = f_0 .

For the entry (i, j) of the matrix representation of an operator A we may write

    A_{ij} = π^∗Z^{[i]} A Z^{[j]∗}π ,

and define the operator π_i^∗ = π^∗Z^{[i]}, which accordingly selects the i-th row of its operand. Next, we define the k-th diagonal shift on A ∈ 𝒳(ℂ^N, ℂ^P) by

    A^{(k)} = Z^{[k]∗} A Z^{[k]} ,

which will be in 𝒳(ℂ^{N^{(k)}}, ℂ^{P^{(k)}}). We adopt, as with index sets, the shorthand A^+ for A^{(+1)} and A^− for A^{(−1)}. Hence (A^+)_{ij} = A_{i−1,j−1}.

Spaces for Upper, Lower and Diagonal Operators

As in [1] we define subsets of upper, lower and diagonal operators in 𝒳 as

    𝒰 = { A ∈ 𝒳 : A_{ij} = 0 , i > j }
    ℒ = { A ∈ 𝒳 : A_{ij} = 0 , i < j }
    𝒟 = 𝒰 ∩ ℒ .

For A ∈ 𝒟, “A_i” will serve as shorthand for the entry A_{ii}, and we shall write

    A = diag[ ··· A_{−1} (A_0) A_1 ··· ] = diag(A_i) .

Let A ∈ 𝒳. We define the j-th diagonal A_{[j]} of A by (A_{[j]})_i = A_{i,i+j}.


Hence A_{[0]} is the main diagonal of the operator A, and for positive j, A_{[j]} is the j-th diagonal above A_{[0]}. With this notation, A can formally be written in terms of its diagonals as

    A = Σ_{j=−∞}^{∞} Z^{[j]} A_{[j]} ,

although this expression need not converge at all. A class of operators that do allow this representation are the Hilbert-Schmidt operators [1]:

    𝒳₂ = { A ∈ 𝒳 : ‖A‖²_HS = Σ_{i,j} ‖A_{ij}‖₂² < ∞ } ,

along with inner product ⟨A, B⟩ = trace(AB^∗), and norm ‖A‖²_HS = ⟨A, A⟩ = trace(AA^∗). A subset ℋ of 𝒳₂ is a left D-invariant subspace in 𝒳₂ if

    A ∈ ℋ , B ∈ ℋ  ⇒  D₁A + D₂B ∈ ℋ  for all D₁, D₂ ∈ 𝒟 .

We can define orthogonal projectors P_ℋ onto these subspaces, according to the natural Hilbert-Schmidt metric. Standard subspaces are

    𝒰₂ = 𝒰 ∩ 𝒳₂ ,  ℒ₂ = ℒ ∩ 𝒳₂ ,  𝒟₂ = 𝒰₂ ∩ ℒ₂ ,

and standard projectors that go with these spaces are P₀ = P_{𝒟₂} and P = P_{𝒰₂}:

    P₀ : 𝒳₂ → 𝒟₂ :  P₀(A) = A_{[0]} ,
    P  : 𝒳₂ → 𝒰₂ :  P(A) = Σ_{j=0}^{∞} Z^{[j]} A_{[j]} .

It is a fundamental fact (and proven in [1]) that

    𝒳₂ = ℒ₂Z^{−1} ⊕ 𝒟₂ ⊕ 𝒰₂Z ,

where “⊕” indicates orthogonal composition of spaces.
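On finite matrices these projectors are just triangular truncations, and the three-way splitting can be checked numerically; a sketch in a finite-dimensional analogy of our own (random illustrative data):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 5))      # illustrative finite "operator"
P0A = np.diag(np.diag(A))            # P0(A) = A_[0], the main diagonal
PA = np.triu(A)                      # P(A): the part on diagonals j >= 0
Lpart = np.tril(A, -1)               # component in L2 Z^{-1} (strictly lower)
Upart = np.triu(A, 1)                # component in U2 Z (strictly upper)
# disjointly supported parts, hence orthogonal in the Hilbert-Schmidt inner product
assert np.allclose(Lpart + P0A + Upart, A)
assert abs(np.sum(Lpart * Upart)) < 1e-12
```

The HS-orthogonality of the three pieces is immediate because their matrix supports are disjoint.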

Diagonal Inner Product

For A, B ∈ 𝒳₂, define the “diagonal” or “brace” inner product {A, B} as

    {A, B} = P₀(AB^∗) .

It follows that, with A, B ∈ 𝒳₂, {A, B} ∈ 𝒟₂, and that ⟨A, B⟩ = trace{A, B}. In particular, we have that

    A = 0  ⇔  ⟨A, A⟩ = 0  ⇔  {A, A} = 0 ,
    ⟨D₁A, D₂B⟩ = 0 (all D₁, D₂ ∈ 𝒟)  ⇔  {A, B} = 0 ,

so that orthogonality of left D-invariant subspaces is the same in each of these inner products. The observation that the diagonal inner product does not render a single number but rather a full diagonal of rowwise inner products will be useful in the determination of projections onto subspaces. The above expressions show that two left D-invariant subspaces are orthogonal iff they are orthogonal rowwise.
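In a finite-dimensional analogy, {A, B} is simply the vector of rowwise inner products; a short illustrative sketch (data of our own choosing):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 6))
B = rng.standard_normal((4, 6))
# {A,B} = P0(A B*): the i-th entry is the inner product of row i of A and row i of B
brace = np.diag(A @ B.T)             # use .conj().T for complex data
```

The trace of this diagonal recovers the ordinary Hilbert-Schmidt inner product ⟨A, B⟩.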


Basis Representations of Subspaces

Let ℋ be a left D-invariant subspace in 𝒰₂. Because of the left D-invariance, ℋ falls naturally apart into “slices” ℋ_i = π_i^∗ℋ. Each such ℋ_i is an ordinary subspace in ℓ₂. If each of these subspaces is finite dimensional, say dim(ℋ_i) = N_i, then we shall say that ℋ is of local finite dimension. Each ℋ_i has a finite orthonormal basis { (q_i)_1, …, (q_i)_{N_i} }, with (q_i)_k ∈ ℓ₂, and hence is generated by a matrix Q_i whose N_i rows are the (q_i)_k (k = 1, …, N_i), such that

    ℋ_i = { D_iQ_i : D_i ∈ ℂ^{1×N_i} } ,

and Q_iQ_i^∗ = I_{N_i×N_i}. Stack the Q_i to arrive at an operator Q whose i-th row π_i^∗Q is Q_i. This Q is not necessarily a bounded (𝒳₂ → 𝒳₂) operator, but with domain restricted to 𝒟₂ it is a bounded operator (𝒟₂ → 𝒳₂), in fact an isometry, with range ℋ:

    ℋ = 𝒟₂(ℂ^{•1}, ℂ^N)·Q .

In addition, Q is orthonormal in the sense that Λ_Q := P₀(QQ^∗) = I_{N×N}.

(Remark. Since Q need not be a bounded 𝒳₂ operator, but is known to be a bounded (𝒟₂ → 𝒳₂)-operator, the value of an expression like P₀(QQ^∗) should be interpreted through P₀(DQQ^∗) = DP₀(QQ^∗), for all D ∈ 𝒟₂. Technically speaking, the “P₀” in P₀(QQ^∗) could be dropped, as Q^∗ maps 𝒰₂ into 𝒟₂ already, but then the notation would lead to confusion and not be compatible with the previous cases, especially since the domain of Q can be extended. In this respect, if X ∈ 𝒰₂ then the product XQ^∗ is interpreted as

    XQ^∗ = Σ_k Z^{[k]} P₀(Z^{[k]∗} X Q^∗) ,

which is compatible with the usual definition when Q ∈ 𝒳.)

The above construction is summarized in the following proposition:

Proposition 1. If a left D-invariant subspace ℋ in 𝒰₂ has finite local dimension N ∈ 𝔑, then there exists an operator Q, bounded as an operator (𝒟₂(ℂ^{•1}, ℂ^N) → 𝒰₂), with Λ_Q = P₀(QQ^∗) = I, such that

    ℋ = 𝒟₂(ℂ^{•1}, ℂ^N)·Q .

Q is said to be an orthonormal basis representation of ℋ.

More generally, let F be a bounded (𝒟₂ → 𝒰₂)-operator such that

    ℋ = 𝒟₂(ℂ^{•1}, ℂ^N)·F_{N×•1} ,

and Λ_F = P₀(FF^∗) ∈ 𝒟(ℂ^N, ℂ^N) is uniformly positive (meaning that Λ_F is boundedly invertible as well; we write Λ_F ≫ 0). Then F also generates ℋ and is said to be a strong basis representation of ℋ. A Gram-Schmidt orthogonalization on the rows of F will yield F = RQ, where Q is an orthonormal basis representation of ℋ, and R ∈ 𝒟(ℂ^N, ℂ^N) is a boundedly invertible positive factor of Λ_F, since Λ_F = P₀(FF^∗) = P₀(RQQ^∗R^∗) = RR^∗ ≫ 0.
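For matrices of finite size, this factorization is an ordinary rowwise Gram-Schmidt step, which can be phrased as a Cholesky factorization of Λ_F; a sketch with assumed sizes and random illustrative data:

```python
import numpy as np

rng = np.random.default_rng(5)
F = rng.standard_normal((3, 8))      # strong basis: 3 rows spanning a subspace
Lam = F @ F.T                        # Lambda_F = P0(F F*), here a 3x3 Gram matrix
R = np.linalg.cholesky(Lam)          # Lambda_F = R R*, R invertible (lower triangular)
Q = np.linalg.solve(R, F)            # F = R Q, with orthonormal rows: Q Q* = I
```

The rows of Q span the same space as those of F, but with identity Gram matrix, mirroring Λ_Q = I.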

Projection onto Subspaces

Lemma 2. If ℋ is a subspace in 𝒰₂, generated by an orthonormal basis representation Q, then (for X ∈ 𝒰₂)

    X ⊥ ℋ  ⇔  P₀(XQ^∗) = 0 .

PROOF. Any Y in ℋ can be written as Y = DQ, for some D ∈ 𝒟₂. Then X ⊥ Y ⇔ {X, Y} = P₀(XY^∗) = 0, and P₀(XY^∗) = P₀(XQ^∗D^∗) = P₀(XQ^∗)D^∗. Since this is 0 for all D in 𝒟₂, it follows that P₀(XQ^∗) = 0. □

Lemma 3. Let ℋ be a left D-invariant subspace in 𝒰₂, generated by an orthonormal basis representation Q. The projection of any X ∈ 𝒰₂ onto ℋ is given by

    P_ℋ(X) = DQ ,  with D = P₀(XQ^∗) .

PROOF. An operator P is a projector onto a subspace ℋ if it is idempotent, P·P = P, and if its range is ℋ. This last requirement is true because P_ℋ(X) = DQ, with D = P₀(XQ^∗) ∈ 𝒟₂, and all elements in 𝒟₂ can be reached this way.

P_ℋ is idempotent: if X ∈ ℋ, then P_ℋ(X) = X. Indeed, by proposition 1, X = D₁Q for some D₁ ∈ 𝒟₂, since Q is a basis. In fact D₁ is equal to P₀(XQ^∗): P₀(XQ^∗) = P₀(D₁QQ^∗) = D₁P₀(QQ^∗) = D₁, hence P_ℋ(X) = DQ = D₁Q = X.

Finally, the projector is orthogonal: if X ∈ 𝒰₂, then X − P_ℋ(X) ⊥ ℋ, because

    P₀( (X − P₀(XQ^∗)Q) Q^∗ ) = P₀(XQ^∗) − P₀(XQ^∗)·P₀(QQ^∗) = P₀(XQ^∗) − P₀(XQ^∗) = 0 . □

If F is a strong basis representation generating ℋ, then

    P_ℋ(X) = P₀(XF^∗) Λ_F^{−1} F

is also a projection onto ℋ. This can be derived from the orthogonal projection by putting F = RQ, where Λ_F = RR^∗ must be boundedly invertible.
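In the finite-dimensional analogy, this is the familiar least-squares projection onto the row span of F; a sketch with illustrative sizes of our own:

```python
import numpy as np

rng = np.random.default_rng(6)
F = rng.standard_normal((3, 8))      # strong (non-orthonormal) basis, rows span H
x = rng.standard_normal(8)
Lam = F @ F.T                        # Lambda_F
p = (x @ F.T) @ np.linalg.inv(Lam) @ F   # P_H(x) = P0(x F*) Lambda_F^{-1} F
```

The residual x − p is orthogonal to every row of F, and applying the projection twice reproduces p, matching the two defining properties in lemma 3.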

3. NERODE STATE SPACE DEFINITIONS

Let there be given a bounded linear causal time-varying system with n₁ input ports and n₀ output ports, and with transfer operator T in 𝒰(ℳ^{•1}, 𝒩^{•1}), where ℳ = ℂ^{n₁} and 𝒩 = ℂ^{n₀}. We will derive a state space description for T, i.e., some representation of T such that when u ∈ ℓ₂(ℳ^{•1}) is an input sequence and y = uT is its corresponding output, we can recover any entry y_k of y from knowledge of u_k and a compact (state) representation of { u_i : i ≤ k−1 }, the “past” of u with respect to instant k. It is of course not enough to consider only one pair u, y and hope to recover a state space description from it, or to consider only one time instant k. One approach is to let u range over all of ℓ₂, and to consider, for each time instant, the relation between inputs applied until instant k−1 (i.e., the projection of u onto this subspace) and the corresponding outputs from instant k on (the projection of y onto “the future”). This is akin to a Hilbert resolution space approach and is described in detail in a monograph on time-varying system theory by Feintuch and Saeks [6]. The approach we take here is (necessarily) strongly related to this resolution method, yet has a few additional merits.


We consider inputs and corresponding outputs as elements of 𝒳₂, i.e., an infinite collection of ℓ₂ input sequences such that the energy (Hilbert-Schmidt norm) of the total collection is bounded. Since the operators in 𝒳₂ admit a decomposition into diagonals, and projections onto ℒ₂Z^{−1} and 𝒰₂ or even 𝒰₂Z are well defined, we avoid many of the problems of causality and strict causality to which a major part of [6] is devoted. A second advantage is that, in order to arrive at a state space description, it is enough to consider the effect of inputs in ℒ₂Z^{−1} (the “past”) onto the projection onto 𝒰₂ (the future part) of their corresponding outputs, i.e., to study the operators P(UT) for U ∈ ℒ₂Z^{−1}. In this way, the notion of time is avoided almost completely, and as a consequence the use of indices representing time is often not needed. The resulting theory is elegant and in a natural way almost looks like a time-invariant theory with non-commutative operators.

Let there be given a bounded linear causal time-varying system with transfer operator T in 𝒰(ℳ^{•1}, 𝒩^{•1}), for ℳ, 𝒩 some finite-dimensional Hilbert spaces. Define the Hankel operator H_T associated to T to be

    H_T : ℒ₂Z^{−1} → 𝒰₂ :  UH_T = P(UT) .

We consider the effect of inputs in ℒ₂Z^{−1} onto outputs in 𝒰₂, i.e., we study the range and kernel of the operators H_T and H_T^∗.
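For a finite upper triangular matrix T, the Hankel operator at instant k is simply the submatrix T[0:k, k:], mapping inputs applied before k to outputs from k on. The sketch below builds T from a known single-state system with time-varying scalar parameters (all values assumed, of our own choosing) and observes that every such Hankel submatrix has rank one, matching the one state variable at every interior instant:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 6
a = rng.uniform(0.3, 0.9, n)          # assumed time-varying scalar realization
b = rng.uniform(0.5, 1.5, n)
c = rng.uniform(0.5, 1.5, n)
d = rng.uniform(0.5, 1.5, n)
T = np.zeros((n, n))
for i in range(n):
    T[i, i] = d[i]
    prod = b[i]
    for j in range(i + 1, n):
        T[i, j] = prod * c[j]         # b_i a_{i+1} ... a_{j-1} c_j
        prod *= a[j]

# Hankel operator at instant k: past inputs (rows 0..k-1) to future outputs (cols k..)
ranks = [int(np.linalg.matrix_rank(T[:k, k:])) for k in range(1, n)]
```

Each submatrix T[:k, k:] is an outer product of a "reachability" column and an "observability" row, so its rank is one even though the parameters vary in time.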

We say that an input U₁ is Nerode equivalent to U₂, written U₁ ∼_N U₂, for U₁, U₂ ∈ ℒ₂Z^{−1}, if P((U₁ − U₂)T) = 0. Accordingly, U ∼_N 0 if U ∈ ℒ₂Z^{−1} and P(UT) = 0, i.e., if U is in the kernel of H_T. Define

    𝒦 = { U : U ∼_N 0 } = { U ∈ ℒ₂Z^{−1} : P(UT) = 0 } .

𝒦 is called the input null space. It is a left D-invariant subspace in ℒ₂Z^{−1}. Denote the complement of 𝒦 in ℒ₂Z^{−1} by ℋ (called the input state space):

    ℒ₂Z^{−1} = ℋ ⊕ 𝒦 .

Define the natural output space ℋ₀ in 𝒰₂ to be the range of the operator H_T:

    ℋ₀ = { P(UT) : U ∈ ℒ₂Z^{−1} } .

ℋ₀ is the left D-invariant subspace containing the projection onto 𝒰₂ of all outputs of the system that can be generated from inputs in ℒ₂Z^{−1}. Denote the complement of ℋ₀ in 𝒰₂ by 𝒦₀:

    𝒰₂ = ℋ₀ ⊕ 𝒦₀ .

From these definitions, the relations

    P(ℒ₂Z^{−1}·T) = P(ℋT) + P(𝒦T) = P(ℋT) = ℋ₀

follow immediately, and with slightly more work,

    P_{ℒ₂Z^{−1}}(𝒰₂·T^∗) = P_{ℒ₂Z^{−1}}(ℋ₀T^∗) + P_{ℒ₂Z^{−1}}(𝒦₀T^∗) = P_{ℒ₂Z^{−1}}(ℋ₀T^∗) = ℋ .


The following theorem summarizes the shift-invariance properties of these subspaces.

Theorem 4. The spaces ℋ, 𝒦, ℋ₀, 𝒦₀ are left D-invariant subspaces satisfying the shift-invariance properties

    Z^{−1}𝒦 ⊂ 𝒦 ,  R_•ℋ ⊂ ℋ ,  Z𝒦₀ ⊂ 𝒦₀ ,  Rℋ₀ ⊂ ℋ₀ ,

in which the restricted shift operator R_• is defined by R_•U = P_{ℒ₂Z^{−1}}(ZU) = ZU − P₀(ZU) (for U ∈ ℒ₂Z^{−1}), and R on 𝒰₂ is defined by RY = P(Z^{−1}Y).

PROOF.
1. Z^{−1}𝒦 ⊂ 𝒦. If U ∈ 𝒦, so that P(UT) = 0, then UT ∈ ℒ₂Z^{−1}, from which it follows that Z^{−1}UT ∈ ℒ₂Z^{−1} also, and P(Z^{−1}UT) = 0.
2. R_•ℋ ⊂ ℋ. This is a consequence of the shift invariance of 𝒦 in the following way. If U ∈ ℋ, then P_{ℒ₂Z^{−1}}(ZU) ∈ ℒ₂Z^{−1} by definition, and P_{ℒ₂Z^{−1}}(ZU) ⊥ 𝒦 because for all X ∈ 𝒦,

    { P_{ℒ₂Z^{−1}}(ZU), X } = {ZU, X} − {P₀(ZU), X} = {U, Z^{−1}X}^{(−1)} .

Since 𝒦 is shift-invariant, Z^{−1}X ∈ 𝒦, and hence {R_•U, X} = 0.
3. Rℋ₀ ⊂ ℋ₀:

    P(Z^{−1}ℋ₀) = { P( Z^{−1}P(UT) ) : U ∈ ℒ₂Z^{−1} } = { P(Z^{−1}UT) : U ∈ ℒ₂Z^{−1} } ⊂ ℋ₀ ,

because Z^{−1}U ∈ ℒ₂Z^{−1}.
4. Z𝒦₀ ⊂ 𝒦₀. Since Z𝒦₀ ⊂ Z𝒰₂ ⊂ 𝒰₂, we only have to prove that Z𝒦₀ ⊥ ℋ₀. For any Y ∈ 𝒦₀, X ∈ ℋ₀,

    {ZY, X} = P₀(ZYX^∗) = P₀( (YX^∗)^{(−1)}Z ) = P₀(YX^∗Z)^{(−1)} = {Y, Z^{−1}X}^{(−1)} = {Y, Z^{−1}(X − X_{[0]})}^{(−1)} .

Use is made of the fact that ZDZ^{−1} = D^{(−1)}. ℋ₀ is invariant under the restricted shift: Z^{−1}(X − X_{[0]}) = P(Z^{−1}X) ∈ ℋ₀. Hence, since Y ⊥ ℋ₀, {ZY, X} = 0. □

4. CANONICAL STATE SPACE REALIZATIONS

Let T be a given bounded linear causal time-varying system transfer operator in 𝒰(ℳ^{•1}, 𝒩^{•1}), and assume that its shift-invariant input/output state and null spaces, ℋ, 𝒦, ℋ₀ and 𝒦₀, are known. ℋ is such that P(ℒ₂Z^{−1}·T) = P(ℋT), hence the effect of any input in the past (ℒ₂Z^{−1}) onto the future output in 𝒰₂ is equivalently described by a (unique) representative element X of ℋ, called the state. The point is that ℋ is assumed to be a space of much smaller dimension than ℒ₂Z^{−1}.


A recursive use of these observations leads to the construction of an operator state space model, in a way that is already familiar from a number of other contexts as well. By choosing a basis in either the input state space or the output state space, the desired result, a minimal state space realization involving only diagonal operators, is obtained.

4.1. “Canonical Controller” State Space Realization

For a given input U in 𝒳₂ and instant k, define the past input U_{(k)} (with respect to instant k) to be U_{(k)} = P_{ℒ₂Z^{−1}}(Z^{−k}U). Define the state X_k ∈ ℋ at instant k to be the projection of the past input onto ℋ:

    X_k = P_ℋ(U_{(k)}) = P_ℋ(Z^{−k}U) ∈ ℒ₂Z^{−1} .

Theorem 5. Given a bounded system transfer operator T ∈ 𝒰(ℳ^{•1}, 𝒩^{•1}) with input state space ℋ, then with the above definition of X_k ∈ ℋ, we have the “operator state space” realization

    Y = UT  ⟺  { X_{k+1} = X_kA + U_{[k]}B ;  Y_{[k]} = X_kC + U_{[k]}D } ,

where A, B, C, D are bounded operators satisfying

    [ A  C ; B  D ] = [ P_ℋ(Z^{−1}·)|_ℋ   P₀(·T)|_ℋ ;  P_ℋ(Z^{−1}·)|_{𝒟₂}   P₀(·T)|_{𝒟₂} ] .

PROOF. Recall that, since U_{(k)} ∈ ℒ₂Z^{−1} = ℋ ⊕ 𝒦 and P₀(𝒦T) = 0 by definition of 𝒦, we have

    P₀(U_{(k)}T) = P₀( P_ℋ(U_{(k)})·T ) + P₀( P_𝒦(U_{(k)})·T ) = P₀(X_kT) .

1. Y = UT ⇒ Y_{[k]} = P₀(Z^{−k}Y) = P₀(Z^{−k}UT) = P₀(U_{(k)}T) + P₀(U_{[k]}T) = P₀(X_kT) + U_{[k]}P₀(T).
2. X_{k+1} = P_ℋ(U_{(k+1)}) = P_ℋ(Z^{−k−1}U) = P_ℋ( Z^{−1}U_{(k)} + Z^{−1}U_{[k]} ) = P_ℋ( Z^{−1}P_ℋ(U_{(k)}) + Z^{−1}P_𝒦(U_{(k)}) ) + P_ℋ(Z^{−1}U_{[k]}) = P_ℋ(Z^{−1}X_k) + P_ℋ(Z^{−1}U_{[k]}) ,

where in making the last step the fact is used that 𝒦 is shift-invariant (Z^{−1}𝒦 ⊂ 𝒦) and that ℋ ⊥ 𝒦. □

It is clear that ‖A‖ ≤ 1, and that if there exists an X̂ ∈ ℋ such that Z^{−1}X̂ ∈ ℋ, then ‖A‖ = 1. Let r(A) denote the spectral radius of A:

    r(A) = lim_{n→∞} ‖A^n‖^{1/n} .

Since ‖A‖ ≤ 1, we have that r(A) ≤ 1 also.

The above state space description in terms of operators is not yet very useful. By choosing an orthonormal basis Q in ℋ, it is possible to “precompute” the effect of the operators A, B and C, as is demonstrated in the following theorem. Some care must be taken if Q is an unbounded operator on 𝒳₂. It can be shown that this happens only if r(A) = 1, and that r(A) = 1 coincides with ℓ_A = 1, where ℓ_A := r(ZA) is the spectral radius of the operator ZA. Nonetheless, Q is bounded as a (𝒟₂ → 𝒰₂) operator, and this property is sufficient to prove the theorem.

Theorem 6. Given a bounded system transfer operator T ∈ 𝒰(ℳ^{•1}, 𝒩^{•1}), and assume that the input state space ℋ of T is locally finite dimensional. Let N = dim(ℋ), and let Q represent an orthonormal N-dimensional basis of ℋ, such that Λ_Q = P₀(QQ^∗) = I.

1. T admits a state space realization

    Y = UT  ⟺  { XZ^{−1} = XA + UB ;  Y = XC + UD } ,   (1)

where

    A = P₀(QQ^∗Z^{−1}) ∈ 𝒟(ℂ^N, ℂ^N)
    B = P₀(Q^∗Z^{−1}) ∈ 𝒟(ℳ^{•1}, ℂ^N)
    C = P₀(QT) ∈ 𝒟(ℂ^N, 𝒩^{•1})
    D = P₀(T) ∈ 𝒟(ℳ^{•1}, 𝒩^{•1}) .

2. The realization satisfies the following relations:

    ‖A‖ ≤ 1 ,  Q^∗ = Q^∗AZ + BZ ,  T = Q^∗C + D   (2)
    A^∗A + B^∗B = I   (3)

3. If ℓ_A = r(ZA) < 1, then

    Q = [ B(I − ZA)^{−1}Z ]^∗ ,  T = D + B(I − ZA)^{−1}ZC   (4)

so that Q is a bounded operator in ℒZ^{−1}, and X ∈ 𝒳₂(ℂ^{•1}, ℂ^N).

PROOF.

1. Expanding X into its diagonals, X = Σ_{k=−∞}^{∞} Z^kX_{[k]}, we will derive the equivalent relation

    Y = UT  ⟺  { X^{(−1)}_{[k+1]} = X_{[k]}A + U_{[k]}B ;  Y_{[k]} = X_{[k]}C + U_{[k]}D } .

For a given X_k in ℋ, it is possible to write X_k in terms of the basis Q of ℋ: X_k = X_{[k]}Q, for some X_{[k]} ∈ 𝒟₂(ℂ^{•1}, ℂ^N). Starting, for a certain k and X_k, with the realization in theorem 5, write the new state X_{k+1} as X_{k+1} = X_{[k+1]}Q. Then

    X_{k+1} = X_{[k+1]}Q = P_ℋ(Z^{−1}X_k) + P_ℋ(Z^{−1}U_{[k]})
    = P₀(Z^{−1}X_kQ^∗)Q + P₀(Z^{−1}U_{[k]}Q^∗)Q
    = P₀(Z^{−1}X_{[k]}QQ^∗)Q + P₀(Z^{−1}U_{[k]}Q^∗)Q
    = X^+_{[k]}P₀(Z^{−1}QQ^∗)Q + U^+_{[k]}P₀(Z^{−1}Q^∗)Q ,

so that X_{[k+1]} = X^+_{[k]}P₀(Z^{−1}QQ^∗) + U^+_{[k]}P₀(Z^{−1}Q^∗). Putting A^+ = P₀(Z^{−1}QQ^∗) and B^+ = P₀(Z^{−1}Q^∗), i.e., A = P₀(QQ^∗Z^{−1}) and B = P₀(Q^∗Z^{−1}), gives the first part of the result. In the same way, C = P₀(QT) is derived via

    P₀(X_kT) = P₀(X_{[k]}QT) = X_{[k]}P₀(QT) .

2. From the above formula we have that ‖A‖ = sup_i ‖A_i‖ = sup_i ‖Q_iQ_{i+1}^∗‖ ≤ 1, since Q_iQ_i^∗ = I for all i. Continuing, since

    X_{[k]} = P₀(X_kQ^∗) = P₀(U_{(k)}Q^∗) = P₀(Z^{−k}UQ^∗)

and X = Σ_k Z^kX_{[k]}, it follows that X = UQ^∗. Combining this with the state equations (1) yields

    UQ^∗Z^{−1} = UQ^∗A + UB ,  UT = UQ^∗C + UD  (for all U ∈ 𝒳₂),

or

    Q^∗Z^{−1} = Q^∗A + B ,  T = Q^∗C + D .

This proves (2). Equation (3) follows by using this expression for Q^∗ in the computation of Λ_Q = I:

    Λ_Q = P₀(QQ^∗) = P₀( [Z^∗A^∗Q + Z^∗B^∗][Q^∗AZ + BZ] ) = Z^∗A^∗P₀(QQ^∗)AZ + Z^∗B^∗BZ = Z^∗A^∗AZ + Z^∗B^∗BZ = I

(the cross terms drop out under P₀ because Q, and hence A^∗QB, is strictly lower), so that A^∗A + B^∗B = I.

3. Assuming ℓ_A < 1, so that (I − ZA)^{−1} is bounded, equation (2) can be rewritten via Q^∗ = BZ(I − AZ)^{−1} into equation (4). This shows that Q is a bounded operator, hence X = UQ^∗ is bounded in the Hilbert-Schmidt norm. □
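Anticipating the Kronecker/Ho-Kalman theorem announced in the introduction, the whole identification procedure can be sketched on a finite matrix: factor every Hankel submatrix H_k = T[0:k, k:], take N_k = rank(H_k) as the local state dimension, and read the realization off the factors. The code below is our own finite-dimensional paraphrase (variable names and tolerance are ours; the paper's scheme works with diagonal expansions), and re-simulating all impulse responses recovers T exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
T = np.triu(rng.standard_normal((n, n)))      # a generic causal transfer matrix

tol = 1e-9
Cf, Of, dims = [], [], []                     # reachability/observability factors
for k in range(n + 1):
    H = T[:k, k:]                             # Hankel operator at instant k
    if min(H.shape) == 0:
        r = 0
        Cf.append(np.zeros((k, 0))); Of.append(np.zeros((0, n - k)))
    else:
        U, s, Vt = np.linalg.svd(H, full_matrices=False)
        r = int(np.sum(s > tol))              # N_k = rank(H_k)
        Cf.append(U[:, :r] * s[:r]); Of.append(Vt[:r, :])
    dims.append(r)

A, B, Co, D = [], [], [], []
for k in range(n):
    # shift relation: the reachability factor at k+1 stacks [Cf[k] A_k ; B_k]
    Ak = (np.linalg.pinv(Cf[k]) @ Cf[k + 1][:k, :]) if dims[k] else np.zeros((0, dims[k + 1]))
    A.append(Ak)
    B.append(Cf[k + 1][k:k + 1, :])           # 1 x N_{k+1}
    Co.append(Of[k][:, :1])                   # N_k x 1: output map at instant k
    D.append(T[k, k])

T_hat = np.zeros((n, n))                      # re-simulate all impulse responses
for i in range(n):
    x = np.zeros((1, dims[0]))
    for k in range(n):
        u = 1.0 if k == i else 0.0
        T_hat[i, k] = (x @ Co[k])[0, 0] + u * D[k]
        x = x @ A[k] + u * B[k]
```

The state dimensions N_k grow from zero and shrink back to zero at the borders of the matrix, which is the finite-size manifestation of incorporating upper triangular matrices into the time-varying framework.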

Definition 7. (Bounded State Equivalence) A realization { A₁, B₁, C₁, D₁ } is said to be boundedly state-equivalent to a given realization { A, B, C, D } if there exists a boundedly invertible state transformation operator R ∈ 𝒟(ℂ^N, ℂ^N) such that

    [ A₁  C₁ ; B₁  D₁ ] = [ R  0 ; 0  I ] [ A  C ; B  D ] [ R^{−(−1)}  0 ; 0  I ] .


To see the rationale behind this definition, start with the given realization

    XZ^{−1} = XA + UB
    Y = XC + UD

and map X to an equivalent state vector X₁ via X = X₁R, with R a boundedly invertible diagonal operator. Then

    X₁RZ^{−1} = X₁RA + UB ,  Y = X₁RC + UD
    ⇒ X₁Z^{−1} = X₁RAR^{−(−1)} + UBR^{−(−1)} ,  Y = X₁RC + UD
    ⇒ X₁Z^{−1} = X₁A₁ + UB₁ ,  Y = X₁C₁ + UD₁ .

Theorem 8. Given a bounded system transfer operator T ∈ 𝒰, and assume that the input state space ℋ of T is locally finite dimensional. Let N = dim(ℋ), and let F be the representation of a strong N-dimensional bounded basis of ℋ, such that Λ_F = P₀(FF^∗) ≫ 0 and ‖Λ_F‖ < ∞. Then T admits a state space realization

    A₁ = P₀(FF^∗Z^{−1})·Λ_F^{−(−1)}
    B₁ = P₀(F^∗Z^{−1})·Λ_F^{−(−1)}
    C₁ = P₀(FT)
    D₁ = P₀(T) ,

and ℓ_{A₁} ≤ 1, independent of the choice of the strong basis in ℋ. If ℓ_{A₁} < 1, then

    F^∗Λ_F^{−1} = B₁(I − ZA₁)^{−1}Z ,  T = D₁ + B₁(I − ZA₁)^{−1}ZC₁ ,   (5)

so that F is a bounded operator in ℒZ^{−1}, and X ∈ 𝒳₂(ℂ^{•1}, ℂ^N).

PROOF. The realization follows from theorem 5 in the same way as the realization in theorem 6 has been derived, but now with the projector onto ℋ written in terms of F: P_ℋ(·) = P₀(·F^∗)Λ_F^{−1}F. (Rest of proof omitted.) When F is written in terms of an orthonormal basis Q of ℋ,

    F = RQ ,  Λ_F = P₀(FF^∗) = RR^∗

(where R ∈ 𝒟(ℂ^N, ℂ^N) is a boundedly invertible positive factor of Λ_F), then the above realization on F can also be derived via a state transformation X → X₁R of the realization { A, B, C, D } on Q in theorem 6, e.g.,

    A₁ = RAR^{−(−1)} = R P₀(QQ^∗Z^{−1}) R^{−(−1)}
       = P₀(RQQ^∗R^∗Z^{−1}) R^{−∗(−1)}R^{−(−1)}
       = P₀(FF^∗Z^{−1}) Λ_F^{−(−1)} .

The other relations mentioned in the theorem follow from the application of this state transformation to the corresponding relations in theorem 6. Finally, the fact that ℓ_{A₁} is independent of the choice of F (or of R), as long as it is a strong basis, is derived from

    ‖[ZA₁]^n‖^{1/n} = ‖[ZRAR^{−(−1)}]^n‖^{1/n} = ‖[R^{(−1)}(ZA)R^{−(−1)}]^n‖^{1/n} = ‖R^{(−1)}[ZA]^nR^{−(−1)}‖^{1/n} .

For n → ∞ and R, R^{−1} both uniformly bounded, it follows that ℓ_{A₁} = ℓ_A. □

4.2. “Canonical Observer” State Space Realization

To obtain a realization in observer form, define the state X_k to be in the output state space ℋ₀: again with U_{(k)} = P_{ℒ₂Z^{−1}}(Z^{−k}U),

    X_k = P(U_{(k)}T) ∈ ℋ₀ .

Theorem 9. Given a bounded system transfer operator T ∈ 𝒰 with output state space ℋ₀, then with the above definition of X_k, we have the “operator state space” realization

    Y = UT  ⟺  { X_{k+1} = X_kA + U_{[k]}B ;  Y_{[k]} = X_kC + U_{[k]}D } ,

with

    [ A  C ; B  D ] = [ P(Z^{−1}·)|_{ℋ₀}   P₀(·)|_{ℋ₀} ;  P(Z^{−1}·T)|_{𝒟₂}   P₀(·T)|_{𝒟₂} ] .

PROOF.
1. X_{k+1} = P(U_{(k+1)}T) = P( P_{ℒ₂Z^{−1}}(Z^{−k−1}U)·T ) = P( [Z^{−1}P_{ℒ₂Z^{−1}}(Z^{−k}U) + Z^{−1}U_{[k]}]·T ) = P(Z^{−1}U_{(k)}T) + P(Z^{−1}U_{[k]}T) = P( Z^{−1}P(U_{(k)}T) ) + P(Z^{−1}U_{[k]}T) = P(Z^{−1}X_k) + P(Z^{−1}U_{[k]}T) .
2. Y_{[k]} = P₀(Z^{−k}UT) = P₀(U_{(k)}T) + P₀(U_{[k]}T) = P₀(X_k) + U_{[k]}P₀(T) . □

Theorem 10. Let T ∈ 𝒰 be a bounded system transfer operator, and assume that the output state space ℋ₀ of T is known and of finite local dimension. Let N = dim(ℋ₀), and let G represent an orthonormal N-dimensional basis of ℋ₀, such that P₀(GG^*) = I.

1. A state space realization of T is

    Y = UT  ⟺  { XZ^{-1} = XA + UB ,
                 Y = XC + UD } ,    (6)

where

    A = P₀( GG^*Z^{-1} ) ∈ 𝒟(ℂ^N, ℂ^N)
    B = P₀( TG^*Z^{-1} ) ∈ 𝒟(ℳ, ℂ^N)
    C = P₀( G ) ∈ 𝒟(ℂ^N, 𝒩)
    D = P₀( T ) ∈ 𝒟(ℳ, 𝒩)

(ℳ and 𝒩 denote the input and output space sequences of T).

2. The realization satisfies the following relations: ‖A‖ ≤ 1 and

    G = C + AZG ,    T = BZG + D ,    (7)
    AA^* + CC^* = I .    (8)

3. If ℓ_A = r(ZA) < 1, then

    G = (I − AZ)^{-1}C ,    T = D + B(I − ZA)^{-1}ZC ,    (9)

so that G is a bounded operator in 𝒰, and X ∈ 𝒟₂.

PROOF.

1. Expanding X into its diagonals, X = Σ_{k=−∞}^{∞} Z^k X_[k], we will derive the equivalent relation

    Y = UT  ⟺  { X^{(-1)}_[k+1] = X_[k]A + U_[k]B ,
                 Y_[k] = X_[k]C + U_[k]D } .    (10)

The proof follows closely that of theorem 6. For a given X_k in ℋ₀, put X_k = X_[k]G, for some X_[k] ∈ 𝒟₂. Then

    X_{k+1} = X_[k+1]G = P( Z^{-1}X_k ) + P( Z^{-1}U_[k]T )
     = P_{ℋ₀}( Z^{-1}X_k ) + P_{ℋ₀}( Z^{-1}U_[k]T )
     = P_{ℋ₀}( Z^{-1}X_[k]G ) + P_{ℋ₀}( Z^{-1}U_[k]T )
     = P₀( Z^{-1}X_[k]GG^* )G + P₀( Z^{-1}U_[k]TG^* )G
     = X^{(+1)}_[k] P₀( Z^{-1}GG^* )G + U^{(+1)}_[k] P₀( Z^{-1}TG^* )G .

Hence A = P₀( GG^*Z^{-1} ) and B = P₀( TG^*Z^{-1} ). In the same way,

    P₀( X_k ) = P₀( X_[k]G ) = X_[k]P₀( G ) ,

hence C = P₀(G).

2. ‖A‖ ≤ 1 follows as in theorem 6. To show that G = C + AZG, put Y⁺_(k) = P(U_(k)T). Then, on the one hand, Y⁺_(k) = X_k = X_[k]G; on the other hand, it can be shown (using (10)) that Y⁺_(k) = X_[k]C + X_[k]AZG. Hence G = C + AZG. T = BZG + D then follows from substituting this relation (in the form C = (I − AZ)G) into equation (6):

    XZ^{-1} = XA + UB  ⇒  X(I − AZ) = UBZ ,
    Y = XC + UD = X(I − AZ)G + UD = U(BZG + D) .

Finally, AA^* + CC^* = I follows by substituting the relation G = C + AZG in the expression Λ_G := P₀( GG^* ) = I.

3. X ∈ 𝒟₂ if ℓ_A < 1 follows directly once it has been established that X_[k] = P₀( Y⁺_(k)G^* ). The proof of this property is dual to that in theorem 6 and is omitted here. □
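For a finite time horizon, the recursion in (6) has a direct matrix analogue that is easy to check numerically. The sketch below is ours, not from the paper (the helper name `simulate` is hypothetical): it runs the row-vector recursion x_{k+1} = x_k A_k + u_k B_k, y_k = x_k C_k + u_k D_k, recovers the upper triangular matrix T row by row from unit impulses, and verifies causality together with the product formula T_{ij} = B_i A_{i+1} ⋯ A_{j-1} C_j for the entries.

```python
import numpy as np

def simulate(A, B, C, D, u):
    """Row-vector state recursion x_{k+1} = x_k A_k + u_k B_k,
    y_k = x_k C_k + u_k D_k over a finite horizon."""
    y = np.zeros(len(u))
    x = np.zeros((1, A[0].shape[0]))   # initial state (dimension N_0)
    for k in range(len(u)):
        y[k] = (x @ C[k]).item() + u[k] * D[k]
        x = x @ A[k] + u[k] * B[k]
    return y

# A random time-varying realization over n = 5 steps with state dimension N = 2:
rng = np.random.default_rng(0)
n, N = 5, 2
A = [rng.standard_normal((N, N)) for _ in range(n)]
B = [rng.standard_normal((1, N)) for _ in range(n)]
C = [rng.standard_normal((N, 1)) for _ in range(n)]
D = [rng.standard_normal() for _ in range(n)]

# Feeding in unit impulses recovers T row by row (y = uT, row convention):
T = np.array([simulate(A, B, C, D, np.eye(n)[i]) for i in range(n)])
assert np.allclose(T, np.triu(T))                        # causal: T upper triangular
assert np.isclose(T[1, 3], (B[1] @ A[2] @ C[3]).item())  # T_{ij} = B_i A_{i+1}...A_{j-1} C_j
```

Note the time-varying freedom: nothing in the check requires the state dimension to be constant; the lists A, B, C, D may carry different sizes at each instant as long as adjacent shapes are compatible.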

Theorem 11. Given a bounded system transfer operator T ∈ 𝒰, and assume that the output state space ℋ₀ of T is finite dimensional. Let N = dim(ℋ₀), and let F₀ represent a strong N-dimensional basis of ℋ₀, such that Λ_{F₀} = P₀( F₀F₀^* ) ≫ 0 and ‖Λ_{F₀}‖ < ∞. Then T admits a state space realization

    A₁ = P₀( F₀F₀^*Z^{-1} )·Λ_{F₀}^{-(-1)}
    B₁ = P₀( TF₀^*Z^{-1} )·Λ_{F₀}^{-(-1)}
    C₁ = P₀( F₀ )
    D₁ = P₀( T ) = T_[0]

with ‖A₁‖ ≤ 1, and ℓ_{A₁} independent of the choice of the basis as long as Λ_{F₀} ≫ 0. If ℓ_{A₁} < 1, then

    F₀ = (I − A₁Z)^{-1}C₁ ,    T = D₁ + B₁(I − ZA₁)^{-1}ZC₁ .

PROOF. The proof follows from theorem 10 and goes along the lines of the proof of theorem 8, with state transformation X = X₁R and orthonormal basis G such that F₀ = RG. □

Theorem 12. Given a bounded system transfer operator T ∈ 𝒰 with finite dimensional state spaces ℋ and ℋ₀. Let F be the representation of a strong basis in ℋ, let

    F₀ = P( FT ) ,

and suppose that F₀ represents a strong basis (Λ_{F₀} ≫ 0). Then the canonical realization based on F (theorem 8) is identical to the canonical realization based on F₀ (theorem 11). The Hankel operator H_T = P(·T) on ℒ₂Z^{-1} has a decomposition in terms of F, F₀ as

    H_T : ℒ₂Z^{-1} → 𝒰₂ :  Y = U H_T = P₀( UF^* )Λ_F^{-1}·F₀  ⟺  { X = P₀( UF^* )Λ_F^{-1} ,
                                                                   Y = X F₀ } .

PROOF. Let X be the state of the realization on F, and X̄ be that of F₀. We will prove that, when F₀ = P(FT), these states are the same. The proof hinges on the fact that P(U_(k)T) = P( P_ℋ(U_(k)) T ) by definition of ℋ. Let

    X_k = P_ℋ( U_(k) ) ,    X̄_k = P( U_(k)T ) ,    X_k = X_[k]F ,    X̄_k = X̄_[k]F₀

(according to the definitions leading to theorems 8 and 11). Then

    X̄_k = P( U_(k)T ) = P( P_ℋ(U_(k)) T ) = P( X_k T ) = P( X_[k]FT ) = X_[k]P( FT ) = X_[k]F₀ .

If F₀ is strong, then X̄_[k] = X_[k].

To prove that Y = UH_T = P₀( UF^* )Λ_F^{-1}F₀ (where this U ∈ ℒ₂Z^{-1} plays the role of any U_(k) of the expressions above), notice that the state in ℋ is P_ℋ(U) = P₀( UF^* )Λ_F^{-1}F = X_[k]F, and hence X = P₀( UF^* )Λ_F^{-1} for the controller realization, while Y = X̄_k = X̄_[k]F₀ for the observer realization. Since these states X_[k], X̄_[k] are the same when F₀ = P(FT), the result follows. □

The above decomposition of the Hankel operator proves to be essential in the actual computation of a realization of a given transfer operator T, as is shown in the next section.

5. FROM TRANSFER OPERATOR TO REALIZATION

In this section we consider how a realization can actually be computed from the data in a transfer operator T. The Hankel operator H_T will play an important role in the computations, just as it did in the related generalized Wiener-Hopf theory developed in [13] and [14].

Diagonal Expansion of the Hankel Operator

If the operator X ∈ 𝒰₂, then the diagonal expansion of X is X̃, defined by

    X = X_[0] + ZX_[1] + Z²X_[2] + ··· = X_[0] + X^{(-1)}_[1]Z + X^{(-2)}_[2]Z² + ···
    X̃ = [ X_[0]  X^{(-1)}_[1]  X^{(-2)}_[2]  ··· ] .

X̃ is an alternative representation of X, which we will still denote as belonging to 𝒰₂. If the operator X ∈ ℒ₂Z^{-1}, then the diagonal expansion of X is also designated by X̃, now defined by

    X = Z^{-1}X_[-1] + Z^{-2}X_[-2] + ··· = X^{(+1)}_[-1]Z^{-1} + X^{(+2)}_[-2]Z^{-2} + ···
    X̃ = [ X^{(+1)}_[-1]  X^{(+2)}_[-2]  ··· ] .

Here also, X̃ is an alternative representation of X.

These definitions keep entries of X that are on the same i-th row π_i^*X of X also on the same row π_i^*X̃ of X̃. This is seen directly from the second expansion of X in powers of Z, since a multiplication of a diagonal on the right by Z only shifts its columns. In addition, we have that P₀( XX^* ) = X̃X̃^*.
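For finite upper triangular matrices, the diagonal expansion is easy to emulate numerically. The sketch below is ours, not from the paper (the helper `diagonals` is a hypothetical name): it extracts the diagonals X_[k], keeping every entry of row i on row i, and checks the identity P₀(XX^*) = X̃X̃^* for a scalar example, where P₀ extracts the main diagonal.

```python
import numpy as np

def diagonals(X):
    # entry i of the k-th diagonal is X[i, i+k]; zero-padding keeps every
    # diagonal at full length so that row i of X stays "on row i"
    n = X.shape[0]
    return [np.array([X[i, i + k] if i + k < n else 0.0 for i in range(n)])
            for k in range(n)]

rng = np.random.default_rng(1)
X = np.triu(rng.standard_normal((4, 4)))

# P0(X X*) is the main diagonal of X X^T; the diagonal expansion X~ keeps the
# rows aligned, so X~ X~* reduces to an entrywise sum of squared diagonals:
assert np.allclose(np.diag(X @ X.T), sum(d**2 for d in diagonals(X)))
```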

Using diagonal expansions, we can associate an operator H̃_T to the Hankel operator H_T of a system T, in the sense that H̃_T maps the diagonal expansion of U to the diagonal expansion of Y = UH_T, and can be specified in terms of the entries of T:

Theorem 13. Let T ∈ 𝒰, and Y = UH_T with U ∈ ℒ₂Z^{-1}. The matrix representation of the operator H̃_T such that Ỹ = ŨH̃_T is given by

           [ T_[1]  T^{(-1)}_[2]  T^{(-2)}_[3]  ··· ]
    H̃_T = [ T_[2]  T^{(-1)}_[3]   ···              ]
           [ T_[3]   ···                            ]
           [  ⋮                                     ]

PROOF. The multiplication UT can be broken down into operations on the diagonals of U: Y = UT = Σ_k Z^k (U_[k]T). It follows that

    Y_[0] = [ U^{(+1)}_[-1]  U^{(+2)}_[-2]  ··· ] [ T_[1] ; T_[2] ; ⋮ ] ,
    Y_[1] = [ U^{(+2)}_[-1]  U^{(+3)}_[-2]  ··· ] [ T_[2] ; T_[3] ; ⋮ ] ,  etc.

Hence

    [ Y_[0]  Y^{(-1)}_[1]  Y^{(-2)}_[2]  ··· ] = [ U^{(+1)}_[-1]  U^{(+2)}_[-2]  U^{(+3)}_[-3]  ··· ] H̃_T

with H̃_T as claimed. □

A nice connection of T with H̃_T is obtained by constructing (infinite size) submatrices H_i (−∞ < i < ∞) of H̃_T by selecting the i-th entry of each diagonal in H̃_T. The H_i can be viewed as time-varying Hankel matrices. The entries of H_i are entries of T, e.g.,

          [ T_{-1,0}  T_{-1,1}  T_{-1,2}  ··· ]
    H₀ =  [ T_{-2,0}  T_{-2,1}   ···          ]
          [ T_{-3,0}   ···                    ]
          [  ⋮                                ]

Hence the rows of H_i are parts of the rows of T, and in fact the H_i are mirrored submatrices of T, as seen in figure 4. The mirroring effect is introduced by the definition of the diagonal expansion of operators in ℒ₂Z^{-1}.
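In the finite-matrix analogue (our illustration, not from the paper), H_i is simply the submatrix of T above and to the right of position (i, i), with its rows mirrored so that the most recent past input comes first:

```python
import numpy as np

n = 5
T = np.triu(np.arange(1.0, n * n + 1).reshape(n, n))  # small upper triangular example

i = 2
# H_i: outputs from time i onward caused by inputs strictly before time i;
# as a submatrix of T it is T[:i, i:] with the row order mirrored.
H_i = np.flipud(T[:i, i:])
assert H_i.shape == (i, n - i)
assert H_i[0, 0] == T[i - 1, i]   # first row of H_i = most recent past input
```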

Hankel Matrix Decompositions

We will need the following results from the previous section. Given a bounded system transfer operator T ∈ 𝒰, assume that the input/output state spaces ℋ and ℋ₀ of T are finite dimensional. Let F be the representation of a strong N-dimensional basis of ℋ, and F₀ the representation of a strong N-dimensional basis of ℋ₀. Then a state space realization of T based on F is

    A = P₀( FF^*Z^{-1} )·Λ_F^{-(-1)}
    B = P₀( FZ^{-1} )·Λ_F^{-(-1)}
    C = P₀( FT )
    D = P₀( T )    (11)

and (assuming ℓ_A < 1) satisfies Λ_F^{-1}F = [ B(I − ZA)^{-1}Z ]^*.

Figure 4. Hankel matrices are submatrices of T.

A second realization that is based on F follows from the above realization after applying a state transformation by Λ_F^{-1}:

    A = Λ_F^{-1} P₀( FF^*Z^{-1} )
    B = P₀( FZ^{-1} )
    C = Λ_F^{-1} P₀( FT )
    D = P₀( T )    (12)

and (assuming ℓ_A < 1) satisfies F = [ B(I − ZA)^{-1}Z ]^*.

A third state space realization of T is based on F₀:

    A = P₀( F₀F₀^*Z^{-1} )·Λ_{F₀}^{-(-1)}
    B = P₀( TF₀^*Z^{-1} )·Λ_{F₀}^{-(-1)}
    C = P₀( F₀ )
    D = P₀( T ) = T_[0]    (13)

and (assuming ℓ_A < 1) satisfies F₀ = (I − AZ)^{-1}C.

Realization (11) is equal to realization (13) if F₀ = P( FT ) is taken, and if this F₀ is a strong basis representation. With F₀ = P( FT ) we have a decomposition of H_T as (theorem 12)

    H_T = P₀( ·F^* )Λ_F^{-1}·F₀ .

Switching to diagonal expansions, this decomposition turns into a decomposition of the diagonal expansion of H_T and leads to an expression that is familiar in the time-invariant case:

Theorem 14. Let T ∈ 𝒰 be the transfer operator of a bounded system. If {A, B, C, D} is a state space realization of T, then H̃_T has a decomposition

    H̃_T = 𝒞·𝒪

where 𝒞 : ℒ₂Z^{-1} → 𝒟₂ and 𝒪 : 𝒟₂ → 𝒰₂ are defined as

          [ B^{(+1)}                  ]
    𝒞 :=  [ B^{(+2)}A^{(+1)}          ] ,    𝒪 := [ C  AC^{(-1)}  AA^{(-1)}C^{(-2)}  ··· ] .
          [ B^{(+3)}A^{(+2)}A^{(+1)}  ]
          [  ⋮                        ]

If the realization is given by equation (11), based on a strong basis representation F generating the input state space, then 𝒞^* is equal to the diagonal expansion of Λ_F^{-1}F.

If the realization is given by equation (12), again based on a strong basis representation F generating the input state space, then F̃ = 𝒞^*.

If the realization is given by (13), based on a strong basis representation F₀ generating the output state space, then F̃₀ = 𝒪.

PROOF. From T = D + B(I − ZA)^{-1}ZC follows

    T_[1] = B^{(+1)}C                T^{(-1)}_[2] = B^{(+1)}AC^{(-1)}              ···
    T_[2] = B^{(+2)}A^{(+1)}C        T^{(-1)}_[3] = B^{(+2)}A^{(+1)}AC^{(-1)}
    T_[3] = B^{(+3)}A^{(+2)}A^{(+1)}C
     ⋮

Application of theorem 13 shows that H̃_T has the claimed decomposition. (With slightly more effort, the same can be shown in case ℓ_A = 1.)

For ℓ_A < 1, the second part of the theorem can be inferred from the relations Λ_F^{-1}F = [ B(I − ZA)^{-1}Z ]^*, F = [ B(I − ZA)^{-1}Z ]^*, and F₀ = (I − AZ)^{-1}C, respectively. The theorem is formally verified by using the decomposition of the Hankel operator (theorem 12) and looking at the relation between the ordinary and the diagonally expanded Hankel operator. For U ∈ ℒ₂Z^{-1}, realizations (11) and (13) follow from

    Y = UH_T  ⟺  Ỹ = ŨH̃_T  ⟺  { X = P₀( UF^* )Λ_F^{-1} = ŨF̃^*Λ_F^{-1} = Ũ𝒞 ,
                                  Y = XF₀  ⟺  Ỹ = XF̃₀ = X𝒪 }

showing that Λ_F^{-1}F̃ = 𝒞^* and F̃₀ = 𝒪. Realization (12) is slightly different due to a state transformation by Λ_F^{-1}:

    Y = UH_T  ⟺  Ỹ = ŨH̃_T  ⟺  { X = P₀( UF^* ) = ŨF̃^* = Ũ𝒞 ,
                                  Y = XΛ_F^{-1}F₀  ⟺  Ỹ = XΛ_F^{-1}F̃₀ = X𝒪 }

showing that, for this realization, F̃ = 𝒞^*. □
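The decomposition H_i = 𝒞_i·𝒪_i can be checked concretely on a finite horizon. The sketch below is ours, not from the paper (`markov`, `Ctrl`, `Obs` are names of our choosing): it builds the controllability factor at time i (rows B_{i-1}, B_{i-2}A_{i-1}, …) and the observability factor (columns C_i, A_iC_{i+1}, …) from a random realization, and verifies that their product reproduces the entries T_{i-1-r, i+c} of the mirrored Hankel matrix.

```python
import numpy as np

rng = np.random.default_rng(3)
n, N, i = 5, 2, 2
A = [rng.standard_normal((N, N)) for _ in range(n)]
B = [rng.standard_normal((1, N)) for _ in range(n)]
C = [rng.standard_normal((N, 1)) for _ in range(n)]

def markov(p, q):
    # entry T_{pq} = B_p A_{p+1} ... A_{q-1} C_q  (p < q)
    m = B[p]
    for t in range(p + 1, q):
        m = m @ A[t]
    return (m @ C[q]).item()

rows = []                            # controllability factor at time i:
for p in range(i - 1, -1, -1):       # most recent past input first
    m = B[p]
    for t in range(p + 1, i):
        m = m @ A[t]
    rows.append(m)
Ctrl = np.vstack(rows)

cols = []                            # observability factor at time i:
for j in range(i, n):                # columns C_i, A_i C_{i+1}, ...
    m = C[j]
    for t in range(j - 1, i - 1, -1):
        m = A[t] @ m
    cols.append(m)
Obs = np.hstack(cols)

H_i = Ctrl @ Obs                     # the mirrored Hankel matrix at time i
for r in range(i):
    for c in range(n - i):
        assert np.isclose(H_i[r, c], markov(i - 1 - r, i + c))
```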

𝒞 is the controllability matrix and 𝒪 is the observability matrix in the present context. A realization {A, B, C, D} is called a controllable realization if 𝒞^*𝒞 > 0, and uniformly controllable if 𝒞^*𝒞 ≫ 0. In view of theorem 14, it follows straightforwardly that the second realization on a strong basis F in ℋ, as given by equation (12), is uniformly controllable by construction: Λ_F = P₀( FF^* ) = 𝒞^*𝒞, and Λ_F ≫ 0 yields 𝒞^*𝒞 ≫ 0. If Λ_F^{-1} ≫ 0 (i.e., the state transformation Λ_F^{-1} is boundedly invertible), then realization (11) is also uniformly controllable, since for this realization it holds that Λ_F^{-1} = 𝒞^*𝒞.

Along the same lines, a realization {A, B, C, D} is called observable if 𝒪𝒪^* > 0, and uniformly observable if 𝒪𝒪^* ≫ 0. A realization based on a strong basis F₀ generating ℋ₀ via equation (13) is uniformly observable by construction, since Λ_{F₀} = P₀( F₀F₀^* ) = 𝒪𝒪^* and Λ_{F₀} ≫ 0. A realization is called minimal if it is both controllable and observable.

The rank of the Hankel operator H̃_T is defined to be the sequence N such that H_i has rank N_i, for i = ···, −1, 0, 1, ···. Since ℋ₀ = (ℒ₂Z^{-1})H_T, it follows immediately from the relations between an operator and its diagonal expansion that rank(H̃_T) = dim(ℋ₀). Since rank(H̃_T) = rank(H̃_T^*) and rank(H̃_T^*) = dim(ℋ), it follows that dim(ℋ₀) = dim(ℋ).
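The local Hankel ranks N_i directly bound the number of states a realization needs at each instant. As a small illustration of ours (not from the paper), a banded upper triangular matrix with w nonzero off-diagonals has every H_i of rank at most w, so it admits realizations with at most w states at every time step, regardless of the matrix size:

```python
import numpy as np

n, w = 8, 2
rng = np.random.default_rng(4)
M = rng.standard_normal((n, n))
T = np.triu(M) - np.triu(M, w + 1)   # upper triangular, diagonals 0..w nonzero

# Each time-varying Hankel matrix of a banded T has rank at most w:
ranks = [np.linalg.matrix_rank(np.flipud(T[:i, i:])) for i in range(1, n)]
assert max(ranks) <= w
```

The same reasoning extends to matrices that are only approximately banded: small singular values of the H_i signal that a low-order realization approximates T well.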

Theorem 15. Let T be a bounded linear causal time-varying system transfer operator in 𝒰.

1. If H̃_T has local finite rank N, then there exist minimal state space realizations of order N. This is a Kronecker-type result.

2. These realizations can be obtained from any decomposition of H̃_T into H̃_T = 𝒞·𝒪, where 𝒞 has column rank N and 𝒪 has row rank N (i.e., with 0 < 𝒞^*𝒞 < ∞, 0 < 𝒪𝒪^* < ∞), whenever at least one of these products is taken uniformly positive, as follows.

— If 𝒞^*𝒞 ≫ 0, then take F ∈ ℒ₂Z^{-1} such that its diagonal expansion F̃ = 𝒞^*. This F is a strong basis representation generating the input state space ℋ of T. A realization of T is given by equation (12) and is uniformly controllable by construction.

— If 𝒪𝒪^* ≫ 0, then take F₀ ∈ 𝒰₂ such that its diagonal expansion F̃₀ = 𝒪. This F₀ is a strong basis representation generating the output state space ℋ₀ of T. A realization of T is given by (13) and is uniformly observable by construction.

3. Existence of a realization that is uniformly controllable and uniformly observable is a system property: it depends only on T. If it exists, then a realization based on F is also uniformly observable, and a realization based on F₀ is also uniformly controllable.

PROOF.

1. This follows from the construction in step 2.

2. The decomposition can be constructed via decompositions of the H_i, which is a standard linear algebra problem (typically solved using SVDs). The choice of F and F₀ is motivated by theorem 14 and the discussion following it.

3. The condition for the existence of a realization that is both uniformly controllable and uniformly observable is that, given a strong basis F, F₀ = P( FT ) should be a strong basis in the output state space: Λ_{F₀} ≫ 0. Because of the definition of the input and output state spaces, we have at least that Λ_{F₀} > 0, but it need not necessarily be uniformly positive. If it is not, then no boundedly invertible state transformation R applied to the realization on F (making it a realization based on RF) will make it uniformly positive: Λ_{F₀′} := RΛ_{F₀}R^* is uniformly positive only if Λ_{F₀} is. Since all uniformly controllable realizations are connected via boundedly invertible state transformations, and these realizations are the only ones that are uniformly controllable, the conclusion is that there either exists a realization that is uniformly observable in addition, or it does not exist, depending on T. □
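For a finite upper triangular matrix, the construction of step 2 of the proof can be carried out literally with SVDs of the mirrored Hankel submatrices. The following sketch is ours, not from the paper (`tv_realization`, `tv_transfer`, `_pinv` are hypothetical names): each H_i is factored as 𝒞_i·𝒪_i via a truncated SVD, A_i links the state bases at times i and i+1 through the shift-invariance 𝒪_i(:, 2:) = A_i𝒪_{i+1}, B_i is the row of 𝒞_{i+1} belonging to input time i, and C_i is the column of 𝒪_i belonging to output time i. The reconstruction check at the bottom verifies that the realization reproduces T exactly.

```python
import numpy as np

def hankel_matrices(T):
    # H_i as mirrored submatrices of the upper triangular T (cf. figure 4):
    # rows = inputs before time i (most recent first), cols = outputs from i on
    n = T.shape[0]
    return [np.flipud(T[:i, i:]) for i in range(n + 1)]

def _pinv(M):
    return np.linalg.pinv(M) if M.size else np.zeros((M.shape[1], M.shape[0]))

def tv_realization(T, tol=1e-8):
    # Kronecker/Ho-Kalman-type construction: factor each H_i = Co_i Ob_i by a
    # truncated SVD, then read off {A_i, B_i, C_i, D_i} from the factors.
    n = T.shape[0]
    Co, Ob = [], []
    for Hi in hankel_matrices(T):
        if Hi.size == 0:
            Co.append(np.zeros((Hi.shape[0], 0)))
            Ob.append(np.zeros((0, Hi.shape[1])))
            continue
        U, s, Vh = np.linalg.svd(Hi, full_matrices=False)
        r = int(np.sum(s > tol))
        sq = np.sqrt(s[:r])
        Co.append(U[:, :r] * sq)          # controllability factor
        Ob.append(sq[:, None] * Vh[:r])   # observability factor
    A = [Ob[i][:, 1:] @ _pinv(Ob[i + 1]) for i in range(n)]   # basis link i -> i+1
    B = [Co[i + 1][:1, :] for i in range(n)]                  # row for input time i
    C = [Ob[i][:, :1] for i in range(n)]                      # column for output time i
    D = [T[i, i] for i in range(n)]
    return A, B, C, D

def tv_transfer(A, B, C, D):
    # rebuild T via T_{ii} = D_i and T_{ij} = B_i A_{i+1} ... A_{j-1} C_j
    n = len(D)
    T = np.zeros((n, n))
    for i in range(n):
        T[i, i] = D[i]
        x = B[i]
        for j in range(i + 1, n):
            T[i, j] = (x @ C[j]).item()
            x = x @ A[j]
    return T

rng = np.random.default_rng(0)
T0 = np.triu(rng.standard_normal((6, 6)))
A, B, C, D = tv_realization(T0)
assert np.allclose(tv_transfer(A, B, C, D), T0)
# the local state dimensions are the local Hankel ranks, at most min(i, n-i):
assert all(A[i].shape[0] <= min(i, 6 - i) for i in range(6))
```

Note the time-varying freedom exploited here: the SVDs at different time instants pick independent state bases, and A_i is exactly the (pseudo-inverse) change of basis between them; truncating small singular values in `tv_realization` would yield the low-complexity approximations mentioned in the introduction.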

ACKNOWLEDGEMENT

This research was supported by the commission of the EC under the ESPRIT BRA program 3280 (NANA). It was performed partly at the Department of Theoretical Mathematics, Weizmann Institute of Science, Rehovot, Israel, where the first author was supported by the Karyn Kupcinet Summer School program and the second author gratefully acknowledges a grant from the Meyerhoff foundation.

REFERENCES

[1] D. Alpay, P. Dewilde, and H. Dym, “Lossless Inverse Scattering and Reproducing Kernels for Upper Triangular Operators,” Operator Theory: Advances and Applications, 47:61–135, 1990.

[2] D. Alpay and P. Dewilde, “Time-Varying Signal Approximation and Estimation,” in Proc. MTNS-89: Signal Processing, Scattering and Operator Theory, and Numerical Methods. Birkhäuser, 1990.

[3] D. Alpay, P. Dewilde, and H. Dym, “Lossless Inverse Scattering, Reproducing Kernels and Nevanlinna-Pick Interpolation for Upper Triangular Operators,” (to appear).

[4] P. Dewilde, “A Course on the Algebraic Schur and Nevanlinna-Pick Interpolation Problems,” in Ed. F. Deprettere and A.J. van der Veen, editors, Algorithms and Parallel VLSI Architectures. Elsevier, 1991.

[5] L.A. Zadeh, “Time-Varying Networks, I,” Proc. IRE, 49:1488–1503, October 1961.

[6] A. Feintuch and R. Saeks, System Theory: A Hilbert Space Approach. Academic Press, 1982.

[7] E.W. Kamen and K.M. Hafez, “A Transfer-Function Approach to Linear Time-Varying Discrete-Time Systems,” SIAM J. Control and Optimization, 23(4):550–565, 1985.

[8] E.W. Kamen, “The Poles and Zeros of a Linear Time-Varying System,” Lin. Alg. Applications, 98:263–289, 1988.

[9] B.D.O. Anderson and J.B. Moore, “Detectability and Stabilizability of Time-Varying Discrete-Time Linear Systems,” SIAM J. Control and Optimization, 19(1):20–32, 1981.

[10] A.J. van der Veen and P.M. Dewilde, “Orthogonal Embedding Theory for Contractive Time-Varying Systems,” in H. Kimura, editor, Proc. Int. Symp. on MTNS. MITA Press, Japan, 1991.

[11] L. Kronecker, “Algebraische Reduction der Schaaren Bilinearer Formen,” S.B. Akad. Berlin, pp. 663–776, 1890.

[12] B.L. Ho and R.E. Kalman, “Effective Construction of Linear, State-Variable Models from Input/Output Functions,” Regelungstechnik, 14:545–548, 1966.

[13] H. Nelis, Sparse Approximations of Inverse Matrices. PhD thesis, Delft Univ. Techn., The Netherlands, 1989.

[14] H. Woerdeman, Matrix and Operator Extensions. PhD thesis, Dept. Math. Comp. Sci., Free University, Amsterdam, The Netherlands, 1989.
