THE ANALOG COMPUTER SOLUTION OF DIFFERENTIAL EQUATIONS OF THE FORM L(D)y(t) = M(D)u(t)

by

L.G. Birta

R.E. Gagné, Head, Analysis Laboratory
D.C. MacPhail, Director


SUMMARY

The analog (or digital) computer solution of a linear differential equation of the form L(D)y(t) = M(D)u(t) is invariably obtained by solving an equivalent set of linear first-order differential equations. It is shown in this Report that if the polynomials L(D) and M(D) have a common factor, this equivalent set of first-order equations must be chosen in a particular manner. The initial conditions associated with the equivalent first-order system are intimately related to the given initial conditions on y and its derivatives. The fundamental equation embodying this interrelation is developed.


SUMMARY
ILLUSTRATIONS
APPENDICES

1.0  INTRODUCTION
2.0  PRELIMINARY CONSIDERATIONS
3.0  CANONICAL DECOMPOSITIONS OF EQUATION (1)
4.0  THE PROBLEM OF INITIAL CONDITIONS
5.0  THE INITIAL CONDITION EQUATION
6.0  ON THE OBSERVABILITY OF DECOMPOSITIONS OF EQUATION (1)
7.0  DISCONTINUITY AT t = 0
8.0  CONCLUSIONS
9.0  REFERENCES

ILLUSTRATIONS

Figure 1a  Analog Computer Realization of Equation (5)
Figure 1b  Analog Computer Realization of Equation (7)
Figure 2   Analog Computer Realization of the First (Observable) Canonical Decomposition of Equation (1)
Figure 3   Analog Computer Realization of the Second (Controllable) Canonical Decomposition of Equation (1)


APPENDICES

Appendix A  Proof of Theorem 1
Appendix B  Expansion of h^T(sI-F)^{-1}g + b_n
Appendix C  Proof of Theorem 2
Appendix D  The Impulse Response of Equation (1)


THE ANALOG COMPUTER SOLUTION OF DIFFERENTIAL EQUATIONS OF THE FORM L(D)y(t) = M(D)u(t)

1.0 INTRODUCTION

The purpose of this Report is to investigate the analog computer solution of linear, constant-coefficient, ordinary differential equations of the form

    y^(n)(t) + a_{n-1} y^(n-1)(t) + ... + a_1 ẏ(t) + a_0 y(t)
        = b_m u^(m)(t) + b_{m-1} u^(m-1)(t) + ... + b_1 u̇(t) + b_0 u(t)                (1)

where b_m ≠ 0 and 1 ≤ m ≤ n.*

To simplify notation, it is convenient to let D denote d/dt and to write (1) as

    L(D)y(t) = M(D)u(t)

where

    L(D) = D^n + a_{n-1} D^{n-1} + ... + a_1 D + a_0
    M(D) = b_m D^m + b_{m-1} D^{m-1} + ... + b_1 D + b_0

Two standard approaches to this problem are given in most textbooks on analog computation - with the implication that they are both equally applicable. It is shown in this Report that each of these standard approaches is a representative member of a broad class of distinct but equivalent approaches. Furthermore, it is demonstrated that, in general, the approaches in one of these classes are all unsuitable when the polynomials L(D) and M(D) have a common factor.

Whenever equation (1) is to be solved on the analog computer, there always exists the non-trivial problem of relating the given initial conditions on y and its first n-1 derivatives to the initial voltages on the analog computer integrators. A straightforward technique is developed for solving this problem.

The question of initial conditions can become ambiguous when the forcing function u(t), or some of its derivatives, are undefined at the initial time (which we take throughout to be t = 0). It is argued that under these circumstances the 'initial' conditions must be interpreted as existing at t = 0- and not at t = 0+ as one might suspect from a conventional Laplace transform treatment of the problem. As a direct application of these arguments, a simple method for generating the 'impulse response' of (1) on the analog computer is presented.

* The results reported herein are closely related to those of Marzollo (Ref. 1) and are, in fact, an extension of the comments made in Reference 2.


It should, furthermore, be appreciated that although the discussion is oriented toward the analog computer solution of (1), the concepts presented and the results obtained are equally relevant to the digital computer solution of this equation. This follows because the solution approach, in both situations, is conceptually identical.

2.0 PRELIMINARY CONSIDERATIONS

Consider a system of first-order differential equations of the form*

    ẋ(t) = Px(t) + qu(t)                                            (2a)

    y(t) = r^T x(t) + vu(t)                                          (2b)

where x is an n-vector, u and y are scalars, and P, q, r, and v are constant (real) matrices of appropriate dimensions. Let E_n denote the set of real n-vectors and let C[0, ∞) denote the set of real piecewise-continuous functions on the interval** [0, ∞).

Corresponding to any given*** (a, u) ∈ E_n × C[0, ∞), (2a) has a unique solution, φ, which can be written as (see Ref. 3, pp. 74-78)

    φ(t; a, u) = e^{Pt} a + ∫₀ᵗ e^{P(t-τ)} q u(τ) dτ

In particular, φ(t; a, u) has the following properties:

(i)  φ(0; a, u) = a

(ii) φ(t; a, u) satisfies (2a) wherever its derivative exists (namely, wherever u(t) exists).

The corresponding y(t), specified by (2b); viz.

    y(t) = r^T φ(t; a, u) + vu(t)                                    (3)

is the solution of (2) associated with the given (a, u). Notice that the uniqueness of φ(t; a, u) assures the uniqueness of the associated solution of (2) as given in (3).

*   The notation Q^T denotes the transpose of the matrix Q.

**  The interval [0, ∞) is identical with the set of real numbers, r, such that 0 ≤ r < ∞.

*** The symbol ∈ is to be read as 'belongs to', or 'is a member of'. Given two sets A and B, A × B denotes the set of all ordered pairs (a, b) such that a ∈ A and b ∈ B.

Because of the intermediate variable, x, appearing in (2), the relationship between a given (a, u) ∈ E_n × C[0, ∞) and the associated solution, y, of (2) is obscured. This relationship is, however, of fundamental importance for our purposes. We now undertake to remedy this difficulty.

THEOREM 1

With respect to equation (2), let Δ(s) = det(sI-P) (i.e., Δ(s) is the determinant of the square array (sI-P)), and let

    η(s) = Δ(s)[r^T (sI-P)^{-1} q + v]

Whenever η(s) is not identically zero, write η(s) = ξ_m̄ s^m̄ + ξ_{m̄-1} s^{m̄-1} + ... + ξ_1 s + ξ_0 with ξ_m̄ ≠ 0.

(A) If

    (i)   v ≠ 0, then m̄ = n and ξ_m̄ = v;

    (ii)  v = 0 and there exists a least integer β in the set {0, 1, ..., n-1} such that r^T P^β q ≠ 0, then m̄ = n-β-1 and ξ_m̄ = r^T P^β q;

    (iii) v = 0 and r^T P^k q = 0 for k = 0, 1, ..., n-1, then η(s) ≡ 0.

(B) Let (a, u) be any given element of E_n × C[0, ∞) and let y be the associated (unique) solution of (2). If

    (i)  η(s) ≢ 0, then y^(n)(t) exists for each t ∈ [0, ∞) for which u^(m̄)(t) is defined and, furthermore,

        Δ(D)y(t) = η(D)u(t)

    (ii) η(s) ≡ 0, then y^(n)(t) exists for each t ∈ [0, ∞) and, furthermore,

        Δ(D)y(t) ≡ 0

Theorem 1 (whose proof appears in Appendix A) provides the motivation for the following

DEFINITION

Equation (2) is said to be a decomposition of equation (1) if

    Δ(s) = L(s)   and   η(s) = M(s)

Notice that if (2) is a decomposition of (1) and if m = n, then it must be that v = b_n.

It is of interest now to examine the ramifications of the Definition and of Theorem 1. Suppose that (2) is a decomposition of (1) and that ũ is some distinguished element of C[0, ∞). Let y_a be the (unique) solution of (2) associated with (a, ũ), where a ∈ E_n. From the Definition and Theorem 1 it follows that

    L(D)y_a(t) = M(D)ũ(t)

for each 0 ≤ t < ∞ where M(D)ũ(t) is defined. Consequently, y_a is the solution to (1) which corresponds to the 'forcing function' ũ and the initial condition vector α = [y_a(0), y_a^(1)(0), ..., y_a^(n-1)(0)]^T. It should be noted, however, that the relationship between the two n-tuples α and a remains to be established. This problem is considered in Section 5.

In the light of these remarks, it is clear that the problem of solving (1) can be transformed into the problem of solving (2), with the matrices P, q, r, and v suitably chosen to render (2) a decomposition of (1)*. The salient advantage of this transformation is that (2) has a form that is particularly amenable to analog computer solution. In particular, note that each first-order differential equation of (2a) can be associated with a single analog computer integrator. Hence a system of n integrators is sufficient to generate the solutions of (2a). Equation (2b), on the other hand, implies a single summer that sums (with suitable weighting factors) the integrator output voltages and the circuit input voltage.
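To make the correspondence with an integrator-per-state circuit concrete, the following minimal sketch (Python with numpy/scipy, added here purely as an illustration) integrates a system of the form (2) numerically; the particular matrices P, q, r, v and the forcing function are arbitrary placeholders and are not taken from the Report.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder (hypothetical) coefficient matrices for a second-order system of form (2).
P = np.array([[0.0, 1.0],
              [-4.0, -1.0]])
q = np.array([0.0, 1.0])
r = np.array([1.0, 0.0])
v = 0.0

u = lambda t: np.sin(t)        # placeholder forcing function
a = np.array([1.0, 0.0])       # initial state x(0) = a

# Each component of x plays the role of one integrator output: xdot = Px + qu.
def xdot(t, x):
    return P @ x + q * u(t)

sol = solve_ivp(xdot, (0.0, 10.0), a, dense_output=True, max_step=0.01)

# The output summer forms y(t) = r^T x(t) + v u(t).
t = np.linspace(0.0, 10.0, 201)
y = r @ sol.sol(t) + v * u(t)
print(y[:5])
```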

The question of obtaining a decomposition of equation (1) remains to be considered. In this regard, however, it is important to note that, for a given equation of the form of (1), there exist many decompositions (an infinite number, in fact). We illustrate this via the following example

    ÿ(t) + 3ẏ(t) + 2y(t) = u̇(t) + u(t)                              (4)

or

    L_1(D)y(t) = M_1(D)u(t)

where

    L_1(D) = D² + 3D + 2
    M_1(D) = D + 1

* The validity of this transformation is subject to further conditions on the coefficient matrices of (2). These arise from initial condition considerations and are dealt with in the sequel.

Consider the system of equations

    ẋ(t) = Px(t) + qu(t)                                            (5a)

    y(t) = r^T x(t)                                                  (5b)

where

    P = [  0    1 ]        q = [ 0 ]        r = [ 1 ]
        [ -2   -3 ]            [ 1 ]            [ 1 ]                (6)

Since

    det(sI-P) = s² + 3s + 2 = L_1(s)

and

    η(s) = Δ(s) r^T (sI-P)^{-1} q = s + 1 = M_1(s)

it follows (by definition) that (5) is a decomposition of (4).
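The two defining conditions of the Definition can also be checked mechanically. The short sketch below (Python with sympy, an illustration only) forms Δ(s) = det(sI-P) and η(s) = Δ(s)[r^T(sI-P)^{-1}q + v] for the matrices of (6) and confirms that they reduce to L_1(s) and M_1(s).

```python
import sympy as sp

s = sp.symbols('s')

# Matrices of decomposition (5)/(6) of equation (4).
P = sp.Matrix([[0, 1], [-2, -3]])
q = sp.Matrix([0, 1])
r = sp.Matrix([1, 1])
v = 0

I2 = sp.eye(2)
delta = (s * I2 - P).det()                                        # Δ(s) = det(sI - P)
eta = sp.simplify(delta * ((r.T * (s * I2 - P).inv() * q)[0, 0] + v))

print(sp.expand(delta))   # s**2 + 3*s + 2  = L_1(s)
print(sp.expand(eta))     # s + 1           = M_1(s)
```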

A second decomposition of (4), labelled (7a) and (7b), can be constructed with different coefficient matrices; again det(sI-P) = L_1(s) and η(s) = M_1(s), and thus (7) is also a decomposition of (4).

Figures 1a and 1b indicate analog computer realizations of equations (5) and (7), respectively.

To demonstrate the existence of other decompositions of (4), let T be any non-singular (2 × 2) constant matrix and let the 2-vector w(t) be defined as

    w(t) = Tx(t)

Rewriting (5) in terms of w(t), we get

    ẇ(t) = TPT⁻¹ w(t) + Tq u(t)                                      (8a)

    y(t) = r^T T⁻¹ w(t)                                              (8b)

where P, q, and r are as specified in (6). Since

    det(sI - TPT⁻¹) = det(T(sI-P)T⁻¹) = det(sI-P) = L_1(s)

and

    (r^T T⁻¹)(sI - TPT⁻¹)⁻¹(Tq) = r^T (sI-P)⁻¹ q

it follows that (8) is also a decomposition of (4). The broad choice of specific values available for T gives rise, via (8), to a correspondingly wide variety of decompositions of (4).
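A quick numerical experiment illustrates this invariance: for a randomly chosen non-singular T, the transformed system (8) has the same characteristic polynomial and the same rational function r^T(sI-P)^{-1}q as (5). The numpy sketch below is included only as an illustrative check.

```python
import numpy as np

rng = np.random.default_rng(0)

# Matrices of decomposition (5)/(6) of equation (4).
P = np.array([[0.0, 1.0], [-2.0, -3.0]])
q = np.array([[0.0], [1.0]])
r = np.array([[1.0], [1.0]])

T = rng.normal(size=(2, 2))                 # almost surely non-singular
Ti = np.linalg.inv(T)

# Coefficient matrices of the transformed system (8).
P8, q8, r8 = T @ P @ Ti, T @ q, Ti.T @ r

# Same characteristic polynomial ...
print(np.allclose(np.poly(P), np.poly(P8)))

# ... and the same rational function r^T (sI-P)^{-1} q at sample values of s.
for s in (1.0, 2.5, -0.3):
    g5 = (r.T @ np.linalg.inv(s * np.eye(2) - P) @ q).item()
    g8 = (r8.T @ np.linalg.inv(s * np.eye(2) - P8) @ q8).item()
    print(np.isclose(g5, g8))
```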

The system of equations (2) is said to be observable if the n × n matrix

    [r ; P^T r ; (P^T)² r ; ... ; (P^T)^{n-1} r]

is non-singular (i.e., has rank n). Equation (2) is said to be controllable if the n × n matrix

    [q ; Pq ; P²q ; ... ; P^{n-1} q]

is non-singular.

The notions of observability and controllability, in the context of (2), have important control-theoretic implications (see Ref. 4 or 5). It is, however, sufficient for our purposes to regard these concepts solely as manifestations of the matrix criteria stated above.

If (2) is both observable (controllable) and a decomposition of (1), then we shall say that (2) is an observable (controllable) decomposition of (1).
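These matrix criteria translate directly into rank tests. The sketch below (numpy, illustrative only, with placeholder data rather than any system from the Report) assembles the two n × n matrices and reports whether the corresponding system (2) is observable and controllable.

```python
import numpy as np

def obs_matrix(P, r):
    """Columns r, P^T r, (P^T)^2 r, ..., (P^T)^{n-1} r."""
    n = P.shape[0]
    return np.column_stack([np.linalg.matrix_power(P.T, k) @ r for k in range(n)])

def ctrl_matrix(P, q):
    """Columns q, Pq, P^2 q, ..., P^{n-1} q."""
    n = P.shape[0]
    return np.column_stack([np.linalg.matrix_power(P, k) @ q for k in range(n)])

# Placeholder system of form (2) (not taken from the Report).
P = np.array([[0.0, 1.0], [-6.0, -5.0]])
q = np.array([1.0, 0.0])
r = np.array([1.0, 2.0])

n = P.shape[0]
print(np.linalg.matrix_rank(obs_matrix(P, r)) == n)    # observable?
print(np.linalg.matrix_rank(ctrl_matrix(P, q)) == n)   # controllable?
```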

3.0 CANONICAL DECOMPOSITIONS OF EQUATION (1)

Consider the system of equations*

    ẋ(t) = Fx(t) + gu(t)                                            (9a)

    y(t) = h^T x(t) + b_n u(t)                                       (9b)

where

        [ 0   0   0   ...   0   0   -a_0     ]        [ b_0 - a_0 b_n         ]
        [ 1   0   0   ...   0   0   -a_1     ]        [ b_1 - a_1 b_n         ]
    F = [ 0   1   0   ...   0   0   -a_2     ]    g = [ b_2 - a_2 b_n         ]
        [ .   .   .         .   .     .      ]        [        .              ]
        [ 0   0   0   ...   1   0   -a_{n-2} ]        [ b_{n-2} - a_{n-2} b_n ]
        [ 0   0   0   ...   0   1   -a_{n-1} ]        [ b_{n-1} - a_{n-1} b_n ]

    h^T = [ 0   0   0   ...   0   0   1 ]

* If the highest derivative of u(t) on the right-hand side of (1) is m < n, then b_n, b_{n-1}, ..., b_{m+1} are to be taken as zero throughout.

In Appendix B it is shown that det(sI-F) = L(s) and that L(s)[h^T(sI-F)^{-1}g + b_n] = M(s). Hence, (9) is, by definition, a decomposition of (1). Furthermore, by explicitly evaluating the vectors h^T F, h^T F², ..., h^T F^{n-1}, it can be shown that the matrix

    [h ; F^T h ; (F^T)² h ; ... ; (F^T)^{n-1} h]

is non-singular. Thus (9) is an observable decomposition of (1).
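The matrices F, g, and h of (9) can be written down mechanically from the coefficients of L(D) and M(D). The sketch below (numpy, illustrative only) does this for an arbitrary coefficient list and then verifies numerically, at a few sample values of s, that det(sI-F) = L(s) and L(s)[h^T(sI-F)^{-1}g + b_n] = M(s); it is an illustrative construction following the definitions above, applied to the coefficients of example (4).

```python
import numpy as np

def first_canonical(a, b):
    """Matrices F, g, h, b_n of the first canonical decomposition (9).

    a = [a_0, ..., a_{n-1}]   (coefficients of L(D); the leading coefficient is 1)
    b = [b_0, ..., b_m]       (coefficients of M(D)); b_n, ..., b_{m+1} are taken as zero.
    """
    n = len(a)
    a = np.asarray(a, dtype=float)
    b_full = np.zeros(n + 1)
    b_full[:len(b)] = b
    F = np.zeros((n, n))
    F[1:, :-1] = np.eye(n - 1)            # ones immediately below the diagonal
    F[:, -1] = -a                         # last column: -a_0, ..., -a_{n-1}
    g = b_full[:n] - a * b_full[n]        # g_k = b_{k-1} - a_{k-1} b_n
    h = np.zeros(n)
    h[-1] = 1.0
    return F, g, h, b_full[n]

# Example (4): L_1(D) = D^2 + 3D + 2, M_1(D) = D + 1.
F, g, h, bn = first_canonical([2.0, 3.0], [1.0, 1.0])

for s in (1.0, -0.5, 4.0):
    L = s**2 + 3*s + 2
    M = s + 1
    print(np.isclose(np.linalg.det(s*np.eye(2) - F), L),
          np.isclose(L * (h @ np.linalg.solve(s*np.eye(2) - F, g) + bn), M))
```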

A system of equations closely related to (9) is

    ẋ(t) = F̂x(t) + ĝu(t)                                           (10a)

    y(t) = ĥ^T x(t) + b_n u(t)                                       (10b)

where

    F̂ = F^T,   ĝ = h,   ĥ = g

To verify that (10) is a decomposition of (1), note first that

    det(sI-F̂) = det(sI-F^T) = det(sI-F) = L(s)

(using the results of Appendix B). Further, since ĥ^T (sI-F̂)^{-1} ĝ is a scalar,

    ĥ^T (sI-F̂)^{-1} ĝ = [ĥ^T (sI-F̂)^{-1} ĝ]^T = h^T (sI-F)^{-1} g

Adding b_n to both sides of the above equality, and again incorporating the results of Appendix B, we have

    L(s)[ĥ^T (sI-F̂)^{-1} ĝ + b_n] = L(s)[h^T (sI-F)^{-1} g + b_n] = M(s)

which completes the verification.

We note also that

    [ĝ ; F̂ĝ ; ... ; F̂^{n-1}ĝ] = [h ; F^T h ; ... ; (F^T)^{n-1} h]

As indicated earlier, the matrix on the right-hand side is non-singular; thus [ĝ ; F̂ĝ ; ... ; F̂^{n-1}ĝ] is also non-singular. Equation (10) is, therefore, a controllable decomposition of (1).

In the sequel, (9) and (10) will be referred to, respectively, as the first and second canonical decompositions of equation (1).

The two methods normally suggested in analog computer textbooks for solving (1) correspond to the first and second canonical decompositions described above. The second canonical decomposition is frequently referred to as the 'dummy-variable method' (see, for example, Ref. 6, p. 52). Analog computer realizations of equations (9) and (10) are given in Figures 2 and 3, respectively.

Theorem 2, which appears below, demonstrates that every observable decomposition of (1) is, in a sense, 'equivalent' to the first canonical decomposition. An analogous 'equivalence' can also be established between the set of controllable decompositions of (1) and the second canonical decomposition.

THEOREM 2*

Let

    ẇ(t) = Pw(t) + qu(t)                                            (11a)

    y(t) = r^T w(t) + b_n u(t)                                       (11b)

be any observable decomposition of (1). There exists a non-singular transformation matrix, V, which transforms (11) into the first canonical decomposition.

In other words, Theorem 2 states that if we express (11) in terms of the n-vector x(t) = Vw(t), the resulting system of equations, namely

    ẋ(t) = VPV⁻¹ x(t) + Vq u(t)

    y(t) = r^T V⁻¹ x(t) + b_n u(t)

is precisely the first canonical decomposition; i.e., VPV⁻¹ = F, Vq = g, and r^T V⁻¹ = h^T.

4.0 THE PROBLEM OF INITIAL CONDITIONS

Our intent here is to bring to light, via a specific example, certain salient aspects of the problem under consideration. In particular, we show that even though a given analog computer circuit is a realization of a decomposition of (1), it is not necessarily adequate for generating all solutions of (1). More specifically, we demonstrate the inadequacy (with respect to a particular example) of the second canonical decomposition - or equivalently, a computer realization of the second canonical decomposition.

The example we consider is specified by equation (4) of Section 2. For definiteness, we assume it is desired to obtain that solution of (4) that corresponds to y(0) = 1, ẏ(0) = 1, and u(t) = t. Suppose, further, that the circuit of Figure 1a is selected. Notice that this circuit is a realization of the second canonical decomposition of equation (4), which is explicitly given in equation (5).

Initial conditions on the analog computer can be introduced only via initial voltages on the circuit integrators. From Figure 1a we observe that the outputs of the two integrators are labelled x_1(t) and -x_2(t). But the problem initial conditions are specified (at t = 0) on y and ẏ! Thus, before this circuit can be used to generate the desired solution, the prescribed initial conditions, y(0) = 1, ẏ(0) = 1, must be translated into appropriate initial conditions on the variables x_1 and x_2.

With this objective in mind, differentiate equation (5b) to get

    ẏ(t) = ẋ_1(t) + ẋ_2(t)                                          (12)

Using (5a) to eliminate the derivatives on the right-hand side of (12), we get

    ẏ(t) = -2x_1(t) - 2x_2(t) + u(t)                                 (13)

Equations (5b) and (13) can, at t = 0, be expressed in vector-matrix format as

    [ y(0) ]   [  1    1 ] [ x_1(0) ]   [ 0 ]
    [ ẏ(0) ] = [ -2   -2 ] [ x_2(0) ] + [ 1 ] u(0)

Incorporating the given conditions that u(t) = t, y(0) = 1, ẏ(0) = 1, we obtain

    [ 1 ]   [  1    1 ] [ x_1(0) ]
    [ 1 ] = [ -2   -2 ] [ x_2(0) ]                                   (14)

Equation (14) must have a solution in order that the generated solution be the desired solution. However, by applying certain fundamental results from Linear Algebra (see Ref. 7, p. 21) it can be demonstrated that (14) has no solution; i.e., there exist no values for x_1(0) and x_2(0) that satisfy (14). Consequently it must be concluded that the circuit of Figure 1a is unsuitable for generating the desired solution of equation (4).
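The absence of a solution to (14) can be exhibited numerically: the coefficient matrix has rank 1 while the augmented matrix has rank 2. The following minimal numpy sketch (illustrative only) performs this standard rank test.

```python
import numpy as np

# Equation (14):  [1, 1]^T = [[1, 1], [-2, -2]] [x_1(0), x_2(0)]^T
A = np.array([[1.0, 1.0], [-2.0, -2.0]])
rhs = np.array([1.0, 1.0])

print(np.linalg.matrix_rank(A))                               # 1
print(np.linalg.matrix_rank(np.column_stack([A, rhs])))       # 2  -> (14) has no solution

# Even the least-squares 'best effort' leaves a non-zero residual:
x = np.linalg.lstsq(A, rhs, rcond=None)[0]
print(A @ x - rhs)                                            # non-zero residual vector
```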

The problems raised by this example are considered in detail in the following Section.

5.0 THE INITIAL CONDITION EQUATION

Assume that the desired solution of (1) is associated with the initial conditions y(0) = α_0, ẏ(0) = α_1, ..., y^(n-1)(0) = α_{n-1}, where α_0, α_1, ..., α_{n-1} are prescribed but arbitrary. For convenience, let the n-vector α be defined as α = [α_0, α_1, ..., α_{n-1}]^T. The 'forcing function', u(t) in (1), is assumed to be such that u(0), u̇(0), ..., u^(m-1)(0) exist; we let β denote the m-vector [u(0), u̇(0), ..., u^(m-1)(0)]^T. Initially we assume m < n to avoid undue notational complexity.

Let A_c be an analog computer circuit that is a realization of the equations

    ẋ(t) = Px(t) + qu(t)                                            (15a)

    y(t) = r^T x(t)                                                  (15b)

where (15) is, in turn, a decomposition of (1). Necessarily, therefore, we have r^T P^k q = 0 for k = 0, 1, ..., n-m-2 and r^T P^{n-m-1} q ≠ 0*. Furthermore, for any initial condition** x(0) = a the solution, y_a(t), of (15) (and hence the response of A_c) is such that L(D)y_a(t) = M(D)u(t). Thus a necessary (and sufficient) condition for y_a(t) to be the desired solution of (1) is that y_a(0) = α_0, ẏ_a(0) = α_1, ..., y_a^(n-1)(0) = α_{n-1}.

To investigate the implications of this necessary and sufficient condition we proceed as follows. Let 0 ≤ t < ∞ be any point at which u(t), u̇(t), ..., u^(m-1)(t) are defined. By successively differentiating (15b) and substituting (15a), the following sequence of equations is obtained

    y_a(t)         = r^T x(t)
    ẏ_a(t)         = r^T P x(t)
    ...
    y_a^(n-m-1)(t) = r^T P^{n-m-1} x(t)
    y_a^(n-m)(t)   = r^T P^{n-m} x(t) + r^T P^{n-m-1} q u(t)
    y_a^(n-m+1)(t) = r^T P^{n-m+1} x(t) + r^T P^{n-m} q u(t) + r^T P^{n-m-1} q u̇(t)
    ...
    y_a^(n-1)(t)   = r^T P^{n-1} x(t) + r^T P^{n-2} q u(t) + ... + r^T P^{n-m-1} q u^(m-1)(t)

*  This result is implied by Theorem 1 (see App. A).

** An essential property of the realization A_c is that the integrator outputs of A_c have a one-to-one correspondence with the elements of the n-vector x(t). The vector x(0) therefore prescribes the initial voltages on the n integrators of A_c.

The evaluation of y_a and its first n-1 derivatives at t = 0 yields

    y_a^(k)(0) = r^T P^k x(0)                                               0 ≤ k < n-m
                                                                                             (16)
    y_a^(k)(0) = r^T P^k x(0) + Σ_{i=0}^{k-n+m} r^T P^{k-i-1} q u^(i)(0)    n-m ≤ k ≤ n-1

By imposing the condition that y_a^(k)(0) = α_k, and incorporating the definitions of the vectors α, β, and a, the above system of equations can be expressed more succinctly as

    α = Γa + R_m β                                                   (17)

where Γ is the n × n matrix

    Γ = [ r^T ; r^T P ; r^T P² ; ... ; r^T P^{n-1} ]

(the rows of Γ being r^T, r^T P, ..., r^T P^{n-1}), and R_m is the n × m matrix

          [ 0                 0                 ...   0               ]
          [ .                 .                       .               ]
          [ 0                 0                 ...   0               ]
    R_m = [ r^T P^{n-m-1} q   0                 ...   0               ]
          [ r^T P^{n-m} q     r^T P^{n-m-1} q   ...   0               ]
          [ .                 .                       .               ]
          [ r^T P^{n-2} q     r^T P^{n-3} q     ...   r^T P^{n-m-1} q ]

Note that R_m is an n × m matrix, and that the entries in the first n-m rows of R_m are all zero. Further, since r^T P^{n-m-1} q ≠ 0, it follows that the rank of R_m is m.

In order that the response, y_a(t), of A_c be the desired solution of (1), the initial condition vector x(0) = a must be chosen so that (17) is satisfied. It remains, therefore, to investigate the existence of a solution to (17). Henceforth, we refer to (17) as the initial condition equation.

We rewrite (17) as

    Γa = α - R_m β                                                   (18)

which has the form of a system of linear algebraic equations. Notice, further, that the right-hand side of (18) is arbitrary by virtue of the arbitrariness of α. From basic results in the theory of linear algebra (Ref. 7), it can be concluded that (18) has a solution if, and only if, the coefficient matrix Γ is non-singular. But the non-singularity of Γ is precisely the condition of observability of (15). Thus, the initial condition equation has a solution (for arbitrary α) if, and only if, (15) is an observable decomposition of (1). If a solution exists, then it is unique and is explicitly given as

    a = Γ⁻¹(α - R_m β)                                               (19)

It is clear from this result that the only decompositions of (1) that are of practical utility, with respect to the analog computer solution of (1), are those that are observable. In Section 6 we shall indicate conditions on the polynomials L(D) and M(D) that ensure that any decomposition of (1) is observable.

It is of interest to note that if (15) is an observable decomposition of (1), then the non-zero entries of R_m are each expressible in terms of the characteristic matrices of the first canonical decomposition. This result can be established in the following way. Let V be the n × n non-singular matrix whose existence is assured by Theorem 2. Then

    P = V⁻¹FV,   q = V⁻¹g,   r = V^T h

The non-zero entries of R_m have the form r^T P^i q, where n-m-1 ≤ i ≤ n-2. Since P^i = (V⁻¹FV)^i = V⁻¹F^i V, we have

    r^T P^i q = h^T V V⁻¹ F^i V V⁻¹ g = h^T F^i g

In effect, this demonstrates that the R_m matrix of the initial condition equation is independent of the particular observable decomposition that is being used to generate the solutions of (1).

From equation (19) it can be observed that knowledge of u^(i)(0), 0 ≤ i ≤ m-1, is unnecessary for the evaluation of a if (and only if) the ith column of the matrix Γ⁻¹R_m is zero.

As noted earlier, the rank of R_m is m. Since Γ⁻¹ is non-singular, the rank of Γ⁻¹R_m must be that of R_m, namely m. But Γ⁻¹R_m has exactly m columns; consequently the existence of a zero column would be incompatible with the established fact that rank Γ⁻¹R_m = m. Thus Γ⁻¹R_m cannot have a zero column, and so the integrator initial conditions, i.e. the n-vector a, cannot be evaluated unless each of u(0), u̇(0), ..., u^(m-1)(0) is specified.
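For a concrete observable decomposition, equation (19) can be evaluated directly. The sketch below (numpy, illustrative only) assembles Γ and R_m from a triple (P, q, r) and solves Γa = α - R_m β; it is applied here to the first canonical decomposition of example (4), constructed per equation (9), with the initial conditions of Section 4 (y(0) = ẏ(0) = 1 and u(t) = t, so that β = [0]).

```python
import numpy as np

def gamma_matrix(P, r):
    """Rows r^T, r^T P, ..., r^T P^{n-1}  (the matrix Γ of eq. (17))."""
    n = P.shape[0]
    return np.vstack([r @ np.linalg.matrix_power(P, k) for k in range(n)])

def R_matrix(P, q, r, m):
    """The n x m matrix R_m of eq. (17): entry (k, i) is r^T P^{k-i-1} q for i <= k-(n-m)."""
    n = P.shape[0]
    R = np.zeros((n, m))
    for k in range(n - m, n):
        for i in range(k - (n - m) + 1):
            R[k, i] = r @ np.linalg.matrix_power(P, k - i - 1) @ q
    return R

# First canonical (observable) decomposition of example (4).
P = np.array([[0.0, -2.0], [1.0, -3.0]])
q = np.array([1.0, 1.0])
r = np.array([0.0, 1.0])

alpha = np.array([1.0, 1.0])      # y(0) = 1, y'(0) = 1
beta = np.array([0.0])            # u(t) = t  =>  u(0) = 0   (m = 1)

G = gamma_matrix(P, r)
R = R_matrix(P, q, r, m=1)
a = np.linalg.solve(G, alpha - R @ beta)   # eq. (19): integrator initial conditions x(0)
print(a)                                   # -> [4. 1.]
```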

Throughout the above discussion it has been assumed that m < n in equation (1). The initial condition equation corresponding to the case where m = n can be formulated in an entirely analogous manner. In particular, to accommodate the case where m = n, the decomposition of (15) must be replaced by

    ẋ(t) = Px(t) + qu(t)                                            (20a)

    y(t) = r^T x(t) + b_n u(t)                                       (20b)

By paralleling the differentiation process used earlier, the following system of equations is obtained

    α = Γa + R_n β                                                   (21)

where Γ is as defined earlier, and

          [ b_n            0              0        ...   0      0   ]
          [ r^T q          b_n            0        ...   0      0   ]
    R_n = [ r^T P q        r^T q          b_n      ...   0      0   ]
          [ r^T P² q       r^T P q        r^T q    ...   0      0   ]
          [ .              .              .              .      .   ]
          [ r^T P^{n-2} q  r^T P^{n-3} q  r^T P^{n-4} q  ...  r^T q  b_n ]

Notice that R_n is an n × n matrix that has rank n; i.e., R_n is non-singular. This, of course, is consistent with the properties that one might have anticipated on the basis of the earlier analysis.

Observe also that equations (21) and (17) are entirely equivalent from an 'existence-of-solution' point of view. That is to say, (21) has a solution (for arbitrary α) if, and only if, Γ is non-singular, in which case the solution is unique and is given as

    a = Γ⁻¹(α - R_n β)                                               (22)

On the basis of the above results, it is clear that the motivation for distinguishing the case where m = n in equation (1) arises wholly from notational, rather than mathematical, peculiarities.

Equation (19) (or (22) in the event that m = n) represents a general result applicable to any (observable) decomposition of (1). For the first canonical decomposition - which is always observable - one can obtain the result of (19) (or (22)) by working directly from the fundamental specification given in equation (9). In particular, note that (9) corresponds to the following system of equations*

    ẋ_1(t) = -a_0 x_n(t) + (b_0 - a_0 b_n) u(t)
    ẋ_j(t) = x_{j-1}(t) - a_{j-1} x_n(t) + (b_{j-1} - a_{j-1} b_n) u(t)        j = 2, 3, ..., n
    y(t)   = x_n(t) + b_n u(t)

For any t, 0 ≤ t < ∞, for which u(t), u̇(t), ..., u^(m-1)(t) exist, these equations can be rewritten as

    x_n(t)     = y(t) - b_n u(t)
    x_{n-1}(t) = ẏ(t) + a_{n-1} y(t) - b_n u̇(t) - b_{n-1} u(t)
    x_{n-2}(t) = ÿ(t) + a_{n-1} ẏ(t) + a_{n-2} y(t) - b_n ü(t) - b_{n-1} u̇(t) - b_{n-2} u(t)
    ...

* The reader should bear in mind our convention that b_n, b_{n-1}, ..., b_{m+1} are to be taken as zero if m < n in equation (1).

The general equation in this sequence is (note that a_n = 1)

    x_{n-k}(t) = Σ_{i=0}^{k} (a_{n-i} y^(k-i)(t) - b_{n-i} u^(k-i)(t))                (23)

for k = 0, 1, 2, ..., (n-1).

When evaluated at t = 0, equation (23) represents an explicit solution of the initial condition equation with respect to the first canonical decomposition of (1).
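Evaluated at t = 0, equation (23) gives the integrator initial conditions of the first canonical decomposition without forming Γ or R_m explicitly. The short sketch below (illustrative only) transcribes (23) and applies it to example (4); its result can be compared with the Γ⁻¹(α - R_m β) computation sketched in connection with equation (19).

```python
import numpy as np

def canonical_initial_conditions(a, b, y0, u0):
    """Eq. (23) at t = 0:  x_{n-k}(0) = sum_{i=0..k} (a_{n-i} y^{(k-i)}(0) - b_{n-i} u^{(k-i)}(0)).

    a  = [a_0, ..., a_{n-1}] (a_n = 1 is appended internally)
    b  = [b_0, ..., b_m]     (zero-padded up to index n)
    y0 = [y(0), y'(0), ..., y^{(n-1)}(0)]
    u0 = [u(0), u'(0), ..., u^{(n-1)}(0)]
    """
    n = len(a)
    a_ext = np.append(np.asarray(a, dtype=float), 1.0)     # a_n = 1
    b_ext = np.zeros(n + 1)
    b_ext[:len(b)] = b
    x0 = np.zeros(n)
    for k in range(n):
        x0[n - 1 - k] = sum(a_ext[n - i] * y0[k - i] - b_ext[n - i] * u0[k - i]
                            for i in range(k + 1))
    return x0

# Example (4): a = [2, 3], b = [1, 1]; y(0) = 1, y'(0) = 1; u(t) = t gives u(0) = 0, u'(0) = 1
# (the u'(0) entry is multiplied by b_2 = 0 and therefore does not influence the result).
print(canonical_initial_conditions([2.0, 3.0], [1.0, 1.0], y0=[1.0, 1.0], u0=[0.0, 1.0]))
# -> [4. 1.]
```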

6.0 ON THE OBSERVABILITY OF DECOMPOSITIONS OF EQUATION (1)

One might (correctly) suspect that the polynomials L(D) and M(D), which completely characterize equation (1), should, in some manner, influence the observability property of its various decompositions. Our purpose here is to state two results that have a bearing on this dependency. These results are presented in the format of a two-part theorem, and are a direct consequence of the work of Butman and Sivan in Reference 8. Indeed, Theorem 3 below might, more properly, be regarded as a corollary of the two theorems presented in the above Reference.

THEOREM 3(a)

If the polynomials L(D) and M(D) have no common factors, then any decomposition of (1) is both observable and controllable.


THEOREM 3(b)

If the polynomials L(D) and M(D) have a common factor, then every controllable decomposition of (1) is non-observable.

Theorem 3(b) embodies the essential reason for the inadequacy of the so-called 'dummy-variable' method for obtaining analog computer solutions of equation (1). As noted earlier, the dummy-variable method is a realization of the second canonical decomposition of (1), which is, in turn, always controllable. Hence, if L(D) and M(D) have a common factor, the initial condition equation associated with the dummy-variable method will not, in general, have a solution because Γ is singular.

In the light of these remarks, it is of some interest to re-examine the example treated in Section 4. In particular, observe that the polynomials L_1(D) and M_1(D) (see eq. (4)) have the common factor (D+1).
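Both halves of Theorem 3 can be observed numerically on this example: L_1(s) and M_1(s) share the root s = -1, and the controllable decomposition (5) fails the observability rank test of Section 2. The numpy sketch below is an illustrative check only.

```python
import numpy as np

# L_1(s) = s^2 + 3s + 2 and M_1(s) = s + 1 share the root s = -1 ...
print(np.roots([1, 3, 2]))     # contains -1 (and -2)
print(np.roots([1, 1]))        # -1

# ... so, by Theorem 3(b), the controllable decomposition (5) must be non-observable.
P = np.array([[0.0, 1.0], [-2.0, -3.0]])
r = np.array([1.0, 1.0])
obs = np.column_stack([r, P.T @ r])            # [r, P^T r]
print(np.linalg.matrix_rank(obs))              # 1 < 2: non-observable
```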

7.0 DISCONTINUITY AT t = 0

In considering the question of initial conditions it was assumed, in the preceding development, that the forcing function, u(t), and its first (m-1) derivatives were well-defined at t = 0. This assumption was motivated by the fact that the determination of the required integrator initial conditions needed explicit knowledge of the m-vector β = β(0) = [u(0), u̇(0), ..., u^(m-1)(0)]^T (see eq. (19)). In many problems of practical interest, however, the forcing function may be such that β(0) does not exist. Under such circumstances we are, apparently, unable to proceed!

To be more specific, consider the situation where m = 2 in equation (1) and the forcing function, u(t), of interest is

    u(t) = e^{-t}    for 0 ≤ t
    u(t) = 0         for t < 0

For this u(t), the derivative u̇(t) does not exist* at t = 0, and consequently β(0) is not defined.

Generally speaking, the forcing functions of practical interest for which β(0) is undefined have the property that both β(0+) and β(0-) are, nevertheless, well-defined. The present impasse can therefore be circumvented by evaluating equation (16) at either t = 0- or t = 0+ to get either**

* The given function does have left- and right-hand derivatives at t = 0, but these are not equal.

    ȳ_a(0-) = Γ x(0-) + R_m β(0-)                                    (24a)

or

    ȳ_a(0+) = Γ x(0+) + R_m β(0+)                                    (24b)

respectively, where ȳ_a(t) denotes the n-vector [y_a(t), ẏ_a(t), ..., y_a^(n-1)(t)]^T.

The decision as to whether 0- or 0+ represents the correct alternative is conditioned by the answer to the following question: Are the given 'initial' conditions on y and its derivatives intended to be values assigned at t = 0- or t = 0+? The following (heuristic) argument provides the answer.

The fact that β(0) is not defined generally implies that β(0-) ≠ β(0+). This in turn implies that ȳ_a(0-) ≠ ȳ_a(0+). In other words, the forcing function in question causes an instantaneous change in the ȳ_a vector across t = 0. But initial conditions are intended to characterize the system (in this case eq. (15)) before the excitation has had an opportunity to influence it. Consequently, one is obliged to use the argument 0- rather than 0+ for purposes of evaluating initial conditions; that is, equation (24a) rather than (24b).

In summary, we have argued that when the forcing function of interest is such that β(0) is not defined, one uses the relation

    x(0-) = Γ⁻¹[α - R_m β(0-)]

to establish the necessary integrator initial voltages.

Often one is interested in obtaining the solution of (1) corresponding to u(t) = δ(t) (the unit impulse function) and the initial conditions y^(k)(0) = 0 for k = 0, 1, 2, ..., (n-1). This response is normally called the impulse response of (1). Let A_c be the analog computer realization of some observable decomposition of (1). It is well known that the unforced response of A_c, starting from some judicious choice of integrator initial conditions, is precisely the desired impulse response of (1). A simple expression for these appropriate initial conditions is developed in Appendix D, using the conclusions of this Section.

8.0 CONCLUSIONS

The analog (or digital) computer solution of equation (1) is invariably obtained by solving an equivalent system of equations of the form

    ẋ(t) = Px(t) + qu(t)
                                                                    (25)
    y(t) = r^T x(t) + vu(t)

In general, this system of equations must have the following two properties: det(sI-P) = L(s) and L(s)[r^T(sI-P)⁻¹q + v] = M(s). In terms of the Definition given in Section 2, this is tantamount to the requirement that (25) be a decomposition of (1).

It has been noted that two general classes of decompositions of (1) can be distinguished; namely, those that are observable and those that are controllable. Each of these classes has an archetypical member whose coefficient matrices can be written by inspection from the coefficients of L(D) and M(D) (see eq. (9) and (10)). These two particular decompositions are referred to as the first and second canonical decompositions, the former being observable and the latter, controllable. The second canonical decomposition corresponds to the so-called 'dummy-variable' method for solving (1). It is shown (Theorem 2) that any observable decomposition of (1) can, via a linear transformation, be transformed to the first canonical decomposition.

Suppose that (25) is a decomposition of (1) and let A_c be a corresponding analog computer realization of (25). To obtain any particular solution of (1), the given initial conditions on y and its first (n-1) derivatives must be translated into an appropriate set of integrator initial voltages. In Section 5 the initial condition equation, (17), is derived. This represents the fundamental constraint relationship among the vectors ȳ(0) = [y(0), ẏ(0), ..., y^(n-1)(0)]^T, β(0) = [u(0), u̇(0), ..., u^(m-1)(0)]^T, and x(0) of (25). It is shown that, in general, there exists a solution for x(0) (which specifies the necessary initial condition voltages of A_c) if, and only if, (25) is an observable decomposition of (1).

In Section 6 it is noted that, if the polynomials L(D) and M(D) in (1) have no common factors, then every decomposition of (1) is both observable and controllable. Furthermore, when L(D) and M(D) have a common factor, the class of observable decompositions and the class of controllable decompositions are disjoint.

9.0 REFERENCES

1.  Marzollo, A. Nonobservable or Noncontrollable Analog Schemes for the Solution of a Special Class of Differential Equations.

2.  Birta, L.G. Comment on 'Nonobservable or Noncontrollable Analog Schemes for the Solution of a Special Class of Differential Equations'. IEEE Trans. on Electronic Computers, Vol. EC-16, Aug. 1967, p. 516.

3.  Coddington, E.A., Levinson, N. Theory of Ordinary Differential Equations. McGraw-Hill, New York, N.Y., 1955.

4.  Zadeh, L.A., Desoer, C.A. Linear System Theory. McGraw-Hill, New York, N.Y., 1963.

5.  Athans, M., Falb, P.L. Optimal Control. McGraw-Hill, New York, N.Y., 1966.

6.  Levine, L. Methods for Solving Engineering Problems Using Analog Computers. McGraw-Hill, New York, N.Y., 1964.

7.  Hildebrand, F.B. Methods of Applied Mathematics. Prentice-Hall, Englewood Cliffs, N.J., 1961.

8.  Butman, S., Sivan, R. On Cancellations, Controllability and Observability. IEEE Trans. on Automatic Control, Vol. AC-9, July 1964, pp. 317-318.

9.  Gantmacher, F.R. The Theory of Matrices. Vol. 1, Chelsea Publishing Co., New York, N.Y., 1959.

10. Tuel, W.G. On the Transformation to (Phase-Variable) Canonical Form. IEEE Trans. on Automatic Control, Vol. AC-11, July 1966, p. 607.

11. Bellman, R. Introduction to Matrix Analysis. McGraw-Hill, New York, N.Y., 1960.

[Figure 1a: Analog computer realization of equation (5)]

[Figure 1b: Analog computer realization of equation (7)]

[Figure 2: Analog computer realization of the first (observable) canonical decomposition of equation (1)]

[Figure 3: Analog computer realization of the second (controllable) canonical decomposition of equation (1)]

APPENDIX A

PROOF OF THEOREM 1

Let Δ(s) = det(sI-P) = s^n + a_{n-1} s^{n-1} + a_{n-2} s^{n-2} + ... + a_1 s + a_0.

The following two results from matrix theory have direct relevance:

(a)    P^n + a_{n-1} P^{n-1} + a_{n-2} P^{n-2} + ... + a_1 P + a_0 I = 0

(b)    (sI-P)^{-1} = B(s)/Δ(s),   where   B(s) = Σ_{k=0}^{n-1} B_{n-k-1} s^{n-k-1}

       and   B_{n-k-1} = Σ_{i=0}^{k} a_{n-i} P^{k-i}   (with a_n = 1),   k = 0, 1, ..., n-1.

Item (a) above is a statement of the Cayley-Hamilton Theorem. A detailed exposition on both these results can be found in Reference 9, pp. 83 to 88.

PART (A)

(i) Suppose v ≠ 0. Then, using (b),

    η(s) = Δ(s)[r^T (sI-P)^{-1} q + v] = r^T B(s) q + v Δ(s)

Since r^T B(s) q is at most an (n-1)th-degree polynomial, it follows that r^T B(s) q + v Δ(s) must be of the form

    v s^n + ξ_{n-1} s^{n-1} + ... + ξ_1 s + ξ_0

which is the desired result.

(ii) Suppose v = 0, and that there exists a least integer β in the set {0, 1, ..., n-1} such that r^T P^β q ≠ 0 while r^T P^k q = 0 for 0 ≤ k < β. Under these circumstances

    r^T B_{n-k-1} q = Σ_{i=0}^{k} a_{n-i} r^T P^{k-i} q = 0        for k < β

and

    r^T B_{n-β-1} q = r^T P^β q ≠ 0

Consequently, r^T B(s) q is an (n-β-1)th-degree polynomial and its leading coefficient is r^T P^β q.

(iii) Finally, if v = 0 and r^T P^k q = 0 for k = 0, 1, ..., n-1, it follows that

    r^T B_{n-k-1} q = 0

for each k. Hence, r^T B(s) q ≡ 0, which is the desired result.

PART (B)

The proof is exhibited only for the case where v ≠ 0 (hence m̄ = n). The proof for the other alternatives requires only trivial modification.

Let t be any point in the interval [0, ∞) where u(t), u̇(t), ..., u^(n)(t) exist. From (2b) we have

    y(t) = r^T φ(t; a, u) + v u(t)                                   (A.1)

where

    φ̇(t; a, u) = P φ(t; a, u) + q u(t)                               (A.2)

Since the right-hand side of (A.1) is defined, the existence of y(t) is assured. Furthermore, the right-hand side of (A.1) is differentiable. The differentiation of (A.1) and the substitution of (A.2) yields

    ẏ(t) = r^T P φ(t; a, u) + r^T q u(t) + v u̇(t)                    (A.3)

The hypothesis ensures the existence and differentiability of the right-hand side of (A.3). On differentiating (A.3) and substituting (A.2), we obtain

    y^(2)(t) = r^T P² φ(t; a, u) + r^T P q u(t) + r^T q u̇(t) + v u^(2)(t)        (A.4)

From the hypothesis and (A.4), the existence (and differentiability) of y^(2)(t) is assured.

The continued repetition of this procedure yields the system of equations

    y^(k)(t) = r^T P^k φ(t; a, u) + Σ_{i=0}^{k-1} r^T P^i q u^(k-i-1)(t) + v u^(k)(t)        (A.5)

with k = 0, 1, 2, ..., n. Since, by hypothesis, the right-hand side of (A.5) is defined for each k, 0 ≤ k ≤ n, we have the desired result that y(t), ẏ(t), ..., y^(n)(t) exist.

Multiply the kth equation, 0 ≤ k ≤ n, in (A.5) by a_k (notice that a_n = 1) and add the resultant (n+1) equations to get

    Σ_{k=0}^{n} a_k y^(k)(t) = r^T [Σ_{k=0}^{n} a_k P^k] φ(t; a, u) + Σ_{k=1}^{n} a_k Σ_{i=0}^{k-1} r^T P^i q u^(k-i-1)(t) + v Σ_{k=0}^{n} a_k u^(k)(t)

It follows from the Cayley-Hamilton Theorem that the coefficient of φ(t; a, u) in the above equation is zero. Further, the double summation of the second term can be re-arranged so that we get

    Σ_{k=0}^{n} a_k y^(k)(t) = Σ_{k=1}^{n} [Σ_{j=0}^{k-1} a_{n-j} r^T P^{k-j-1} q] u^(n-k)(t) + v Σ_{k=0}^{n} a_k u^(k)(t)

Referring now to the preliminary item (b), we note that

    Σ_{j=0}^{k-1} a_{n-j} P^{k-j-1} = B_{n-k}

Consequently we have

    Σ_{k=0}^{n} a_k y^(k)(t) = Σ_{k=0}^{n-1} r^T B_k q u^(k)(t) + v Σ_{k=0}^{n} a_k u^(k)(t)

or, equivalently,

    Δ(D)y(t) = [r^T B(D) q + v Δ(D)] u(t)                            (A.6)

The coefficient of u(t) on the right-hand side of (A.6) is, however, η(D). Thus (A.6) represents the desired result that Δ(D)y(t) = η(D)u(t).

APPENDIX B

EXPANSION OF h^T(sI-F)^{-1}g + b_n

By our earlier definitions, h, F, and g are given as

    h^T = [ 0   0   0   ...   0   1 ]

        [ 0   0   0   ...   0   0   -a_0     ]        [ b_0 - a_0 b_n         ]
        [ 1   0   0   ...   0   0   -a_1     ]        [ b_1 - a_1 b_n         ]
    F = [ 0   1   0   ...   0   0   -a_2     ]    g = [ b_2 - a_2 b_n         ]
        [ .   .   .         .   .     .      ]        [        .              ]
        [ 0   0   0   ...   1   0   -a_{n-2} ]        [ b_{n-2} - a_{n-2} b_n ]
        [ 0   0   0   ...   0   1   -a_{n-1} ]        [ b_{n-1} - a_{n-1} b_n ]
Notice first that, because of the special form of h, only the last row of (sI-F)^{-1} is significant in evaluating h^T(sI-F)^{-1}. Thus we need only consider the co-factors of the last column of

             [  s    0    0   ...   0    0    a_0         ]
             [ -1    s    0   ...   0    0    a_1         ]
    (sI-F) = [  0   -1    s   ...   0    0    a_2         ]
             [  .    .    .         .    .     .          ]
             [  0    0    0   ...  -1    s    a_{n-2}     ]
             [  0    0    0   ...   0   -1    s + a_{n-1} ]
Let A_{ij} denote the co-factor of the ijth element of (sI-F). Then

    A_{1n} = 1,    A_{2n} = s,    A_{3n} = s², ...

Continuing in this manner, we have

    A_{n-1,n} = s^{n-2}

and finally

    A_{nn} = s^{n-1}

Thus*

    h^T(sI-F)^{-1} = (1/L(s)) [ 1   s   s²   ...   s^{n-1} ]                (B.1)

Also

    det(sI-F) = Σ_{i=1}^{n} (sI-F)_{in} A_{in}
              = a_0 + a_1 s + ... + a_{n-2} s^{n-2} + (s + a_{n-1}) s^{n-1} = L(s)

and so

    h^T(sI-F)^{-1} g = (1/L(s)) Σ_{k=0}^{n-1} (b_k - a_k b_n) s^k
                     = (1/L(s)) [Σ_{k=0}^{n-1} b_k s^k  -  b_n (L(s) - s^n)]

Therefore

    h^T(sI-F)^{-1} g + b_n = (1/L(s)) [Σ_{k=0}^{n-1} b_k s^k + b_n s^n] = (1/L(s)) Σ_{k=0}^{n} b_k s^k

* Here (sI-F)_{in} denotes the inth element of (sI-F).

Hence

    L(s)[h^T(sI-F)^{-1} g + b_n] = M(s)

APPENDIX C

PROOF OF THEOREM 2*
Since (11) is a decomposition of equation (1), we have

det(sI-P) = L(s) = sn + a

n_l sn-l + ... + al s + aO

This, in turn, implies that the characteristic polynomial of P (and hence of pT) is

From the Cayley-Hamilton Theorem, therefore, it follows that

i=O

Post-multiplication by the vector r yields

(C.1)

i=O

Define the n-vectors vo' vI' ... vn- l in the following way

= 1,2, ... n-1 (C.2)

and let

It should be noted that by virtue of (C.1), the definitions of (C.2) imply that

pTV = -a.. V

o

n-l u (C.3)

(46)

38

-In fact, it can be readily verified that

for k 1,2, ... n-1

and so

From (C.l) it follows that the right-hand side of the above equation is -aa vo' as required.

We now demonstrate that the vectors v_0, v_1, v_2, ..., v_{n-1} are linearly independent, and hence, that V is non-singular. Suppose that this is not the case; then there exist constants c_0, c_1, ..., c_{n-1}, not all zero, such that

    Σ_{k=0}^{n-1} c_k v_k = 0

or

    c_0 v_0 + Σ_{k=1}^{n-1} c_k v_k = 0

or

    c_0 v_0 + Σ_{k=1}^{n-1} Σ_{i=0}^{k} c_k a_{n-i} (P^T)^{k-i} v_0 = 0

By re-arranging the double summation, we get

    Σ_{k=0}^{n-1} ρ_k (P^T)^k v_0 = 0                                (C.4)

where

    ρ_k = Σ_{i=k}^{n-1} c_i a_{n-i+k}        (with a_n = 1)

But since we are dealing with an observable decomposition, it follows that the vectors v_0, P^T v_0, (P^T)² v_0, ..., (P^T)^{n-1} v_0 are linearly independent. Consequently (C.4) implies that ρ_k = 0 for k = 0, 1, ..., n-1. However

    ρ_{n-1} = c_{n-1};  hence ρ_{n-1} = 0 implies c_{n-1} = 0;
    ρ_{n-2} = c_{n-2} + c_{n-1} a_{n-1};  hence ρ_{n-2} = 0 implies c_{n-2} = 0.

The continuation of this process leads to the result that c_i = 0 for i = 0, 1, 2, ..., n-1, which contradicts our original assumption. Thus it must be that v_0, v_1, ..., v_{n-1} are linearly independent.

In order that V be the desired transformation, it is necessary and sufficient that

    VP = FV        (or, equivalently, P^T V^T = V^T F^T)
    r^T = h^T V    (or, equivalently, V^T h = r)
    Vq = g

Now

    V^T F^T = [v_{n-1} ; v_{n-2} ; ... ; v_1 ; v_0] F^T

where F^T has 1's on its superdiagonal, the row [-a_0  -a_1  ...  -a_{n-1}] as its last row, and zeros elsewhere. But from (C.2) and (C.3) we have

    P^T v_{n-j} = v_{n-j+1} - a_{j-1} v_0        j = 2, 3, ..., n
    P^T v_{n-1} = -a_0 v_0

and so

    V^T F^T = [P^T v_{n-1} ; P^T v_{n-2} ; ... ; P^T v_1 ; P^T v_0] = P^T V^T

Also, we note that

    V^T h = [v_{n-1} ; v_{n-2} ; ... ; v_1 ; v_0] [0  0  ...  0  1]^T = v_0 = r

as required.

To complete the proof that V is indeed the required transformation, it remains to establish that

    Vq = g

Since both (11) and (9) are decompositions of (1), we have

    r^T (sI-P)^{-1} q = h^T (sI-F)^{-1} g

Using the established results that P = V^{-1}FV and r^T = h^T V, the above equality yields

    h^T V (sI - V^{-1}FV)^{-1} q = h^T (sI-F)^{-1} g

or

    h^T (sI-F)^{-1} V q = h^T (sI-F)^{-1} g

which holds for all s*. Utilizing equation (B.1) of Appendix B, this can be written as

    (1/L(s)) [1  s  s²  ...  s^{n-1}] Vq = (1/L(s)) [1  s  s²  ...  s^{n-1}] g        (C.5)

Choose γ_1, γ_2, ..., γ_n to be n distinct, non-zero, real numbers, none of which is a zero of L(s). Since (C.5) must hold for each s = γ_i, i = 1, 2, ..., n, we obtain the following set of equations

    Γ_D Vq = Γ_D g                                                   (C.6)

where

          [ 1   γ_1   γ_1²   ...   γ_1^{n-1} ]
    Γ_D = [ 1   γ_2   γ_2²   ...   γ_2^{n-1} ]
          [ .    .     .             .       ]
          [ 1   γ_n   γ_n²   ...   γ_n^{n-1} ]

But det Γ_D is the Vandermonde determinant, and since γ_i ≠ γ_j for i ≠ j, det Γ_D ≠ 0 (see Ref. 11, p. 186), and so Γ_D is non-singular. Equation (C.6) therefore implies that

    Vq = g

which is the desired result.                                         q.e.d.

APPENDIX D

THE IMPULSE RESPONSE OF EQUATION (1)

Our purpose here is to indicate an analog computer technique for generating the impulse response of (1); namely, that solution that corresponds to the initial conditions y^(k)(0) = 0, 0 ≤ k ≤ n-1, and the forcing function u(t) = δ(t) - the unit impulse function. We restrict our considerations to the case where m < n.

Let

    ẋ(t) = Px(t) + qu(t)
                                                                    (D.1)
    y(t) = r^T x(t)

be any observable decomposition of (1) and let A_c be an analog computer realization of (D.1)*. Ignoring for the moment the fact that u(t) = δ(t) is unrealizable in practice, we recognize that β(0) is undefined and use the results of Section 7 to conclude that the required integrator initial conditions are specified by

    x(0-) = Γ⁻¹[α - R_m β(0-)]

But α = 0 (given) and β(0-) = 0 when u(t) = δ(t). Thus x(0-) = 0; i.e., the required initial conditions are all zero.

The resultant (but hypothetical) response of A_c can be determined by explicitly solving (D.1) with x(0-) = 0 and u(t) = δ(t). In doing so, we utilize certain (pseudo-)mathematical properties of the delta function. This yields**

    y(t) = r^T e^{Pt} x(0-) + r^T e^{Pt} ∫_{0-}^{t} e^{-Pτ} q u(τ) dτ = r^T e^{Pt} q        (D.2)

Equation (D.2) is, therefore, the impulse response of (1). Furthermore, since u(t), u̇(t), ..., u^(m-1)(t) (with u(t) = δ(t)) each exist for t > 0, we are assured by Theorem 1 that this conclusion is valid for all t > 0.

* Thus the elements of the vector x(t) bear a one-to-one relation to the integrator outputs of A_c and, furthermore, the dynamic behaviour of (D.1) and A_c is identical.

Consider now the response of A_c corresponding to x(0) = q and u(t) ≡ 0. To obtain an explicit expression for this response, we again solve (D.1) to get

    y(t) = r^T e^{Pt} q

which is, clearly, identical with (D.2).

What we have demonstrated, therefore, is that the unforced response of A_c corresponding to the integrator initial conditions x(0) = q is precisely the impulse response of (1).
