A PRACTICAL METHOD FOR OPTIMUM TRANSIENT
RIDDANCE IN DAMPED LINEAR SYSTEMS
by
Phung khac Nguyen
1970
Manuscript received November, 1969.
ACKNOWLEDGEMENT
The author wishes to acknowledge, with his deep appreciation, the invaluable guidance and encouragement given by Dr. P. C. Hughes.

This research has been sponsored by the University of Toronto, under a University of Toronto Open Fellowship, and by the U.S. Air Force Office of Scientific Research.
SUMMARY
The damping rate of a stable damped linear system can be defined as the negative of the real part of the eigenvalue closest to the imaginary axis. Consequently, the most rapid decay of transients can be achieved by selecting the design parameters in such a way that the damping rate is as large as possible. However, the discontinuity in the derivatives of the damping rate with respect to the design parameters, especially in the neighbourhood of its maximum, causes difficulty or even failure with the classical theory of maxima and minima, and with other direct or indirect optimization methods.

This problem is reformulated as an optimization problem wherein a function of several design parameters is maximized using a linear transformation of the eigenvalues in the complex plane and two Hurwitz criteria as constraints. The optimum damping rate and its associated design parameters are then the solution to a set of nonlinear, simultaneous, algebraic equations. A modified Newton-Raphson version, in which a relaxation parameter is introduced and the inverse Jacobian matrix is approximated, is used for numerical solution.

Two examples of second and fourth order systems are worked out to illustrate the typical behaviour of the damping rate in the neighbourhood of its maximum and to compare the optimum results found by the present approach with those obtained from direct search or graphical methods. Various numerical applications of the present method to satellite dynamic systems found in the open literature are also carried out.
TABLE OF CONTENTS

I.    INTRODUCTION
II.   INDIRECT OPTIMIZATION METHOD FOR THE LEAST-TIME-TO-DAMP PROBLEM
      2.1  Analytical Formulation of the Least-Time-to-Damp Problem
      2.2  Approach to the Solutions
      2.3  Discussions on Analytical Solutions
           2.3.1  Five Sets of Possible Solutions
           2.3.2  Significance of the Possible Solutions
           2.3.3  Comparison with the n Hurwitz Determinant Constraint Approach
           2.3.4  Advantages of the Hughes Approach
III.  NUMERICAL METHOD
      3.1  Coefficients of the Characteristic Equation
      3.2  Test Functions T1 and T2
           3.2.1  Evaluation of the b_i's
           3.2.2  Formation of matrix (h_ij)
           3.2.3  Expansion of det (h_ij)
      3.3  Modified Newton-Raphson Method
      3.4  Discussions on Numerical Solution
IV.   ILLUSTRATIVE EXAMPLES
      4.1  Second Order System
      4.2  Fourth Order System
V.    APPLICATIONS IN SATELLITE DYNAMICS
      5.1  Gravitationally Oriented Two-Body Satellite
      5.2  Gravity-Oriented Satellite with 2 Hinged Stabilizers
      5.3  Dual-Spin Satellite
VI.   CONCLUSIONS
REFERENCES
APPENDIX
NOTATION

CAPITAL LETTERS

[A]        n x n matrix defined by (2.1)
A, B, C    principal moments of inertia about roll, pitch, yaw axes
C_i        constants defined by (5.28)
D          constant defined by (5.6)
F          constant defined by (5.2)
G          dimensionless damping coefficient
[H]        approximation to the negative of [J]
I          n x n identity matrix
I'_1,2,3   principal moments of inertia
[J]        Jacobian matrix
K          spring constant
L          length
M          mass
M'         total mass of system
P          polynomial operator
Q          polynomial operator
R          radius
Re(.)      real part of (.)
S_p        trace of [A] raised to the power p
T          test function

LOWER CASE LETTERS

a_i        polynomial coefficient defined by (2.3)
b_i        polynomial coefficient defined by (2.8)
c          damping coefficient
f          vector function of order n
h_ij       element of Hurwitz determinant
i          defined by i^2 = -1
ℓ          Lagrangian multiplier
m          number of parameters
n          order of [A]
C(n, i)    binomial coefficient
r          negative of the greatest real part of the system eigenvalues
s          constant defined by (5.22)
t          time
u          input vector
x          n x 1 state vector
y          n x 1 vector defined by (3.18)

GREEK LETTERS

α          real number defined by (2.11)
δ          positive, real number defined by (2.30)
Δ_p        p x p Hurwitz determinant
δx         small change in x
ε          real, positive number
λ          system eigenvalue
μ          transformation eigenvalue, defined by (2.6)
π          m x 1 parameter vector
ρ          real, positive number
τ          relaxation factor
φ          function to be optimized
ω          angular velocity (also coefficient of characteristic equation (4.1))

ADDITIONAL SYMBOLS

( )*       optimum condition
Δ(.)       discrete change of (.)

NOTE: Symbols of limited use are defined where they occur.
I. INTRODUCTION
Over the past twenty-five years, a great body of knowledge has been built up on the theory of feedback control of linear, time invariant dynamic systems. Although many dynamic systems, especially in aerospace engineering, are nonlinear and/or time varying, they can be approximated by linear, time invariant systems under some circumstances, and various applications of this theory can be found in many branches of engineering science. While most authors have concentrated on the stability and frequency response of these systems, little attention is paid to their transient response due to some initial disturbance. The nature of the disturbance depends on the situation; for example, it might be the docking impact on a space station, the jettison of a space vehicle from its mother ship, or a micrometeorite impact onto a satellite antenna (which could lead to considerable noise). It is therefore clear that in practice it is usually desirable to design the system parameters in such a way that the transient will decay as rapidly as possible. The problem of maximization of the decay rate of a damped, linear system is thus formulated and forms the analysis of this paper.
One problem of this general type is related to passive satellite attitude control using the gravity-gradient principle. Etkin (Ref.5) and Zajac (Ref.22) employed a graphical approach, and Yu (Ref.20) used a root locus technique. These methods sufficed when only two design parameters were available. Later, when more complex configurations were optimized, involving more design parameters (e.g., Hughes (Ref.9) and Garg (Ref.7)), a more sophisticated approach was necessary, and the powerful gradient method was used. However, difficulties were reported due to the discontinuities in the derivatives of the decay rate with respect to the system parameters. In response to this difficulty, the problem was recently reformulated analytically by Hughes (Ref.10) as an optimization problem wherein a function of the design parameters is maximized. This method is theoretically sound and can be used generally for linear time-invariant systems. The purpose of this paper is then to explore its computational practicability. For completeness, the Hughes approach will be presented in the next section.
One of the features of the problem at hand is that the optimum damping (or decay) rate cannot, in general, be determined analytically. In the first place, finding the system eigenvalues is not a trivial process, and it is well known to be analytically impossible for fifth or higher order systems, except in special cases. This leads to the unavailability of an explicit dependence of the real part of the right-most eigenvalues on the design parameters. Even if such an explicit expression were known analytically, the discontinuity of the derivatives of the damping rate with respect to the system parameters, as is often the case, would make the theory of maxima and minima fail to guarantee an absolute maximum damping rate. As will be illustrated later, a corner, or under some circumstances a cusp, usually occurs under the optimum condition, where the real part of the right-most eigenvalue is "taken over" by its adjacent (on the left) real eigenvalues or by the real part of its adjacent (on the left) complex eigenvalues. It is this fact which gives rise to the discontinuity mentioned above. If such a case occurs, a slight change in the design parameters off their optimum values will result in a large change of the damping rate. It is worthwhile to note that the discontinuous nature of the gradient of the damping rate even leads to difficulties in directly searching for the optimum point. These facts were pointed out by Hughes (Ref.10).
In the present analysis, these difficulties are overcome. The problem is reformulated analytically as the solution to a set of nonlinear, simultaneous algebraic equations. This is where the numerical problem comes in. Prior to assessing the computational practicability of this method, an attempt will be made to find the significance of the solutions in terms of the behaviour of the eigenvalues. Such an analysis will explain how the damping rate attains its maximum without necessarily investigating its variations with the design parameters directly. However, this information can be obtained only when the optimum solution has been found.
The optimum damping rate of a simple second order system will be examined analytically first. It will illustrate the typical behaviour of the damping rate in the neighbourhood of its maximum and will also confirm the effectiveness of the present approach. A fourth order mechanical vibrating system with 8 parameters will next be considered. The complexity of the equations for this simple system requires a completely numerical computation. A modified Newton-Raphson version, suggested by Broyden (Ref.2), in which a relaxation factor is introduced and the inverse Jacobian matrix is approximated, is then used for numerical solution. A computer program was written for the general case, which became the standard one for successive dynamic systems. For different systems, the user just needs to modify the subroutine specifying the dependences of the coefficients of the characteristic equation on the system parameters. Wherever permissible, the behaviour of the damping rate in the neighbourhood of its maximum will be investigated graphically.

When the present method is proved fruitful, it will be applied further to satellite dynamic systems found in the literature, e.g., gravity-gradient and dual-spin satellites. The accuracy of the optimum solutions found by this indirect optimization technique will then be compared with those obtained from other direct methods.
II. INDIRECT OPTIMIZATION METHOD FOR THE LEAST-TIME-TO-DAMP PROBLEM
In engineering science, many practical problems can be modelled analytically by a linear differential system

    dx/dt = [A(π)] x + u(t)                                            (2.1)

where

    π    m x 1 parameter vector
    t    time

Even nonlinear systems can be reduced to (2.1) under some circumstances, using small perturbations from a referenced steady state. In general, the matrix [A] in (2.1) might be time varying, i.e., [A] = [A(π, t)]; however, [A] is assumed time-invariant herein so that the Routh-Hurwitz criteria can be applied.
From linear system theory, it is well known that the transient response of the system (2.1) can be investigated by considering that of its autonomous system

    dx/dt = [A(π)] x                                                   (2.2)

and the influence of some initial disturbance, not explicitly included in (2.1), can most quickly be dissipated if all the real parts of the eigenvalues of [A] are as negative as possible. Moreover, of the eigenvalues of [A], the right-most one is the most critical in view of the stability of the system. Consequently, it is always desirable to design the available system parameters π in such a way that the algebraically greatest real part of the system eigenvalues is as negative as possible. In other words, it is wished to choose the design parameters π that will ensure the most rapid decay of transients.

2.1 Analytical Formulation of the Least-Time-To-Damp Problem
Given a linear differential system (2.2), find the value of π for which the greatest real part of the system eigenvalues is as negative as possible.

Let

    det(A - λI) = P(π, λ) = λ^n + Σ_{i=1}^{n} a_i(π) λ^{n-i} = 0       (2.3)

where

    λ_i    eigenvalues of [A]
    a_i    coefficients of the characteristic equation

If

    r(π) = - max_{1≤i≤n} {Re(λ_i)}                                     (2.4)

is the negative of the greatest real part of the system eigenvalues, then it is wished to solve the problem:

    max_π r(π)                                                         (2.5)
Prior to actually solving this problem, a number of the difficulties inherent in its solution are now noted. The functional dependence implied by [A(π)] may be a complicated one. The eigenvalues of [A] are not smooth functions (everywhere) of the elements of [A], even though the elements of [A] may be smooth functions of π. This may lead to discontinuities in ∂r/∂π (e.g., figs. 2, 4, 8, 14-20) and create great difficulty in the desired maximization process.

2.2 Approach to the Solutions
Assume an asymptotically stable system, so that all the λ_i(π) (i = 1,...,n) lie in the left half of the complex λ-plane. A linear transformation in the complex plane defined by

    μ = λ + ρ                                                          (2.6)

where ρ is a real, positive number, is now made (see fig. 1). So long as ρ < r, the μ_i's will still lie in the left half of the complex μ-plane. Under (2.6) the characteristic equation (2.3) becomes

    Q(π, μ) = μ^n + Σ_{i=1}^{n} b_i μ^{n-i} = 0                        (2.7)

where

    b_i(π, ρ) = C(n, i)(-ρ)^i + Σ_{j=1}^{i} a_j C(n-j, i-j)(-ρ)^{i-j}      (i = 1,...,n)      (2.8)
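Equation (2.8) can be mechanized directly. The following is an illustrative Python sketch (not part of the report, whose program is in FORTRAN); the sample polynomial is invented for the check:

```python
import math

def shifted_coeffs(a, rho):
    """Coefficients b_i of Q(mu) = P(mu - rho), following eq. (2.8).

    a   -- [a_1, ..., a_n] of the monic polynomial
           P(lambda) = lambda^n + a_1 lambda^(n-1) + ... + a_n
    rho -- real, positive shift mu = lambda + rho of eq. (2.6)
    """
    n = len(a)
    b = []
    for i in range(1, n + 1):
        # leading (mu - rho)^n contributes C(n, i)(-rho)^i
        s = math.comb(n, i) * (-rho) ** i
        # each a_j (mu - rho)^(n-j) contributes a_j C(n-j, i-j)(-rho)^(i-j)
        for j in range(1, i + 1):
            s += a[j - 1] * math.comb(n - j, i - j) * (-rho) ** (i - j)
        b.append(s)
    return b

# P(lambda) = (lambda + 1)(lambda + 2): a = [3, 2].  Shifting by rho = 1
# moves the root lambda = -1 onto the origin, so b_n must vanish.
print(shifted_coeffs([3, 2], 1))   # -> [1, 0]
```

The vanishing of b_n for ρ equal to the damping rate is exactly the limiting behaviour exploited below.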
In the limiting case, where ρ → r from below, either a real root μ → 0 from the left, or a complex conjugate pair have their common real part → 0 from the left*. In the former case b_n → 0, while in the latter case Δ_{n-1} → 0 (see Appendix), where Δ_{n-1} is the (n-1) x (n-1) Hurwitz determinant:

    Δ_{n-1} = det [h_ij],    h_ij = b_{2j-i}    (b_0 = 1; b_k = 0 for k < 0 or k > n)      (2.9)

Accordingly, it is convenient to define the two test functions:

    T1(π, ρ) = b_n(π, ρ)
    T2(π, ρ) = Δ_{n-1}(π, ρ)                                           (2.10)
In order for the μ_i's to lie in the left half of the complex μ-plane, it is necessary to have:

    T1(π, ρ) > 0
    T2(π, ρ) > 0                                                       (2.11)

Sufficiency is provided by hypothesis, since an asymptotically stable system is assumed initially. As long as ρ < r at any π, one of the conditions (2.11), or both of them, will therefore be violated as ρ → r from below.

* There might also be more than one real μ → 0 and more than one pair of complex μ whose real parts → 0 in the limiting case. However, this fact does not affect the analysis described hereafter.
The problem of maximization of the damping rate is therefore to maximize ρ subject to the constraints (2.11). One thus has to maximize:

    φ = ρ + ℓ1 (T1 - α1^2) + ℓ2 (T2 - α2^2)                            (2.12)

where ℓ1, ℓ2 are constant Lagrangian multipliers and α1, α2 are slack variables. The maximum r is then given by the necessary conditions*†:

    ∇φ = 0  ⟺  dφ = 0                                                  (2.13)

or

    ℓ1 ∂T1/∂π + ℓ2 ∂T2/∂π = 0                                          (2.14)

    ℓ1 ∂T1/∂ρ + ℓ2 ∂T2/∂ρ + 1 = 0                                      (2.15)

    ℓ1 α1 = ℓ2 α2 = 0                                                  (2.16)

which provide m + 3 equations‡ for the m + 3 unknowns π*, ρ*, ℓ1, ℓ2.

2.3 Discussions on Analytical Solutions

A close scrutiny of the above (m + 3) simultaneous, nonlinear, algebraic equations (2.14) to (2.16) reveals the following five distinct cases:
    (i)    T1 = T2 = 0;    ℓ1 ≠ 0, ℓ2 ≠ 0
    (ii)   T1 = T2 = 0;    ℓ1 = 0, ℓ2 ≠ 0
    (iii)  T1 = T2 = 0;    ℓ1 ≠ 0, ℓ2 = 0
    (iv)   T1 > 0, T2 = 0;    ℓ1 = 0, ℓ2 ≠ 0
    (v)    T1 = 0, T2 > 0;    ℓ1 ≠ 0, ℓ2 = 0

(Equation (2.15) requires that at least one of the Lagrangian multipliers be different from 0.)

* cf. section 2.3.3

† Note that these may also be the necessary conditions for the real part of the left-most eigenvalues to be minimum if the constraint ρ < r at any π is released.

‡ If all Δ_i (i = 1,...,n) were used as constraints, it would lead to (m + n + 1) simultaneous equations. Thus the present method saves (n - 2) equations. See 2.3.3.
2.3.1 Five Sets of Possible Solutions

In the first three cases, since at least one of ℓ1, ℓ2 ≠ 0, equation (2.16) becomes:

    T1 = T2 = 0                                                        (2.17)

If m = 1, (2.17) is sufficient to solve for π*, ρ*. If ∂T1/∂π and ∂T2/∂π ≠ 0, this is case (i), while if ∂T2/∂π = 0 it is case (ii), and if ∂T1/∂π = 0 it is case (iii). Since one is usually just interested in π* and ρ*, there is no need to distinguish these three cases.

However, if m > 1, for case (i) any two of equations (2.14) may be taken, and if such a pair is viewed as two equations in ℓ1 and ℓ2, then to ensure a non-trivial solution one must have:

    ∂(T1, T2)/∂(π1, πj) = 0      (j = 2,...,m)                         (2.18)

Equations (2.17) and (2.18) become the necessary m + 1 equations which specify π* and ρ*. The Lagrangian multipliers can then be computed from (2.14) and (2.15) evaluated at π*, ρ*:

    ℓ1 ∂T1/∂π1 + ℓ2 ∂T2/∂π1 = 0,    ℓ1 ∂T1/∂ρ + ℓ2 ∂T2/∂ρ = -1        (2.19)

The equations for cases (ii) and (iii) will be apparent later. In case (iv), equation (2.14) gives the m conditions

    ∂T2/∂π = 0                                                         (2.20)

showing the stationarity of T2 with respect to π at π* and ρ*, while equation (2.16) yields one equation

    T2 = 0                                                             (2.21)

and one inequality

    T1 > 0                                                             (2.22)

Equations (2.20) and (2.21) become the necessary m + 1 equations which specify π* and ρ*. The Lagrangian multiplier ℓ2 is then given by

    ℓ2 = -1 / (∂T2/∂ρ) |π*,ρ*                                          (2.23)

When (2.22) becomes an equality, then (2.20), (2.21) and (2.23) give m + 2 equations for case (ii). Similarly, for case (v) the appropriate set of m + 1 conditions is

    ∂T1/∂π = 0                                                         (2.24)

    T1 = 0                                                             (2.25)

with

    T2 > 0                                                             (2.26)

and the Lagrangian multiplier ℓ1 is determined from

    ℓ1 = -1 / (∂T1/∂ρ) |π*,ρ*                                          (2.27)

When (2.26) becomes an equality, (2.24), (2.25) and (2.27) give m + 2 equations for case (iii).

Although there exist five possible solutions to (2.14) to (2.16), it is only necessary to solve the three cases (i), (iv) and (v). Evaluations of T1 and T2 at π*, ρ* will distinguish case (ii) from case (iv) and case (iii) from case (v), respectively.
2.3.2 Significance of the Possible Solutions

Since the five above solutions are all possible, and it is not generally known in advance which one of the five will yield the true optimum solution, one has to solve the three cases (i), (iv) and (v) individually and select the value of r* which is the largest. This difficulty exists because a function of several variables is being optimized subject to constraints. In such cases it is not generally known in advance whether the optimum occurs on a boundary or within the permissible region. However, each solution of the five cases implies a particular type of behaviour of the eigenvalues in the neighbourhood of the optimum point. Unfortunately, such information is known only when the true optimum solutions have been found.

Let the μ_i's (i = 1,...,n) be the eigenvalues of (2.7); then T1 and T2 can be expressed in terms of the μ_i's, viz:

    T1 = (-1)^n  Π_{i=1}^{n} μ_i                                       (2.28)

    T2 = (-1)^{n(n-1)/2}  Π_{j<k} (μ_j + μ_k)      (Orlando's formula)      (2.29)

Thus:

    (a) T1 = 0 if there is one or more than one μ_i = 0;
    (b) T2 = 0 if there is at least one pair of μ_i's such that μ_j = -μ_k = iδ, where

        δ = real number ≥ 0 (see Appendix)                             (2.30)
In case (i), the necessary condition for a solution to T1 = T2 = 0 (m ≥ 1) to exist requires:

    either (a) δ = 0, or (b) δ > 0 and one μ_i = 0

In case (a) there will be at least one pair of conjugate μ_i's whose real parts equal 0, while in case (b) there will be at least one μ_i = 0 and one pair μ_j = -μ_k = iδ ≠ 0 (j ≠ k). Since ℓ1, ℓ2 ≠ 0, from (2.19) one has:

    ∂T1/∂π |π*,ρ* ≠ 0                                                  (2.31)

    ∂T2/∂π |π*,ρ* ≠ 0                                                  (2.32)

Suppose ∂μ_i/∂π exists; then differentiating (2.28) and (2.29) with respect to π at π*, ρ* shows that, under the optimum condition, there are 3 and only 3 eigenvalues on the imaginary axis of the complex μ-plane. In other words, either the right-most complex eigenvalues tend to their adjacent (on the left) real eigenvalue, or the right-most real eigenvalue tends to its adjacent (on the left) complex eigenvalues, under the optimum condition. Let

    μ1 = 0,    μ2 = -μ3 = iδ ≠ 0

Then (2.31) and (2.32) read:

    ∂μ1/∂π |π*,ρ* ≠ 0                                                  (2.33)

    ∂Re(μ2 + μ3)/∂π |π*,ρ* ≠ 0                                         (2.34)

Since ∂μ_i/∂π = ∂λ_i/∂π, (2.33) and (2.34) imply that ∂λ1/∂π |π*,ρ* and ∂Re(λ2 + λ3)/∂π |π*,ρ* exist and differ from 0, which is impossible according to Rolle's theorem. Consequently, under the optimum condition, ∂μ1/∂π and ∂(μ2 + μ3)/∂π do not exist, and a discontinuity in ∂r/∂π occurs.
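The corner implied by this exchange of right-most eigenvalues can be seen in the simplest possible setting. For λ^2 + cλ + k with k fixed, r(c) rises linearly while the roots are complex and falls once they split into two real roots, so ∂r/∂c jumps at c* = 2√k. A short illustrative sketch (the polynomial is invented for this aside; the full second order example is treated in Section 4.1):

```python
import cmath

def damping_rate(c, k=1.0):
    """r = -max Re(lambda) for lambda^2 + c*lambda + k = 0."""
    disc = cmath.sqrt(c * c - 4.0 * k)
    roots = ((-c + disc) / 2.0, (-c - disc) / 2.0)
    return -max(z.real for z in roots)

# Optimum at c* = 2 sqrt(k) = 2, where r = 1.  r drops on BOTH sides,
# with slope +1/2 from the left and a steep negative slope from the right:
print(damping_rate(1.9), damping_rate(2.0), damping_rate(2.1))
```

The one-sided slopes at c* differ, so the gradient of r does not exist there, which is exactly why gradient and direct-search methods were reported to fail.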
In cases (ii) and (iii), since T1 = T2 = 0, there are at least 2 eigenvalues μ_i whose real parts equal 0. If the ∂μ_i/∂π |π*,ρ* do not exist (and hence neither do the ∂λ_i/∂π |π*,ρ*), a discontinuity in ∂r/∂π will appear at the optimum point; an attempt is now made to find out under which circumstances

    ∂r/∂π = 0

may occur, i.e., when the ∂μ_i/∂π |π*,ρ* may exist. Using the same reasoning as above, the ∂μ_i/∂π are first assumed to exist at all π and ρ.
The common case of (ii) and (iii), namely:

    ∂T1/∂π = ∂T2/∂π = 0                                                (2.35)

is discussed first. Suppose there are 2p eigenvalues for which Re(μ_i) = 0 (i = 1, 2,...,2p). When p = 1, the 2 right-most eigenvalues μ1, μ2 (either 2 real eigenvalues or a pair of complex ones) become a double eigenvalue under the optimum condition:

    μ1 = μ2 = 0                                                        (2.36)

In case 2 real eigenvalues μ1, μ2 tend to the origin of the complex μ-plane, (2.36) reads:

    ∂(μ1 + μ2)/∂π |π*,ρ* = 0                                           (2.37)

However, since the ∂μ_i/∂π are assumed defined, one should have

    ∂μ1/∂π |π*,ρ* = - ∂μ2/∂π |π*,ρ*                                    (2.38)

which may or may not be equal to 0; otherwise it is contradictory to the hypothesis. Similarly, in case a pair of complex eigenvalues μ1, μ2 tends to 0 under the optimum condition, one has

    ∂(μ1 + μ2)/∂π |π*,ρ* = 2 ∂Re(μ1)/∂π |π*,ρ* = 0                     (2.39)

or

    ∂Re(μ1)/∂π |π*,ρ* = 0                                              (2.40)

It is therefore possible, although not necessarily so*, to have a smooth optimum point when 2 eigenvalues lie on the imaginary axis of the complex μ-plane under the optimum condition. Now, when p > 1, the real part of the right-most eigenvalues is taken over by that of its adjacent (on the left) eigenvalues at π*, ρ*. Since in this case the assumption of defined ∂μ_i/∂π would lead to the conclusion that the optimum point is smooth, it does not help in understanding the behaviour of the eigenvalues in the neighbourhood of the optimum point, because whether the ∂μ_i/∂π are defined or not is still not determined. However, due to the exchange of right-most eigenvalues at π*, ρ*, ∂r/∂π is expected to be discontinuous at π*, ρ*. The same reasoning can be applied to the case where 2p + 1 (p = 1, 2,...) eigenvalues μ_i lie on the imaginary axis of the complex μ-plane under the optimum condition. A general conclusion can be made that when there are more than 2 eigenvalues μ_i for which Re(μ_i) = 0, a discontinuity in ∂r/∂π at π*, ρ* occurs.
p* occurs. ~In case (ii), when dTl/èE
f
Q with dJ.lils/dE assumed defined, there can only be one J.l.=
0, thus 2p + 1«
n) J.l. IS where Re(J.l.)=
0 which is not possible1 - ~ 1
according to the above remark. Therefore dJ.l. IS/d~ are not defined in this case.
1
-Similarly in case (iii), if dT /d~
f
0 with dJ.l./d~I
assumed defined,';! - - 1 -
E*,
P*
there cannot oe more than 3 J.l. IS on the imaginary axis of the complex J.l-plane. Moreover, since dT/èE
=
Q ,
~it requires bn_l
r-
O. Therefore there are ane J.li
=
0 and one pair J.lj=
-J.lk=
i6r-
0 which is again contradictory to the hypothesis of defined dJ.l. IS/d~.~
-In case (iv), since Tl
>
0, the necessary condition for T2=
0 requires the solutions to be of the type:J.l. = -J.l .
=
i5 (5>
0)1 J (2.41)
i.e.,
fP
J.l. IS lie of the imaginary axis (but none of them at the origin) of the complex J.l-11ane, under optimum condition.Suppose p
=
1, similarly to (2.36), one has J.l = -J.l = i5 ~ 2 (2.42) d(J.l l + J.l2) êî~ = 0 at 'lI.* ,P* Since J.ll and J.l2 are complex, (2.42) can be rewritten as:
*
Because it depends on whether dJ.l. Islêî~ are defined or not. In general, êîJ.l.lêî~ are no\ defined at 'lI.* ,p*, eg., ~ee Sëction 4.1 12
°
(2.43)i.e., the real part of the right most complex eigenvalues attains its maximum, hence
When p
>
1, as mentioned above, a discontinuity in dr/dE is expected at E* and p*.In case
(v),
since T2
>
0, there can only be one ~i' say ~1 if d~i/ dE is defined at E* ,p* then:=
°
.!!*'
p*0, hence
(2.44)
i.e., the real part of the right most real eigenvalue tends to its maximum. It is thus seen that the present method covers all possible optimum solutions.
A discontinuity in ör/dE at
E*,
p* occurs in almost all cases. A smooth optimum point .can be obtained when there are 2 eigenvalues ~. 's for which1.
Re (~.) = 0. It should be noted that this is a necessary, but not
a
sufficient 1.condition for smooth optima.
2.3.3 Comparison with the n Hurwitz Determinant Constraint Approach

So far, no constraint has been placed on the Hurwitz determinants Δ_i (i = 1,...,n-2). The only constraint used in the above analysis, apart from (2.11), is ρ < r at any π and ρ, yet it has not shown up in equations (2.6) to (2.40). It is now proved that the same results (i.e., π*, ρ*) would be obtained, and the condition ρ ≤ r at any π and ρ would not show up in the equations, if all the n Hurwitz determinants were taken as constraints. The comparison of one method with the other will be made in the next section.
If all the n Hurwitz determinants are taken as constraints, viz.*

    Δ_i = α_i^2 ≥ 0      (i = 1,...,n)                                 (2.45)

and the n test functions are defined by**

    T_{n+1-i} = Δ_i      (i = 1,...,n)                                 (2.46)

then one has to maximize:

    φ = ρ + Σ_{i=1}^{n} ℓ_i (T_i - α_i^2)                              (2.47)

The necessary conditions for ρ to be maximum are given by:

    Σ_{i=1}^{n} ℓ_i ∂T_i/∂π = 0                                        (2.48)

    Σ_{i=1}^{n} ℓ_i ∂T_i/∂ρ + 1 = 0                                    (2.49)

    ℓ_i α_i = 0      (i = 1,...,n)                                     (2.50)

If some T_j = 0 (1 ≤ j ≤ n), then in order for ρ ≤ r at any π and ρ (again, this constraint will not appear explicitly in the equations), it is necessary to have

    T_i = 0      (i = 1,...,j)      (see Appendix)                     (2.51)

Now, equation (2.49) requires at least one ℓ_i ≠ 0, i.e., the solutions to (2.48) to (2.50) imply that there are always some h μ_i's where Re(μ_i) = 0 (i = 1,...,h ≤ n).

* These are the necessary and sufficient conditions for ρ ≤ r.

** In this section (only), T1 = Δ_n rather than b_n. Note that T2 = Δ_{n-1} as always.
(i) h = 1. Equations (2.48) to (2.50) give:

    T1 = 0                                                             (2.52)

    ∂T1/∂π = 0                                                         (2.53)

which is identical to the case (v) of the Hughes approach.

(ii) h ≥ 2. Equations (2.48) to (2.50) give:

    T1 = T2 = ... = T_h = 0      (T_{h+1} > 0, ..., T_n > 0)           (2.54)

    Σ_{i=1}^{h} ℓ_i ∂T_i/∂π = 0                                        (2.55)

    Σ_{i=1}^{h} ℓ_i ∂T_i/∂ρ + 1 = 0                                    (2.56)

Since when T2 = 0, T1 equals 0, (2.54) is reduced to:

    T2 = ... = T_h = 0                                                 (2.57)

Since T1 = b_n T2,

    dT1 = b_n dT2 + T2 db_n                                            (2.58)

and when (2.57) is satisfied, (2.55) and (2.56) become:

    ℓ2' ∂T2/∂π + Σ_{i=3}^{h} ℓ_i ∂T_i/∂π = 0                           (2.59)

    ℓ2' ∂T2/∂ρ + Σ_{i=3}^{h} ℓ_i ∂T_i/∂ρ + 1 = 0                       (2.60)

where

    ℓ2' = ℓ2 + ℓ1 b_n                                                  (2.61)

Equations (2.57) to (2.61) provide h + m equations for the h + m unknowns π*, ρ*, ℓ2', ℓ3,...,ℓ_h. It should be noted that with this approach one cannot distinguish the case where b_n = 0 from the case where b_n ≠ 0.
(a) h = 2p.

1. p = 1. Equations (2.57) to (2.60) give:

    T2 = 0,    ∂T2/∂π = 0                                              (2.62)

((2.60) requires ℓ2' ≠ 0.) Note that no information about b_n and ∂b_n/∂π can be obtained from this approach. Therefore, if b_n > 0, (2.62) is a particular case of case (iv), while if b_n = 0 it is a particular case of (2.35) in the Hughes approach.

2. p > 1. The necessary condition for the non-trivial solutions of ℓ2', ℓ3,...,ℓ_h to exist is:

    m + 1 ≥ h - 1                                                      (2.63)
b_n, ∂b_n/∂π and ∂T2/∂π may take on any values. If b_n = 0 (T3 = 0, since then there is at least one double eigenvalue μ_i = 0, hence p > 1), then

    b_n = 0,    T2 = 0,    ∂T2/∂π = 0                                  (2.64)

This is exactly the case (ii) in the Hughes approach. If b_n > 0 (no root at the origin of the complex μ-plane), then

    T2 = 0,    ∂T2/∂π = 0                                              (2.65)
which is exactly the case (iv) in the Hughes approach. If m + 1 > h - 1, (2.60) can be viewed as m homogeneous equations in ℓ2' and the ℓ_i's. To ensure a non-trivial solution, one must have:

    ∂(T2,...,T_h)/∂(π1,...,πj) = 0      (j = h-1,...,m)                (2.66)

If b_n > 0, then T2 = 0 and ∂T2/∂π = 0 (case (iv)) is a solution of (2.57) and (2.66). If b_n = 0, then (2.35) is a solution of (2.57) and (2.66). Since ρ ≤ r is used in both approaches, the 2 solutions are identical. If some of the ℓ_i's vanish, one of the cases (i) to (iv) in the Hughes approach is the solution.

(b) h = 2p + 1 (b_n equals 0, see Appendix).

1. p = 1. Equations (2.57) to (2.60) give:

    T2 = T3 = 0                                                        (2.67)

    ℓ2' ∂T2/∂π + ℓ3 ∂T3/∂π = 0                                         (2.68)

    ℓ2' ∂T2/∂ρ + ℓ3 ∂T3/∂ρ + 1 = 0                                     (2.69)

But

    T2 = b_{n-1} T3 + b_n Cof                                          (2.70)

where Cof is the cofactor of b_n in (2.9). Since b_n = 0, T2 = b_{n-1} T3 = 0 (see Appendix), and

    ∂T2/∂π = b_{n-1} ∂T3/∂π + Cof ∂b_n/∂π

If b_{n-1} ≠ 0 (i.e., only one μ_i = 0), then (2.68) becomes:

    ∂T2/∂π - Cof ∂b_n/∂π = 0                                           (2.71)

Thus (2.67) and (2.71) are nothing but case (i) and (2.35) in the Hughes approach. If b_{n-1} = 0 (i.e., 3 μ_i's = 0), (2.71) becomes

    Cof ∂b_n/∂π = 0                                                    (2.72)

Hence (2.35) is the solution of this case.
2. p > 1. This is similar to case (a) above (i.e., h = 2p), except that b_n should now equal 0. If ∂b_n/∂π ≠ 0, then

    b_n = T2 = 0, together with either ∂T2/∂π = 0 or ∂(b_n, T2)/∂(π1, πj) = 0

is the solution to (2.57) to (2.60), which is nothing but cases (i) and (ii) in the Hughes method. If ∂b_n/∂π = 0, then (2.35) is the solution to (2.57) to (2.60).

Thus both methods yield the same result. It should be noted that with the Hughes approach one need not care about Δ_i > 0 (i < n - 1).

2.3.4 Advantages of the Hughes Approach
0 (i< n-l).1 -2.3.4 Advantages of the Rughes Approach
Although both methods require solving m + 1 equations for !!.*, p* at a time, the Hughes method demands only 3 sets of m + 1 equations to be solved. More precisely, since one does not know in advance how many eigenvalues ~. 1ie on the imaginary axis of the complex ~-plane for the optimum solution, onê needs to solve m + :2 (if m + 2 <
nr
or n (if m + 2>
n ) sets,·of m + 1 equations i.f the 'n Hurwitz determinant constraint' method is used. When m=
1, both methods are identical1y the same.Compared with other methods, as pointed out by Rughes (Ref.10), this method is found superior to the c1assicaJ.. theory of maxima and minima and ot her direct methods part1y because the function r (7T) is required to be known analyti-cally and partly because of the discontinuityof dr /d7T. The present method takes advantage of the fact that while the eigenvalues-of
[A
(7T)] are not analyti-cally avai1able, the coefficients ai's are explicitly available.A
detailed knowledge of the location of the roots of the characteristic equation in thecomplex p1ane is not demanded; the real part of the right-most root is sufficient. On the other hand, the problemo1i'I'maximizing a solution with discontinuous derivatives has been reformulated as the solution to a set of simu1taneous, non-linear, a1gebraic equations which are smooth functions of 7T and p provided that
[A
(!)]
is a smooth function of!!.. It is we11 known that solving a set of such a1gebraic equations would be much easier than maximizing a function with dis-continuous derivatives.Using the present method, one can, in')principle, optimize the damping rate of any linear system of any order, with any number of free parameters as physically allowed. The number of equations to be solved, of course, increases with the number of free parameters and the length of time taken for solving m + 1 equations goes up with the system order. If a graphical method is used, it becomes tedious for m >2 and nearly impossible when a large number of free parameters are involved.
Except in some simple cases (e.g. n
=
2 and m = 1), the optimum damping rate cannot, in general, be expressed analytically in terms of the systemparameters due to the non-linearity of (2.14) - (2.16) which requires numerical computation. However, this fact cannot be avoided no matter what method is used. 111. NUMER ICAL METHOD
Since the Hughes 'optimum transient riddance' method is reduced to solving a set of simultaneous, algebraic equations, as shown in the previous section, and different dynamic systems differ only in the dependences of the a_i's on π, a numerical method is now developed for the general case. The Newton-Raphson method can be used for solving these equations. However, the evaluation of the inverse Jacobian matrix in the Newton-Raphson method takes a lot of computing time, and since it is not essential that it be exact at every step, it had better be approximated for intermediate iterations. In this connection, the modified Newton-Raphson method suggested by Broyden (Ref.2) is used. Moreover, in the Broyden (Ref.2) version, a relaxation factor is introduced to speed up the convergence, as will be illustrated in section IV. Since the partial (first and second) derivatives of the two test functions T1 and T2 require long analytical expressions, and it is too cumbersome to derive them for high order systems, all partial derivatives are approximated via the central difference, viz.,

    df/dx |x=x0 = [f(x0 + ε) - f(x0 - ε)] / 2ε + O(ε^2)                (3.1)

where ε is taken as .001 x0. Double precision is used to evaluate (3.1) in order to minimize the truncation error.
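The scheme just described can be sketched as follows. This is an illustrative Python rendering, not the report's FORTRAN program: it uses the central difference (3.1) for the Jacobian and a relaxation factor on the step, but recomputes the Jacobian at every iteration rather than applying Broyden's inverse-Jacobian update, and the two-equation test system is invented for the example.

```python
def solve(f, x0, tau=1.0, tol=1e-10, max_iter=50):
    """Relaxed Newton iteration for f(x) = 0, f and x of dimension 2.

    Partial derivatives come from the central difference (3.1) with
    eps = .001 * x (switching to an absolute step near x = 0).
    """
    x, y = x0
    for _ in range(max_iter):
        fx = f(x, y)
        if max(abs(v) for v in fx) < tol:
            break
        # central-difference Jacobian, eq. (3.1)
        ex = 0.001 * abs(x) or 1e-6
        ey = 0.001 * abs(y) or 1e-6
        j11 = (f(x + ex, y)[0] - f(x - ex, y)[0]) / (2 * ex)
        j21 = (f(x + ex, y)[1] - f(x - ex, y)[1]) / (2 * ex)
        j12 = (f(x, y + ey)[0] - f(x, y - ey)[0]) / (2 * ey)
        j22 = (f(x, y + ey)[1] - f(x, y - ey)[1]) / (2 * ey)
        det = j11 * j22 - j12 * j21
        # relaxed Newton step: x <- x - tau * J^{-1} f(x)
        x -= tau * ( j22 * fx[0] - j12 * fx[1]) / det
        y -= tau * (-j21 * fx[0] + j11 * fx[1]) / det
    return x, y

# Invented test system with a root at (1, 1):
g = lambda x, y: (x * x + y - 2.0, x + y * y - 2.0)
xr, yr = solve(g, (1.2, 0.9))
print(g(xr, yr))   # residuals near zero
```

The relaxation factor τ plays the same damping role as in the report; τ < 1 trades speed for robustness when the initial guess is poor.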
A computer program has been written in FORTRAN for the general case. The user just needs to supply the dependences a_i(π) (i = 1,...,n), the initial guesses for π*, ρ*, the tolerances for π, and to specify which case is to be solved.

3.1 Coefficients of the Characteristic Equation
For simple cases, especially when n ≤ 3, the coefficients a_i defined by (2.3) may be determined analytically by simply expanding (2.3). However, when n ≥ 4 (particularly when the elements of [A(π)] are complicated functions of π), it is too tedious to do so. In such cases, any method for finding the a_i's can be used, for example Bôcher's formula (Ref.17), in which the a_i's are given by

    a_1 = −S_1,    a_i = −(1/i) ( S_i + a_1 S_{i−1} + ... + a_{i−1} S_1 )        (i = 2,...,n)        (3.2)

where S_k = trace([A]^k).
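The trace formula (3.2) can be sketched directly (an illustrative Python rendering; the helper names are not from the report):

```python
def bocher_coeffs(A):
    """Coefficients a_1,...,a_n of det(lambda*I - [A]) =
    lambda^n + a_1 lambda^(n-1) + ... + a_n, by Bocher's formula (3.2),
    with S_k = trace([A]^k)."""
    n = len(A)

    def matmul(X, Y):
        return [[sum(X[r][k] * Y[k][c] for k in range(n)) for c in range(n)]
                for r in range(n)]

    # traces S_1, ..., S_n of successive powers of [A]
    S, P = [], [row[:] for row in A]
    for _ in range(n):
        S.append(sum(P[j][j] for j in range(n)))
        P = matmul(P, A)

    # a_i = -(1/i) * (S_i + a_1 S_{i-1} + ... + a_{i-1} S_1)
    a = []
    for i in range(1, n + 1):
        s = S[i - 1] + sum(a[j] * S[i - 2 - j] for j in range(i - 1))
        a.append(-s / i)
    return a
```

For example, for [A] = [[0, 1], [−2, −3]] (characteristic polynomial λ² + 3λ + 2) the routine returns a_1 = 3, a_2 = 2.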
In some cases, the linear differential system is expressed in a more general form, (3.3). If the leading coefficient matrix [A1] in (3.3) is of order n and rank n' (< n), (3.3) cannot be put into the form of (2.2) by using [A1]⁻¹ directly; however, it can be made equivalent to (n + n') first order equations using appropriate transformations of coordinates (see Garg, Ref.7). It is stressed here that the computation hereafter is 'standard' for the problem at hand and only requires the a_i's, no matter how they are calculated.
3.2 Test Functions T1 and T2

The evaluation of the test function T2 is performed in three steps:

(i)   computing the elements b_i (i = 1,...,n) for a given set (π, p)
(ii)  forming the Hurwitz matrix [h_ij]
(iii) expanding det (h_ij) = T2

3.2.1 Evaluations of the b_i's

For a given set (π, p), the coefficients b_i are evaluated from (2.8)
with the a_i's determined as above. The binomial coefficients in (2.8) can be computed from

    C(n, m) = n(n − 1) ... (n + 1 − m) / m!

or from the recursion relation**

    C(n, m) = C(n−1, m−1) + C(n−1, m),        C(n, 0) = 1

The test function T1 is then set equal to b_n, by definition.
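The recursion amounts to building Pascal's triangle row by row; a minimal sketch (Python, illustrative only):

```python
def binomial_row(n):
    """Row n of Pascal's triangle, [C(n,0), ..., C(n,n)], built with the
    recursion C(n,m) = C(n-1,m-1) + C(n-1,m) and C(n,0) = 1."""
    row = [1]
    for _ in range(n):
        # each interior entry is the sum of the two entries above it
        row = [1] + [row[m - 1] + row[m] for m in range(1, len(row))] + [1]
    return row
```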
** This is the fast way to evaluate the binomial coefficients.

3.2.2 Formation of the Matrix (h_ij)

The matrix (h_ij) is formed using the following property:

    h_ij = b_{2j−i}        (0 < 2j − i ≤ n)
         = 1               (2j − i = 0)        (3.5)
         = 0               (2j − i < 0  or  2j − i > n)

3.2.3 Expansion of det (h_ij)
The matrix (h_ij) is first reduced to an upper triangular matrix (h_ij)^(n−2) using the Gaussian reduction, viz:

    h_ij^(k) = h_ij^(k−1) − h_ik^(k−1) h_kj^(k−1) / h_kk^(k−1)        (provided h_kk^(k−1) ≠ 0)        (3.6)

    k = 1, 2,...,n−2;    i = k + 1,...,n−1;    j = k + 1,...,n−1

Then, after (n − 2) steps of reduction, the test function T2 can be computed from

    T2 = ∏_{i=1}^{n−1} h_ii^(n−2)        (3.7)

Other methods, e.g. pivotal condensation, can be used to evaluate T2 once (3.5) is formed.
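Steps (ii) and (iii), i.e. forming (h_ij) by (3.5) and reducing it by (3.6)-(3.7), can be sketched as follows (a Python stand-in for the FORTRAN routine; the partial pivoting is an added safeguard for a vanishing h_kk, a case which (3.6) merely excludes):

```python
def hurwitz_T2(b):
    """T2 = det(h_ij) for mu^n + b_1 mu^(n-1) + ... + b_n.
    b = [b_1, ..., b_n]; b_0 = 1 is implied, matching (3.5)."""
    n = len(b)
    coef = [1.0] + [float(v) for v in b]          # coef[k] = b_k
    m = n - 1                                     # (h_ij) is (n-1) x (n-1)
    # (3.5): h_ij = b_{2j-i} for 0 <= 2j-i <= n (1-based i, j), else 0
    h = [[coef[2 * (j + 1) - (i + 1)]
          if 0 <= 2 * (j + 1) - (i + 1) <= n else 0.0
          for j in range(m)] for i in range(m)]
    # (3.6)-(3.7): reduce to upper triangular form; the determinant is
    # then the product of the diagonal elements
    det = 1.0
    for k in range(m):
        piv = max(range(k, m), key=lambda r: abs(h[r][k]))
        if h[piv][k] == 0.0:
            return 0.0
        if piv != k:
            h[k], h[piv] = h[piv], h[k]
            det = -det                            # row swap flips the sign
        for i in range(k + 1, m):
            fac = h[i][k] / h[k][k]
            for j in range(k, m):
                h[i][j] -= fac * h[k][j]
        det *= h[k][k]
    return det
```

For n = 3 with b = [6, 11, 6] (roots −1, −2, −3 of μ³ + 6μ² + 11μ + 6), the matrix is [[6, 6], [1, 11]] and T2 = 60.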
3.3 Modified Newton-Raphson Method

The two test functions T1, T2 just computed, together with their partial derivatives with respect to π and p, form a set of m + 1 equations of the type

    f(x) = 0        (3.8)

where

    x = column vector of the m + 1 independent variables π and p
    f = column vector of the functions defined in 2.3.1

If x_i = the i-th approximation to the solution of (3.8) and f_i = f(x_i), then the Newton-Raphson iterative method is defined by

    x_{i+1} = x_i − [J_i]⁻¹ f_i        (3.9)

where

    [J_i] = ∂(f_1, f_2,...,f_{m+1}) / ∂(x_1, x_2,...,x_{m+1}),   evaluated at x_i        (3.10)

Since the computation of [J_i]⁻¹ requires (m+1)² evaluations of partial derivatives and one matrix inversion, it is evaluated only at every 20th iteration. For the intermediate iterations, according to Broyden (Ref.2), [J_i]⁻¹ can be approximated by [H_i], where

    [H_i] = [J_i]⁻¹   at the (20K + 1)th iteration,  K = 0, 1,...        (3.11)

    [H_{i+1}] = [H_i] − { [H_i] y_i − τ_i p_i } p_iᵀ [H_i] / ( p_iᵀ [H_i] y_i )        (3.12)

    p_i = −[H_i] f_i        (3.13)

    y_i = f_{i+1} − f_i        (3.14)

    x_{i+1} = x_i + τ_i p_i        (3.15)

where τ_i is a relaxation factor chosen at the i-th iteration to reduce the Euclidean norm of f_{i+1}.
Also, according to Broyden (Ref.2), the value of τ_i which reduces the Euclidean norm of f_{i+1} and lies between 0 and 1 is given by

    τ_i = [ √(1 + 6θ) − 1 ] / 3θ        (3.16)

where θ = ‖f_{i+1}‖² / ‖f_i‖² and f_{i+1} is evaluated at x_{i+1} taken with τ_i = 1.
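The complete iteration (3.9)-(3.16) can be sketched as below. This is an illustrative Python rendering, not the report's FORTRAN program: the Jacobian is formed by central differences (3.1) and inverted by Gauss-Jordan, refreshed every 20th iteration as in (3.11), with the Broyden update in between and the relaxation factor of (3.16); the tolerances and step sizes are illustrative choices.

```python
def jac_inverse(f, x, eps=1e-6):
    """Inverse Jacobian by central differences (3.1) plus Gauss-Jordan."""
    n = len(x)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        xp, xm = list(x), list(x)
        xp[j] += eps; xm[j] -= eps
        fp, fm = f(xp), f(xm)
        for i in range(n):
            J[i][j] = (fp[i] - fm[i]) / (2.0 * eps)
    # Gauss-Jordan inversion with partial pivoting
    A = [J[i][:] + [1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        d = A[c][c]
        A[c] = [v / d for v in A[c]]
        for r in range(n):
            if r != c:
                fac = A[r][c]
                A[r] = [vr - fac * vc for vr, vc in zip(A[r], A[c])]
    return [row[n:] for row in A]

def modified_newton_raphson(f, x0, tol=1e-10, max_iter=200, refresh=20):
    n = len(x0)
    x = list(x0)
    fx = f(x)
    H = jac_inverse(f, x)
    for it in range(1, max_iter + 1):
        if it % refresh == 1 and it > 1:
            H = jac_inverse(f, x)                  # (3.11): exact [J]^-1
        p = [-sum(H[i][j] * fx[j] for j in range(n)) for i in range(n)]  # (3.13)
        # relaxation factor (3.16): theta from a trial full step tau = 1
        f_trial = f([xi + pi for xi, pi in zip(x, p)])
        theta = sum(v * v for v in f_trial) / max(sum(v * v for v in fx), 1e-300)
        tau = ((1.0 + 6.0 * theta) ** 0.5 - 1.0) / (3.0 * theta) if theta > 1e-12 else 1.0
        x_new = [xi + tau * pi for xi, pi in zip(x, p)]                  # (3.15)
        f_new = f(x_new)
        if max(abs(v) for v in f_new) < tol:
            return x_new
        y = [a - b for a, b in zip(f_new, fx)]                           # (3.14)
        # Broyden inverse update (3.12)
        Hy = [sum(H[i][j] * y[j] for j in range(n)) for i in range(n)]
        pH = [sum(p[i] * H[i][j] for i in range(n)) for j in range(n)]
        den = sum(pH[j] * y[j] for j in range(n))
        if abs(den) > 1e-300:
            u = [Hy[i] - tau * p[i] for i in range(n)]
            for i in range(n):
                for j in range(n):
                    H[i][j] -= u[i] * pH[j] / den
        x, fx = x_new, f_new
    return x
```

As a check, solving x² + y² = 4 together with x = y from the guess (2, 1) converges to (√2, √2).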
3.4 Discussions on Numerical Solutions

As mentioned earlier, the partial derivatives of T1 and T2 with respect to π and p are approximated by (3.1), where ε is chosen somewhat arbitrarily. Since there is the usual trade-off between round-off errors when ε is too small and curvature effects when ε is too great, the value ε = .001 x0 is found satisfactory and is commonly used in practice (e.g. Ref.2). Using double precision and neglecting round-off and truncation errors, the approximation in (3.1) suffers an error of the order of 10⁻⁶ x0. Hence, if π* and p* are of large magnitude, a scaling of the parameters is required.

The tolerances for f vary with the dynamic system or, more precisely, with the behaviour of T1 and T2. Usually a tolerance of 10⁻¹ is sufficient. It might happen that the functions f are too sensitive to a small change in x, too stationary for a large change in x, or oscillate about some x. Fortunately, this is not often the case. In order to detect these cases, the values of x_i and f_i are printed out at every iteration, showing the history of the iterative process and, hopefully, leading to better approximations to the solutions if these cases occur or if the process does not converge within some allowed number of iterations.
As mentioned earlier, the necessary constraint p ≤ r at any π and p used in formulating this maximizing problem does not appear in any of the m + 1 equations. Moreover, the set of m + 1 equations is solved independently of any stability inequality constraint; it might therefore happen that, when these equations are satisfied, some Re(λ_i) has become positive. This arises from the fact that if the condition p ≤ r is not satisfied, there are many values of p making T2 vanish, while if p ≤ r at all π and p, there is at most one p which makes T2 equal 0. There might also be some local optimum damping rates, depending on the functions a_i(π); i.e., more than one set (π, p) satisfying p ≤ r and the m + 1 equations.

Although, as will be illustrated in the examples later, these uncertainties of the optimum solution rarely occur (perhaps surprisingly), a check on the solutions is required. In order to ensure that the condition p* = r* has been satisfied, one has to solve the characteristic equation (2.3) for the λ_i's (see subroutine POLRT, Ref.11) with the a_i's evaluated at the solutions to the m + 1 equations. If r defined by (2.4) agrees with p* (obtained from the m + 1 equations) within a numerically tolerable interval, then the solutions to this set of m + 1 equations are acceptable; otherwise the whole procedure should be repeated with new approximations to π*, p*. Then, to ensure that the solution is unique, one should try different guesses for π*, p*. If there is no local optimum damping rate, the solution should be unique. If the local optima were spread over a large range of π (which, however, has never been encountered in this study), it would be difficult to find the best of these optimum damping rates. In the normal case, the largest p* of the 3 acceptable solutions to cases (i), (iv) and (v) is chosen as the optimum one.
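The acceptance check, i.e. solving (2.3) for the λ_i's and comparing the r of (2.4) with p*, can be sketched as follows (a Python stand-in for subroutine POLRT; here a simple Durand-Kerner all-roots iteration, whose starting points (0.4 + 0.9i)^k are a customary implementation detail, not from the report):

```python
def poly_roots(a, iters=500):
    """All roots of lambda^n + a_1 lambda^(n-1) + ... + a_n = 0 by the
    Durand-Kerner simultaneous iteration (stand-in for POLRT, Ref.11)."""
    n = len(a)
    coeffs = [1.0] + list(a)

    def p(z):
        v = 0j
        for c in coeffs:          # Horner evaluation of the polynomial
            v = v * z + c
        return v

    z = [(0.4 + 0.9j) ** k for k in range(1, n + 1)]   # distinct starts
    for _ in range(iters):
        for i in range(n):
            den = 1.0 + 0j
            for j in range(n):
                if j != i:
                    den *= z[i] - z[j]
            z[i] = z[i] - p(z[i]) / den
    return z

def damping_rate(a):
    """r of (2.4): the negative of the largest real part of the roots;
    the check of Sec. 3.4 accepts a solution when this r agrees with p*."""
    return -max(z.real for z in poly_roots(a))
```

For (λ+1)(λ+2)(λ+3), i.e. a = [6, 11, 6], the damping rate is 1, the distance of the right-most root from the imaginary axis.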
Evaluations of ∂T/∂π and ∂T/∂p at π*, p* do not give a deterministic picture of the variation of r in the neighbourhood of π*. However, together with the actual roots λ_i at π*, p*, they do provide enough information about ∂r/∂π at π* and p*, as mentioned earlier in section 2.3.2.

The analytical and numerical problems discussed so far may now be explored in the illustrative examples given in the following section.
IV. ILLUSTRATIVE EXAMPLES

The analysis presented in section II appears to be sound theoretically, but the discussion in section III gives rise to some doubts about the numerical solutions. This section will illustrate the typical features of the problem at hand, assess the computational practicability of the present technique and confirm its efficiency. Actual applications of this method to satellite dynamical systems will be dealt with in section V.
A simple, second order system (n = 2) with 2 parameters will be studied analytically first, followed by numerical considerations of the optimum transients of a simple, fourth order mechanical vibrating system in which 8 parameters are involved. Whenever possible, e.g. when no more than 2 system parameters are left free, the behaviour of −r in the neighbourhood of its optimum point will be illustrated graphically. The undefined nature of ∂r/∂π at π* and p* will be shown analytically for the second order system. The damping rate of the fourth order system is first optimized with one free parameter. The optimum solution will then be compared with that obtained by directly solving (2.3). The optimization will then be carried further with 2 and 7 free parameters. This will serve as a test of the computational practicability of the present method as the number of free parameters increases and, at the same time, show the improvement of the damping rate as more constraints on the system parameters are released. In the present optimization scheme, at least one system parameter is held fixed.
4.1 Second Order System

The characteristic equation for a second order system is given by

    λ² + 2ζλ + ω = 0        (4.1)

The reason for examining this case is that quadratic factors may be thought of as the constituents out of which higher order polynomials are built up. Therefore, studying the roots of (4.1) can be viewed as isolating a quadratic factor of interest in a higher order characteristic equation (e.g. Ref.5). On the other hand, this is the only case in which the optimum solutions can be expressed analytically in terms of the control parameters.
Since one of the 2 parameters should be kept constant, there are only two possible cases:

(1) ζ fixed and ω the free parameter
(2) ω fixed and ζ the free parameter

For convenience, let us define π by:

    π = ω / ζ²        (case (1))        (4.2)

    π = ζ / √ω        (case (2))        (4.3)
Then the greatest real part of λ1 and λ2 turns out to be:

    −r(π) = ζ ( −1 + √(1 − π) )        π < 1
          = −ζ                         π ≥ 1        (4.4)

for case (1), and

    −r(π) = √ω ( −π + √(π² − 1) )      π > 1
          = −√ω π                      π ≤ 1        (4.5)

for case (2).
The case in which ζ and ω are negative or zero is discarded because it leads to instability of the system. The real part of the right-most eigenvalue (−r) is normalized with respect to ζ and √ω for cases (1) and (2) respectively. The variations of these normalized quantities with π are depicted in fig. 2.
As can be seen from (4.4) and (4.5), or from fig. 2, the discontinuity in dr/dπ occurs at π = 1 in both cases because, under this condition, 2 real eigenvalues become a double eigenvalue. From (4.4) and (4.5), one has

    lim_{π→1⁻} dr/dπ = ∞,        lim_{π→1⁺} dr/dπ = 0        (4.6)

for case (1), while in case (2)

    lim_{π→1⁻} dr/dπ = √ω,       lim_{π→1⁺} dr/dπ = −∞        (4.7)

Applying the Hughes method, one obtains:
    T1 = p² − 2ζp + ω        (4.8)

    T2 = 2(ζ − p)            (4.9)
As mentioned in Sect. 2.3.1, since there is only one free parameter π, it is only necessary to solve T1 = T2 = 0; T2 = ∂T2/∂π = 0; and T1 = ∂T1/∂π = 0. The solution to T1 = T2 = 0 is:

    p* = ζ = √ω        (4.10)

or

    π* = 1   (for both cases (1) and (2))        (4.11)
From (4.8) and (4.9), the partial derivatives of T1 and T2 with respect to ζ and ω are given by:

    ∂T1/∂ζ = −2p,    ∂T1/∂ω = 1        (4.12)

    ∂T2/∂ζ = 2,      ∂T2/∂ω = 0        (4.13)

Therefore, in case (1) the optimum solution is given by case (i) in the Hughes method, while case (2) corresponds to case (ii).
It is now shown that dμ1/dπ and dμ2/dπ are not defined at the optimum point. In case (1), dμ/dω is given by

    dμ_{1,2}/dω = ∓ 1 / ( 2 √(ζ² − ω) )        (4.14)

while in case (2)

    dμ_{1,2}/dζ = −1 ± ζ / √(ζ² − ω)        (4.15)

Hence, at π = 1, (4.14) and (4.15) are not defined, and it is not appropriate to evaluate ∂T1/∂π and ∂T2/∂π from

    dT1/dπ = (∂T1/∂μ1) dμ1/dπ + (∂T1/∂μ2) dμ2/dπ        (4.16)

    dT2/dπ = (∂T2/∂μ1) dμ1/dπ + (∂T2/∂μ2) dμ2/dπ        (4.17)

at π = 1 (where μ1 = μ2 = 0). It is because of this fact that the undefined nature of dμ/dπ at π*, p* is emphasized in section 2.3.2.

Now, in case (1), the solution to T2 = ∂T2/∂π = 0 (case (iv)) is:

    p* = ζ   for all π ≥ 1        (4.18)

The situation in fig. 2 a) is indeed recovered by (4.11) and (4.18). Since ∂T2/∂ζ = 2, there is no solution to case (iv) for case (2). It is now proved that there is no solution to T1 = ∂T1/∂π = 0 (case (v)). In case (1), since ∂T1/∂ω = 1, there is obviously no solution to case (v), while in case (2) it requires p = 0 and hence ω = 0, which is unacceptable because the system is not asymptotically stable at ω = 0 and p should be a real, positive number, by hypothesis.
It is thus seen that the present method yields satisfactory results for second order systems. It is now applied to a fourth order system, in which more system parameters are involved and the long expressions for the two test functions and their derivatives with respect to π prevent analytical considerations.
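Before moving on, the second order results can be checked numerically. The sketch below sweeps π for case (1) (ζ = 1 is an illustrative value) and reproduces the corner of fig. 2 a): r rises with π up to π = 1 and stays flat at ζ beyond it.

```python
import cmath

def r_case1(pi, zeta=1.0):
    """Damping rate of lambda^2 + 2*zeta*lambda + omega = 0 with the
    case (1) normalization omega = pi * zeta**2, cf. (4.2) and (4.4)."""
    disc = cmath.sqrt(zeta * zeta - pi * zeta * zeta)   # sqrt(zeta^2 - omega)
    return -max((-zeta + disc).real, (-zeta - disc).real)

# sweep pi from 0 to 3: the maximum r = zeta is first attained at pi = 1
sweep = [r_case1(k / 10.0) for k in range(0, 31)]
```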
4.2 Fourth Order System

The equations of motion of the fourth order mechanical vibrating system shown in fig. 3 a) are

    M1 ẍ1 + c1 ẋ1 + c2 (ẋ1 − ẋ2) + K1 x1 + K2 (x1 − x2) = 0        (4.20)

    M2 ẍ2 + c3 ẋ2 + c2 (ẋ2 − ẋ1) + K3 x2 + K2 (x2 − x1) = 0        (4.21)

or, in the form of (2.2), with the state vector (x1, x2, ẋ1, ẋ2):

    d/dt [x1]   [      0             0             1             0       ] [x1]
         [x2] = [      0             0             0             1       ] [x2]
         [ẋ1]   [ −(K1+K2)/M1     K2/M1      −(c1+c2)/M1       c2/M1     ] [ẋ1]
         [ẋ2]   [    K2/M2     −(K2+K3)/M2      c2/M2      −(c2+c3)/M2   ] [ẋ2]        (4.22)

Prior to optimizing the transients of such a system, it is worthwhile to note that, owing to dynamical analogies, equation (4.22) can also be obtained from an electrical circuit or a mechano-acoustical circuit, shown in fig. 3 b) and c) respectively. This suggests that the present method can be applied equally well in the fields of electrical engineering and acoustics whenever a system transient is to be optimized.
The characteristic equation of (4.22) is given by:

    λ⁴ + [ (c1+c2)/M1 + (c2+c3)/M2 ] λ³
       + [ (K1+K2)/M1 + (K2+K3)/M2 + ( c1(c2+c3) + c2c3 ) / (M1 M2) ] λ²
       + [ ( c1(K2+K3) + c2(K1+K3) + c3(K1+K2) ) / (M1 M2) ] λ
       + ( K1K2 + K2K3 + K3K1 ) / (M1 M2) = 0        (4.23)

The coefficients in (4.23) can be computed numerically using the method mentioned in Sect. 3.2; however, they are shown here for the purpose of illustration. It can be seen that the coefficients b_i, the 2 test functions T1, T2, and their derivatives with respect to π require long expressions in terms of the 8 parameters (even for this simple system), which results in tremendous difficulty in handling this problem analytically.
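The coefficients of (4.23) follow from expanding the determinant of the 2 x 2 quadratic pencil of (4.20)-(4.21), det of (M1 s² + (c1+c2)s + K1+K2)(M2 s² + (c2+c3)s + K2+K3) − (c2 s + K2)², and can be cross-checked numerically by polynomial multiplication (a sketch; at the nominal values chosen in the next paragraph, M1 = 3, M2 = 1, c1 = 4, c2 = 2, c3 = 2, K1 = 3, K2 = 5, K3 = 1, the coefficients come out 6, 46/3, 16 and 23/3):

```python
def char_poly(M1, M2, c1, c2, c3, K1, K2, K3):
    """Coefficients [1, a1, a2, a3, a4] of (4.23), obtained by expanding
    the determinant of the quadratic pencil and dividing by M1*M2."""
    def mul(p, q):                        # polynomial product, lowest power first
        out = [0.0] * (len(p) + len(q) - 1)
        for i, pi in enumerate(p):
            for j, qj in enumerate(q):
                out[i + j] += pi * qj
        return out

    d1 = [K1 + K2, c1 + c2, M1]           # M1 s^2 + (c1+c2) s + (K1+K2)
    d2 = [K2 + K3, c2 + c3, M2]           # M2 s^2 + (c2+c3) s + (K2+K3)
    off = mul([K2, c2], [K2, c2])         # (c2 s + K2)^2, the coupling term
    det = [a - b for a, b in zip(mul(d1, d2), off + [0.0, 0.0])]
    return [c / (M1 * M2) for c in reversed(det)]
```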
In order to investigate the effects of each individual parameter on −r, a purely arbitrary nominal set of the 8 parameters is chosen as follows:

    M1 = 3,  M2 = 1,  c1 = 4,  c2 = 2,  c3 = 2,  K1 = 3,  K2 = 5,  K3 = 1        (4.24)

With these values of the parameters, the system is initially asymptotically stable.
A direct search for the optimum transient is first made with one parameter varying at a time. This is done by solving (4.23) numerically for the λ_i's using the Newton-Raphson method (see subroutine POLRT, Ref.11). The greatest real part −r of the eigenvalues is easily selected with a simple DO loop. The optimum −r* is searched for with a starting value π = 0 (π is not physically possible if negative) and an increment Δπ = 1, until a sign change in ∂r/∂π is detected, say at π_p. The procedure is then repeated with π = π_p − 2 and Δπ = .01 until π = π_p + 2. The optimum −r* is determined by plotting −r for π_p − 2 ≤ π ≤ π_p + 2.
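The coarse/fine search just described can be sketched generically. Here the damping-rate curve is the closed-form second order case (2) curve with ω = 1 (an illustrative stand-in for the POLRT root-solving of (4.23)):

```python
def coarse_fine_search(r, d_coarse=1.0, d_fine=0.01, start=0.0, span=2.0):
    """March from `start` with increment d_coarse until r stops increasing
    (the sign change in the forward difference of r, at pi_p), then rescan
    [pi_p - span, pi_p + span] with the finer increment d_fine."""
    pi = start
    while r(pi + d_coarse) > r(pi):
        pi += d_coarse
    pi_p = pi + d_coarse                  # coarse bracket of the optimum
    lo = max(start, pi_p - span)
    best_pi, best_r = lo, r(lo)
    for k in range(1, int(round(2.0 * span / d_fine)) + 1):
        cand = lo + k * d_fine
        if r(cand) > best_r:
            best_pi, best_r = cand, r(cand)
    return best_pi, best_r

def r_demo(pi):
    """Case (2) damping rate with omega = 1, cf. (4.5): maximum 1 at pi = 1."""
    return pi if pi <= 1.0 else pi - (pi * pi - 1.0) ** 0.5
```

Running coarse_fine_search(r_demo) locates the optimum near π* = 1 with r* near 1, to within the fine increment.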
Using such a scheme, no optimum point is found when π = M2 or when π = K3. In the former case, a graphical investigation of −r for M2 < 1 shows that:

1. the system is unstable for M2 < 0, and −r → −∞ as M2 → 0;
2. as long as π = M2, then in order for the system to attain its optimum performance transientwise, M2 should be as small as possible.

In the latter case, the optimum −r* is found at K3 = −.152.
The effects of each individual parameter on the optimum damping rate are depicted in figs. 4 a) - h). Various typical features are observed: with different parameters of the same system, a corner, a cusp or a "valley" can occur at the optimum point. None of these can be predicted analytically. The present indirect method is now applied to five typical cases selected from fig. 4, and the following results are obtained:
1) case (i)  (T1 = T2 = 0):

    M1* = 1.2531        (4.25)
    c3* = 3.0840        (4.26)

2) case (iv)  (T2 = ∂T2/∂π = 0):

    c2* = 2.1512,     p* = 1.2581        (4.27)
    K1* = 29.495,     p* = 1.043565      (4.28)
    K2* = 3.10922,    p* = .76807        (4.29)

Results (4.25) - (4.29) completely agree with fig. 4. Evaluations of ∂T1/∂π and ∂T2/∂π at π*, p* show that

1. ∂T1/∂π ≠ 0 in all 5 cases;
2. ∂T2/∂π ≠ 0 when evaluated at (4.25) and (4.26).
Moreover, solving (4.23) for the μ_i's at (4.25) - (4.29) shows that:

1. 1 double eigenvalue μ_i = 0 at (4.25);
2. 1 double eigenvalue μ_i = 0 at (4.26);
3. 2 complex eigenvalues μ_i with Re(μ_i) = 0 at (4.27) - (4.29).

Therefore, the dμ_i/dπ are not defined at (4.25) and (4.26) (see section 2.3.2) and a cusp occurs at the optimum point, while the real part of the right-most eigenvalue attains its maximum under the conditions (4.27) - (4.29). As a matter of interest, the variations of the two test functions T1, T2 in the neighbourhood of π* are shown in figs. 5 and 6. Note that ∂T1/∂π and ∂T2/∂π are negative at π* and p*; this fact confirms the true optimum solution.
The present method is now applied to the case where 2 free parameters are taken into consideration. Three pairs of parameters, namely (M1, c3), (K2, c3) and (K2, c2), are chosen. It is recalled that cusps occur at M1* and c3*, while "valleys" are observed at K2* and c2*. It is because of this fact that the three above sets are chosen, in the hope that some combinations of these individual phenomena can be revealed in the present case. The numerical optimum solutions are obtained as follows:

1. case (i):

    a)  M1* = 2.6254,    c3* = 2.9569,    p* = 1.4331        (4.30)
    b)  K2* = 5.6312,    c3* = 3.2127        (4.31)

2. case (iv):

        K2* = .79567,    c2* = .24013,    p* = .9100        (4.32)

Again, ∂T1/∂π is different from 0 under the conditions (4.30) - (4.32), and ∂T2/∂π ≠ 0 at (4.30) and (4.31). Solving (4.23) for the μ_i's shows that there are 2 and 3 μ_i = 0 at (4.30) and (4.31) respectively, and 1 double complex eigenvalue μ_i with Re(μ_i) = 0 at (4.32). Thus the dμ_i/dπ are undefined at (4.30) and (4.31), and cusps occur under the optimum conditions, while a valley occurs at (4.32).
As readily seen from (4.30) - (4.32), the optimum damping rate is improved at the cost of releasing one parameter constraint. The above results are confirmed by figs. 7, 8 and 9, where cusps still appear at (M1*, c3*) and (K2*, c3*), while the "valley" is still predominant at (K2*, c2*) and its neighbourhood.

If, in the above cases, one of the free parameters, say π1, were held fixed, and if r'_* were defined by

    r'_*(π1) = max over the remaining free parameter of [ max_{i=1,...,n} (−Re λ_i) ]        (4.33)

then the p*'s in (4.30) - (4.32) would be nothing but

    p* = max over π1 of r'_*(π1)        (4.34)

The so-called cusp loci (or "valley" loci), which are the loci of r'_*, are shown in figs. 10 - 12. It is interesting to note that, in all 3 cases, a discontinuity in dr'_*/dπ1 occurs at the r* defined by (4.34).
The present method is now used to optimize the damping rate with
7
free parameters. Of the 8 available parameters, the mass M
l seems not likely permissible to be free, therefore it is kept constant as MJ_
=
3. Consequently, the optimum damping rate in this case is not necessarily the best attainable one ( in the sense that the optimum for some other choice of seven parameters might be better). The numerical solutions are obtained as follows:M 2