ON THE LATERAL INSTABILITIES OF AIRCRAFT DUE TO PARAMETRIC EXCITATION
by M. Masak
ACKNOWLEDGEMENT
Financial support for the work reported herein was received from the United States Air Force under Grant No. AFOSR-222-63, monitored by the AFOSR, Applied Mathematics Division.
Also, financial assistance of the National Research Council is deeply appreciated.
I would like to express my gratitude to Professor Etkin, whose guidance and assistance were a major source of inspiration.
SUMMARY
Lateral instabilities of aircraft due to parametric excitation are studied. A theoretical discussion involving asymptotic approximations to the state transition matrix and a Liapunov stability analysis is presented first. This is followed by an analogue computer study to observe the actual phenomenon. Both high-aspect-ratio swept-wing and slender delta-wing jet transports are considered.
TABLE OF CONTENTS

     NOTATION
I.   INTRODUCTION
     1.2  Equations of Motion
II.  GENERAL REMARKS ON TIME-VARYING LINEAR DIFFERENTIAL SYSTEMS
     2.1  State Transition Matrix
     2.2  Spectral Representation of Linear Operators
     2.3  Exact and Approximate Solutions for State Transition Matrix
     2.4  Stability Theory
III. PURPOSE OF ANALOGUE STUDIES
     3.1  Results for Subsonic Jet Transport (Aircraft 1)
     3.2  Results for Slender Supersonic Delta Transport (Aircraft 2)
IV.  ANALYSIS OF COMPUTER RESULTS
     4.1  Conclusions
     REFERENCES
     APPENDIX A: Equations for Section I
     APPENDIX B: Numerical Data for Section III
NOTATION

The following notation and symbols are used in the paper. Small letters denote scalar quantities, e.g. m. Small letters underlined denote vectors, e.g. c. Capital letters refer to matrices, e.g. M.

A           some constant n x n matrix
B           T⁻¹NT, a constant matrix
b           span of flight vehicle
C           T⁻¹PT, a constant matrix
c           initial condition vector
CL          lift coefficient
CLc         constant mean value of lift coefficient
Cij         force or moment coefficient i due to variable j, properly non-dimensionalized
D           d/dt
D(t)        some time-varying matrix
E(t), F(t)  some time-varying matrices
F           total aerodynamic force on flight vehicle
G           moment of the forces on flight vehicle
g           acceleration due to gravity
H           matrix of aerodynamic coefficients, defined by Eq. (A.3)
h           angular momentum of flight vehicle
iA          non-dimensionalized moment of inertia, x-axis
iC          non-dimensionalized moment of inertia, z-axis
iE          non-dimensionalized product of inertia, y-axis
J           T⁻¹MT, a diagonal matrix
K⁽ⁱ⁾        integrated kernels in a Neumann series, defined in Sec. 2.3
L           constant matrix, whose eigenvalues are the characteristic exponents of Φ(t, t₀)
M           constant part of H, when CL is given by CLc(1 + ε cos Ωt)
m           mass of flight vehicle
N           constant matrix in H, multiplied by ε cos Ωt
n           dimension of state vector
P           constant matrix in H, multiplied by ε² cos 2Ωt
p           constant, defined in Sec. 2.5
pᵢ(t)       part of weighting function qᵣᵢ, defined by Eq. (2.35)
p̂           non-dimensionalized rate of rotation about the x-axis
Q(t, t₀)    portion of Φ(t, t₀), defined by Eq. (2.26)
qᵣᵢ(t)      weighting function, defined by Eq. (2.36)
q̂           non-dimensionalized rate of rotation about the y-axis
R           some constant matrix
r̂           non-dimensionalized rate of rotation about the z-axis
S(t, t₀)    periodic part of state transition matrix, defined in Eq. (2.9)
S           wing area
sᵢ(t)       periodic part of qᵣᵢ(t)
T           transformation matrix whose columns are the eigenvectors, uᵢ, of M
T           period of a periodic function
t           time in airseconds
uᵢ          eigenvectors of a constant matrix
u(t)        time-dependent input vector
V(x)        Liapunov function, discussed in Sec. 2.5
V           velocity of the centre of mass with respect to inertial reference frame
vᵢ          inverse of uᵢ, defined in Sec. 2.3
W           weight of flight vehicle
ω           angular velocity of flight vehicle with respect to inertial reference frame
x           state vector
y           transformed variable, y = T⁻¹x
α           angle of attack of flight vehicle
β           angle of sideslip of flight vehicle
ε           small parameter giving the magnitude of the varying part of CL
ηᵢ          components of a vector with respect to a basis
λᵢ          eigenvalues of L, characteristic exponents of Φ(t, t₀)
θ           Euler angle (p. 100, Ref. 1)
λ           eigenvalue of some matrix
Λ(t)        part of solution for Φ(t, t₀), defined by Eq. (2.31)
μ           mass parameter of flight vehicle
τ           dummy integration variable
Φ(t, t₀)    state transition matrix, defined in Sec. 2.2
φᵢ(t)       basis functions of system with periodic coefficients
φ           Euler angle (p. 100, Ref. 1)
ψ           Euler angle (p. 100, Ref. 1)
Ω           frequency of oscillation of variable part of CL
ωDR         frequency of Dutch roll oscillation
I. INTRODUCTION
The equations of motion of a flight vehicle are frequently written in a linearized form, with small perturbations from a steady flight condition playing the role of the primary variables. In most cases, it suffices for the purpose of stability and control analysis to consider as constants the parameters associated with these equations. However, it is the purpose of this thesis to show that under certain conditions, the stability properties indicated by considering constant coefficients lead to erroneous results, and even gross instabilities may unexpectedly arise.
The phenomenon responsible for this effect might be called lateral-longitudinal cross-coupling through parametric excitation. In order to study this phenomenon, the equations of motion remain small perturbation equations, as is common with stability analyses. However, the associated parameters turn out to be periodic functions of time.
The thesis consists of three principal sections: the first is devoted to the origins of these equations and their formulation, the second consists of a rather intensive study of time-varying parameter equations along with predictions of areas of instability, and the third is composed of the analogue computer demonstrations of the regions of instability.
The emphasis in this thesis has been to find simple qualitative analysis techniques, and a certain measure of success has been achieved. Even though the techniques are substantially improved by the use of a digital computer, the role of the computer is merely to carry out routine and tedious procedures such as the solution of algebraic equations and the finding of eigenvalues and eigenvectors of complex matrices.
1.2 Equations of Motion
Let us begin by considering the equations of motion of the flight vehicle. The basis for the equations is, of course, Newton's second law, which, when written with respect to a proper reference frame, becomes
        F + mg = mV̇ + m(ω × V)                    (1.1)

where

        F  = total aerodynamic force acting on the system
        V̇  = derivative of V obtained by treating the XYZ reference frame as if it were inertial space
        V  = velocity of the center of mass with respect to the inertial reference frame
        ω  = angular velocity of the flight vehicle with respect to the inertial reference frame

Similarly, a torque equation can be written

        G = ḣ + ω × h                              (1.2)

where

        h  = angular momentum of the flight vehicle with respect to the inertial reference frame
        ḣ  = derivative of h obtained by treating the XYZ reference frame as inertial space
        G  = moment of the total forces acting on the vehicle about its center of mass

By introducing small perturbations about a reference flight condition, the equations can be linearized. (The linearization procedure will be considered in detail in the next section.) Now, provided we restrict our considerations to a small time span during which the reference flight condition remains constant, we are justified in treating the partial derivatives arising from the linearization procedure as constants. These equations, when properly non-dimensionalized, are the standard small disturbance equations of a flight vehicle. For a full discussion of these, Ref. 1 is recommended.
It is well known that these equations separate into two independent systems, commonly known as the longitudinal and the lateral equations. In this thesis, we are concerned entirely with the lateral system, and from now on only this system is discussed.

For completeness, the lateral equations have been tabulated in Appendix A as (A.1). It is noted that we have neglected all terms involving controls, such as aileron or rudder inputs. The reason for this will become clear in the next section.
For convenience in the ensuing notation, we multiply the second and fourth of these equations by iC and iE respectively and add, and then multiply them by iE and iC respectively and add. This results in four simultaneous first order differential equations, each containing only one derivative. These equations are tabulated in Appendix A as equations (A.2). All the quantities associated with these equations are also tabulated in Appendix A.
Now, throughout this thesis we use vector-matrix analysis, and consequently we now write equations (A.2) in this notation. We consider β, p̂, r̂ and φ as the components of a vector in a four-dimensional state space of the vehicle, and hence we let

        x = {β, p̂, r̂, φ}                           (1.3)

be the state vector. If we then let H represent the matrix consisting of the parameters in Eqs. (A.2), after they have been divided through by (iC iA − iE²) or 2μ, the expression for H becomes Eq. (A.3). Consequently, when the aileron and rudder inputs to the vehicle are zero, the equations of motion of the vehicle are written as

        Dx = Hx                                    (1.4)

where D is the differential operator d/dt.
Now, it has been shown both experimentally and from theoretical considerations that many lateral stability derivatives are functions of CL. For example, Cnr can be expressed as

        Cnr = a + b CL²                            (1.5)

and many others are similar. For a complete list of these, the reader is referred to Appendix B.
Let us consider a physical situation. An aircraft flying through turbulence is continuously excited by random inputs about all three axes, containing spectral components that span a wide range of frequencies.
(See Ch. 10, Ref. 1.) As a consequence, all the lateral and longitudinal
modes of motion will be excited, and the question then arises: which reference motion should be used in the linearization procedure? Most authors have assumed that a mean constant value of CL will suffice for stability considerations. However, more rigorously in this case, CL should be taken as
        CL = CLc (1 + f(t))                        (1.6)

where CLc is the mean value of CL, and f(t) represents some function of time to take into account the fact that the aircraft is oscillating longitudinally. It may be argued that the longitudinal oscillations, specifically in the angle of attack, are also small disturbances, and if these are to be multiplied by other small disturbances such as β or φ, the products, according to our rules of linearization, should be neglected. However, one should keep in mind that while we are multiplying two small disturbance quantities, these quantities are governed by different sets of equations. Thus, the transients or steady-state oscillations in one set are independent of the other. Consequently, a term containing a product of the form, say q̂β, may be a magnitude higher than a term, say p̂β. Therefore, for a better approximation, CL should be kept in the form mentioned. This amounts to saying that we do not have to restrict our solution to a small time span during which the reference flight condition remains constant; rather, the reference condition is some function of time.
In this thesis, for simplicity, and because it is a suitable representation of the effect on CL of a single spectral component of the turbulence, we have taken CL as

        CL = CLc (1 + ε cos Ωt)                    (1.7)

where

        CLc = mean value of CL
        ε   = small quantity, ε < 1
        Ω   = some constant frequency

and we proceed to study the effects on the lateral transients of changes in ε and Ω when we include CL in this form.
Hence, if we find that under certain combinations of ε and Ω significant effects on the transient are observed, then we must decide whether an aircraft could have that combination of parameters and, if so, whether the effects are detrimental. Also, since we are considering the aircraft as a linear system, the total effect of the turbulence will be the sum of the contributions of each of these spectral components.
Now, if we substitute CL as given by (1.7) into matrix H, we can rewrite H as

        H = M + ε N cos Ωt + ε² P cos 2Ωt          (1.8)

where M, N and P are constant matrices arising out of the substitution. Furthermore, in order to define the motion unambiguously, the initial conditions must be stipulated. Hence, the form of the equations to be studied becomes

        Dx = (M + ε N cos Ωt + ε² P cos 2Ωt) x,   x(0) = c    (1.9)

This vector-matrix equation, linear with periodic coefficients, is considerably more complex than that which would have been obtained if all the aerodynamic derivatives had been considered as constants. It is noted that this equation reduces to the constant coefficient case when ε = 0. In that case, the solutions are standard and relatively simple, whereas in the case under consideration, explicit solutions are virtually impossible to obtain except by machine computation. A certain degree of simplification results when we attempt to obtain asymptotic approximations to the solution. In the next chapter, we take up the theory of linear time-varying systems.
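Equation (1.9) is easy to explore by machine computation. The following sketch (a modern numerical addition, not part of the original analysis) integrates Dx = (M + εN cos Ωt + ε²P cos 2Ωt)x with a classical Runge-Kutta scheme. The 2 x 2 matrices M, N and P below are illustrative stand-ins chosen for this example; the actual 4 x 4 lateral matrices are given in Appendix A and are not reproduced here.

```python
import math

# Illustrative 2x2 stand-ins for the 4x4 lateral matrices of Eq. (1.9).
M = [[-0.1, 1.0], [-1.0, -0.1]]   # lightly damped oscillatory "mode"
N = [[0.0, 0.0], [1.0, 0.0]]      # part of H multiplied by eps*cos(Omega*t)
P = [[0.0, 0.0], [0.1, 0.0]]      # part of H multiplied by eps^2*cos(2*Omega*t)

def H(t, eps, Omega):
    """H(t) = M + eps*N*cos(Omega t) + eps^2*P*cos(2 Omega t),  Eq. (1.8)."""
    c1, c2 = math.cos(Omega * t), math.cos(2.0 * Omega * t)
    return [[M[i][j] + eps * c1 * N[i][j] + eps * eps * c2 * P[i][j]
             for j in range(2)] for i in range(2)]

def rk4(x0, eps, Omega, t_end, h=0.01):
    """Integrate Dx = H(t) x by the classical fourth-order Runge-Kutta rule."""
    def f(t, x):
        Ht = H(t, eps, Omega)
        return [Ht[0][0] * x[0] + Ht[0][1] * x[1],
                Ht[1][0] * x[0] + Ht[1][1] * x[1]]
    t, x = 0.0, list(x0)
    for _ in range(round(t_end / h)):
        k1 = f(t, x)
        k2 = f(t + h / 2, [x[i] + h / 2 * k1[i] for i in range(2)])
        k3 = f(t + h / 2, [x[i] + h / 2 * k2[i] for i in range(2)])
        k4 = f(t + h, [x[i] + h * k3[i] for i in range(2)])
        x = [x[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
             for i in range(2)]
        t += h
    return x

# With eps = 0 the system is the ordinary constant-coefficient one and decays.
x_const = rk4([1.0, 0.0], eps=0.0, Omega=2.0, t_end=10.0)
x_param = rk4([1.0, 0.0], eps=0.3, Omega=2.0, t_end=10.0)
```

Sweeping eps and Omega in such a simulation is exactly the experiment performed on the analogue computer in Section III.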
II. GENERAL REMARKS ON TIME-VARYING LINEAR DIFFERENTIAL SYSTEMS
As we saw in the last chapter, the equations of motion of the vehicle under study constitute a linear system with time-varying coefficients. In this chapter, we discuss in general the origins of this type of system and then point out various possibilities of analysis.
It is emphasized that, unless stated otherwise, the equations discussed are always written in vector-matrix notation. Consequently, one should bear in mind the allowable matrix manipulations.

To a large extent, most time-varying parameter systems arise as an outcome of a small perturbation analysis of a non-linear system. The equations so found are usually known as the variational equations of Poincaré.
Consider a dynamical system whose motion is governed by the following equation,

        ẋ = X(x)                                   (2.1)
It is noted that by introducing new variables, any ordinary differential equation or system of ordinary differential equations can be put in the above form.
Now, if we know a particular solution, or motion, of the above system, and we wish to test its stability, we use the method of Poincaré (Ref. 3). Let x₀(t) be the known solution, and let

        x(t) = x₀(t) + ξ(t)

be a perturbed solution. When ξ(t) is sufficiently small so that powers higher than the first can be neglected, we obtain after substitution

        ξ̇(t) = ( ∂X/∂x )|ₓ₌ₓ₀ ξ(t)                 (2.2)

where ( ∂X/∂x )|ₓ₌ₓ₀ is an n x n matrix consisting of the partial derivatives of the n components of the vector X(x) with respect to the n components of x, e.g. ∂Xᵢ/∂xⱼ is the derivative of the ith component of X with respect to the jth component of x. All the coefficients are taken at x = x₀.

Thus, the equation (2.2) so obtained is a linear ordinary vector-matrix differential equation with the coefficient matrix being a function of the particular known solution. Consequently, if the known solution is periodic, the coefficient matrix is periodic as well.
Although a large number of equations with variable coefficients arise in this manner, it is by no means the only origin of this type of system. In some instances, such as in circuit theory, the varying coefficients arise quite naturally out of varying inductors or capacitors. Parametric amplifiers are hence also governed by similar equations.
2.1 State Transition Matrix
Let us now consider some properties of this type of system. Keeping in mind the form of the equation which was derived in Section I, let us consider systems of the form

        ẋ = (A + ε F(t)) x,   x(t₀) = c            (2.3)
where

        x and c are n-vectors
        ε is a scalar
        A is a constant n x n matrix
        F(t) is a variable n x n matrix
We know (Ref. 4), first, that, provided F(t) is continuous in t, the equation has a unique solution. Second, a solution also exists if, instead of x, we use a variable matrix Φ(t, t₀):

        Φ̇(t, t₀) = (A + ε F(t)) Φ(t, t₀),   Φ(t₀, t₀) = I    (2.4)

Furthermore, if Φ(t, t₀) represents the matrix solution to the matrix equation (2.4), then the solution to Eq. (2.3) is given by

        x(t) = Φ(t, t₀) c                          (2.5)

This is easily proved by substitution of Eq. (2.5) in Eq. (2.3).
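As a modern numerical check of Eqs. (2.4) and (2.5) (not part of the original analysis), the sketch below builds Φ(t, t₀) column by column: each column is the solution of Eq. (2.3) started from a unit vector, which is exactly the matrix initial condition Φ(t₀, t₀) = I. The 2 x 2 matrices A and F(t) are illustrative choices.

```python
import math

# An illustrative 2x2 instance of Eq. (2.3): xdot = (A + eps*F(t)) x.
A = [[-0.2, 1.0], [-1.0, -0.2]]
def F(t):
    return [[0.0, 0.0], [math.cos(3.0 * t), 0.0]]

def coeff(t, eps):
    Ft = F(t)
    return [[A[i][j] + eps * Ft[i][j] for j in range(2)] for i in range(2)]

def integrate(x0, eps, t0, t1, h=0.005):
    """RK4 solution of xdot = (A + eps F(t)) x from t0 to t1."""
    def f(t, x):
        C = coeff(t, eps)
        return [C[0][0] * x[0] + C[0][1] * x[1],
                C[1][0] * x[0] + C[1][1] * x[1]]
    t, x = t0, list(x0)
    for _ in range(round((t1 - t0) / h)):
        k1 = f(t, x)
        k2 = f(t + h / 2, [x[i] + h / 2 * k1[i] for i in range(2)])
        k3 = f(t + h / 2, [x[i] + h / 2 * k2[i] for i in range(2)])
        k4 = f(t + h, [x[i] + h * k3[i] for i in range(2)])
        x = [x[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
             for i in range(2)]
        t += h
    return x

def transition_matrix(eps, t0, t1):
    """Columns of Phi(t1, t0) are solutions started from the unit vectors."""
    col0 = integrate([1.0, 0.0], eps, t0, t1)
    col1 = integrate([0.0, 1.0], eps, t0, t1)
    return [[col0[0], col1[0]], [col0[1], col1[1]]]

eps, t0, t1 = 0.4, 0.0, 5.0
Phi = transition_matrix(eps, t0, t1)
c = [0.7, -1.3]
x_direct = integrate(c, eps, t0, t1)
# Eq. (2.5): x(t) = Phi(t, t0) c reproduces the direct solution.
x_via_phi = [Phi[0][0] * c[0] + Phi[0][1] * c[1],
             Phi[1][0] * c[0] + Phi[1][1] * c[1]]
```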
Because of Eq. (2.5), the importance of Φ(t, t₀) becomes obvious. In fact, this matrix (commonly known as the State Transition Matrix) plays the central role in the study of time-varying parameter linear systems. In the case where x represents the state of the system, the state transition matrix can be considered as a linear operator operating on the initial state to bring it to its state at time t. The knowledge of Φ(t, t₀) gives us all the required information regarding the system behaviour with time. Also, extremely important, if we know Φ(t, t₀), we can compute the response of the system to any forcing function,
i.e. the solution of

        ẋ = (A + ε F(t)) x + u(t),   x(t₀) = c     (2.6)

becomes

        x(t) = Φ(t, t₀) c + ∫_{t₀}^{t} Φ(t, τ) u(τ) dτ    (2.7)
Hence, if we can obtain the state transition matrix for a system, we have to a large extent solved the problem of stability and response. This is also, then, the reason we omitted the forcing function in Sec. I. The major roadblock in the study of these systems is our inability to obtain a useful and yet simple expression for Φ(t, t₀).

It must be admitted that from a practical point of view, the problem of solving either Eq. (2.3) or Eq. (2.4) presents almost equal difficulties, and in this sense the concept of Φ(t, t₀) does not represent any simplification in work. We later include a section on system stability in order to try to circumvent the computational difficulties.
We might initially hope to obtain an expression for Φ(t, t₀) analogous to the one-dimensional case, and write

        Φ(t, t₀) = exp ( ∫_{t₀}^{t} (A + ε F(τ)) dτ )

where the matrix exp R is defined as

        exp R = I + R + R²/2! + R³/3! + ...

This expression for Φ(t, t₀) can be shown to be correct only if the matrix expressions

        ∫_{t₀}^{t} (A + ε F(τ)) dτ   and   A + ε F(t)

commute for all t. In the system considered in this thesis, this is not the case.
However, it should be observed that if ε = 0, these expressions become A(t − t₀) and A, and these obviously commute. Thus the solution for a constant coefficient system can be written simply as

        Φ(t, t₀) = exp A(t − t₀)

The reader is referred to Ref. 5, where certain other properties of Φ(t, t₀) are discussed. None, however, are very pertinent to the present study, and they are omitted here.
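The commutation caveat can be demonstrated numerically. The sketch below (a modern illustration with matrices chosen for this purpose) compares the naive guess exp(∫(A + εF)dτ) against an accurate integration of Eq. (2.4): when ε = 0 the two agree, and when A and F(t) do not commute they differ by a large margin.

```python
import math

# Matrices chosen so that A and F(t) do NOT commute (AF != FA).
A = [[0.0, 1.0], [-1.0, 0.0]]
def F(t):
    return [[math.cos(t), 0.0], [0.0, 0.0]]

def madd(X, Y): return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]
def mscale(a, X): return [[a * X[i][j] for j in range(2)] for i in range(2)]
def mmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mexp(R, terms=40):
    """exp R = I + R + R^2/2! + R^3/3! + ...  (the series of the text)."""
    S, term = [[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        term = mscale(1.0 / k, mmul(term, R))
        S = madd(S, term)
    return S

def true_phi(eps, t1, h=0.002):
    """Phi(t1, 0) by RK4 on the matrix equation (2.4)."""
    def deriv(t, P):
        return mmul(madd(A, mscale(eps, F(t))), P)
    t, P = 0.0, [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(round(t1 / h)):
        k1 = deriv(t, P)
        k2 = deriv(t + h / 2, madd(P, mscale(h / 2, k1)))
        k3 = deriv(t + h / 2, madd(P, mscale(h / 2, k2)))
        k4 = deriv(t + h, madd(P, mscale(h, k3)))
        s = madd(madd(k1, mscale(2.0, k2)), madd(mscale(2.0, k3), k4))
        P = madd(P, mscale(h / 6.0, s))
        t += h
    return P

eps, t1 = 1.0, 2.0
# Integral of A + eps*F over [0, t1]; note int_0^t1 cos = sin t1 here.
R = madd(mscale(t1, A), mscale(eps, [[math.sin(t1), 0.0], [0.0, 0.0]]))
naive = mexp(R)
exact = true_phi(eps, t1)
err = max(abs(naive[i][j] - exact[i][j]) for i in range(2) for j in range(2))

# With eps = 0 the expressions commute and the naive formula is exact.
naive0 = mexp(mscale(t1, A))
exact0 = true_phi(0.0, t1)
agree0 = max(abs(naive0[i][j] - exact0[i][j]) for i in range(2) for j in range(2))
```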
Now, in order to simplify matters just a little, we henceforth confine ourselves to time-variable parameter systems with periodic coefficients, since the equations we originally derived were periodic.

Thus, if in Eq. (2.3) or (2.4), F(t) is periodic, i.e.

        F(t) = F(t + T)                            (2.8)

we can make use of the classical theory of Floquet (Ref. 6). This, in effect, states that when Eq. (2.8) holds, Φ(t, t₀) can be written as

        Φ(t, t₀) = S(t, t₀) exp L(t − t₀)          (2.9)

where

        S(t, t₀) = S(t + T, t₀)
        S(t₀, t₀) = I
        S(t, t₀) = non-singular n x n matrix
        L        = constant n x n matrix

The characteristic roots of the matrix exp LT, say αᵢ (defined by the later equation (2.16)), are called the multipliers of Eq. (2.9), and the characteristic roots of L, say λᵢ, are called the characteristic exponents. Note that

        αᵢ = e^{λᵢ T}

and hence the imaginary parts of the λᵢ are determined only up to multiples of 2π/T.
It should be observed also that if ε = 0 in Eq. (2.4), the condition of periodicity is valid for an arbitrary period T ≠ 0, and in particular the solution can be written as

        Φ(t, t₀) = exp A(t − t₀)                   (2.10)

Consequently, if we change our coordinates to a periodically varying system, such that the basic variable becomes the matrix

        ψ(t, t₀) = S⁻¹(t, t₀) Φ(t, t₀)             (2.11)

then ψ(t, t₀) satisfies the equation

        ψ̇(t, t₀) = L ψ(t, t₀)                     (2.12)

These facts are interesting from the theoretical point of view, since they impose rather severe restrictions on the form of the solution. However, it must be remarked that there is no method of obtaining either S(t, t₀) or L except by numerical integration or approximation. This we take up in the next section, but before proceeding, we include a useful discussion on linear operators, such as Φ(t, t₀).
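Although S(t, t₀) and L are not available in closed form, the monodromy matrix Φ(T, 0) can be obtained by numerical integration over one period, and the multipliers and characteristic exponents then follow. The sketch below (a modern addition with an illustrative 2 x 2 periodic system) also checks the Floquet consequence Φ(2T, 0) = Φ(T, 0)².

```python
import cmath
import math

# Illustrative periodic system for Floquet theory; period T = 2*pi/Omega.
Omega = 2.0
T = 2.0 * math.pi / Omega
A = [[-0.05, 1.0], [-1.0, -0.05]]
eps = 0.3
def F(t):
    return [[0.0, 0.0], [math.cos(Omega * t), 0.0]]

def madd(X, Y): return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]
def mscale(a, X): return [[a * X[i][j] for j in range(2)] for i in range(2)]
def mmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def propagate(t1, h=T / 2000.0):
    """Phi(t1, 0) by RK4 on the matrix equation (2.4)."""
    def deriv(t, P):
        return mmul(madd(A, mscale(eps, F(t))), P)
    t, P = 0.0, [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(round(t1 / h)):
        k1 = deriv(t, P)
        k2 = deriv(t + h / 2, madd(P, mscale(h / 2, k1)))
        k3 = deriv(t + h / 2, madd(P, mscale(h / 2, k2)))
        k4 = deriv(t + h, madd(P, mscale(h, k3)))
        s = madd(madd(k1, mscale(2.0, k2)), madd(mscale(2.0, k3), k4))
        P = madd(P, mscale(h / 6.0, s))
        t += h
    return P

Phi_T = propagate(T)          # the monodromy matrix Phi(T, 0)
Phi_2T = propagate(2.0 * T)

# Multipliers alpha_i = eigenvalues of Phi(T, 0), by the 2x2 quadratic formula.
tr = Phi_T[0][0] + Phi_T[1][1]
det = Phi_T[0][0] * Phi_T[1][1] - Phi_T[0][1] * Phi_T[1][0]
disc = cmath.sqrt(tr * tr - 4.0 * det)
alpha = [(tr + disc) / 2.0, (tr - disc) / 2.0]

# Characteristic exponents lambda_i = (ln alpha_i)/T; their imaginary parts
# are defined only up to multiples of 2*pi/T, exactly as the text remarks.
lam = [cmath.log(a) / T for a in alpha]
```

The system is stable precisely when every multiplier satisfies |αᵢ| < 1, i.e. every characteristic exponent has negative real part.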
2.2 Spectral Representation of Linear Operators
Most of the theory of the spectral representation of linear operators discussed here is from Refs. 5 and 7, but Ref. 8 must also be mentioned as a basic book on this theory. First, we point out the following facts, which, though well known, must be emphasized:

1. Any free oscillation of a linear system with constant or varying coefficients can be thought of as a superposition of non-interacting modes.

2. In the case of free oscillation, the amount of excitation of each mode can be determined solely on the basis of initial conditions.

3. Any forcing function can be considered as exciting each mode independently.

We note then that the analysis of a complex linear system could be considerably simplified if we consider one mode at a time. Consequently, we look for a modal or spectral representation of the resulting motion, i.e. the spectral representation of the linear operator Φ(t, t₀). In the ensuing discussion, we have assumed for simplicity that the systems with which we are dealing shall have distinct roots.
Let us consider first the following equations,

        ẋ = A x,   x(0) = c                        (2.13)

where A is a constant n x n matrix. As before, the solution becomes

        x(t) = e^{At} c                            (2.14)

However, the expression e^{At} is a very compact notation for the actual solution, which can be written as

        x(t) = Σ_{i=1}^{n} cᵢ uᵢ e^{λᵢ t}          (2.15)

where

        λᵢ = eigenvalue of matrix A
        uᵢ = eigenvector of matrix A, corresponding to λᵢ
        cᵢ = constant chosen so that x(0) = c = Σ cᵢ uᵢ
By definition, the eigenvalues of a matrix A are the solutions of the equation

        |A − λI| = 0                               (2.16)

and the corresponding vectors satisfy

        A uᵢ = λᵢ uᵢ                               (2.17)

the solutions of which, when normalized to have magnitudes of 1, are known as the eigenvectors uᵢ. Hence, if

        U = (u₁, u₂, ..., uₙ)

and the rows of U⁻¹ are denoted v₁, v₂, ..., vₙ, then by definition

        ⟨vᵢ, uⱼ⟩ = δᵢⱼ

and thus ⟨vᵢ, uᵢ⟩ = 1 for all i.
Now, a linearly independent set of vectors is said to be a basis of a linear vector space if every vector of this vector space can be expressed as a linear combination of this basis set.

Moreover, if u₁, u₂, ..., uₙ form a basis for a vector space, then every vector has a unique representation in the form

        x = Σ_{i=1}^{n} ηᵢ uᵢ

where the numbers ηᵢ are called the components of x with respect to the basis uᵢ. Also, if u₁, ..., uₙ is a basis of a vector space, the reciprocal basis is the set of vectors v₁, v₂, ..., vₙ such that

        ⟨vᵢ, uⱼ⟩ = δᵢⱼ,   i, j = 1, 2, ..., n.

Consequently, a very convenient expression for any vector relative to a given basis uᵢ is

        x = Σ_{i=1}^{n} ⟨vᵢ, x⟩ uᵢ
Then, using these concepts, and noting that the eigenvectors uᵢ of A form a basis for the vector x in Eq. (2.15), we obtain a very concise expression for x in the modal representation,

        x(t) = Σ_{i=1}^{n} ⟨vᵢ, c⟩ uᵢ e^{λᵢ t}     (2.18)

That is, x(t) is a weighted sum of the modes of oscillation uᵢ e^{λᵢ t}, and the amount of excitation of each mode is given by ⟨vᵢ, c⟩.

Since, formally, expression (2.18) is equivalent to (2.14), we have

        e^{At} = Σ_{i=1}^{n} ( uᵢ >< vᵢ ) e^{λᵢ t}    (2.19)

where uᵢ >< vᵢ represents a dyad. A dyad, uᵢ >< vᵢ, is a linear transformation which transforms a vector x into the vector uᵢ ⟨vᵢ, x⟩. Similarly, we have

        A = Σ_{i=1}^{n} λᵢ ( uᵢ >< vᵢ )            (2.20)

Thus, the right-hand sides of Eq. (2.19) and Eq. (2.20) give the spectral representations of the matrices e^{At} and A respectively.
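The dyad expansion (2.19) can be checked directly. The sketch below (a modern numerical addition) uses a 2 x 2 matrix whose eigensystem is known in closed form, and compares the spectral representation of e^{At} against the defining power series; it also evaluates the modal excitations ⟨vᵢ, c⟩ of Eq. (2.18).

```python
import math

# Illustrative matrix with eigensystem known by inspection:
#   A = [[0,1],[-2,-3]], lambda = -1, -2, u1 = (1,-1), u2 = (1,-2),
#   reciprocal basis v1 = (2,1), v2 = (-1,-1), so that <v_i, u_j> = delta_ij.
A = [[0.0, 1.0], [-2.0, -3.0]]
lam = [-1.0, -2.0]
u = [[1.0, -1.0], [1.0, -2.0]]
v = [[2.0, 1.0], [-1.0, -1.0]]

def mmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mexp(R, terms=40):
    """exp R = I + R + R^2/2! + ...  (the series definition of Sec. 2.1)."""
    S, term = [[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        term = [[(1.0 / k) * e for e in row] for row in mmul(term, R)]
        S = [[S[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return S

def spectral_exp(t):
    """Eq. (2.19): e^{At} = sum_i (u_i >< v_i) e^{lambda_i t}."""
    E = [[0.0, 0.0], [0.0, 0.0]]
    for i in range(2):
        w = math.exp(lam[i] * t)
        for r in range(2):
            for c in range(2):
                E[r][c] += u[i][r] * v[i][c] * w   # the dyad u_i >< v_i
    return E

t = 0.7
by_series = mexp([[A[r][c] * t for c in range(2)] for r in range(2)])
by_modes = spectral_exp(t)

# Modal excitations <v_i, c> of an initial state, as in Eq. (2.18).
c0 = [1.0, 0.0]
excitation = [v[i][0] * c0[0] + v[i][1] * c0[1] for i in range(2)]
```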
In the foregoing discussion, the eigenvectors uᵢ were said to constitute the basis for the vector solution to the constant coefficient equation. Similarly, we can define uᵢ e^{λᵢ t} as a set of basis functions, since they are also linearly independent and any resulting motion can be expressed as their weighted sum.

Up to this point we have confined our discussion to the constant coefficient system, Eq. (2.13). However, we know that in the case of periodic coefficients, the basis functions can be expressed in the form

        φᵢ(t) = aᵢ(t) e^{λᵢ t}

where the vector aᵢ(t) is periodic of period T. However, no simple relationship exists between the exponents λᵢ and the periodic coefficients. Thus we arrive at a similar situation as before with the transition matrix Φ(t, t₀). We leave this section now to proceed to asymptotic analysis. However, we shall frequently make use of the ideas which have been presented here.
2.3 Exact and Approximate Solutions for State Transition Matrix
As was stated before, the key to the analysis of time-varying parameter systems lies in finding the state transition matrix Φ(t, t₀).
In this section we shall present first a formal solution, and then try to obtain certain other more useful approximations.

FORMAL SOLUTION

Consider the equations

        Φ̇(t, t₀) = E(t) Φ(t, t₀),   Φ(t₀, t₀) = I

This can be integrated to obtain

        Φ(t, t₀) = I + ∫_{t₀}^{t} E(τ) Φ(τ, t₀) dτ    (2.22)

Now, we can define the following sequence of matrices

        Φₖ₊₁(t, t₀) = I + ∫_{t₀}^{t} E(τ) Φₖ(τ, t₀) dτ    (2.23)

where

        Φ₀(t, t₀) = I
It can be shown, Ref. 9, that this sequence converges uniformly to the matrix Φ(t, t₀) satisfying Eq. (2.22). To say anything about how rapidly this solution is approached would take us too deeply into the theory of integral equations. It must be evident that simply displaying the solution in this form does not offer us a solution to a practical problem. However, digital computers can be programmed to use this technique quite readily.
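A minimal sketch of such a program (a modern addition, with an illustrative E(t) and the integrals discretized by the trapezoidal rule) shows the sequence of Eq. (2.23) converging uniformly on a grid: the gap between successive iterates shrinks like (∫||E||)ᵏ/k!.

```python
import math

# Illustrative coefficient matrix E(t) for the iteration of Eq. (2.23):
#   Phi_{k+1}(t, 0) = I + int_0^t E(tau) Phi_k(tau, 0) dtau.
def E(t):
    return [[-0.3, 1.0 + 0.2 * math.cos(2.0 * t)], [-1.0, -0.3]]

h, n = 0.005, 200                       # grid over [0, 1]
I2 = [[1.0, 0.0], [0.0, 1.0]]

def mmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def picard_step(Phi):
    """One application of (2.23): cumulative trapezoidal integration of E*Phi."""
    g = [mmul(E(m * h), Phi[m]) for m in range(n + 1)]
    out = [[row[:] for row in I2]]
    acc = [[0.0, 0.0], [0.0, 0.0]]
    for m in range(1, n + 1):
        acc = [[acc[i][j] + 0.5 * h * (g[m - 1][i][j] + g[m][i][j])
                for j in range(2)] for i in range(2)]
        out.append([[I2[i][j] + acc[i][j] for j in range(2)] for i in range(2)])
    return out

def gap(P, Q):
    return max(abs(P[m][i][j] - Q[m][i][j])
               for m in range(n + 1) for i in range(2) for j in range(2))

Phi = [[row[:] for row in I2] for _ in range(n + 1)]   # Phi_0 = I everywhere
gaps = []
for _ in range(20):
    new = picard_step(Phi)
    gaps.append(gap(new, Phi))
    Phi = new
# gaps[k] decreases factorially: uniform convergence to Phi(t, 0) on [0, 1].
```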
APPROXIMATE SOLUTION

In this thesis, however, the emphasis has been placed on the qualitative aspects of Φ(t, t₀), and therefore we shall proceed in a different manner. We return to equations in the form (2.4), rewritten here; with no loss in generality, we have taken t₀ = 0:

        Φ̇(t, 0) = (A + ε F(t)) Φ(t, 0),   Φ(0, 0) = I    (2.4)

Let us interpret this as a perturbation by ε F(t) of the unperturbed equation

        Φ̇₀(t, 0) = A Φ₀(t, 0),   Φ₀(0, 0) = I     (2.24)

The solution to this is easily given as

        Φ₀(t, 0) = exp At

Hence, let us assume a solution of the form

        Φ(t, 0) = Φ₀(t, 0) Q(t, 0) = e^{At} Q(t, 0),   Q(0, 0) = I    (2.25)

Substituting in Eq. (2.4), we have

        Φ̇(t, 0) = A e^{At} Q(t, 0) + e^{At} Q̇(t, 0) = (A + ε F(t)) e^{At} Q(t, 0)

or

        Q̇(t, 0) = ε e^{−At} F(t) e^{At} Q(t, 0)    (2.26)

Integrating this, we obtain

        Q(t, 0) = I + ε ∫₀ᵗ e^{−Aτ} F(τ) e^{Aτ} Q(τ, 0) dτ    (2.27)

and hence

        Φ(t, 0) = e^{At} + ε ∫₀ᵗ e^{A(t−τ)} F(τ) Φ(τ, 0) dτ    (2.28)

This equation is a matrix Volterra equation having a unique solution. It can be shown (Ref. 9) that the solution can be expressed as a Neumann series of iterated kernels. Let

        K(τ, t) = e^{A(t−τ)} F(τ)

        K⁽²⁾(τ, t) = ∫₀ᵗ K(τ, σ) K(σ, t) dσ
          .
          .
        K⁽ⁿ⁾(τ, t) = ∫₀ᵗ K⁽ⁿ⁻¹⁾(τ, σ) K(σ, t) dσ

Then it can be seen that equation (2.28) can be written as

        Φ(t, 0) = e^{At} + ε ∫₀ᵗ K(τ, t) Φ(τ, 0) dτ

                = e^{At} + ε ∫₀ᵗ K(τ, t) e^{Aτ} dτ + ε² ∫₀ᵗ K⁽²⁾(τ, t) e^{Aτ} dτ + ...    (2.29)

Now, if we are justified in neglecting higher powers of ε, the solution becomes

        Φ(t, 0) = e^{At} + ε ∫₀ᵗ K(τ, t) e^{Aτ} dτ

                = e^{At} + ε ∫₀ᵗ e^{A(t−τ)} F(τ) e^{Aτ} dτ

                = e^{At} ( I + ε Λ(t) )            (2.30)

where

        Λ(t) = ∫₀ᵗ e^{−Aτ} F(τ) e^{Aτ} dτ          (2.31)

In this case, we have arrived at an expression which is relatively easy to calculate. However, because of its simplicity, we must be wary of pitfalls. For example, it is well known that with certain problems which this type of technique had been expected to solve, difficulties with secular terms were encountered. These difficulties have been discussed elsewhere (Ref. 10) and we will not elaborate here. Be that as it may, we might still expect to obtain some useful data from expression (2.30). For this, we make use of the spectral representation theory discussed in the previous section.
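Before proceeding, the first-order formula (2.30) can be tested numerically. The sketch below (a modern addition with an illustrative 2 x 2 system) evaluates Λ(t) of Eq. (2.31) by quadrature and compares e^{At}(I + εΛ(t)) against an accurate RK4 solution of Eq. (2.4): the residual shrinks as ε is reduced, and the first-order result beats the zeroth-order guess e^{At}.

```python
import math

# Illustrative system for checking Eq. (2.30).
A = [[-0.2, 1.0], [-1.0, -0.2]]
def F(t):
    return [[0.0, 0.0], [math.cos(2.0 * t), 0.0]]

def madd(X, Y): return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]
def mscale(a, X): return [[a * X[i][j] for j in range(2)] for i in range(2)]
def mmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mexp(R, terms=40):
    S, term = [[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        term = mscale(1.0 / k, mmul(term, R))
        S = madd(S, term)
    return S

def lam_matrix(t_end, h=0.002):
    """Trapezoidal evaluation of Lambda(t) of Eq. (2.31)."""
    acc = [[0.0, 0.0], [0.0, 0.0]]
    def integrand(tau):
        return mmul(mmul(mexp(mscale(-tau, A)), F(tau)), mexp(mscale(tau, A)))
    prev = integrand(0.0)
    for m in range(1, round(t_end / h) + 1):
        cur = integrand(m * h)
        acc = madd(acc, mscale(0.5 * h, madd(prev, cur)))
        prev = cur
    return acc

def exact_phi(eps, t_end, h=0.002):
    """Accurate Phi(t_end, 0) by RK4 on Eq. (2.4)."""
    def deriv(t, P):
        return mmul(madd(A, mscale(eps, F(t))), P)
    t, P = 0.0, [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(round(t_end / h)):
        k1 = deriv(t, P)
        k2 = deriv(t + h / 2, madd(P, mscale(h / 2, k1)))
        k3 = deriv(t + h / 2, madd(P, mscale(h / 2, k2)))
        k4 = deriv(t + h, madd(P, mscale(h, k3)))
        P = madd(P, mscale(h / 6.0, madd(madd(k1, mscale(2.0, k2)),
                                         madd(mscale(2.0, k3), k4))))
        t += h
    return P

t_end = 2.0
eAt = mexp(mscale(t_end, A))
Lam = lam_matrix(t_end)

def approx_error(eps):
    approx = mmul(eAt, madd([[1.0, 0.0], [0.0, 1.0]], mscale(eps, Lam)))
    exact = exact_phi(eps, t_end)
    return max(abs(approx[i][j] - exact[i][j]) for i in range(2) for j in range(2))

e_small, e_big = approx_error(0.05), approx_error(0.1)
exact_01 = exact_phi(0.1, t_end)
e_zeroth = max(abs(eAt[i][j] - exact_01[i][j]) for i in range(2) for j in range(2))
```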
Also, we shall use the actual form of Eq. (1.9) which we found in Sec. I, and we neglect the term ε² P cos 2Ωt as being small. Further, let us assume x(0) = k uᵣ, where

        k  = some arbitrary constant
        uᵣ = rth eigenvector associated with the matrix M.
The reason for using this particular initial condition is two-fold. First, it simplifies analysis, and second, it follows the procedure used in the next section, the purpose of which is to study the effects of changes in ε and Ω on the transient motion. Thus, if ε = 0 and x(0) = k uᵣ, the resulting motion will consist only of the rth mode of the constant coefficient system. This analysis, then, shows the effect of a small ε on this motion. Consequently, if

        ẋ = (M + ε N cos Ωt) x,   x(0) = k uᵣ      (2.32)

then

        x(t) = Φ(t, 0) k uᵣ = k e^{Mt} ( I + ε Λ(t) ) uᵣ    (2.33)

where

        Λ(t) = ∫₀ᵗ e^{−Mτ} N e^{Mτ} cos Ωτ dτ

and uᵢ and vᵢ represent the eigenvectors of matrix M and their inverses. Also, the λᵢ are the eigenvalues of M. Hence,

        Λ(t) uᵣ = ∫₀ᵗ Σ_{i=1}^{n} ⟨vᵢ, N uᵣ⟩ uᵢ e^{(λᵣ−λᵢ)τ} cos Ωτ dτ
Interchanging the order of integration and summation, and keeping the constant factors outside of the integral, we have

        Λ(t) uᵣ = Σ_{i=1}^{n} ⟨vᵢ, N uᵣ⟩ uᵢ ∫₀ᵗ e^{(λᵣ−λᵢ)τ} cos Ωτ dτ    (2.34)

Now, for i = r, the integral becomes

        pᵣ(t) = sin Ωt / Ω                         (2.35)

and for i ≠ r, the integral is

        pᵢ(t) = [ e^{(λᵣ−λᵢ)t} ( Ω sin Ωt + (λᵣ−λᵢ) cos Ωt ) − (λᵣ−λᵢ) ] / [ (λᵣ−λᵢ)² + Ω² ]    (2.35a)

Let us then call

        qᵣᵣ(t) = ⟨vᵣ, N uᵣ⟩ pᵣ(t)                  (2.36)

and

        qᵣᵢ(t) = ⟨vᵢ, N uᵣ⟩ pᵢ(t),   i ≠ r

where pᵢ(t) is defined by Eq. (2.35a). Consequently, Eq. (2.34) becomes

        Λ(t) uᵣ = Σ_{i=1}^{n} qᵣᵢ(t) uᵢ            (2.37)

Then, using Eq. (2.33), we have

        x(t) = k ( e^{Mt} uᵣ + ε e^{Mt} Σ_{i=1}^{n} qᵣᵢ(t) uᵢ )

or

        x(t) = k ( uᵣ e^{λᵣ t} + ε Σ_{j=1}^{n} uⱼ qᵣⱼ(t) e^{λⱼ t} )    (2.38)

Now, Eq. (2.38) is a very interesting result. It says that for small ε, the resulting solution can be approximated by a weighted sum of all the modes appearing in the constant coefficient case. Further, it is noted that the weighting factors, qᵣⱼ(t), are functions of N and Ω, and consist only of exponential and periodic functions of time, or constants. The only exception occurs when we let Ω → 0.

Let us consider the weighting functions qᵣᵢ(t) as functions of Ω.
1. When Ω → ∞, it is seen that both pᵣ(t) and pᵢ(t) get very small. Therefore, we would expect that when the frequency of the parameter oscillation is large, the weighting functions qᵣⱼ(t) are very small, and consequently the resulting transient is virtually identical to the case when ε = 0.
2. The case of Ω → 0 is more involved. We know that, according to Eq. (2.32), if Ω → 0 the solution should approach the solution of the constant coefficient system

        ẋ = (M + ε N) x,   x(0) = k uᵣ

However, Eq. (2.35) says that with Ω → 0,

        pᵣ(t) → t

Although the term pᵢ(t) could be expected to contribute a term of the right form to the resulting solution, the term pᵣ(t) is clearly a divergent linear factor. This factor is obviously impossible if we know that all the eigenvalues of M + εN have negative real parts, and the systems we are considering do have this property at least for ε ≤ .2. Thus, we find the appearance of a secular term in the solution when Ω = 0, and consequently the solution is invalid under this condition. However, since we are not overly interested in this condition, we shall not go into the details of getting rid of this term, except to say that the root of the trouble lies in representing trigonometric functions by finite series.
3. Let us now consider the case when Ω = 2ω_DR. Now, if we choose the initial condition such that uᵣ is the eigenvector corresponding to the Dutch roll mode when ε = 0, then we can let

        λᵣ = η_DR + j ω_DR

Consequently, the weighting factor qᵣᵢ(t) corresponding to the complex conjugate root, λᵢ = η_DR − j ω_DR, will have as a denominator

        ( η_DR + j ω_DR − η_DR + j ω_DR )² + (2ω_DR)² = 0

Hence, when the frequency of the parameter oscillation, Ω, is twice the Dutch roll frequency, the weighting function becomes infinite regardless of the value of ε. Intuitively, we might again question the validity of this result. However, even if the solution is not exactly correct, it indicates that in the region of Ω = 2ω_DR, large instabilities might be expected.
4. Finally, let us see what happens when Ω = ω_DR. Again let us choose λᵣ = η_DR + j ω_DR. Now, if there is any real root λᵢ which is very nearly equal to η_DR, the denominator will again become

        ( η_DR + j ω_DR − λᵢ )² + ω_DR² ≈ 0

Hence, again in this case, we find very large values for the weighting function.
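The closed forms (2.35) and (2.35a), and the vanishing denominator of case 3, can be verified numerically. The sketch below (a modern addition, with an illustrative complex value for λᵣ − λᵢ) checks both integrals against direct quadrature.

```python
import cmath

# Numerical check of Eqs. (2.35) and (2.35a) for the integral
#   int_0^t e^{(lambda_r - lambda_i) tau} cos(Omega tau) dtau.
def p_r(t, Omega):
    """Eq. (2.35), the i = r case."""
    return cmath.sin(Omega * t) / Omega

def p_i(t, a, Omega):
    """Eq. (2.35a), the i != r case, with a = lambda_r - lambda_i."""
    num = cmath.exp(a * t) * (Omega * cmath.sin(Omega * t)
                              + a * cmath.cos(Omega * t)) - a
    return num / (a * a + Omega * Omega)

def by_quadrature(a, Omega, t, n=20000):
    """Trapezoidal evaluation of the defining integral."""
    h = t / n
    f = lambda tau: cmath.exp(a * tau) * cmath.cos(Omega * tau)
    s = 0.5 * (f(0.0) + f(t))
    for k in range(1, n):
        s += f(k * h)
    return s * h

a = complex(-0.1, 1.5)       # an illustrative value of lambda_r - lambda_i
Omega, t = 2.0, 4.0
err_i = abs(p_i(t, a, Omega) - by_quadrature(a, Omega, t))
err_r = abs(p_r(t, Omega) - by_quadrature(0.0, Omega, t))

# Case 3: with lambda_r - lambda_i = 2j*omega_DR and Omega = 2*omega_DR,
# the denominator a^2 + Omega^2 of (2.35a) vanishes identically.
omega_dr = 1.0
res_denom = (2j * omega_dr) ** 2 + (2.0 * omega_dr) ** 2
```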
Therefore, from this analysis we can conc1ude that at high and low values of.!2.. , the solutions are stabie and can be predicted from
considering the constant coefficient systems. The two frequencies at
which ánomalous behaviour might be expected are
..n. /
WDR = 1 and..n..
jWDR = 2. Because of the limitations of this analysis, in the next ..section we shaU reconsider the entire question from the stability view
2.4 Stability Theory
In this section, we deal with methods concerned with investigating the stability of the system.

We shall not go into elaborate definitions of the various types of stability, as these are available in many good references, e.g. Refs. 5, 11 or 12. It will suffice for our purpose to understand the concept of stability as follows: if a system is stable, it will remain in a certain finite region of the state space for all time. In other words, none of the state variables approach infinity as t → ∞. We should remark here that we are dealing with a linear system, and consequently, if the system is stable, it is asymptotically stable; all variables approach zero as t → ∞.
Numerous stability theorems exist (Refs. 5 and 13) for testing the stability of a system characterized by the equation

    Φ̇(t, 0) = (A + εF(t)) Φ(t, 0),    Φ(0, 0) = I

However, almost all but the one with which we shall deal are either much too complex to apply easily, or else impose severe conditions on the nature of F(t).
The one theorem pertinent to this analysis is taken from Bellman, Ref. 13. Before we come to the theorem, however, we note the following definitions and lemma.

Definition:

    ‖M‖ = Σ |m_ij|,    i, j = 1, ..., n

Hence

    ‖M + N‖ ≤ ‖M‖ + ‖N‖
    ‖Mx‖ ≤ ‖x‖ ‖M‖
    ‖∫₀ᵗ M dt‖ ≤ ∫₀ᵗ ‖M‖ dt
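These properties of the norm are easy to spot-check numerically. The matrices below are arbitrary illustrations, not data from the report:

```python
def norm(M):
    """Sum-of-absolute-values matrix norm: ||M|| = sum over i,j of |m_ij|."""
    return sum(abs(e) for row in M for e in row)

def vnorm(x):
    return sum(abs(e) for e in x)

def add(M, N):
    return [[a + b for a, b in zip(rm, rn)] for rm, rn in zip(M, N)]

def matvec(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

M = [[1.0, -2.0], [0.5, 3.0]]
N = [[-1.0, 0.0], [2.0, -0.5]]
x = [1.0, -1.0]

assert norm(add(M, N)) <= norm(M) + norm(N)       # ||M+N|| <= ||M|| + ||N||
assert vnorm(matvec(M, x)) <= vnorm(x) * norm(M)  # ||Mx|| <= ||x|| ||M||
```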
Bellman–Gronwall Lemma: If u, v ≥ 0, if c₁ is a positive constant, and if

    u ≤ c₁ + ∫₀ᵗ u v dt₁        (i)

then

    u ≤ c₁ exp(∫₀ᵗ v dt₁)        (ii)                    (2.39)

Proof:
From (i) we have

    u v / (c₁ + ∫₀ᵗ u v dt₁) ≤ v

Integrating both sides between 0 and t,

    log(c₁ + ∫₀ᵗ u v dt₁) − log c₁ ≤ ∫₀ᵗ v dt₁

    c₁ + ∫₀ᵗ u v dt₁ ≤ c₁ exp(∫₀ᵗ v dt₁)

    u ≤ c₁ exp(∫₀ᵗ v dt₁)

Hence, using these definitions and the Bellman–Gronwall Lemma, we have the following theorem.
THEOREM (from Ref. 13): If all the solutions of

    ẏ = Ay,    A = constant matrix

approach zero as t → ∞, the same holds true for the solutions of

    ẋ = (A + εF(t)) x        (iii)

provided that ε‖F(t)‖ ≤ m for t ≥ t₀, where m is a constant depending on A.

Proof:
Using a similar argument as in Sec. 2.4, we have by direct integration

    x = y + ε ∫₀ᵗ Y(t − τ) F(τ) x(τ) dτ

where Y(t) is the solution to

    dY/dt = AY,    Y(0) = I

From the explicit representation of the solution of dY/dt = AY, it follows that if ‖Y‖ → 0 as t → ∞, there exist positive constants p and ℓ such that ‖y‖ ≤ ℓe^(−pt) and ‖Y‖ ≤ ℓe^(−pt) for t ≥ 0. This is so because the initial conditions on Y have been stipulated as Y(0) = I. Also, the most positive value possible for p is the magnitude of the real part of the most positive eigenvalue; e.g., if the system has four eigenvalues −3, −2 + j, −2 − j, −1, then p cannot exceed 1.

Hence,

    ‖x‖ ≤ ℓe^(−pt) + εℓ ∫₀ᵗ e^(−p(t − τ)) ‖F(τ)‖ ‖x(τ)‖ dτ

or

    ‖x‖ e^(pt) ≤ ℓ + εℓ ∫₀ᵗ e^(pτ) ‖F(τ)‖ ‖x(τ)‖ dτ

Making use of the lemma,

    ‖x‖ e^(pt) ≤ ℓ exp(εℓ ∫₀ᵗ ‖F(τ)‖ dτ)

or

    ‖x‖ ≤ ℓ exp((mℓ − p)t)                               (2.40)

Thus, if ε‖F(t)‖ ≤ p/ℓ, the system is stable.

The theorem is very easy to apply, and therefore in some cases may prove useful. However, as is discussed later, it usually imposes very weak conditions on stability.
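The content of the theorem and of the bound (2.40) can be illustrated numerically. The matrices and constants below are arbitrary illustrations, not the aircraft equations: with A = diag(−1, −2) one may take p = 1 and ℓ = 2 in the sum norm, and F(t) = cos(Ωt)N with ‖N‖ = 2 gives ε‖F(t)‖ ≤ m = 2ε, so the computed motion should stay below ℓ·exp((mℓ − p)t):

```python
import math

def deriv(t, x, eps, Om):
    # xdot = (A + eps*cos(Om*t)*N) x,  A = diag(-1, -2), N = [[0,1],[1,0]]
    c = eps * math.cos(Om * t)
    return [-1.0 * x[0] + c * x[1], c * x[0] - 2.0 * x[1]]

def rk4_bound_holds(eps=0.1, Om=2.0, dt=0.01, T=10.0):
    # ||N|| = 2 in the sum norm, so eps*||F(t)|| <= m = 2*eps.
    m, p, ell = 2.0 * eps, 1.0, 2.0
    x, t = [1.0, 0.0], 0.0
    while t < T:
        k1 = deriv(t, x, eps, Om)
        k2 = deriv(t + dt/2, [xi + dt/2*ki for xi, ki in zip(x, k1)], eps, Om)
        k3 = deriv(t + dt/2, [xi + dt/2*ki for xi, ki in zip(x, k2)], eps, Om)
        k4 = deriv(t + dt, [xi + dt*ki for xi, ki in zip(x, k3)], eps, Om)
        x = [xi + dt/6*(a + 2*b + 2*c4 + d)
             for xi, a, b, c4, d in zip(x, k1, k2, k3, k4)]
        t += dt
        # The Bellman-Gronwall envelope of Eq. (2.40):
        if abs(x[0]) + abs(x[1]) > ell * math.exp((m * ell - p) * t):
            return False
    return True

assert rk4_bound_holds()   # m*ell = 0.4 < p = 1, so the bound guarantees decay
```

Here mℓ = 0.4 < p = 1, so the theorem guarantees stability; the simulated norm stays well under the envelope, which illustrates how conservative the bound is.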
LIAPUNOV TECHNIQUE

The one powerful technique for ascertaining system stability is to make use of the second method of Liapunov. Here, for brevity, we shall only introduce the basic principles, state some definitions and theorems without proof, and then proceed to utilize the method. For a complete discussion of this method, the reader is referred to References 11, 12 or 14, where further references may be found.

The technique reasons as follows. If we are given a system characterized by an ordinary differential equation (in fact the equation may also be a partial or a difference equation), rather than attempting to obtain an outright solution in closed form, we obtain, by some means or other, the Liapunov function. The characteristics of this function, which depend only on the system equation, are sufficient to divulge whether or not the system is stable. A difficulty is usually encountered only in trying to find the Liapunov function. We now note the following definitions.
Definition 1: Positive (Negative) Definite

A function is called positive (negative) definite in a given region about the origin if it has a positive (negative) sign at all points of the region except at the origin, where it is zero.

Definition 2: Semidefinite

A function is called semidefinite in a given region about the origin if it is zero at the origin and is zero or of the same sign in the given region.
Definition 3: Liapunov Function

If ẋ = X(x, t), then a positive definite function of x, V(x), in a certain region about the origin, is called a Liapunov function provided V̇(x) = X · grad V ≤ 0 in that region.

The next two theorems, presented without proof (see Ref. 12), form the foundation of the Liapunov technique. It must be pointed out that, on reading the literature, some authors use slightly different definitions of Liapunov functions, and it is well to keep this fact in mind.
Theorem 1, Stability Theorem

For a system of nth order described by ẋ = X(x, t), if a Liapunov function can be found, the system is stable.

Theorem 2, Instability Theorem

For a system of nth order described by ẋ = X(x, t), if there exists a definite real-valued continuous function V(x) with a definite time-derivative V̇(x) of the same sign, then the system is unstable.
In order to have an intuitive feel for these theorems, the analogy between the Liapunov function and the concept of energy might be useful. A system which has a net decrease in energy is always stable, and similarly, a negative V̇ ensures stability. In some applications, in fact, the energy of the system provides a useful Liapunov function. However, it must be remembered that the Liapunov function is of much wider generality than energy, since the latter concept usually deals solely with physical systems, while the former is universal, dependent only on the fact that the system under study must be describable mathematically.
Now we come to the problem of finding a suitable Liapunov function for the given system. For constant coefficient systems, this chore is simple, and has been sufficiently covered in the references mentioned. For time-varying parameter systems, the problem is considerably more complex, and for the general case the reader is referred to Ref. 15. Here, we try to employ some trickery in order to avoid the complexities of the general case.
In order to find a Liapunov function for the case under consideration, we proceed as follows. Since we are interested in finding the effects of ε in our system equations (1.8), we construct a Liapunov function for the case with ε = 0, and then study this function as ε is increased. By this method we would hope to obtain a useful upper bound for ε.
To find the Liapunov function for the constant coefficient case, we introduce a linear transformation matrix T, to be determined. We know (Ref. 13) that there exists a constant matrix T such that

    T⁻¹MT = diag(λ_i)                                    (2.41)

where diag(λ_i) is a diagonal matrix whose elements are λ_i, and the λ_i are the set of eigenvalues of M. It should be noted that the matrix M in our case has distinct roots with negative real parts. On inspection, one will see that the matrix T is nothing more than a matrix whose columns are the familiar eigenvectors of M. Similarly, T⁻¹ is the matrix whose rows are the inverses of these eigenvectors, defined by T⁻¹T = I.
Now, letting

    x = Ty
    B = T⁻¹NT                                            (2.42)
    C = T⁻¹PT

and substituting into Eq. (1.9), we obtain

    ẏ = T⁻¹(M + ε cos Ωt N + ε² cos² Ωt P) Ty

or

    ẏ = (J + ε cos Ωt B + ε² cos² Ωt C) y                (2.43)

where J = diag(λ_i).
Obviously (from the first of Eq. (2.42)), if y is bounded, so is x. Now let

    V = y*y                                              (2.44)

where y* represents the complex conjugate transpose of y. Consequently, V is positive definite.

Hence, keeping in mind that if Rx = y, then x*R* = y*, we have

    V̇ = ẏ*y + y*ẏ
       = y*(J* + J + ε cos Ωt (B* + B) + ε² cos² Ωt (C* + C)) y        (2.45)
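The identity (2.45) is easy to verify numerically, since V̇ = ẏ*y + y*ẏ = 2 Re(y*ẏ). The 2 × 2 matrices below are arbitrary illustrations, not the aircraft matrices (the ε² C term is taken as zero for brevity):

```python
import math

# Toy diagonal J (stable eigenvalues) and toy B; C is omitted (taken as 0).
J = [[complex(-0.1, 1.0), 0], [0, complex(-0.1, -1.0)]]
B = [[0.2, 0.5], [0.3, -0.1]]
eps, Om = 0.3, 2.0

def matvec(M, y):
    return [sum(M[i][j] * y[j] for j in range(2)) for i in range(2)]

def ydot(t, y):
    c = eps * math.cos(Om * t)
    M = [[J[i][j] + c * B[i][j] for j in range(2)] for i in range(2)]
    return matvec(M, y)

def V(y):
    return sum(abs(yi) ** 2 for yi in y)   # V = y*y

# Compare a finite difference of V with 2*Re(y* ydot) at one point.
t, h = 0.7, 1e-6
y = [complex(1.0, 0.5), complex(-0.3, 0.2)]
yp = [yi + h * di for yi, di in zip(y, ydot(t, y))]   # Euler step to t + h
fd = (V(yp) - V(y)) / h
analytic = 2 * sum((yi.conjugate() * di).real for yi, di in zip(y, ydot(t, y)))
assert abs(fd - analytic) < 1e-3
```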
When ε = 0, since J has eigenvalues with negative real parts, V̇ is negative definite. Hence the system remains stable as long as ε is small enough to keep V̇ negative definite. To test for negative definiteness, we use a theorem by Bhatia (Ref. 15).
The quadratic form xᵀD(t)x, where xᵀ is the transpose of x, is positive definite in the sense of Liapunov if the following inequalities are satisfied:

    |D_K(t)| ≥ δ > 0,    K = 1, 2, ..., n − 1,    for t ≥ 0
    |D(t)| ≥ δ > 0,      for t ≥ 0

where the |D_K(t)| are the principal minors of the determinant |D(t)| of the matrix D(t), and δ is an arbitrary positive constant. Hence, by applying this theorem to the quadratic form Eq. (2.45), we obtain a set of inequalities from which we can determine an upper bound on ε for stability.

III. PURPOSE OF ANALOGUE STUDIES
The equations obtained in Chapter I were programmed on the UTIAS PACE 221R analogue computer. Two separate sets of parameters were used, one corresponding to a swept-wing subsonic jet transport during a low-level, low-speed flight condition, and the other to a slender delta-wing supersonic transport, also during a low-speed condition. These we have called AIRCRAFT #1 and #2 respectively, and their parameters are tabulated in Appendix B. The analogue computer diagram is shown in Figs. 24(a), (b) and (c).

The reasoning behind the method of experimentation proceeded as follows. From previous discussions, we have observed that the function of utmost importance is the state transition matrix Φ(t, 0). We also know that by observing the motion of the homogeneous system, we are in effect studying Φ(t, 0) as it operates on different initial conditions.
Furthermore, from our discussion of spectral representations, we know that Φ(t, 0) can be decomposed into a set of independent modes. Consequently, by choosing different initial conditions, we should be able to study each of these modes independently.
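For a constant coefficient system this modal isolation can be sketched concretely: an initial condition placed on an eigenvector excites only the corresponding mode, so the state stays on that eigenvector as it decays. The matrix and eigenvector below are arbitrary illustrations, not aircraft data:

```python
import math

# xdot = A x with A = [[0, 1], [-2, -3]]: eigenvalues -1 and -2, with
# eigenvectors (1, -1) and (1, -2). Start on (1, -1) and integrate by RK4;
# the motion should remain proportional to (1, -1), i.e. a single mode.
A = [[0.0, 1.0], [-2.0, -3.0]]

def f(x):
    return [A[0][0]*x[0] + A[0][1]*x[1], A[1][0]*x[0] + A[1][1]*x[1]]

x, dt = [1.0, -1.0], 0.001
for _ in range(3000):                      # integrate to t = 3
    k1 = f(x)
    k2 = f([xi + dt/2*ki for xi, ki in zip(x, k1)])
    k3 = f([xi + dt/2*ki for xi, ki in zip(x, k2)])
    k4 = f([xi + dt*ki for xi, ki in zip(x, k3)])
    x = [xi + dt/6*(a + 2*b + 2*c + d)
         for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]

assert abs(x[1] / x[0] + 1.0) < 1e-9       # still on the eigenvector (1, -1)
assert abs(x[0] - math.exp(-3.0)) < 1e-6   # decayed as exp(-t) with t = 3
```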
Now, with constant coefficient systems, we can calculate the system eigenvalues and the corresponding eigenvectors, and hence determine the initial conditions required to isolate the various modes. However, when the coefficients are varying, the required initial conditions for modal isolation are not known, and hence, unless one happens to be very lucky, the resulting motion will be a superposition of all the modes. It must be pointed out, however, that the equations which we are using are of the form

    ẋ = (M + εF(t)) x

Thus, we can expect that for small values of ε, the characteristic exponents of the basis functions are close to the eigenvalues of the constant coefficient system. Then we might hope that by choosing initial conditions which would excite only one mode, say λ_i, of the constant coefficient case (ε = 0), the resulting motion when ε ≠ 0 will consist predominantly of the mode corresponding to the characteristic exponent close to λ_i. This, in fact, was observed from the computer solutions, and it constitutes the core of the studies undertaken.

As was mentioned in Section I, the primary oscillating parameter was taken as C_L. C_L values of .5 were used for both aircraft, since they had identical wing loadings. The results are then shown for various values of ε and Ω. The computations were done for ε values only up to 1, as it was felt that greater values would be of no practical interest.

3.1 Results for Subsonic Jet Transport (Aircraft 1)
The first three figures, 1, 2 and 3, show the three modes of the aircraft under constant C_L values of 0.25, 0.50 and 0.75. It is noted that the rolling convergence and the Dutch roll mode were stable under each C_L value, while the spiral mode was unstable at C_L = .75. Further, it was noted that this spiral mode remained stable as C_L was increased up to C_L = .68, at which value it became neutrally stable. It is pointed out that for C_L values other than .5, a superposition of other modes can be observed. This is so because the initial conditions for each motion were kept at the values giving modal isolation at C_L = .5.
Figures 4 and 5 show the effect of Ω and ε on the spiral mode. Looking at figure 4, we see that, except for small oscillations, the spiral mode remains basically identical to the case of ε = 0, except at Ω/ω_DR = 1. There, a new unstable mode appears, which is illustrated by figure 5. The boundaries for this unstable mode, called MODE 1, were obtained using the analogue computer by increasing the amplitude at each frequency until neutral stability was reached. These boundaries are shown in Figs. 11(a) and 11(b). Both graphs are included, 11(a) plotted on the same scale as the three other stability region graphs, and 11(b) plotted to show the entire unstable region.
Figures 6 and 7 show the effects of Ω and ε on the rolling convergence mode. Except for the characteristic constant-amplitude oscillations, the mode was not affected by the varying parameters. This result might well have been expected since, to a good approximation, the eigenvalue λ₂ for the roll mode is given by ℓ_p/i_A, which is independent of C_L.

In figures 8, 9 and 10 we see the effects of Ω and ε on the Dutch roll mode. Figure 8 shows that with an ε value of .5, the Dutch roll is unstable only at Ω/ω_DR = 1, while at Ω/ω_DR = 2 the mode is considerably less damped than in the case with ε = 0. Figures 9 and 10 show how the magnitude of ε affects the Dutch roll mode with Ω/ω_DR = 1 and 2 respectively. It is seen that with Ω/ω_DR = 1 the onset of instability occurs at a lower value of ε, but at Ω/ω_DR = 2 the instability, when it occurs, is much more severe. In figure 12 we show the stability boundaries of this mode, called MODE 2. It is interesting to compare this with figure 11, where only one unstable region appears.
3.2 Results for Slender Supersonic Delta Transport (Aircraft 2)
On the whole, the behaviour of AIRCRAFT 2 is qualitatively similar to that of AIRCRAFT 1. Comparison of figure 22 with figure 11 and of figure 23 with figure 12 reveals a similar pattern. However, the regions of instability for MODE 2 are much larger for aircraft 2, while the total regions of instability for MODE 1 are approximately the same for both aircraft.

The results for AIRCRAFT 2 are presented in the same sequence as those of AIRCRAFT 1, described in the previous section. Again, for similar reasons as discussed previously, oscillations appear on the records of the spiral mode and roll mode for C_L values other than .5.

Further, it is noted that the spiral mode is stable for C_L values at least up to .75. This is probably one reason why the nature of the instability regions is different around Ω/ω_DR = 1 for the two aircraft. Thus, while figure 22 appears to be symmetric around Ω/ω_DR = 1, figures 11(a) and 11(b) show instability at low values of Ω/ω_DR when ε exceeds .5. Also, generally speaking, the instabilities associated with AIRCRAFT 2 are considerably more severe than those associated with AIRCRAFT 1. This is clearly demonstrated by comparing, for example, figure 8 and figure 19, and noting the time scales.
3.3 Discussion of Results
The first point we wish to make is that for 50% deviations in the value of C_L (with the exception of the spiral mode of AIRCRAFT 1, noted above), all constant coefficient systems considered remained stable. Therefore, the effects observed were due to the fact that the coefficients were oscillating, that is, to parametric excitation.
Secondly, we note that the most severe instabilities occurred with Ω/ω_DR = 2, while at Ω/ω_DR = 1 less serious instabilities also appeared. Further, no instabilities were observed centered around other frequencies, again with the exception of the spiral mode of AIRCRAFT 1.
It must be remarked that while there appear to be two unstable modes, which we have called MODE 1 and MODE 2, traces of both modes appear superimposed on several recordings. Again, the reason is that we are unable to calculate the initial conditions required for modal isolation.

Finally, we note that whereas the instability graphs give the actual regions where the motions become exponentially increasing, these graphs tell us nothing about the rate of decay or increase in the areas removed from the curve itself. A more useful graph would therefore show lines of equal real roots, to indicate how areas of equal damping are distributed; it was felt, however, that this would be going too deeply into details for this thesis. One should thus bear in mind that areas labelled as stable may well have damping and frequency characteristics significantly different from those of the constant coefficient case.
In the next section, we take up the question of relating these observations to the theory discussed in Section II.
IV. ANALYSIS OF COMPUTER RESULTS

As we observed in the preceding section, both aircraft had unstable modes of oscillation associated with them under certain conditions of Ω and ε. We must now utilize the theory from Section II in order to explain our observations. Specifically, we must obtain the functional dependence of the regions of instability on Ω and ε, and give some quantitative data on the rate of increase or decay of the transient motions. In effect, this means that we require the characteristic exponents of our basis functions.
However, as we pointed out earlier, no satisfactory techniques exist for finding these exponents, and therefore we must content ourselves with a less satisfactory analysis.
We might initially hope to obtain a fair analysis by employing Mathieu's equation. By this reasoning, we would hope that by using a second order equation approximation to the Dutch roll oscillation, perhaps of the form

    ψ̈ + a ψ̇ + b ψ = 0

with coefficients a and b depending on C_L, we can reduce the equation to a standard form of Mathieu, and hence use tabulated solutions to find regions of instability. However, the first difficulty we encounter is the fact that no second order equation is suitable to describe the Dutch roll. Nonetheless, if we accept this as an approximation, we then find that substituting our form for C_L gives us a rather complex Hill's equation, for which no simple solution exists. By making further simplifications we do indeed reach Mathieu's equation, but its usefulness is then highly questionable. Suffice it to say, better techniques are necessary.
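Even so, the qualitative behaviour of such a parametrically excited oscillator is easy to demonstrate numerically. The sketch below integrates a damped Mathieu-type equation ψ̈ + 2ζωψ̇ + ω²(1 + ε cos Ωt)ψ = 0 with illustrative values (ω = 1, ζ = .01, ε = .4, not aircraft data) and shows strong growth at Ω = 2ω but decay away from resonance, mirroring the Ω/ω_DR = 2 instability discussed above:

```python
import math

def amplitude_after(Om, eps=0.4, zeta=0.01, w=1.0, T=100.0, dt=0.002):
    """Integrate psi'' + 2*zeta*w*psi' + w*w*(1 + eps*cos(Om*t))*psi = 0
    by RK4 and return the final amplitude sqrt(psi^2 + (psi'/w)^2)."""
    def f(t, s):
        psi, v = s
        return [v, -2*zeta*w*v - w*w*(1 + eps*math.cos(Om*t))*psi]
    s, t = [1.0, 0.0], 0.0
    while t < T:
        k1 = f(t, s)
        k2 = f(t + dt/2, [si + dt/2*ki for si, ki in zip(s, k1)])
        k3 = f(t + dt/2, [si + dt/2*ki for si, ki in zip(s, k2)])
        k4 = f(t + dt, [si + dt*ki for si, ki in zip(s, k3)])
        s = [si + dt/6*(a + 2*b + 2*c + d)
             for si, a, b, c, d in zip(s, k1, k2, k3, k4)]
        t += dt
    return math.hypot(s[0], s[1] / w)

# Principal parametric resonance at Om = 2w: the motion grows strongly ...
assert amplitude_after(2.0) > 100.0
# ... while off resonance (Om = 3w) the damping wins and the motion decays.
assert amplitude_after(3.0) < 1.0
```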
The two tools we have at our disposal are the approximation technique and the stability theory. The former we shall not use quantitatively, because of the difficulties previously mentioned. We shall only point out that, using this theory, we predicted possible instabilities in the regions around Ω/ω_DR = 1 and 2, and no other instabilities, provided we also hold ε small.

The practical problem is then: how small must ε be in order that no instabilities exist at any frequency Ω? To answer this question, we turn to the stability theories discussed. Also, for comparison purposes, the techniques will be applied only to AIRCRAFT #2. The simplest approach is the Bellman–Gronwall stability theorem proved in Section 2.4.
The criterion is very easy to apply, but as usual with simple criteria, its usefulness is generally limited. However, because of its simplicity it must be included, and it proves to have a certain amount of value, as is shown by the next example.
ANALYSIS BY BELLMAN–GRONWALL TECHNIQUE

It will be recalled that this theorem stated that

    ‖x‖ ≤ c₁ exp((m c₁ + p) t)

where the quantities were:

    ‖x‖ = magnitude of the state vector = Σ |x_i|
    c₁ = quantity associated with the initial conditions, having a minimum value of n, the dimension of the state transition matrix
    m = maximum magnitude of the variable parts of the matrix in the equation of motion
    p = real part of the least damped eigenvalue of the constant coefficient system

Applying this theorem to AIRCRAFT #2, we have from Appendix B the following numerical values:

    c₁ = 4
    m = 1.11ε + .143ε²
    p = −.0128
We may then calculate the value of ε for which ‖x‖ ≤ c₁, i.e., m c₁ + p = 0. This turns out to be ε = .00288.

However, from the analogue computer results, we know that this limiting value of ε is much too conservative to be of practical use. Therefore, for this configuration, the other technique, by Liapunov, must be used. Before we proceed, however, it should be noted that in cases where ‖N‖ and ‖P‖ are smaller, one might find that an expected ε value of .1 would restrict the motion to an acceptable region. That is, if the theorem says that the time to double will not exceed T seconds, and it is felt that a T-second time to double is controllable by a human pilot, the quick Bellman analysis is sufficient to ensure stability.
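The limiting value quoted above follows directly from the condition m c₁ + p = 0 with the Appendix B numbers, as a quick check confirms:

```python
import math

# Solve m*c1 + p = 0 with c1 = 4, m = 1.11*eps + 0.143*eps**2, p = -0.0128,
# i.e. the quadratic 0.572*eps**2 + 4.44*eps - 0.0128 = 0 (positive root).
a, b, c = 0.143 * 4, 1.11 * 4, -0.0128
eps = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
print(round(eps, 5))   # → 0.00288, the limit quoted in the text
assert abs(eps - 0.00288) < 5e-5
```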
Let us now proceed with the Liapunov analysis which we derived in Section 2.4.

ANALYSIS BY LIAPUNOV THEORY

The transformation matrix T and its inverse for AIRCRAFT #2 were calculated using the University of Toronto IBM 7090 digital computer. Although the columns of T are proportional to the eigenvectors, they were for simplicity not normalized to unit magnitude. Thus, we let

    T = T_r1 + j T_i1
    T⁻¹ = T_r2 + j T_i2

where T_r1 and T_r2 are the real parts, and T_i1 and T_i2 are the imaginary parts, of T and T⁻¹ respectively. All are tabulated in Appendix B.

Using these, the matrix B = T⁻¹NT was calculated, and then (B* + B) was found. It was assumed that, since ‖P‖ was small, the effect of adding its contribution would be negligible.
was small, the effect of adding its contribution would be negligible".Then, we apply the theorem of Bhatia to the quadratic form
v
'"
l.T
[(J* +J) + (B* + B}€cos.!lt ] ywhere J is the diagonal matrix of the eigenvalues of M, given in Appendix
B. When this theorem is applied, four inequalities result, from which we can obtain four conditions on
e..
These inequalities become, Dl{t) 1 70
1
D2(t)I
7 0 , D3 (t)I "
0 \ D4(t)I
~'7
0E
<.512 €<.064 ~<. 1358€
<.
0738Hence, if
ê
< .
064, all inequalities are satisfied and we are assured of stability.Comparing this limit t0 the one obtained using the Bellman -Gronwall Theorem, we see a distinct improvement. On the other hand, on the basis of the analogue computer studies, Fig. 23 shows that values of
E
up to . 3 could be tolerated and still maintain stability.There are at least two possible explanations for this difference. First and most probable is the fact that -if a better Liapunov function had been chosen, a stronger limit would have been obtained. It is recalled that this particular form for the Liapunov function was chosen on the basis of convenience.
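The minor test itself is mechanical and easy to script. The 2 × 2 matrix below is an arbitrary illustration of the procedure, not the AIRCRAFT 2 matrices: scan ε upward and keep the largest value for which every leading principal minor stays positive for all t.

```python
import math

def minors_positive(eps, t):
    """Leading principal minors of the toy matrix
    D(t) = [[2 - eps*cos t, eps*cos t], [eps*cos t, 1 - eps*cos t]]."""
    c = eps * math.cos(t)
    d1 = 2.0 - c                          # 1x1 leading minor
    d2 = (2.0 - c) * (1.0 - c) - c * c    # full determinant, equals 2 - 3c
    return d1 > 0 and d2 > 0

def max_stable_eps(step=0.005):
    ts = [k * 2 * math.pi / 360 for k in range(360)]
    eps, best = 0.0, 0.0
    while eps <= 1.0:
        if not all(minors_positive(eps, t) for t in ts):
            break
        best = eps
        eps += step
    return best

# The determinant is 2 - 3*eps*cos(t), so the exact bound is eps < 2/3.
assert abs(max_stable_eps() - 2.0 / 3.0) < 0.01
```

For this toy matrix the binding condition is the determinant (ε < 2/3), just as |D₂(t)| was the binding condition (ε < .064) in the report's four inequalities.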
The second plausible, but not too probable, explanation is the following. When the analogue results were obtained, the points on the stability graphs, Figs. 22 and 23, were measured at specific frequencies. It is therefore possible that at some frequency in between experimental points, the graph has a sharp peak reaching down to an ε value of the order of .1. This is a question requiring further investigation.

4.1 Conclusions
Let us consider now what we can conclude from this discussion. By extending the domain of applicability of the lateral stability equations of a flight vehicle to include a regime where the vehicle is also oscillating longitudinally, we found ourselves involved with a set of linear differential equations with periodic coefficients. On analyzing these equations, we found that under sufficiently large parameter oscillations, the flight vehicle lateral dynamics became unstable, although a constant coefficient analysis at these parameter magnitudes indicated stability.

Further, the instabilities were most severe when the parameter was oscillating either with the frequency of the Dutch roll oscillation, or with twice this frequency. These predictions were confirmed by programming some typical aircraft on an analogue computer and studying their transient oscillations. Also noteworthy was the fact that even under conditions of stable transients, the natural frequency and damping of some of these transients were significantly altered by the