OPTIMIZING THE LINEAR QUADRATIC MINIMUM–TIME PROBLEM FOR DISCRETE DISTRIBUTED SYSTEMS

Mostafa Rachik, Ahmed Abdelhak

Faculté des Sciences Ben M'Sik, Département de Mathématiques, Université Hassan II Mohammadia, Casablanca, Morocco

e-mail: abdelllak@hotmail.com

With reference to the work of Verriest and Lewis (1991) on continuous finite-dimensional systems, the linear quadratic minimum-time problem is considered for discrete distributed systems and discrete distributed time delay systems. We treat the problem in two variants, with fixed and free end points. We consider a cost functional J which includes time, energy and precision terms, and then we investigate the optimal pair (N, u) which minimizes J .

Keywords: discrete distributed systems, time delay systems, minimum time, optimal control

1. Introduction

The linear quadratic minimum-time problem was considered before (Athans and Falb, 1996; Schwartz and Gourdeau, 1989), but it was not fully exploited. Verriest and Lewis (1991) treat the case of continuous finite-dimensional systems. Discrete systems in the finite-dimensional case were considered later (El Alami et al., 1998). In the present paper, we investigate discrete-time distributed systems. In the first part of this work, we consider systems described by

$$x(i+1) = A x(i) + B u(i), \quad 0 \le i \le N-1, \qquad x(0) = x_0, \tag{1}$$

where N is taken to be free, x(i) ∈ X is the state variable and u(i) ∈ U is the input variable. X and U are Hilbert spaces, and the operators A and B are bounded (A ∈ L(X) and B ∈ L(U, X)).

We consider a cost functional J(N, u) which includes time and energy, that is to say,

$$J(N, u) = \varphi(N) + \sum_{i=0}^{N-1} \langle u(i), R\, u(i)\rangle, \tag{2}$$

where u = (u(0), ..., u(N−1)) ∈ U^N and ϕ: ℕ → ℝ₊ is assumed to be positive and increasing, i.e.

$$\varphi(N) \ge 0, \quad \forall N \in \mathbb{N}, \qquad N \le M \Rightarrow \varphi(N) \le \varphi(M), \quad \forall N, M \in \mathbb{N},$$

and

$$\lim_{N \to +\infty} \varphi(N) = +\infty. \tag{3}$$

R ∈ L(U ) is a self-adjoint positive definite operator.

Then we investigate the optimal pair (N*, u*) ∈ ℕ* × U^{N*} which minimizes the cost functional J(N, u) under the constraint

$$(N, u) \in \{(M, v) \in \mathbb{N}^* \times U^M : x_v(M) = x_d\},$$

where N* is taken to be as small as possible, x_d is a given desired final state, x_v(·) is the trajectory of system (1) corresponding to the control v, and ℕ* is the set of all non-zero integers.

We establish that the optimal solution (N*, u*) exists, is unique and is obtained by solving a finite sequence of algebraic equations and by minimizing a time functional over a finite sub-interval of ℕ. An example is given to illustrate the results. The case where the final end point x(N) is free is also considered. In this case, the cost functional includes time, energy and precision terms, i.e.

$$J(N, u) = \varphi(N) + \sum_{i=0}^{N-1} \big( \langle u(i), R\, u(i)\rangle + \langle x(i), M x(i)\rangle \big) + \langle x(N), G x(N)\rangle, \tag{4}$$

where M, G ∈ L(X) are self-adjoint positive operators.

Since J contains both the final time N and quadratic components of x(i) and u(i), we shall call J a linear quadratic minimum-time performance index. In the second part of this paper, we treat the case of discrete distributed time delay systems. To settle the problem, we define a new state variable which satisfies a discrete system without delays.

In what follows, we denote by ⟨·,·⟩ and ⟨·,·⟩_U the inner products defined on X and U, respectively. We also denote by ℕ* and ℝ* the set of non-zero integers and the set of non-zero reals, respectively.

2. The Case of a Fixed End Point

Consider the linear discrete-time system given by

$$x(i+1) = A x(i) + B u(i), \quad 0 \le i \le N-1, \qquad x(0) = x_0, \tag{5}$$

where N is free, x(i) ∈ X is the state variable and u(i) ∈ U is the input variable. X and U are Hilbert spaces, A ∈ L(X) and B ∈ L(U, X). Let ϕ be a positive increasing function such that

$$\lim_{N \to +\infty} \varphi(N) = +\infty. \tag{6}$$

The problem can be stated as follows: Given the performance index

$$J(N, u) = \varphi(N) + \sum_{i=0}^{N-1} \langle u(i), R\, u(i)\rangle \tag{7}$$

and a desired final state x_d ∈ X, we investigate the optimal pair (N*, u*) ∈ ℕ* × U^{N*}, where N* is as small as possible and

$$J(N^*, u^*) = \min_{(N,u) \in V} J(N, u), \tag{8}$$

with V = {(N, u) ∈ ℕ* × U^N : x(N) = x_d}.

Definition 1. An integer N is said to be admissible if there exists a control sequence u ∈ U^N such that x(N) = x_d.

To determine the optimal sequence (N*, u*), we proceed as follows: For each admissible integer N, we determine an optimal control u_N = (u_N(0), ..., u_N(N−1)) which minimizes the cost J(N, u) over all controls u = (u(0), ..., u(N−1)) such that x(N) = x_d. The optimal time N* is the smallest integer which minimizes J(N, u_N) over all admissible integers N.

Let N ∈ ℕ* be a fixed integer. From (5) it follows that for every control u = (u(0), ..., u(N−1)) ∈ U^N, we have

$$x(N) = A^N x_0 + H_N u, \tag{9}$$

where H_N is the operator defined by

$$H_N : U^N \to X, \qquad (u(0), \ldots, u(N-1)) \mapsto \sum_{j=0}^{N-1} A^{N-1-j} B\, u(j). \tag{10}$$
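In a finite-dimensional truncation (X = ℝⁿ, U = ℝᵐ, with A and B matrices), the operator H_N of (10) is simply a block row matrix, and (9) can be evaluated directly. The following sketch is illustrative only; the helper names are not from the paper.

```python
import numpy as np

def build_H(A, B, N):
    """Assemble H_N = [A^{N-1}B | A^{N-2}B | ... | AB | B] from (10)."""
    blocks, Ak = [], np.eye(A.shape[0])
    for _ in range(N):              # Ak runs through I, A, ..., A^{N-1}
        blocks.append(Ak @ B)       # blocks[k] = A^k B
        Ak = A @ Ak
    return np.hstack(blocks[::-1])  # column block j is A^{N-1-j} B

def final_state(A, B, x0, u_seq):
    """x(N) = A^N x0 + H_N u for u = (u(0), ..., u(N-1)), cf. (9)."""
    N = len(u_seq)
    return np.linalg.matrix_power(A, N) @ x0 + build_H(A, B, N) @ np.concatenate(u_seq)
```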

Consider the inner product on U^N given by

$$\langle u, v\rangle_R = \sum_{i=0}^{N-1} \langle u(i), R\, v(i)\rangle_U, \tag{11}$$

u = (u(0), ..., u(N−1)), v = (v(0), ..., v(N−1)), and let H_N^* be the adjoint operator of H_N defined with respect to the inner products ⟨·,·⟩ and ⟨·,·⟩_R, i.e.

$$\langle H_N u, x\rangle = \langle u, H_N^* x\rangle_R, \quad \forall u \in U^N, \ \forall x \in X. \tag{12}$$

Define the functional ‖·‖_{F_N} by

$$\|x\|_{F_N} = \|H_N^* x\|_R, \quad \forall x \in X, \tag{13}$$

where ‖·‖_R is the norm corresponding to the inner product ⟨·,·⟩_R. Then the functional ‖·‖_{F_N} describes a semi-norm on X and a norm on F_0, where F_0 is the subspace of X defined by

$$F_0 = \overline{\mathrm{Im}\, H_N} = (\mathrm{Ker}\, H_N^*)^{\perp}. \tag{14}$$

Indeed, if x ∈ F_0 and ‖x‖_{F_N} = 0, then we deduce that x ∈ (Ker H_N^*) ∩ (Ker H_N^*)^⊥, which implies that x = 0.

We denote by ⟨·,·⟩_N the inner product on F_0 given by

$$\langle x, y\rangle_N = \langle H_N^* x, H_N^* y\rangle_R, \quad \forall x, y \in F_0. \tag{15}$$

Now, we introduce the operator

$$\Lambda_N : F_0 \to F_0, \qquad x \mapsto H_N H_N^* x. \tag{16}$$

For every x ∈ F_0, we have

$$\|\Lambda_N x\|_{F_N} = \|H_N^* \Lambda_N x\|_R = \|H_N^* H_N H_N^* x\|_R \le \|H_N^* H_N\| \, \|x\|_{F_N}.$$

Hence Λ_N is a bounded operator on F_0 endowed with the norm ‖·‖_{F_N}.

Let F_N be the completion of F_0 with respect to the norm ‖·‖_{F_N}. Since we have

$$|\langle \Lambda_N x, y\rangle| = |\langle x, y\rangle_N| \le \|x\|_{F_N} \|y\|_{F_N}, \quad \forall x, y \in F_0, \tag{17}$$

it is classical that Λ_N has a unique extension, denoted also by Λ_N, defined from F_N to its dual F_N' (Lions, 1988). Indeed, for any x ∈ F_0 we define the map ψ_x by

$$\psi_x : F_0 \to \mathbb{R}, \qquad y \mapsto \langle \Lambda_N x, y\rangle. \tag{18}$$

The map ψ_x is linear and continuous with respect to the norm ‖·‖_{F_N}. Since F_0 is dense in F_N, ψ_x can be extended to a bounded linear functional, denoted also by ψ_x, which belongs to the space F_N'. Now consider the map π defined by

$$\pi : \Lambda_N(F_0) \to F_N', \qquad \Lambda_N x \mapsto \psi_x. \tag{19}$$

We verify that the map π is well defined on Λ_N(F_0). Moreover, π is linear and injective. This allows us to identify the space Λ_N(F_0) with a subspace of F_N'. Using the operator π, we rewrite the operator Λ_N as follows:

$$\Lambda_N : F_0 \to F_N', \qquad x \mapsto \psi_x. \tag{20}$$

We show that Λ_N is linear and continuous with respect to the norm ‖·‖_{F_N}, which implies that Λ_N has a linear and bounded extension, also denoted by Λ_N, defined from F_N to its dual F_N'. Moreover, this extension is an isomorphism from F_N to F_N'. To show this, we prove that

$$\langle \Lambda_N x, x\rangle_{F_N',\,F_N} = \|x\|^2_{F_N}, \quad \forall x \in F_N, \tag{21}$$

where ⟨φ, x⟩_{F_N',F_N} denotes the value of the functional φ ∈ F_N' at x ∈ F_N. From (21) it follows that Λ_N is injective. Consequently, Λ_N is an isomorphism from F_N to Λ_N(F_N). This implies that Λ_N(F_N) is a closed subspace of F_N', and hence Λ_N(F_N) coincides with its closure. On the other hand, if A ⊂ F_N', we denote by A^⊥ the subspace of F_N given by

$$A^{\perp} = \{x \in F_N : \langle \varphi, x\rangle_{F_N',\,F_N} = 0, \ \forall \varphi \in A\}. \tag{22}$$

If B ⊂ F_N, we denote by B^⊥ the subspace of F_N' given by

$$B^{\perp} = \{\varphi \in F_N' : \langle \varphi, x\rangle_{F_N',\,F_N} = 0, \ \forall x \in B\}. \tag{23}$$

Let x ∈ (Λ_N(F_N))^⊥. Then from (22) it follows that

$$\langle \Lambda_N y, x\rangle_{F_N',\,F_N} = 0, \quad \forall y \in F_N. \tag{24}$$

This implies

$$\langle \Lambda_N x, x\rangle_{F_N',\,F_N} = 0 = \|x\|^2_{F_N}.$$

Hence x = 0. Consequently, (Λ_N(F_N))^⊥ = {0}. Thus

$$\Lambda_N(F_N) = \overline{\Lambda_N(F_N)} = \big((\Lambda_N(F_N))^{\perp}\big)^{\perp} = \{0\}^{\perp} = F_N', \tag{25}$$

which implies that Λ_N is an isomorphism from F_N to F_N'.

Remark 1. Suppose that x ∈ Im H_N. Then there exists u ∈ U^N such that x = H_N u. Consider the function ϕ_x defined by

$$\varphi_x : F_0 \to \mathbb{R}, \qquad y \mapsto \langle x, y\rangle. \tag{26}$$

We have

$$|\varphi_x(y)| = |\langle H_N u, y\rangle| = |\langle u, H_N^* y\rangle_R| \le \|u\|_R \, \|y\|_{F_N}, \quad \forall y \in F_0.$$

Hence ϕ_x is a bounded functional on F_0 endowed with the norm ‖·‖_{F_N}. Using the Hahn-Banach theorem, we deduce that ϕ_x ∈ F_N'. Consequently, we may assume that Im H_N ⊂ F_N', since the map i given by

$$i : \mathrm{Im}\, H_N \to F_N', \qquad x \mapsto \varphi_x \tag{27}$$

is injective.

Now, we can formulate the following proposition which characterizes the admissible integers.

Proposition 1. An integer N is admissible if and only if x_d − A^N x_0 ∈ F_N'.

Proof. If x_d − A^N x_0 ∈ F_N', then there exists a unique f ∈ F_N such that Λ_N f = x_d − A^N x_0. Consider the control u = H_N^* f. Then

$$x(N) = A^N x_0 + H_N u = A^N x_0 + \Lambda_N f = x_d, \tag{28}$$

and hence N is admissible.

Conversely, if N is admissible, then there exists a control u such that x(N) = x_d, which implies x_d − A^N x_0 = H_N u. Hence x_d − A^N x_0 ∈ Im H_N ⊂ F_N' (see Remark 1).

Proposition 2. If N is an admissible integer, then the control u_N solving the optimization problem

$$J(N, u_N) = \min_{u \in U^N} J(N, u) \quad \text{subject to } x(N) = x_d$$

is given by u_N = H_N^* f, where f ∈ F_N is the unique solution of the algebraic equation Λ_N f = x_d − A^N x_0. Moreover, the corresponding cost is

$$J(N, u_N) = \varphi(N) + \|f\|^2_{F_N}.$$

Proof. Let N be an admissible integer. From Proposition 1 it follows that there exists a unique f ∈ F_N such that Λ_N f = x_d − A^N x_0. Define u = H_N^* f ∈ U^N. Then

$$x(N) = A^N x_0 + H_N u = A^N x_0 + \Lambda_N f = x_d.$$

On the other hand, for each control v ∈ U^N such that x_v(N) = x_d, we have

$$x(N) = x_v(N) = x_d,$$

where x_v(·) denotes the trajectory of system (5) corresponding to the control v. Hence

$$H_N u = H_N v,$$

which implies

$$\langle H_N(u - v), f\rangle = 0, \quad \text{or} \quad \langle u - v, H_N^* f\rangle_R = 0.$$

Since u = H_N^* f, we deduce that ⟨u, u⟩_R = ⟨v, u⟩_R ≤ ‖v‖_R ‖u‖_R. Thus ‖u‖_R ≤ ‖v‖_R for all v ∈ U^N, i.e. u minimizes the energy term among all controls steering the system to x_d, so u_N = u.

Remark 2.

(a) By convention, if N is not admissible, we set J(N, u_N) = +∞.

(b) In order to obtain the minimizing control u_N, we have to solve the algebraic equation Λ_N f = x_d − A^N x_0. However, in general we do not have an explicit expression for the operator Λ_N^{-1}, so we propose the Galerkin method to approximate f (the bilinear form F_N × F_N → ℝ, (x, y) ↦ ⟨Λ_N x, y⟩ is coercive).
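In a finite-dimensional truncation, the algebraic equation of Remark 2(b) can be solved directly, with least squares playing the role of the Galerkin approximation when Λ_N is singular or ill-conditioned. A minimal sketch, reusing the hypothetical build_H helper from the previous snippet; none of these names are from the paper.

```python
import numpy as np

def optimal_control_fixed_endpoint(A, B, R, x0, xd, N, phi):
    """Sketch of Proposition 2 in finite dimensions:
    solve Lambda_N f = x_d - A^N x_0, then u_N = H_N^* f."""
    HN = build_H(A, B, N)
    Rblk = np.kron(np.eye(N), R)                 # block weight of the inner product <.,.>_R
    W = HN @ np.linalg.solve(Rblk, HN.T)         # Lambda_N = H_N H_N^*, with H_N^* = Rblk^{-1} HN^T
    rhs = xd - np.linalg.matrix_power(A, N) @ x0
    f, *_ = np.linalg.lstsq(W, rhs, rcond=None)  # least-squares surrogate for Lambda_N^{-1}
    uN = np.linalg.solve(Rblk, HN.T @ f)         # u_N = H_N^* f, stacked as (u(0), ..., u(N-1))
    return uN.reshape(N, B.shape[1]), phi(N) + f @ W @ f   # control and J(N, u_N)
```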

Finally, the optimal sequence (N*, u*) is given by the following proposition.

Proposition 3. Let A be the set of all admissible integers. If A is bounded, then N* is the smallest integer that minimizes J(N, u_N) over A. Otherwise, consider N_0 ∈ A and M ∈ A such that ϕ(M) > J(N_0, u_{N_0}). Then N* is the smallest integer that minimizes J(N, u_N) over the interval [1, M].

Proof. If A is bounded, the result is obvious. Suppose that A is not bounded and consider N_0, M ∈ A such that ϕ(M) > J(N_0, u_{N_0}). It follows that N_0 ∈ [1, M]. Indeed, if it is not, then ϕ(N_0) ≥ ϕ(M), which implies J(N_0, u_{N_0}) ≥ ϕ(N_0) ≥ ϕ(M) > J(N_0, u_{N_0}), a contradiction. Thus N_0 ∈ [1, M].

On the other hand, for each N ∈ ℕ* such that N > M, we have ϕ(N) ≥ ϕ(M). Consequently, J(N, u_N) ≥ ϕ(N) ≥ ϕ(M) > J(N_0, u_{N_0}), so no N > M can be optimal.

Example 1. Consider the discrete-time system described by

$$x(i+1) = A x(i) + B u(i), \quad i = 0, \ldots, N-1, \qquad x(0) = 0, \tag{29}$$

where N is free, Ω = ]0, 1[, x(i) ∈ L²(Ω) is the state variable, u(i) ∈ ℝ is the input variable and

$$A = S(\delta) \in L\big(L^2(\Omega)\big), \tag{30}$$

(S(t))_{t≥0} being the strongly continuous semigroup generated by the Laplacian operator Δ, i.e.

$$S(\delta)x = \sum_{i=1}^{\infty} e^{-i^2 \pi^2 \delta} \langle x, e_i\rangle e_i, \quad \forall x \in L^2(\Omega), \tag{31}$$

where ⟨·,·⟩ is the usual inner product on L²(Ω), δ > 0 and e_i(s) = √2 sin(iπs) ((e_i)_i is a basis of L²(Ω)). The operator B is defined by

$$B = \int_0^{\delta} S(\sigma) D \, d\sigma, \tag{32}$$

where

$$D : \mathbb{R} \to L^2(\Omega), \qquad u \mapsto u\, e_1(\cdot).$$

Remark 3. The difference equation (29) can be interpreted as the sampled version of the following continuous diffusion system:

$$\begin{cases}
\dfrac{\partial x}{\partial t} - \Delta x = g(s)\, u(t), & s \in \Omega,\ t \in [0, T],\\
x(0, \cdot) = x_0(\cdot) & \text{in } \Omega,\\
x(t, s) = 0 & \text{on } \partial\Omega \times ]0, T[,
\end{cases} \tag{33}$$

where g = e_1.

The linear quadratic minimum-time problem consists in determining the optimal pair (N*, u*) which minimizes the cost functional

$$J(N, u) = N^2 + \sum_{i=0}^{N-1} R\, u^2(i) \tag{34}$$

while driving the system from x_0 = 0 to x_d = α e_1, where α ∈ ℝ is given.


Lemma 1. The space F_0 = Im H_N is given by F_0 = E(e_1), where E(e_1) is the subspace of L²(Ω) spanned by the vector e_1.

Proof. For every N ≥ 1 and every u ∈ ℝ^N, we have

$$\begin{aligned}
H_N u &= \sum_{i=0}^{N-1} A^{N-1-i} B\, u(i)
= \sum_{i=0}^{N-1} S\big((N-1-i)\delta\big)\, u(i) \int_0^{\delta} S(\sigma) e_1 \, d\sigma\\
&= \sum_{i=0}^{N-1} u(i) \int_0^{\delta} S\big((N-1-i)\delta + \sigma\big) e_1 \, d\sigma
= \sum_{i=0}^{N-1} u(i) \int_0^{\delta} \sum_{j=1}^{\infty} e^{-j^2\pi^2((N-1-i)\delta + \sigma)} \langle e_1, e_j\rangle e_j \, d\sigma\\
&= \sum_{i=0}^{N-1} u(i) \int_0^{\delta} e^{-\pi^2((N-1-i)\delta + \sigma)} \, d\sigma \; e_1
= \Big( c \sum_{i=0}^{N-1} u(i)\, e^{-\pi^2(N-1-i)\delta} \Big) e_1,
\end{aligned}$$

where c is the constant given by

$$c = \int_0^{\delta} e^{-\pi^2 \sigma}\, d\sigma.$$

Hence Im H_N ⊂ E(e_1). Conversely, if x ∈ E(e_1), there exists β ∈ ℝ such that x = β e_1. Choose u = (u(0), ..., u(N−1)) such that u(0) = ··· = u(N−2) = 0 and u(N−1) = β/c. Then H_N u = x and

$$\mathrm{Im}\, H_N = E(e_1). \tag{35}$$

Consequently,

$$F_0 = \mathrm{Im}\, H_N = E(e_1), \qquad F_N = E(e_1).$$

Now, for every integer N ≥ 1 we have x_d − A^N x_0 ∈ Im H_N, since x_0 = 0 and x_d ∈ Im H_N. Hence from Remark 1 and Proposition 1 it follows that every integer N ≥ 1 is admissible. In order to solve the equation Λ_N f = x_d, we first determine the adjoint operators B^* and H_N^*. By simple calculations we establish that for every x ∈ L²(Ω), we have

$$B^* x = c \langle e_1, x\rangle, \tag{36}$$

$$H_N^* x = \big((H_N^* x)_0, \ldots, (H_N^* x)_{N-1}\big), \qquad (H_N^* x)_i = R^{-1} B^* A^{N-1-i} x = \frac{c}{R}\, e^{-\pi^2(N-1-i)\delta} \langle x, e_1\rangle. \tag{37}$$

Let f ∈ F_N (= E(e_1)) be such that Λ_N f = x_d in F_N'. Then

$$\langle \Lambda_N f, x\rangle = \langle x_d, x\rangle, \quad \forall x \in F_0. \tag{38}$$

Since f = a_N e_1 for some a_N ∈ ℝ, (38) implies

$$a_N \langle \Lambda_N e_1, \beta e_1\rangle = \langle x_d, \beta e_1\rangle, \quad \forall \beta \in \mathbb{R},$$

or, equivalently,

$$a_N \langle H_N^* e_1, H_N^* e_1\rangle_R = \langle x_d, e_1\rangle = \alpha.$$

Thus

$$a_N = \frac{\alpha}{\|H_N^* e_1\|_R^2} = \frac{\alpha}{\|e_1\|^2_{F_N}}. \tag{39}$$

Consequently, the optimal cost corresponding to u_N is

$$J(N, u_N) = N^2 + \|f\|^2_{F_N} = N^2 + a_N^2 \|e_1\|^2_{F_N} = N^2 + \frac{\alpha^2}{\|e_1\|^2_{F_N}}. \tag{40}$$

Using (37), we establish

$$\|e_1\|^2_{F_N} = \frac{c^2\big(e^{-2\pi^2(N-1)\delta} - e^{2\pi^2\delta}\big)}{R\big(1 - e^{2\pi^2\delta}\big)}.$$

For the numerical simulation we take α = 10, δ = 0.1, R = 1 and N_0 = 7. Then we apply Proposition 3 to deduce that the minimum time N* exists in the interval [1, 147] and is equal to 4. The optimal control is u* = H_{N*}^* f*, where f* = (2132.4) e_1. The evolution of J(N, u_N) with respect to N is given in Fig. 1.

Fig. 1. The evolution of J(N, u_N) with respect to N.
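For this example the search of Proposition 3 can be reproduced from the closed forms (39)-(40) alone; a short sketch (variable names are illustrative) that should recover M = 147, N* = 4 and a_{N*} ≈ 2132.4:

```python
import numpy as np

alpha, delta, R, N0 = 10.0, 0.1, 1.0, 7
c = (1.0 - np.exp(-np.pi**2 * delta)) / np.pi**2      # c = integral of e^{-pi^2 s} over [0, delta]

def e1_norm_sq(N):
    """||e_1||^2_{F_N} = (c^2/R) * sum_{k=0}^{N-1} e^{-2 pi^2 k delta}  (geometric sum)."""
    q = np.exp(-2.0 * np.pi**2 * delta)
    return (c**2 / R) * (1.0 - q**N) / (1.0 - q)

def J(N):
    return N**2 + alpha**2 / e1_norm_sq(N)            # eq. (40)

M = int(np.ceil(np.sqrt(J(N0))))                      # smallest M with phi(M) = M^2 > J(N0, u_{N0})
costs = {N: J(N) for N in range(1, M + 1)}
N_star = min(costs, key=costs.get)                    # smallest minimizer on [1, M]
print(M, N_star, alpha / e1_norm_sq(N_star))          # expected: 147, 4, approx 2132.4
```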


3. The Case of a Free End Point

In this case, we consider a cost functional J(N, u) which includes time, energy and precision terms, i.e.

$$J(N, u) = \varphi(N) + \sum_{i=0}^{N-1} \big( \langle u(i), R\, u(i)\rangle + \langle x(i), M x(i)\rangle \big) + \langle x(N), G x(N)\rangle, \tag{41}$$

where M, G ∈ L(X) are self-adjoint positive operators and R ∈ L(U) is a self-adjoint positive definite operator.

Then we investigate the optimal sequence (N*, u*), where N* is as small as possible and

$$J(N^*, u^*) = \min_{(N,u) \in \mathbb{N} \times U^N} J(N, u). \tag{42}$$

To show that this problem has a unique solution (N*, u*), we proceed in two steps: In the first one, for any fixed integer N, we determine the optimal control u_N which minimizes the cost J(N, u) over all controls u ∈ U^N. In the second step, we minimize J(N, u_N) over all integers N. By convention, we set

$$J(0, u) = \varphi(0) + \langle x_0, G x_0\rangle, \quad \forall N \in \mathbb{N}^*, \ \forall u \in U^N. \tag{43}$$

For a fixed N ∈ ℕ*, if we denote by u_N ∈ U^N the optimal control which satisfies

$$J(N, u_N) = \min_{u \in U^N} J(N, u), \tag{44}$$

then u_N is unique and given by the following proposition:

Proposition 4. Let N ∈ ℕ* and let K_i : X → X, i = 0, ..., N, be the family of operators given by

$$K_{i+1} = A^* K_i \big(I + B R^{-1} B^* K_i\big)^{-1} A + M, \quad i = 0, \ldots, N-1, \qquad K_0 = G.$$

Given an initial condition x_0 ∈ X, the optimal control u_N is given in feedback form by

$$u_N(i) = -R^{-1} B^* K_{N-1-i} \big(I + B R^{-1} B^* K_{N-1-i}\big)^{-1} A\, x(i), \quad i = 0, \ldots, N-1.$$

The corresponding cost is

$$J(N, u_N) = \langle K_N x_0, x_0\rangle.$$

Proof. For the proof, see (Zabczyk, 1974).
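For a finite-dimensional state space (so that A* = Aᵀ and B* = Bᵀ), the recursion and the feedback law of Proposition 4 can be coded directly. A sketch under that assumption, with illustrative names:

```python
import numpy as np

def lq_free_endpoint(A, B, R, M, G, x0, N):
    """Proposition 4 (finite-dimensional sketch): backward recursion for K_i,
    feedback control u_N(i), and cost <K_N x0, x0>."""
    n = A.shape[0]
    def gain(K):
        # S = K (I + B R^{-1} B^T K)^{-1}, computed via a linear solve
        Q = np.eye(n) + B @ np.linalg.solve(R, B.T) @ K
        return np.linalg.solve(Q.T, K.T).T
    Ks = [G]                                     # K_0 = G
    for _ in range(N):                           # K_{i+1} = A^T K_i (I + B R^{-1} B^T K_i)^{-1} A + M
        Ks.append(A.T @ gain(Ks[-1]) @ A + M)
    cost = x0 @ Ks[N] @ x0                       # J(N, u_N) = <K_N x0, x0>
    x, us = x0.copy(), []
    for i in range(N):
        u = -np.linalg.solve(R, B.T) @ gain(Ks[N - 1 - i]) @ A @ x   # feedback form of u_N(i)
        us.append(u)
        x = A @ x + B @ u                        # propagate the closed-loop state
    return us, cost
```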

Finally, the optimal pair (N*, u*) solving (42) is determined by the following result:

Proposition 5. Consider (N_0, M) ∈ ℕ² such that ϕ(M) > J(N_0, u_{N_0}). Then the minimum time N* is the smallest integer that minimizes J(N, u_N) over the interval [0, M]. Moreover, we have u* = u_{N*}.

Proof. The proof is similar to that of Proposition 3.

4. Discrete Time Delay Systems

Consider the discrete time delay system described by

$$\begin{cases}
x(i+1) = \displaystyle\sum_{j=0}^{m} A_j\, x(i-j) + \sum_{j=0}^{q} B_j\, u(i-j), & i = 0, \ldots, N-1,\\
x(0) = x_0, &\\
x(r) = \alpha_r, & -m \le r \le -1,\\
u(r) = \mu_r, & -q \le r \le -1,
\end{cases} \tag{45}$$

where x(i) ∈ X, u(i) ∈ U, X and U are Hilbert spaces, A_j ∈ L(X), j = 0, ..., m, and B_j ∈ L(U, X), j = 0, ..., q. Furthermore, (α_r)_r and (µ_r)_r are fixed initial conditions. Here m ≥ 0 and q ≥ 1 are given integers.

Given the cost functional

$$J(N, u) = \varphi(N) + \sum_{i=0}^{N-1} \langle u(i), R\, u(i)\rangle \tag{46}$$

and a desired final state x_d, we investigate the optimal pair (N*, u*) which steers the system from the initial state (x_0, (α_r)_{−m≤r≤−1}) to x_d with a minimal cost. We recall that ϕ: ℕ → ℝ₊ is a positive increasing map satisfying (6) and R ∈ L(U) is a self-adjoint positive definite operator. Similarly to the case of discrete systems without delays, the determination of the optimal pair (N*, u*) follows from solving the optimization problems:

Find u_N ∈ U^N such that

$$J(N, u_N) = \min_{u \in U^N} J(N, u) \tag{47}$$

and x(N) = x_d, where N is an admissible integer.

The determination of N* is then performed by minimizing J(N, u_N) over an appropriate bounded subset of ℕ.

First, we establish some results which are useful in the sequel. Define a new state variable e(i) ∈ X^{m+1} × U^q by

$$e(i) = \big(x(i), x(i-1), \ldots, x(i-m), u(i-1), \ldots, u(i-q)\big)^T. \tag{48}$$

Then e(·) satisfies the difference equation

$$e(i+1) = \Phi e(i) + \bar{B} u(i), \quad i = 0, \ldots, N-1, \qquad e(0) = e_0, \tag{49}$$

where

$$\Phi = \begin{pmatrix}
A_0 & A_1 & \cdots & A_m & B_1 & \cdots & B_q\\
I & 0 & \cdots & 0 & 0 & \cdots & 0\\
\vdots & \ddots & \ddots & \vdots & \vdots & & \vdots\\
0 & \cdots & I & 0 & 0 & \cdots & 0\\
0 & \cdots & \cdots & 0 & 0 & \cdots & 0\\
\vdots & & & \vdots & I & \ddots & \vdots\\
0 & \cdots & \cdots & 0 & 0 & I & 0
\end{pmatrix}, \qquad
\bar{B} = \begin{pmatrix} B_0\\ 0\\ \vdots\\ 0\\ I\\ 0\\ \vdots\\ 0 \end{pmatrix}, \tag{50}$$

and e_0 = (x_0, α_{−1}, ..., α_{−m}, µ_{−1}, ..., µ_{−q})^T.
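For matrices A_j ∈ ℝ^{n×n} and B_j ∈ ℝ^{n×p}, the block operators Φ and B̄ of (50) can be assembled mechanically; a sketch (the function name is illustrative):

```python
import numpy as np

def augment(A_list, B_list, n, p):
    """Build Phi and B_bar of (50) for A_list = [A_0, ..., A_m], B_list = [B_0, ..., B_q]."""
    m, q = len(A_list) - 1, len(B_list) - 1
    dim = (m + 1) * n + q * p
    Phi = np.zeros((dim, dim))
    Phi[:n, :(m + 1) * n] = np.hstack(A_list)              # first block row: A_0 ... A_m
    Phi[:n, (m + 1) * n:] = np.hstack(B_list[1:])          # ... followed by B_1 ... B_q
    for j in range(m):                                     # shift the stored states x(i-j)
        Phi[(j + 1) * n:(j + 2) * n, j * n:(j + 1) * n] = np.eye(n)
    for k in range(1, q):                                  # shift the stored inputs u(i-k)
        r = (m + 1) * n + k * p
        Phi[r:r + p, r - p:r] = np.eye(p)
    B_bar = np.zeros((dim, p))
    B_bar[:n, :] = B_list[0]                               # B_0 acts on the current input
    B_bar[(m + 1) * n:(m + 1) * n + p, :] = np.eye(p)      # u(i) is stored for future delays
    return Phi, B_bar
```

The augmented initial state e_0 is then obtained by stacking x_0, α_{−1}, ..., α_{−m}, µ_{−1}, ..., µ_{−q}.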

Let P ∈ L(X^{m+1} × U^q, X) be the projection operator defined by

$$P : X^{m+1} \times U^q \to X, \qquad (y_1, \ldots, y_{m+1}, v_1, \ldots, v_q) \mapsto y_1. \tag{51}$$

Then from (49) it follows that

$$x(N) = P e(N) = P \Phi^N e_0 + P \bar{H}_N u, \tag{52}$$

where H̄_N is the operator

$$\bar{H}_N : U^N \to X^{m+1} \times U^q, \qquad (u(0), \ldots, u(N-1)) \mapsto \sum_{i=0}^{N-1} \Phi^{N-1-i} \bar{B}\, u(i). \tag{53}$$

Let K_N = P H̄_N and G_0 = Im K_N. Then consider the semi-norm ‖·‖_{G_N} defined on X by

$$\|x\|_{G_N} = \|K_N^* x\|_R, \quad \forall x \in X, \tag{54}$$

where K_N^* is the adjoint operator of K_N defined with respect to the inner products ⟨·,·⟩ and ⟨·,·⟩_R. Since G_0 = Im K_N = (Ker K_N^*)^⊥, we deduce that the functional ‖·‖_{G_N} is a norm on G_0. Denote by G_N the completion of G_0 under the norm ‖·‖_{G_N} and consider the operator L_N given by

$$L_N : G_0 \to G_0, \qquad x \mapsto K_N K_N^* x. \tag{55}$$

Clearly, L_N defines a bounded operator on G_0 endowed with the norm ‖·‖_{G_N}. By standard results (Lions, 1988), the operator L_N may be extended to an isomorphism, denoted also by L_N, defined from G_N to its dual G_N'.

Proposition 6. An integer N ≥ 1 is admissible if and only if x_d − P Φ^N e_0 ∈ G_N'.

Proof. If N is admissible, then there exists a control sequence u ∈ U^N such that x(N) = x_d, which implies P e(N) = P Φ^N e_0 + K_N u = x_d, or x_d − P Φ^N e_0 = K_N u. Since Im K_N ⊂ G_N', we deduce that x_d − P Φ^N e_0 ∈ G_N'. Conversely, suppose that x_d − P Φ^N e_0 ∈ G_N'. Then there exists y ∈ G_N such that L_N y = x_d − P Φ^N e_0. Hence x_d = P Φ^N e_0 + K_N K_N^* y, or x_d = x(N) with u = K_N^* y. Thus N is admissible.

Proposition 7. For each admissible integer N, the control u_N exists, is unique and is given by u_N = K_N^* g, where g ∈ G_N is the unique solution of the algebraic equation

$$L_N g = x_d - P \Phi^N e_0.$$

Moreover, the optimal cost is

$$J(N, u_N) = \varphi(N) + \|g\|^2_{G_N}.$$

Proposition 8. Let A be the set of all admissible integers. If A is bounded, then N* is the smallest integer that minimizes J(N, u_N) over A. Otherwise, consider N_0 ∈ A and M ∈ A such that ϕ(M) > J(N_0, u_{N_0}). Then N* is the smallest integer that minimizes J(N, u_N) over the interval [1, M].

Remark 4. With obvious modifications, Remark 2 remains valid here as well.

4.1. The Case of a Free End Point

In this case, the cost functional J(N, u) depends on time, energy, the state and also the delays in the state, i.e.

$$J(N, u) = \varphi(N) + \sum_{i=0}^{N-1} \langle u(i), R\, u(i)\rangle + \sum_{i=0}^{N-1} \Big\langle \sum_{j=0}^{m_1} M_j x(i-j),\; M \sum_{j=0}^{m_1} M_j x(i-j) \Big\rangle + \langle x(N), G x(N)\rangle, \tag{56}$$

where M_j ∈ L(X), j = 0, ..., m_1, and m_1 ∈ ℕ is such that m_1 ≤ m.

The problem is to determine the optimal pair (N*, u*) which satisfies

$$J(N^*, u^*) = \min_{(N,u) \in \mathbb{N} \times U^N} J(N, u) \tag{57}$$

such that N* is as small as possible. To determine the unique solution (N*, u*), we proceed in two steps: In the first one, for any fixed integer N, we determine the optimal control u_N which minimizes the cost J(N, u) over all controls u ∈ U^N. In the second step, we minimize J(N, u_N) over all integers N. By convention, we set

$$J(0, u) = \varphi(0) + \Big\langle \sum_{j=0}^{m_1} M_j \alpha_{-j},\; M \sum_{j=0}^{m_1} M_j \alpha_{-j} \Big\rangle + \langle x_0, G x_0\rangle, \quad \forall u \in U^N. \tag{58}$$

To settle this problem, we rewrite the cost functional in terms of the state variable e(·) given in (49). Indeed, since

$$\sum_{j=0}^{m_1} M_j x(i-j) = \bar{M} e(i), \quad i = 0, \ldots, N-1, \qquad x(N) = P e(N), \tag{59}$$

where $\bar{M} = [\,M_0, \ldots, M_{m_1}, \underbrace{0, \ldots, 0}_{m - m_1 + q}\,]$, we deduce that

$$J(N, u) = \varphi(N) + \sum_{i=0}^{N-1} \langle u(i), R\, u(i)\rangle + \sum_{i=0}^{N-1} \langle e(i), \bar{M}^T M \bar{M}\, e(i)\rangle + \langle e(N), P^T G P\, e(N)\rangle. \tag{60}$$

Consequently, in order to settle the problem (57), we consider the cost functional defined by (60), where e(·) is the solution of the delay-free system (49). Then we apply the results of Section 3 to obtain the following propositions:

Proposition 9. Let N ∈ ℕ* and let K_i : X^{m+1} × U^q → X^{m+1} × U^q, i = 0, ..., N, be the family of operators given by

$$K_{i+1} = \Phi^* K_i \big(I + \bar{B} R^{-1} \bar{B}^* K_i\big)^{-1} \Phi + \bar{M}^T M \bar{M}, \quad i = 0, \ldots, N-1, \qquad K_0 = P^T G P.$$

Given the initial conditions x_0, (α_r)_r, the optimal control u_N is given in feedback form by

$$u_N(i) = -R^{-1} \bar{B}^* K_{N-1-i} \big(I + \bar{B} R^{-1} \bar{B}^* K_{N-1-i}\big)^{-1} \Phi\, e(i), \quad i = 0, \ldots, N-1,$$

and the corresponding cost is

$$J(N, u_N) = \langle K_N e_0, e_0\rangle.$$
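Proposition 9 is Proposition 4 applied to the augmented, delay-free system (49) with the weights of (60). Assuming the hypothetical helpers augment and lq_free_endpoint sketched earlier, and writing Mw for the middle weight operator M of (56), one possible wiring in finite dimensions is:

```python
import numpy as np

def delay_lq_free_endpoint(A_list, B_list, M_list, Mw, G, R, x0, alphas, mus, N):
    """Sketch of Proposition 9: reduce (45) to (49) and reuse the delay-free recursion."""
    n, p = A_list[0].shape[0], B_list[0].shape[1]
    m, q, m1 = len(A_list) - 1, len(B_list) - 1, len(M_list) - 1
    Phi, B_bar = augment(A_list, B_list, n, p)
    M_bar = np.hstack(M_list + [np.zeros((n, (m - m1) * n + q * p))])  # [M_0 ... M_{m1} 0 ... 0]
    P = np.zeros((n, Phi.shape[0])); P[:, :n] = np.eye(n)              # projection of (51)
    e0 = np.concatenate([x0] + list(alphas) + list(mus))               # augmented initial state e_0
    return lq_free_endpoint(Phi, B_bar, R, M_bar.T @ Mw @ M_bar,       # state weight of (60)
                            P.T @ G @ P, e0, N)                        # terminal weight of (60)
```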

Proposition 10. Consider (N_0, M) ∈ ℕ² such that ϕ(M) > J(N_0, u_{N_0}). Then the minimum time N* is the smallest integer that minimizes J(N, u_N) over the interval [0, M]. Moreover, u* = u_{N*}.

5. Conclusion

We have solved the linear quadratic minimum-time problem for discrete distributed systems and discrete distributed time delay systems. Under certain assumptions, we prove the existence and uniqueness of the solution. We consider the problem in two variants, with a fixed and a free end point.

In the first variant, we establish that the optimal pair can be determined by solving a finite sequence of algebraic equations and by minimizing a time functional over a finite sub-interval of ℕ. In the second variant, we use a similar technique and some results of (Zabczyk, 1974). For discrete distributed time delay systems, in order to solve the problem we have defined a new state variable which is the solution of a discrete system without delay.

References

Abdelhak A. (1997): Quelques éléments sur l'analyse et le contrôle des systèmes discrets à retard. — Ph.D. thesis, Faculté des Sciences Ben M'Sik, Casablanca, Morocco.

El Alami N., Ounsafi A. and Znaidi N. (1998): On the discrete linear quadratic minimum-time problem. — J. Franklin Institute, Vol. 335(b), pp. 525–532.

Athans M. and Falb P.L. (1996): Optimal Control. — New York: McGraw-Hill.

Karrakchou J. and Rachik M. (1995): Optimal control of discrete distributed systems with delays in the control: The finite horizon case. — Arch. Contr. Sci., Vol. 4(XL), Nos. 1–2, pp. 37–53.

Lions J.L. (1988): Exact controllability, stabilization and perturbation for distributed systems. — SIAM J. Contr. Opt., Rev. 30, pp. 71–86.

Lee K.Y., Chown S.N. and Barr R.O. (1972): On the control of discrete time distributed parameter systems. — SIAM J. Contr., Vol. 10, No. 2, pp. 361–376.

Pohjolainen S. (1982): On the discrete-time quadratic optimum control problem in reflexive Banach space. — Proc. 2nd IFAC Symp., Coventry, Great Britain, Pergamon Press, pp. 319–328.

Rachik M. (1995): Quelques éléments sur l'analyse et le contrôle des systèmes distribués. — Ph.D. thesis, EMI-Rabat, Morocco.

Schwartz H.M. and Gourdeau R. (1989): Optimal control of a robot manipulator using a weighted time-energy cost function. — Proc. IEEE Conf. Decision Contr., pp. 1628–1631.

Verriest E.I. and Lewis F.L. (1991): On the linear quadratic minimum-time problem. — IEEE Trans. Automat. Contr., Vol. 36, No. 7, pp. 859–863.

Zabczyk J. (1974): Remarks on the control of discrete-time parameter systems. — SIAM J. Contr., Vol. 12, No. 4, pp. 721–735.

Received: 25 May 2000

Revised: 18 June 2001
