STABILITY ANALYSIS OF SOLUTIONS TO AN OPTIMAL CONTROL PROBLEM ASSOCIATED WITH A GOURSAT-DARBOUX PROBLEM


DARIUSZ IDCZAK, MAREK MAJEWSKI, STANISŁAW WALCZAK

University of Łódź, Faculty of Mathematics, ul. S. Banacha 22, 90–238 Łódź, Poland
e-mail: {idczak, marmaj, stawal}@imul.math.uni.lodz.pl

In the present paper, some results concerning the continuous dependence of optimal solutions and optimal values on data for an optimal control problem associated with a Goursat-Darboux problem and an integral cost functional are derived.

Keywords: Goursat-Darboux problem, optimal control, continuous dependence

1. Introduction

The question of the continuous dependence of solutions to an optimal control problem on data (the so-called stability analysis of solutions, or well-posedness) is very important from the point of view of practical applications of the theory. Indeed, if the problem is related to a physical phenomenon, its data can be considered only an arbitrarily close approximation to the exact values. Consequently, if the solution does not depend continuously on the data, it is not actually determined.

Example 1. Let us consider the one-dimensional system with a scalar parameter ω ∈ [1/2, 1]

ẋ_1(t) = x_2(t),
ẋ_2(t) = u(t) − ω² x_1(t),   u(t) ∈ [0, 1],

with boundary conditions x_1(0) = 0, x_1(π) = 0, x_2(0) and x_2(π) free, and the cost functional

J_ω(x, u) = ∫_0^π x_1(t) (x_1(t) − 10³ √(2π)) dt → inf.

In a way analogous to (Idczak, 1998) one can show that for any parameter ω ∈ [1/2, 1] the above optimal control problem possesses an optimal solution (x_ω, u_ω).

Supported by grant 7 T11A 004 21 of the State Committee for Scientific Research, Poland.

One can show that for any ω ∈ [1/2, 1),

J_ω(x_ω, u_ω) ≥ −4 × 10³ π √(2π).

If ω = 1, then the control system has a solution x_ω (not unique) only for u ≡ 0. Moreover,

J_ω(x_ω, u_ω) = −4 × 10⁶.

So we see that as ω → 1, the optimal value has a jump, i.e. it is not continuous with respect to ω at the point ω = 1. In this case, we say that the optimal control problem is ill posed.
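As a numerical sanity check of the limit case (a sketch, not part of the original argument; the helper names are ours): for ω = 1 and u ≡ 0, the trajectories satisfying x_1(0) = x_1(π) = 0 are x_1(t) = A sin t, so the cost reduces to a quadratic in the amplitude A whose minimum is exactly the jump value −4 × 10⁶.

```python
import math

def J1(A: float) -> float:
    # J_1(x, 0) = ∫_0^π A sin t (A sin t − 10³√(2π)) dt
    #           = (π/2) A² − 2·10³ √(2π) · A,
    # using ∫_0^π sin²t dt = π/2 and ∫_0^π sin t dt = 2.
    return (math.pi / 2) * A**2 - 2 * 1e3 * math.sqrt(2 * math.pi) * A

# The quadratic is minimized at A* = 2·10³ √(2π) / π.
A_star = 2 * 1e3 * math.sqrt(2 * math.pi) / math.pi
print(J1(A_star))  # ≈ -4e6, the jump value −4 × 10⁶
```
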

The stability analysis of solutions to finite-dimensional mathematical programming problems was investigated, e.g., in (Bank et al., 1983; Fiacco, 1981; Levitin, 1975; Robinson, 1974). A survey of the results for the case of abstract Banach spaces is given in (Malanowski, 1993).

We study the continuous dependence of solutions and optimal values on data for the optimal control problem associated with the Goursat-Darboux problem

∂²w/∂x∂y = g(x, y, w, ∂w/∂x, ∂w/∂y, u), (x, y) ∈ P = [0, 1] × [0, 1] a.e.,

w(x, 0) = ϕ(x), w(0, y) = ψ(y), x, y ∈ [0, 1],

I(w, u) = ∫_0^1 ∫_0^1 G(x, y, w, u) dx dy → min,

u ∈ U_M = {u ∈ L²(P); u(x, y) ∈ M, (x, y) ∈ P a.e.}   (NH)

(in linear and nonlinear cases).

Systems of this type (in control theory they are called two-dimensional (2D) continuous Fornasini-Marchesini systems) have been investigated by many authors (see, e.g., (Bergmann et al., 1989; Pulvirenti and Santagati, 1975; Surryanarayama, 1973)). They can be applied to describe the gas absorption phenomenon (Idczak and Walczak, 1994; Tikhonov and Samarski, 1958).

Our aim is to obtain stability results analogous to those obtained in (Walczak, 2001) for an ordinary problem. The main tools used in the study of the stability question in the case of abstract Banach spaces are: the implicit function theorem for generalized equations (Robinson, 1980), the open mapping theorem for set-valued maps (Robinson, 1976), and composite optimization (Ioffe, 1994). Our proofs make no appeal to the above approaches.

First, we consider Problem (NH) in the case when the system is linear, autonomous, and the set M does not vary. The approach used here is based on the Cauchy formula for a solution to a linear autonomous system. Since, in the nonlinear case, we have no formula for solving system (NH), we use a different method there, based on the Gronwall lemma for functions of two variables and the continuity of the mapping M ↦ U_M (in the Hausdorff sense). Let us point out that this method cannot be applied in the linear case.

2. Space of Solutions to the Goursat–Darboux Problem

By a function of an interval (Łojasiewicz, 1988) we mean a mapping F defined on the set of all closed intervals [x_1, x_2] × [y_1, y_2] contained in P, with values in R. We say that F is additive if

F(Q ∪ R) = F(Q) + F(R)

for any closed intervals Q, R ⊂ P such that Q ∪ R is an interval contained in P and Int Q ∩ Int R = ∅.

Let a function z : P → R of two variables be given. The function F_z of an interval given by

F_z([x_1, x_2] × [y_1, y_2]) = z(x_2, y_2) − z(x_1, y_2) − z(x_2, y_1) + z(x_1, y_1)

for [x_1, x_2] × [y_1, y_2] ⊂ P is called the function of an interval associated with z.

We say that a function z : P → R of two variables is absolutely continuous (Walczak, 1987) if z(0, ·), z(·, 0) are absolutely continuous functions on [0, 1] and F_z is an absolutely continuous function of an interval.
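As a small illustration (not taken from the paper): the interval function F_z associated with any z : P → R is automatically additive, since the corner values on the common edge cancel. A quick check with a sample z of our choosing:

```python
def F(z, x1, x2, y1, y2):
    # Interval function associated with z on [x1, x2] × [y1, y2].
    return z(x2, y2) - z(x1, y2) - z(x2, y1) + z(x1, y1)

z = lambda x, y: x * y**2 + 3 * x + y  # any smooth z : P → R

# Additivity: split [0, 1] × [0, 1] at x = 0.4 into two abutting intervals.
whole = F(z, 0.0, 1.0, 0.0, 1.0)
parts = F(z, 0.0, 0.4, 0.0, 1.0) + F(z, 0.4, 1.0, 0.0, 1.0)
print(abs(whole - parts) < 1e-12)  # True — the edge values at x = 0.4 cancel
```

Note that the terms of z depending on x or y alone also cancel inside F_z; only the mixed part x·y² contributes.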

It can be shown (Walczak, 1987) that z : P → R is absolutely continuous if and only if there exist functions l ∈ L¹(P, R), l_1, l_2 ∈ L¹([0, 1], R) and a constant c ∈ R such that

z(x, y) = ∫_0^x ∫_0^y l(s, t) ds dt + ∫_0^x l_1(s) ds + ∫_0^y l_2(t) dt + c   (1)

for (x, y) ∈ P.

The above integral formula implies that the partial derivatives ∂z/∂x(x, y), ∂z/∂y(x, y), ∂²z/∂x∂y(x, y) exist a.e. on P and

∂z/∂x(x, y) = ∫_0^y l(x, t) dt + l_1(x),

∂z/∂y(x, y) = ∫_0^x l(s, y) ds + l_2(y),

∂²z/∂x∂y(x, y) = l(x, y)

for (x, y) ∈ P a.e. (Idczak, 1990).
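These formulas are easy to check numerically for smooth data (an illustrative sketch with sample densities of our choosing): build z from the representation (1) and recover l as a mixed second difference.

```python
import math

# Sample data: l(x, y) = cos x cos y, l_1(s) = 2s, l_2 ≡ 1, c = 5, so that
# z(x, y) = sin x sin y + x² + y + 5 by direct integration of (1).
l = lambda x, y: math.cos(x) * math.cos(y)
z = lambda x, y: math.sin(x) * math.sin(y) + x**2 + y + 5.0

def mixed_partial(f, x, y, h=1e-4):
    # central cross-difference approximation of ∂²f/∂x∂y
    return (f(x+h, y+h) - f(x+h, y-h) - f(x-h, y+h) + f(x-h, y-h)) / (4*h*h)

x, y = 0.3, 0.7
print(abs(mixed_partial(z, x, y) - l(x, y)) < 1e-6)  # True: ∂²z/∂x∂y = l
```
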

Moreover, it is easy to see that a function z : P → R is absolutely continuous and satisfies the conditions

z(0, y) = 0 for y ∈ [0, 1],  z(x, 0) = 0 for x ∈ [0, 1],

if and only if there exists a function l ∈ L¹(P, R) such that

z(x, y) = ∫_0^x ∫_0^y l(s, t) ds dt   (2)

for (x, y) ∈ P.

By AC²(P, R) we denote the set of all functions z : P → R having the integral representation (1) with functions l ∈ L²(P, R), l_1, l_2 ∈ L²([0, 1], R).

By AC²_0(P, R) we denote the set of all functions z : P → R having the integral representation (2) with a function l ∈ L²(P, R).

By AC²(P, R^n) (AC²_0(P, R^n)) we denote the set of all vector-valued functions z = (z_1, z_2, . . . , z_n) : P → R^n such that each coordinate function z_i : P → R belongs to AC²(P, R) (AC²_0(P, R)).

It is easy to see that AC²_0(P, R^n) with the scalar product

⟨z, w⟩_{AC²_0(P,R^n)} = ∫_P ⟨l(s, t), k(s, t)⟩_{R^n} ds dt,

where

z(x, y) = ∫_0^x ∫_0^y l(s, t) ds dt,  w(x, y) = ∫_0^x ∫_0^y k(s, t) ds dt,

and AC²(P, R^n) with the scalar product

⟨z, w⟩_{AC²(P,R^n)} = ∫_P ⟨l(s, t), k(s, t)⟩_{R^n} ds dt + ∫_0^1 ⟨l_1(s), k_1(s)⟩_{R^n} ds + ∫_0^1 ⟨l_2(t), k_2(t)⟩_{R^n} dt + ⟨c, d⟩_{R^n},

where

z(x, y) = ∫_0^x ∫_0^y l(s, t) ds dt + ∫_0^x l_1(s) ds + ∫_0^y l_2(t) dt + c,

w(x, y) = ∫_0^x ∫_0^y k(s, t) ds dt + ∫_0^x k_1(s) ds + ∫_0^y k_2(t) dt + d,

are Hilbert spaces. In much the same way as in (Idczak, 1996) it can be proved that if z_n ⇀ z_0 weakly in AC²_0(P, R^n), then z_n ⇒ z_0 uniformly on P.

In the sequel, we denote by AC²([0, 1], R^n) the standard space of all absolutely continuous functions of one variable ϕ : [0, 1] → R^n such that ϕ̇ ∈ L²([0, 1], R^n). The scalar product in AC²([0, 1], R^n) is given by

⟨ϕ, ψ⟩_{AC²([0,1],R^n)} = ∫_0^1 ⟨l(s), k(s)⟩_{R^n} ds + ⟨c, d⟩_{R^n},

where

ϕ(x) = ∫_0^x l(s) ds + c,  ψ(x) = ∫_0^x k(s) ds + d.

3. Linear System

3.1. Problem Formulation and Basic Assumptions

Consider the family of optimal control problems

∂²v/∂x∂y(x, y) + A_k1 ∂v/∂x(x, y) + A_k2 ∂v/∂y(x, y) + A_k v(x, y) = B_k u(x, y), (x, y) ∈ P a.e.,

v(x, 0) = ϕ_k(x), v(0, y) = ψ_k(y) for all x, y ∈ [0, 1],

J_k(v, u) = ∫_P G_k(x, y, v(x, y), u(x, y)) dx dy,   (L_k)

where A_k1, A_k2, A_k ∈ R^{n×n}, B_k ∈ R^{n×m}, ϕ_k, ψ_k ∈ AC²([0, 1], R^n) and ϕ_k(0) = ψ_k(0) = c_k for k = 0, 1, 2, . . . . Problem (L_k) is considered in the space AC²(P, R^n) of solutions v and in the set U_M = {u ∈ L²(P, R^m) : u(x, y) ∈ M a.e.} of controls u. The set M is a convex and compact subset of R^m.

It is easy to see that, using the substitution z(x, y) = v(x, y) − ϕ_k(x) − ψ_k(y) + c_k, we get the equivalent problem

∂²z/∂x∂y(x, y) + A_k1 ∂z/∂x(x, y) + A_k2 ∂z/∂y(x, y) + A_k z(x, y) = B_k u(x, y) + b_k(x, y),

z(x, 0) = z(0, y) = 0 for all (x, y) ∈ P,

J_k(z, u) = ∫_P F_k(x, y, z(x, y), u(x, y)) dx dy,   (LH_k)

where

b_k(x, y) = −A_k1 (d/dx)ϕ_k(x) − A_k2 (d/dy)ψ_k(y) − A_k ϕ_k(x) − A_k ψ_k(y) + A_k c_k,

F_k(x, y, z, u) = G_k(x, y, z + ϕ_k(x) + ψ_k(y) − c_k, u).

Problem (LH_k) will be considered in the space AC²_0(P, R^n) of solutions z and in the set U_M of controls u.

For simplicity, we prove our results for Problem (LH_k). However, Problems (LH_k) and (L_k) are equivalent, and therefore all results which are to be proved can be used for (L_k).

For the system

∂²z/∂x∂y(x, y) + A_k1 ∂z/∂x(x, y) + A_k2 ∂z/∂y(x, y) + A_k z(x, y) = B_k u(x, y) + b_k(x, y), (x, y) ∈ P a.e.,

z(x, 0) = z(0, y) = 0 for all x, y ∈ [0, 1],   (3)

the following theorem holds (Bergmann et al., 1989):

Theorem 1. For any u ∈ L²(P, R^m) the system (3) possesses a unique solution z_k ∈ AC²_0(P, R^n) given by the formula

z_k(x, y) = ∫_0^x ∫_0^y R_k(s, t, x, y) (B_k u(s, t) + b_k(s, t)) ds dt,   (4)

where the function R_k : P × P → R^{n×n} (called the Riemann function) has the form

R_k(s, t, x, y) = Σ_{i=0}^∞ Σ_{j=0}^∞ ((s − x)^i / i!) ((t − y)^j / j!) T_k^{i,j},

and the sequence T_k^{i,j} is defined by the recurrence formulae

T_k^{i,j} = T_k^{i,j−1} A_k1 + T_k^{i−1,j} A_k2 − T_k^{i−1,j−1} A_k,

T_k^{0,0} = I,  T_k^{i,j} = 0 for i = −1 or j = −1,   (5)

for k = 0, 1, 2, . . . .
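A sketch of how the recurrence (5) can be used in practice (scalar case n = 1; all names are ours). For A_k2 = A_k = 0 the recurrence yields T_k^{0,j} = A_k1^j with all other entries vanishing, so the series collapses to e^{A_k1 (t − y)}, which gives a convenient check of a truncated implementation:

```python
import math

def riemann_truncated(s, t, x, y, A1, A2, A, N=30):
    # Truncated double series for R_k(s, t, x, y) in the scalar case (n = 1),
    # with T^{i,j} built from the recurrence of Theorem 1.
    T = [[0.0] * (N + 1) for _ in range(N + 1)]
    T[0][0] = 1.0
    for i in range(N + 1):
        for j in range(N + 1):
            if i == j == 0:
                continue
            Tij_1 = T[i][j - 1] if j > 0 else 0.0          # T^{i, j-1}
            Ti_1j = T[i - 1][j] if i > 0 else 0.0          # T^{i-1, j}
            Ti_1j_1 = T[i - 1][j - 1] if i > 0 and j > 0 else 0.0
            T[i][j] = Tij_1 * A1 + Ti_1j * A2 - Ti_1j_1 * A
    return sum((s - x) ** i / math.factorial(i) *
               (t - y) ** j / math.factorial(j) * T[i][j]
               for i in range(N + 1) for j in range(N + 1))

# Check against the closed form in the special case A2 = A = 0.
s, t, x, y, a = 0.2, 0.9, 0.7, 0.1, 1.5
print(abs(riemann_truncated(s, t, x, y, a, 0.0, 0.0)
          - math.exp(a * (t - y))) < 1e-12)  # True
```
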

We shall make the following assumptions:

(L0) b_k → b_0 in L²(P, R^n),

(L1) the function P ∋ (x, y) ↦ F_k(x, y, z, u) ∈ R is measurable for (z, u) ∈ R^n × R^m and k = 0, 1, 2, . . . ,

(L2) the function R^n × R^m ∋ (z, u) ↦ F_k(x, y, z, u) ∈ R is continuous for (x, y) ∈ P a.e., k = 0, 1, 2, . . . ,

(L3) the function R^m ∋ u ↦ F_k(x, y, z, u) ∈ R is convex for (x, y) ∈ P a.e., z ∈ R^n and k = 0, 1, 2, . . . ,

(L4) for any bounded set B ⊂ R^n, there exists a function γ_B ∈ L¹(P, R_+) such that

|F_k(x, y, z, u)| ≤ γ_B(x, y) + |u|

for (x, y) ∈ P a.e., z ∈ B, u ∈ R^m and k = 0, 1, 2, . . . ,

(L5) the sequences of matrices (A_k)_{k∈N}, (A_k1)_{k∈N}, (A_k2)_{k∈N} tend to matrices A_0, A_01, A_02, respectively, in the norm of R^{n×n}, and (B_k)_{k∈N} tends to B_0 in the norm of R^{n×m} (by the norm of a matrix A we mean the value ‖A‖ = (Σ_{i,j} a_{i,j}²)^{1/2}).

For k = 0, 1, 2, . . . , set

Z_k = {z_k ∈ AC²_0(P, R^n) : there exists u ∈ U_M such that z_k is the solution of (3) corresponding to u}

and

m_k = inf J_k(z, u)

with respect to (z, u) such that z is the solution of (3) corresponding to u ∈ U_M.

In a standard way one can prove the following result:

Theorem 2. Assume that (L1)–(L4) hold. Then, for any k = 0, 1, 2, . . . , there exists an optimal solution of Problem (LH_k), i.e. for any k = 0, 1, 2, . . . there exist a control u_k ∈ U_M and a trajectory z_k ∈ Z_k corresponding to u_k such that

J_k(z_k, u_k) = m_k.

Write

A_k = {(z_k, u_k) ∈ AC²_0(P, R^n) × U_M : z_k satisfies (3) with u_k and J_k(z_k, u_k) = m_k}.   (6)

This set will be referred to as the set of optimal solutions, or the optimal set.

Lemma 1. Let

c = sup_{k∈{0,1,...}} max{‖A_k‖, ‖A_k1‖, ‖A_k2‖, ‖B_k‖, 1}.

Then

1. for any k = 0, 1, 2, . . . ,

‖R_k(x, y, s, t)‖ ≤ e^{3c}

for (x, y, s, t) ∈ P × P,

2. if ε > 0 and K are such that, for any k > K,

‖A_k − A_0‖ ≤ ε/3,  ‖A_k1 − A_01‖ ≤ ε/3,  ‖A_k2 − A_02‖ ≤ ε/3,

then, for k > K, we have

‖T_k^{i,j} − T_0^{i,j}‖ ≤ ε 3^{i+j+1} c^{i+j} for i, j ∈ {0, 1, 2, . . . }

and, consequently,

‖R_k(x, y, s, t) − R_0(x, y, s, t)‖ ≤ 3ε e^{3c}

for any (x, y, s, t) ∈ P × P (i.e. R_k → R_0 uniformly on P × P as k → ∞).

The proof of this lemma (using an induction argument) has only a technical character and is very arduous.

Let us recall that the weak upper limit of a sequence of sets V_k ⊂ X (X is a Banach space) is defined as the set of all cluster points (with respect to the weak topology) of sequences (v_k), where v_k ∈ V_k for k = 1, 2, 3, . . . . We denote this set by wLimsup V_k (Aubin and Frankowska, 1990).

Theorem 3. If

1. Problems (LH_k) satisfy the conditions (L0)–(L5),

2. the sequence of cost functionals J_k(z, u) tends to J_0(z, u) uniformly on B̃ × U_M for any bounded set B̃ ⊂ AC²_0(P, R^n),

then

(a) there exists a ball B̃(0, ρ) ⊂ AC²_0(P, R^n) such that Z_k ⊂ B̃(0, ρ) for k = 0, 1, 2, . . . , i.e. there exists ρ > 0 such that, for any z_k ∈ Z_k, we have ‖z_k‖_{AC²_0(P,R^n)} ≤ ρ,

(b) the sequence of optimal values m_k tends to the optimal value m_0,

(c) the weak upper limit of the optimal sets A_k ⊂ AC²_0(P, R^n) × L²(P, R^m) is a non-empty set, and wLimsup A_k ⊂ A_0.

If the set A_k is a singleton, i.e. A_k = {(z_k, u_k)} for k = 0, 1, 2, . . . , then z_k tends to z_0 weakly in AC²_0(P, R^n) and u_k tends to u_0 weakly in L²(P, R^m).

Proof. (a) Let z_k ∈ AC²_0(P, R^n) be the solution of (3) corresponding to u_k. From (4) we have

∂²z_k/∂x∂y(x, y) = B_k u_k(x, y) + b_k(x, y)
+ ∫_0^x (∂/∂x) R_k(s, y, x, y) (B_k u_k(s, y) + b_k(s, y)) ds
+ ∫_0^y (∂/∂y) R_k(x, t, x, y) (B_k u_k(x, t) + b_k(x, t)) dt
+ ∫_0^x ∫_0^y (∂²/∂x∂y) R_k(s, t, x, y) (B_k u_k(s, t) + b_k(s, t)) ds dt

for (x, y) ∈ P.

Since u_k(x, y) ∈ M, which is bounded, B_k → B_0 and the R_k are analytic, from (L0) and Lemma 1 we get that there exists a constant ρ > 0 such that

‖z_k‖_{AC²_0(P,R^n)} = (∫_P |∂²z_k/∂x∂y(x, y)|² dx dy)^{1/2} ≤ ρ

for k = 0, 1, 2, . . . .

(b) Let (z_k, u_k) ∈ A_k for k = 1, 2, 3, . . . . Then we have m_0 ≤ J_0(z̃_k^0, u_k) for k = 1, 2, 3, . . . , where z̃_k^0 is the trajectory of (3) with k = 0, corresponding to u_k, for k = 1, 2, . . . . Let ε > 0. By (L0) and (L5), there exists a K such that, for any k > K,

‖A_k − A_0‖ ≤ ε/3,  ‖A_k1 − A_01‖ ≤ ε/3,  ‖A_k2 − A_02‖ ≤ ε/3,

‖B_k − B_0‖ < ε,  ∫_P |b_k(x, y) − b_0(x, y)| dx dy < ε/2.   (7)

By direct calculations, from Lemma 1 and (7) we obtain, for k > K and (x, y) ∈ P,

|z_k(x, y) − z̃_k^0(x, y)|
≤ ∫_0^x ∫_0^y ‖R_k(s, t, x, y) − R_0(s, t, x, y)‖ |B_k u_k(s, t) + b_k(s, t)| ds dt
+ ∫_0^x ∫_0^y ‖R_0(s, t, x, y)‖ ‖B_k − B_0‖ |u_k(s, t)| ds dt
+ ∫_0^x ∫_0^y ‖R_0(s, t, x, y)‖ |b_k(s, t) − b_0(s, t)| ds dt ≤ ε c̃,

where c̃ > 0. This means that sup_{(x,y)∈P} |z_k(x, y) − z̃_k^0(x, y)| is arbitrarily small (for a sufficiently large k).

From the Scorza-Dragoni theorem (Ekeland and Temam, 1976) applied to the function F_0|_{P×B×M} (where B is the ball with radius ρ described in (a)) we have that for any η > 0 there exists a compact set P_η ⊂ P such that µ(P \ P_η) ≤ η and F_0|_{P_η×B×M} is uniformly continuous. Thus for a sufficiently large k we have

|J_0(z̃_k^0, u_k) − J_0(z_k, u_k)|
≤ ∫_{P_η} |F_0(x, y, z̃_k^0(x, y), u_k(x, y)) − F_0(x, y, z_k(x, y), u_k(x, y))| dx dy
+ ∫_{P\P_η} |F_0(x, y, z̃_k^0(x, y), u_k(x, y)) − F_0(x, y, z_k(x, y), u_k(x, y))| dx dy
≤ µ(P_η) η + η ĉ = ε̄,

where ĉ > 0 and ε̄ is arbitrarily small.

Thus, for any ε̄ > 0,

m_0 ≤ J_0(z̃_k^0, u_k) ≤ J_0(z_k, u_k) + ε̄   (8)

for a sufficiently large k.

From Assumption 2 and (a) we have

|J_k(z_k, u_k) − J_0(z_k, u_k)| < ε̄   (9)

for a sufficiently large k. Consequently, by (8) and (9),

m_0 ≤ J_k(z_k, u_k) + 2ε̄ = m_k + 2ε̄

for a sufficiently large k. Similarly, we can prove that, for sufficiently large k, m_k ≤ m_0 + 2ε̄. We have thus proved that m_k → m_0 as k → ∞.

(c) Let (z_k, u_k) be an optimal process for (LH_k), i.e. (z_k, u_k) ∈ A_k for k = 0, 1, 2, . . . . Since u_k(x, y) ∈ M and M is compact, (u_k)_{k∈N} is bounded. Since L²(P, R^m) is reflexive, the sequence (u_k)_{k∈N} is compact in the weak topology of the space L²(P, R^m). Without loss of generality, we can assume that u_k ⇀ ũ_0 ∈ U_M in the weak topology. From the formula (4) for the solution we have that, for any (x, y) ∈ P,

z_k(x, y) = ∫_0^x ∫_0^y R_k(s, t, x, y) (B_k u_k(s, t) + b_k(s, t)) ds dt
= ∫_0^x ∫_0^y (R_k(s, t, x, y) − R_0(s, t, x, y)) (B_k u_k(s, t) + b_k(s, t)) ds dt
+ ∫_0^x ∫_0^y R_0(s, t, x, y) ((B_k − B_0) u_k(s, t) + b_k(s, t) − b_0(s, t)) ds dt
+ ∫_0^x ∫_0^y R_0(s, t, x, y) (B_0 u_k(s, t) + b_0(s, t)) ds dt.

By virtue of Lemma 1 we have R_k ⇒ R_0. By (L0), (L5) and the boundedness of M, the first and the second integrals tend to zero. By the weak convergence of u_k, the last integral tends to

∫_0^x ∫_0^y R_0(s, t, x, y) (B_0 ũ_0(s, t) + b_0(s, t)) ds dt.

In this way we have proved that z_k tends pointwise to some z̃_0 ∈ AC²_0(P, R^n) which is the solution to (LH_k) with k = 0, corresponding to ũ_0. Further, we prove that (z̃_0, ũ_0) is an optimal process for (LH_k) with k = 0. Suppose that it is not true. Let (z_0, u_0) be an optimal process for (LH_k) with k = 0. Let

J_0(z̃_0, ũ_0) − J_0(z_0, u_0) = α > 0.   (10)

Then we have

m_k − m_0 = J_k(z_k, u_k) − J_0(z_0, u_0)
= (J_k(z_k, u_k) − J_0(z_k, u_k)) + (J_0(z_k, u_k) − J_0(z̃_0, ũ_0)) + α.

By (b), m_k → m_0. From Assumption 2 and (a) we get that the first component tends to zero as k → ∞. Moreover, lim_{k→∞} J_0(z_k, u_k) ≥ J_0(z̃_0, ũ_0) by (L2)–(L4). In this way we have a contradiction with (10), and the proof is complete.

3.2. Main Results for a Linear System

Based on Theorem 3, we obtain the following sufficient conditions for the stability of a two-dimensional optimal control system:

Corollary 1. Suppose that, for any k = 0, 1, . . . , Problem (LH_k) satisfies Assumptions (L0)–(L5) and, for any bounded set B ⊂ R^n, there exists a sequence of functions γ_B^k ∈ L¹(P, R_+) such that

|F_k(x, y, z, u) − F_0(x, y, z, u)| ≤ γ_B^k(x, y)

for (x, y) ∈ P a.e., (z, u) ∈ B × M and k = 0, 1, 2, . . . . Moreover, we assume that γ_B^k → 0 in L¹(P, R_+). Then

(a) the sequence of optimal values m_k tends to the optimal value m_0 as k → ∞,

(b) wLimsup A_k ⊂ A_0 and wLimsup A_k ≠ ∅.

Proof. Let B̃ be any bounded set in the space AC²_0(P, R^n). It is easy to see that if z ∈ B̃, then z(x, y) ∈ B ⊂ R^n for any (x, y) ∈ P, where B is bounded. By assumption, we have that, for any (z, u) ∈ B̃ × U_M,

|J_k(z, u) − J_0(z, u)| ≤ ∫_P |F_k(x, y, z(x, y), u(x, y)) − F_0(x, y, z(x, y), u(x, y))| dx dy ≤ ∫_P γ_B^k(x, y) dx dy.

Since γ_B^k → 0 in L¹(P, R_+), the sequence of cost functionals J_k converges uniformly to J_0 on B̃ × U_M for any bounded set B̃ ⊂ AC²_0(P, R^n). In this way, the assumptions of Theorem 3 are fulfilled and, by this theorem, (a) and (b) are true.

Corollary 2. If

1. we have

F_k(x, y, z, u) = G_1(x, y, ω_k(x, y), z) + ⟨G_2(x, y, ω_k(x, y), z), u⟩,

where ω_k(·) ∈ L^p(P, R^s), p ≥ 1, and the functions

P × R^s × R^n ∋ (x, y, ω, z) ↦ G_1(x, y, ω, z) ∈ R,
P × R^s × R^n ∋ (x, y, ω, z) ↦ G_2(x, y, ω, z) ∈ R^m

are measurable with respect to (x, y) and continuous with respect to (ω, z),

2. ω_k → ω_0 in the norm topology of L^p(P, R^s) as k → ∞,

3. for any bounded set B ⊂ R^n, there exists C > 0 such that

|G_i(x, y, ω, z)| ≤ C (1 + |ω|^p)

for (x, y) ∈ P a.e., ω ∈ R^s, z ∈ B,

4. Problems (LH_k) satisfy Assumptions (L0)–(L5),

then the conditions (a) and (b) of Corollary 1 hold.

Proof. We shall prove that the sequence of cost functionals J_k(z, u) tends to J_0(z, u) uniformly on any set B̃ × U_M, where B̃ ⊂ AC²_0(P, R^n) is bounded.

Suppose that this is not true. Then there exist a bounded set B̃ ⊂ AC²_0(P, R^n), an ε > 0 and a sequence (z_ki, u_ki)_{i∈N} such that z_ki ∈ B̃, u_ki ∈ U_M and

|J_ki(z_ki, u_ki) − J_0(z_ki, u_ki)| ≥ ε   (11)

for i = 1, 2, 3, . . . .

By the reflexivity of AC²_0(P, R^n), we may assume (extracting, if necessary, a subsequence) that z_ki tends to some z_0 uniformly on P as i → ∞.

We have

|J_ki(z_ki, u_ki) − J_0(z_ki, u_ki)|
≤ ∫_P |G_1(x, y, ω_ki(x, y), z_ki(x, y)) − G_1(x, y, ω_0(x, y), z_0(x, y))| dx dy
+ ∫_P |G_1(x, y, ω_0(x, y), z_0(x, y)) − G_1(x, y, ω_0(x, y), z_ki(x, y))| dx dy
+ ∫_P |G_2(x, y, ω_ki(x, y), z_ki(x, y)) − G_2(x, y, ω_0(x, y), z_0(x, y))| |u_ki(x, y)| dx dy
+ ∫_P |G_2(x, y, ω_0(x, y), z_0(x, y)) − G_2(x, y, ω_0(x, y), z_ki(x, y))| |u_ki(x, y)| dx dy.

Since u_ki ∈ U_M, it is commonly bounded. By Krasnoselskii's theorem on the continuity of the Nemytskii operator, the right-hand side of the above inequality tends to zero as i → ∞. This contradicts (11), and we have thus proved that the sequence of cost functionals J_k(z, u) tends to J_0(z, u) uniformly on any set B̃ × U_M, where B̃ ⊂ AC²_0(P, R^n) is bounded. Applying Theorem 3, we complete the proof.

Next, let us consider a mixed case, i.e. when the cost functional is of the form

J_k(z, u) = ∫_P [⟨G_1(x, y, z(x, y)), ω_k(x, y)⟩ + ⟨G_2(x, y, v_k(x, y), z(x, y)), u(x, y)⟩] dx dy,   (12)

where G_1 : P × R^n → R^s, G_2 : P × R^r × R^n → R^m, k = 0, 1, 2, . . . .

Corollary 3. If

1. the cost functional is of the form (12),

2. the function G_1 is measurable with respect to (x, y) and continuous with respect to z (and analogously G_2),

3. for any bounded set B ⊂ R^n, there exist α(·) ∈ L^p(P, R_+) and C > 0 such that

|G_1(x, y, z)| ≤ α(x, y),  |G_2(x, y, v, z)| ≤ C (1 + |v|^p)

for (x, y) ∈ P a.e., z ∈ B and v ∈ R^r,

4. ω_k tends to ω_0 in the weak topology of L^q(P, R^s) when q ∈ [1, ∞), or weakly-∗ when q = ∞, and v_k tends to v_0 in the norm topology of L^p(P, R^s),

5. the optimal control problems (LH_k) satisfy Assumptions (L0)–(L5),

then the optimal values and the sets of optimal processes satisfy the conditions (a) and (b) of Corollary 1.

Remark 1. The obtained results remain true for a family of problems (L_k), provided that the following additional assumption concerning the functions ϕ_k, ψ_k is satisfied: ϕ_k → ϕ_0, ψ_k → ψ_0 in AC²([0, 1], R^n).

Example 2. Consider a two-dimensional continuous optimal control system with variable parameters

z_xy(x, y) + A_k1 z_x(x, y) + (1 + A_k2) z_y(x, y) = (1 + B_k) u(x, y),

z(x, 0) = ϕ_k(x), z(0, y) = ψ_k(y), u(x, y) ∈ [0, 1],   (13)

J_k(z, u) = ∫_P [(x − 2) z + ω_k1(x, y) φ_1(x, y, z(x, y)) + ω_k2(x, y) φ_2(x, y, u(x, y)) + (1/4)(1 − x) u(x, y) + 4xy] dx dy → min,

where ϕ_k, ψ_k ∈ AC²([0, 1], R), φ_1 is continuous, φ_2 is continuous and convex with respect to u ∈ [0, 1], u(·) ∈ L²(P, [0, 1]), z ∈ AC²(P, R), ω_k1(·), ω_k2(·) ∈ L¹(P, [−1, 1]). By Theorem 2, the problem (13), (14) possesses at least one optimal solution but, in general, it is not easy to find an optimal process for this system.

Suppose that A_k1, A_k2, B_k → 0 in R, ϕ_k, ψ_k → 0 in AC²([0, 1], R), and ω_k1, ω_k2 → 0 in L¹(P, R) as k → ∞.

In the limit case, we obtain the problem

z_xy(x, y) + z_y(x, y) = u(x, y),  z(x, 0) = z(0, y) = 0,   (14)

J_0(z, u) = ∫_P [(x − 2) z(x, y) + (1/4)(1 − x) u(x, y) + 4xy] dx dy.   (15)

By Theorem 2, the above problem possesses an optimal process and, applying the extremum principle, we are able to find an optimal solution (z_0, u_0) and an optimal value m_0 effectively.

In fact, the Lagrange function for the system (14), (15) is of the form

L(z, u) = ∫_P [(x − 2) z(x, y) + (1/4)(1 − x) u(x, y) + 4xy + v(x, y) (z_xy(x, y) + z_y(x, y) − u(x, y))] dx dy,   (16)

where v ∈ L²(P, R).

The extremum principle implies that

L_z(z_0, u_0) h = 0 for any h ∈ AC²_0(P, R),

and

L(z_0, u_0) ≤ L(z_0, u)   (17)

for any admissible control u.

Taking account of (16) and integrating by parts, we get

L_z(z_0, u_0) h = ∫_P [(x − 2) h(x, y) + v(x, y) (h_xy(x, y) + h_y(x, y))] dx dy
= ∫_P [∫_x^1 ∫_y^1 (s − 2) ds dt + v(x, y) + ∫_x^1 v(s, y) ds] h_xy(x, y) dx dy = 0

for any h ∈ AC²_0(P, R). Thus

v(x, y) + ∫_x^1 v(s, y) ds + (1 − y)(−3/2 + 2x − x²/2) = 0.

The above equation is of the Volterra type, and therefore there exists a unique solution v* of this equation. By direct calculation, it is easy to check that v*(x, y) = (1 − x)(1 − y).
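This is immediate to verify (a numerical sketch; the name `residual` is ours): substituting v*(x, y) = (1 − x)(1 − y) and the elementary integral ∫_x^1 (1 − s)(1 − y) ds = (1 − y)(1 − x)²/2 makes the left-hand side vanish identically:

```python
def residual(x: float, y: float) -> float:
    # Left-hand side of the Volterra equation at v* = (1 − x)(1 − y).
    v = (1 - x) * (1 - y)
    integral = (1 - y) * (1 - x) ** 2 / 2   # ∫_x^1 v*(s, y) ds
    return v + integral + (1 - y) * (-1.5 + 2 * x - x**2 / 2)

# The residual vanishes (up to rounding) everywhere on P.
print(max(abs(residual(i / 10, j / 10))
          for i in range(11) for j in range(11)) < 1e-12)  # True
```
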

The minimum condition (17) takes the form

∫_P (1 − x)(y − 3/4) u_0(x, y) dx dy ≤ ∫_P (1 − x)(y − 3/4) u(x, y) dx dy

for all u(x, y) ∈ [0, 1]. This implies

u_0(x, y) = 1 for x ∈ [0, 1], y ∈ [0, 3/4),
u_0(x, y) = 0 for x ∈ [0, 1], y ∈ (3/4, 1],   (18)

and that, by (14) and (15),

z_0(x, y) = (1 − e^{−x}) y for x ∈ [0, 1], y ∈ [0, 3/4),
z_0(x, y) = (1 − e^{−x}) (3/4) for x ∈ [0, 1], y ∈ (3/4, 1],   (19)

and

m_0 = J_0(z_0, u_0) = 55/64.
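The value m_0 = 55/64 can be confirmed numerically from the explicit formulas (18) and (19) (a sketch; the grid size is an arbitrary choice, taken so that y = 3/4 falls on a cell boundary):

```python
import math

def u0(x, y):  # optimal control (18)
    return 1.0 if y < 0.75 else 0.0

def z0(x, y):  # optimal trajectory (19)
    return (1 - math.exp(-x)) * min(y, 0.75)

def cost(x, y):  # integrand of the limit cost functional
    return (x - 2) * z0(x, y) + 0.25 * (1 - x) * u0(x, y) + 4 * x * y

# Midpoint rule on a 600 × 600 grid over P = [0, 1] × [0, 1].
N = 600
h = 1.0 / N
I = sum(cost((i + 0.5) * h, (j + 0.5) * h)
        for i in range(N) for j in range(N)) * h * h
print(abs(I - 55 / 64) < 1e-4)  # True: I ≈ 0.859375 = 55/64
```
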

Applying Theorem 2 to our example, we see that, for any k = 1, 2, 3, . . . , there exists at least one optimal process (z_k, u_k) for the system (13), (14), and the sequence (u_k)_{k∈N} tends to u_0 weakly in L², (z_k)_{k∈N} tends to z_0 weakly in AC²_0(P, R), where u_0 and z_0 are defined by (18) and (19), respectively. Moreover, the sequence (m_k) of the optimal values for the systems (13), (14) tends to m_0 = 55/64, and the sequence (z_k)_{k∈N} of the optimal trajectories tends to z_0 uniformly on P.

In this way, we deduce that, although in general it is difficult to find an optimal solution for (13), (14), the process (z_0, u_0) given by (18), (19) and the optimal value m_0 = 55/64 are a good approximation of (z_k, u_k) and m_k for a sufficiently large k.

4. Nonlinear System

4.1. Preliminaries

In this part we give the definition of the Hausdorff metric and prove a generalization of the Gronwall lemma to the case of functions of two variables.

Let (X, ρ) be a metric space. We define the distance from a point x_0 ∈ X to a bounded set A ⊂ X as

dist(x_0, A) = inf{ρ(x_0, a); a ∈ A}.

The Hausdorff distance ρ_H(A, B) between bounded sets A, B ⊂ X is defined as

ρ_H(A, B) = inf{ε > 0; A ⊂ N(B, ε), B ⊂ N(A, ε)},

where, for a given ε > 0 and a bounded set C ⊂ X, N(C, ε) = {x ∈ X; dist(x, C) ≤ ε}.

The function ρ_H restricted to the set Z × Z, where Z is the family of all closed bounded subsets of X, is a metric on Z (Kisielewicz, 1991). It is called the Hausdorff metric.
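For intuition, the definition can be evaluated directly for finite subsets of R with ρ(a, b) = |a − b| (an illustration of ours, not from the paper):

```python
def hausdorff(A, B):
    # Hausdorff distance between finite subsets of R: the smallest ε such
    # that each set lies in the ε-neighbourhood of the other.
    d = lambda x, S: min(abs(x - s) for s in S)  # dist(x, S)
    return max(max(d(a, B) for a in A), max(d(b, A) for b in B))

A = {0.0, 1.0}
B = {0.0, 0.5, 2.0}
print(hausdorff(A, B))  # 1.0: the point 2.0 ∈ B is at distance 1.0 from A
```
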

Lemma 2. If k, c > 0, υ : P → R is continuous and

0 ≤ υ(x, y) ≤ ∫_0^x ∫_0^y k υ(s, t) ds dt + c, (x, y) ∈ P,

then

υ(x, y) ≤ e^{kxy} c, (x, y) ∈ P.
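As a numerical illustration of the lemma (not part of the proof): the function υ(x, y) = c Σ_{n≥0} (kxy)^n/(n!)² satisfies the integral relation with equality (term-by-term integration gives ∫_0^x ∫_0^y k υ = υ − c), so it is an extremal case; the bound e^{kxy} c then holds since 1/(n!)² ≤ 1/n!:

```python
import math

k, c = 2.0, 1.5

def upsilon(x, y, terms=40):
    # υ(x, y) = c Σ (k x y)^n / (n!)² — equality case of the inequality
    return c * sum((k * x * y) ** n / math.factorial(n) ** 2
                   for n in range(terms))

def rhs(x, y, terms=40):
    # ∫_0^x ∫_0^y k υ(s, t) ds dt + c, integrated term by term
    return c * sum((k * x * y) ** (n + 1) / math.factorial(n + 1) ** 2
                   for n in range(terms)) + c

pts = [(i / 4, j / 4) for i in range(5) for j in range(5)]
ok_equality = all(abs(upsilon(x, y) - rhs(x, y)) < 1e-9 for x, y in pts)
ok_bound = all(upsilon(x, y) <= math.exp(k * x * y) * c + 1e-12 for x, y in pts)
print(ok_equality, ok_bound)  # True True
```
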

Proof. Write

u(x, y) = ∫_0^x ∫_0^y k υ(s, t) ds dt + c, (x, y) ∈ P,

w(x, y) = e^{−kxy} u(x, y), (x, y) ∈ P.

From the continuity of υ it follows that u possesses the partial derivatives ∂u(x, y)/∂x and ∂u(x, y)/∂y everywhere on P. Moreover,

∂u/∂x(x, y) = ∫_0^y k υ(x, t) dt ≤ ∫_0^y k (∫_0^x ∫_0^t k υ(s, τ) ds dτ + c) dt
= k² ∫_0^x ∫_0^y (y − τ) υ(s, τ) ds dτ + kcy

for (x, y) ∈ P. Consequently,

∂w/∂x(x, y) = −ky e^{−kxy} u(x, y) + e^{−kxy} ∂u/∂x(x, y)
≤ −ky e^{−kxy} u(x, y) + e^{−kxy} (k² y ∫_0^x ∫_0^y υ(s, τ) ds dτ − k² ∫_0^x ∫_0^y τ υ(s, τ) ds dτ + kcy)
= −ky e^{−kxy} u(x, y) + ky e^{−kxy} u(x, y) − k² e^{−kxy} ∫_0^x ∫_0^y τ υ(s, τ) ds dτ ≤ 0

for (x, y) ∈ P and, analogously,

∂w/∂y(x, y) ≤ 0, (x, y) ∈ P.

Consequently, w(x, y) ≤ w(0, 0) = c for (x, y) ∈ P. This implies

υ(x, y) ≤ u(x, y) = e^{kxy} w(x, y) ≤ e^{kxy} c

for (x, y) ∈ P.

4.2. Main Result

In this part we consider the family of homogeneous problems

∂²z/∂x∂y = f_k(x, y, z, ∂z/∂x, ∂z/∂y, u), (x, y) ∈ P a.e.,

z(x, 0) = 0, z(0, y) = 0, x, y ∈ [0, 1],

J_k(z, u) = ∫_0^1 ∫_0^1 F_k(x, y, z, u) dx dy → min,

u ∈ U_k = {u ∈ L²(P, R^m); u(x, y) ∈ M_k, (x, y) ∈ P a.e.},   (NH_k)

k = 0, 1, . . . . Problem (NH_0) will be referred to as the 'limit problem'. In the sequel, we shall assume that the functions

f_k : P × R^n × R^n × R^n × M → R^n,  F_k : P × R^n × M → R,

where M = ∪_{k=0}^∞ M_k, and the sets M_k ⊂ R^m, k = 0, 1, . . . , satisfy the following conditions:

(N1) the sets M_k are compact and M_k → M_0 as k → ∞ in R^m with respect to the Hausdorff metric;

(N2) the functions f_k are measurable in (x, y) ∈ P a.e., continuous in u ∈ M, and there exists a constant L > 0 such that

|f_k(x, y, z, z_x, z_y, u) − f_k(x, y, w, w_x, w_y, u)| ≤ L (|z − w| + |z_x − w_x| + |z_y − w_y|)

for (x, y) ∈ P a.e., z, z_x, z_y, w, w_x, w_y ∈ R^n, u ∈ M, k = 0, 1, . . . ;

(N3) there exist constants a, b > 0 such that

|f_k(x, y, z, z_x, z_y, u)| ≤ a |z| + b

for (x, y) ∈ P a.e., z, z_x, z_y ∈ R^n, u ∈ M, k = 0, 1, . . . ;

(N4) for any bounded set A ⊂ R^n × R^n × R^n, there exists a sequence (ϕ_A^k)_{k∈N} ⊂ L²(P, R_+) such that ϕ_A^k → 0 in L²(P, R_+) as k → ∞ and

|f_k(x, y, z, z_x, z_y, u) − f_0(x, y, z, z_x, z_y, u)| ≤ ϕ_A^k(x, y)

for (x, y) ∈ P a.e., k = 1, 2, . . . ;

(N5) the function f_0 is of the type

f_0(x, y, z, z_x, z_y, u) = α_0(x, y, z, z_x, z_y) + β_0(x, y, z, z_x, z_y) u,

where β_0 is measurable in (x, y) ∈ P, continuous in (z, z_x, z_y) ∈ R^n × R^n × R^n and, for any bounded set A ⊂ R^n × R^n × R^n, there exists a function γ_A ∈ L²(P, R_+) such that

|β_0(x, y, z, z_x, z_y)| ≤ γ_A(x, y)

for (x, y) ∈ P a.e., (z, z_x, z_y) ∈ A;

(N6) the functions F_k are measurable in (x, y) ∈ P, continuous in (z, u) ∈ R^n × R^m and, for each bounded set B ⊂ R^n, there exists a function ν_B ∈ L¹(P, R_+) such that

|F_k(x, y, z, u)| ≤ ν_B(x, y)

for (x, y) ∈ P a.e., z ∈ B, u ∈ M, k = 0, 1, . . . ;

(N7) for any bounded set B ⊂ R^n, there exists a sequence (ψ^k_B)_{k∈N} ⊂ L¹(P, R^+) such that ψ^k_B → 0 in L¹(P, R^+) as k → ∞ and

|F_k(x, y, z, u) − F_0(x, y, z, u)| ≤ ψ^k_B(x, y)

for (x, y) ∈ P a.e., z ∈ B, u ∈ M, k = 1, 2, ....

Remark 2. From Assumption (N1) it follows that the sets M_k, k = 0, 1, ..., are uniformly bounded in R^m.

Remark 3. From Assumption (N2) it follows that the functions f_k are continuous in (z, z_x, z_y, u) ∈ R^n × R^n × R^n × M.
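Assumption (N1) is stated in terms of the Hausdorff metric; for concrete constraint sets it can be checked numerically. The sketch below is our own illustration, not part of the paper: it computes the Hausdorff distance between finite samples of compact sets, using the hypothetical example M_k = [0, 1 + 1/k] converging to M_0 = [0, 1].

```python
import numpy as np

def hausdorff(A, B):
    """Hausdorff distance between finite point sets A, B in R^m.

    Each set is an (N, m) array of sample points; this is the metric
    appearing in assumption (N1)."""
    A, B = np.atleast_2d(A), np.atleast_2d(B)
    # pairwise Euclidean distances d[i, j] = |A[i] - B[j]|
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Hypothetical example: M_k = [0, 1 + 1/k], sampled on a grid, and M_0 = [0, 1]
M0 = np.linspace(0.0, 1.0, 101).reshape(-1, 1)
dists = {k: hausdorff(np.linspace(0.0, 1.0 + 1.0 / k, 101).reshape(-1, 1), M0)
         for k in (1, 10, 100)}
print(dists)  # roughly {1: 1.0, 10: 0.1, 100: 0.01} -- decays like 1/k
```

For intervals the distance is exactly the excess length 1/k, so the samples reproduce the convergence M_k → M_0 required by (N1).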

In much the same way as in (Idczak et al., 1994), one can show that there exist l ∈ N and α ∈ (0, 1) such that each operator

F_u^k : L²(P, R^n) ∋ g ↦ f_k( x, y, ∫_0^x ∫_0^y g(s, t) ds dt, ∫_0^y g(x, t) dt, ∫_0^x g(s, y) ds, u(x, y) ) ∈ L²(P, R^n),

where u ∈ U_k, k = 0, 1, ..., is a contraction with constant α ∈ (0, 1) with respect to the Bielecki norm in L²(P, R^n) given by

‖g‖_l = ( ∫_0^1 ∫_0^1 e^{−2l(x+y)} |g(x, y)|² dx dy )^{1/2},   g ∈ L²(P, R^n).

Consequently, F_u^k possesses a unique fixed point g_u^k ∈ L²(P, R^n). This means that the system

∂²z/∂x∂y = f_k( x, y, z, ∂z/∂x, ∂z/∂y, u )

has a unique solution z_u^k in the space AC_0²(P, R^n). This solution is given by

z_u^k(x, y) = ∫_0^x ∫_0^y g_u^k(s, t) ds dt,   (x, y) ∈ P,

and

∂²z_u^k/∂x∂y (x, y) = g_u^k(x, y),   (x, y) ∈ P a.e.
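The fixed-point construction can be imitated numerically by successive approximations. The sketch below is our own illustration, not taken from the paper: the uniform grid, the right-rectangle quadrature, the sample right-hand side f (Lipschitz in (z, z_x, z_y) as in (N2), with sup-norm contraction factor about 0.6 on the unit square) and the constant control u are all assumptions made for the example.

```python
import numpy as np

N = 65
h = 1.0 / (N - 1)
x = np.linspace(0.0, 1.0, N)
X, Y = np.meshgrid(x, x, indexing="ij")
u = 0.5 * np.ones((N, N))          # hypothetical admissible control

def f(x, y, z, zx, zy, u):
    # sample right-hand side; Lipschitz constants 0.3 + 0.2 + 0.1 < 1
    return 0.3 * np.sin(z) + 0.2 * zx - 0.1 * zy + u

def integ(g, axis):
    """Right-rectangle cumulative integral of g along one axis."""
    return np.cumsum(g, axis=axis) * h

g = np.zeros((N, N))
for _ in range(100):               # Picard iteration g <- F_u(g)
    z = integ(integ(g, 0), 1)      # z(x,y)   = double integral of g
    zx = integ(g, 1)               # z_x(x,y) = integral of g(x,t) over t in [0,y]
    zy = integ(g, 0)               # z_y(x,y) = integral of g(s,y) over s in [0,x]
    g_new = f(X, Y, z, zx, zy, u)
    res = np.max(np.abs(g_new - g))
    g = g_new
    if res < 1e-12:
        break

z = integ(integ(g, 0), 1)          # the approximate solution z_u
print(res)                         # fixed-point residual after the break
```

Since the discrete map inherits the contraction property, the residual is driven below the tolerance in a few dozen iterations, mirroring the Banach fixed-point argument behind the existence of g_u^k.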

Let us recall that weak convergence in AC_0²(P, R^n) implies uniform convergence. Using standard arguments, one can also easily show that if z_n ⇀ z_0 weakly in AC_0²(P, R^n), then ∂z_n/∂x ⇀ ∂z_0/∂x weakly in L²(P, R^n) as n → ∞.

In the proof of the main theorem, we shall use the following three lemmas:

Lemma 3. There exists a constant r such that

|∂²z_u^k/∂x∂y (x, y)|,  |∂z_u^k/∂x (x, y)|,  |∂z_u^k/∂y (x, y)| ≤ r

for (x, y) ∈ P a.e. and u ∈ U_k, k = 0, 1, ....

Proof. From Assumption (N3) we have

|∂²z_u^k/∂x∂y (x, y)| = |f_k( x, y, z_u^k(x, y), ∂z_u^k/∂x (x, y), ∂z_u^k/∂y (x, y), u(x, y) )| ≤ a |z_u^k(x, y)| + b,   (x, y) ∈ P a.e.,

for u ∈ U_k, k = 0, 1, .... Hence

|z_u^k(x, y)| = | ∫_0^x ∫_0^y ∂²z_u^k/∂x∂y (s, t) ds dt | ≤ a ∫_0^x ∫_0^y |z_u^k(s, t)| ds dt + b,   (x, y) ∈ P,

for u ∈ U_k, k = 0, 1, .... Applying the previous lemma with υ(x, y) = |z_u^k(x, y)|, k = a, c = b, we obtain

|z_u^k(x, y)| ≤ e^a b,   (x, y) ∈ P,

for u ∈ U_k, k = 0, 1, .... Thus

|∂²z_u^k/∂x∂y (x, y)| ≤ a e^a b + b,   (x, y) ∈ P a.e.,

for u ∈ U_k, k = 0, 1, .... The remaining part of the assertion follows from the fact that

∂z_u^k/∂x (x, y) = ∫_0^y ∂²z_u^k/∂x∂y (x, t) dt,   (x, y) ∈ P a.e.,

∂z_u^k/∂y (x, y) = ∫_0^x ∂²z_u^k/∂x∂y (s, y) ds,   (x, y) ∈ P a.e.,

for u ∈ U_k, k = 0, 1, ....
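The Gronwall-type step in the proof can be sanity-checked numerically. The following sketch is ours, not from the paper; the constants a = 2, b = 1.5 and the grid size are arbitrary choices. It iterates the integral equation v = a ∫_0^x ∫_0^y v + b, whose solution dominates any function satisfying the corresponding inequality on the unit square, and compares its maximum with the bound e^a b used above.

```python
import numpy as np

a, b = 2.0, 1.5                    # arbitrary constants from assumption (N3)
N = 200
h = 1.0 / N
v = np.full((N, N), b)             # start from the constant term
for _ in range(60):                # fixed-point iteration for v = a*double_int(v) + b
    v = a * np.cumsum(np.cumsum(v, axis=0), axis=1) * h * h + b

# The limit is v(x,y) = b * sum_j (a x y)^j / (j!)^2, which is bounded by b*e^a
print(v.max(), b * np.exp(a))      # v.max() ≈ 6.4, below the bound b*e^a ≈ 11.08
```

The maximal solution at (1, 1) equals b · Σ_j a^j/(j!)², which is strictly smaller than b · Σ_j a^j/j! = e^a b, so the crude bound |z_u^k| ≤ e^a b in the proof is comfortably satisfied.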

As an immediate consequence of Lemma 3, we obtain the following result:

Corollary 4. The set {z_u^k ∈ AC_0²(P, R^n) : u ∈ U_k, k = 0, 1, ...} is bounded in AC_0²(P, R^n).
