S. H. FARAG (Minia) and M. H. FARAG (Ibri)

ON AN OPTIMAL CONTROL PROBLEM FOR A QUASILINEAR PARABOLIC EQUATION

Abstract. An optimal control problem governed by a quasilinear parabolic equation with additional constraints is investigated. The optimal control problem is converted to an optimization problem, which is solved by a penalty function technique. Existence and uniqueness theorems are established. Formulae for the gradient of the modified functional are derived by solving an adjoint problem.

1. Introduction. Optimal control problems for partial differential equations are currently of much interest. An extensive literature in this area is devoted to parabolic equations [1, 11, 12, 14, 15]. Such problems describe processes in hydro- and gas dynamics, heat physics, filtration, plasma physics and other areas [8, 9].

This paper presents an optimal control problem governed by a quasilinear parabolic equation with additional constraints. The optimal control problem is converted to an optimization problem, which is solved by a penalty function technique. Existence and uniqueness theorems are proved. Formulae for the gradient of the modified functional are derived by solving an adjoint problem.

2. The optimal control problem. Let $D$ be a bounded domain of the $N$-dimensional Euclidean space $E^N$, let $l$, $T$ be given positive numbers, and let $\Omega = \{(x,t) : x \in D,\ t \in (0,T)\}$. Let $V = \{v : v = (v_1,\ldots,v_N) \in E^N,\ \|v\|_{E^N} \le R\}$, where $R > 0$ is a given number. We consider the heat

2000 Mathematics Subject Classification: 49J20, 49K20, 49M29, 49M30.

Key words and phrases: optimal control, parabolic equations, penalty function methods, existence theory.


exchange process described by the equation

(1) $\dfrac{\partial u}{\partial t} - \dfrac{\partial}{\partial x}\Big(\lambda(u,v)\dfrac{\partial u}{\partial x}\Big) + B(u,v)\dfrac{\partial u}{\partial x} = f(x,t,u,v), \quad (x,t) \in \Omega,$

with initial and boundary conditions

(2) $u(x,0) = \varphi(x), \quad x \in D,$

(3) $\lambda(u,v)\dfrac{\partial u}{\partial x}\Big|_{x=0} = g_0(t), \qquad \lambda(u,v)\dfrac{\partial u}{\partial x}\Big|_{x=l} = g_1(t), \quad 0 \le t \le T,$

where $\varphi(x) \in L_2(D)$ and $g_0(t), g_1(t) \in L_2(0,T)$.

The function $f(x,t,u,v) \in L_2(\Omega)$ for every $(u,v) \in [r_1,r_2] \times E^N$ is measurable in $(x,t) \in \Omega$, and for all $(x,t) \in \Omega$ it is continuous in $(u,v) \in [r_1,r_2] \times E^N$. Furthermore, this function has a continuous derivative in $u$ for each $(x,t) \in \Omega$, and for $(u,v) \in [r_1,r_2] \times E^N$ the derivative $\partial f(x,t,u,v)/\partial u$ is bounded. Moreover, the functions $\lambda(u,v)$, $B(u,v)$ are continuous on $[r_1,r_2] \times E^N$, have continuous derivatives in $u$, and for all $(u,v) \in [r_1,r_2] \times E^N$ the derivatives $\partial\lambda(u,v)/\partial u$, $\partial B(u,v)/\partial u$ are bounded, where $r_1, r_2$ are given numbers.

On the set $V$, under the conditions (1)–(3) and the additional restrictions

(4) $\nu_0 \le \lambda(u,v) \le \mu_0, \qquad \nu_0 \le B(u,v) \le \mu_0, \qquad r_1 \le u(x,t) \le r_2,$

it is required to minimize the functional [14]

(5) $J_\alpha(u,v) = \displaystyle\int_0^T \big\{\beta_0[u(0,t) - f_0(t)]^2 + \beta_1[u(l,t) - f_1(t)]^2\big\}\,dt + \alpha\|v - \omega\|^2_{E^N},$

where $f_0(t), f_1(t) \in L_2(0,T)$ are given functions, $\alpha \ge 0$, $\nu_0, \mu_0 > 0$, $\beta_0 \ge 0$, $\beta_1 \ge 0$, $\beta_0 + \beta_1 \ne 0$ are given numbers, and $\omega = (\omega_1,\ldots,\omega_N) \in E^N$ is a given vector.
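For readers who want to experiment numerically, the functional (5) involves only the boundary traces $u(0,t;v)$, $u(l,t;v)$ and the control vector, so it is cheap to evaluate once the state has been computed on a grid. The following Python sketch is not part of the paper; it assumes the traces and targets are sampled on a common time grid, uses a simple trapezoidal rule, and all function and argument names are hypothetical.

```python
import numpy as np

def cost_J_alpha(t, u0, ul, f0, f1, v, omega, alpha, beta0, beta1):
    """Trapezoidal-rule approximation of the functional (5).

    t        : time grid, shape (nt,)
    u0, ul   : boundary traces u(0, t_i), u(l, t_i)
    f0, f1   : target values f_0(t_i), f_1(t_i)
    v, omega : control vector and reference vector in E^N
    """
    integrand = beta0 * (u0 - f0) ** 2 + beta1 * (ul - f1) ** 2
    # trapezoidal rule for the time integral
    boundary_term = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))
    control_term = alpha * np.sum((np.asarray(v, float) - np.asarray(omega, float)) ** 2)
    return boundary_term + control_term
```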

Definition 1. The problem of finding a function $u = u(x,t) \in V_2^{1,0}(\Omega)$ satisfying conditions (1)–(4) for a given $v \in V$ is called the reduced problem.

Definition 2. A solution of the reduced problem (1)–(4) corresponding to $v \in V$ is a function $u(x,t) \in V_2^{1,0}(\Omega)$ that satisfies the integral identity

(6) $\displaystyle\int_0^l\int_0^T \Big[ u\frac{\partial\eta}{\partial t} - \lambda(u,v)\frac{\partial u}{\partial x}\frac{\partial\eta}{\partial x} - B(u,v)\frac{\partial u}{\partial x}\eta + \eta f(x,t,u,v) \Big]\,dx\,dt$
$\displaystyle\qquad = -\int_0^l \varphi(x)\eta(x,0)\,dx - \int_0^T \eta(0,t)g_0(t)\,dt + \int_0^T \eta(l,t)g_1(t)\,dt$

for all $\eta = \eta(x,t) \in W_2^{1,1}(\Omega)$ with $\eta(x,T) = 0$.

A solution of the reduced problem (1)–(4) depends explicitly on the control $v$; therefore we shall also use the notation $u = u(x,t;v)$.


From the assumptions and the results of [6] it follows that for every $v \in V$ a solution of problem (1)–(4) exists, it is unique, and $|u_x| \le C_0$ for all $(x,t) \in \Omega$ and $v \in V$, where $C_0$ is a certain constant.
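The paper treats the reduced problem analytically via the identity (6); for numerical experiments one also needs a discrete solver for (1)–(3). Below is a minimal explicit finite-difference sketch under simplifying assumptions (uniform grid, explicit time stepping with a step small enough for stability, first-order treatment of the flux boundary conditions). The helper names `solve_state`, `lam`, `B`, `f`, `phi`, `g0`, `g1` are placeholders of my own, not the authors' notation or method.

```python
import numpy as np

def solve_state(v, lam, B, f, phi, g0, g1, l=1.0, T=1.0, nx=101, nt=2001):
    """Explicit finite-difference sketch for the state problem (1)-(3).

    lam(u, v), B(u, v), f(x, t, u, v) are user-supplied callables (vectorized in u);
    phi(x) is the initial condition, g0(t), g1(t) the boundary fluxes of (3).
    Returns the time grid and the boundary traces u(0, t), u(l, t).
    """
    x = np.linspace(0.0, l, nx)
    t = np.linspace(0.0, T, nt)
    dx, dt = x[1] - x[0], t[1] - t[0]
    u = np.zeros(nx)
    u[:] = phi(x)                                   # initial condition (2)
    u0_trace, ul_trace = [u[0]], [u[-1]]
    for n in range(1, nt):
        lam_mid = 0.5 * (lam(u[1:], v) + lam(u[:-1], v))   # lambda(u,v) at cell interfaces
        flux = lam_mid * np.diff(u) / dx                   # interior fluxes lambda u_x
        flux = np.concatenate(([g0(t[n - 1])], flux, [g1(t[n - 1])]))  # boundary fluxes (3)
        diffusion = np.diff(flux) / dx                     # (lambda u_x)_x, simplified at the boundary cells
        ux = np.gradient(u, dx)                            # centered u_x for the advection term
        u = u + dt * (diffusion - B(u, v) * ux + f(x, t[n - 1], u, v))
        u0_trace.append(u[0])
        ul_trace.append(u[-1])
    return t, np.array(u0_trace), np.array(ul_trace)
```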

The inequality-constrained problem (1)–(5) is converted to a problem without inequality constraints by adding a penalty function [3, 16] to the objective (5), yielding the function $\Phi(v) = \Phi_{\alpha,k}(v, A_k)$:

(7) $\Phi(v) = J_\alpha(u(v), v) + P_k(u(v), v),$

where
$Z(u,v) = [\max\{\nu_0 - \lambda(u,v);\,0\}]^2 + [\max\{\lambda(u,v) - \mu_0;\,0\}]^2,$
$Y(u,v) = [\max\{\nu_0 - B(u,v);\,0\}]^2 + [\max\{B(u,v) - \mu_0;\,0\}]^2,$
$Q_1(u) = [\max\{r_1 - u(x,t;v);\,0\}]^2, \qquad Q_2(u) = [\max\{u(x,t;v) - r_2;\,0\}]^2,$
$P_k(u,v) = A_k\displaystyle\int_0^l\int_0^T [Z(u,v) + Y(u,v) + Q_1(u) + Q_2(u)]\,dx\,dt,$

and $A_k$, $k = 1,2,\ldots$, are positive numbers with $\lim_{k\to\infty} A_k = \infty$.
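As a concrete illustration of (7), the sketch below evaluates the penalty integrand $Z + Y + Q_1 + Q_2$ pointwise on a space–time grid and forms $P_k$ by a simple Riemann-sum quadrature; together with the cost routine sketched after (5) this gives the penalized objective $\Phi(v)$. This is only an assumed discretization, not the paper's construction, and the names are hypothetical.

```python
import numpy as np

def penalty_P_k(u_grid, v, lam, B, A_k, nu0, mu0, r1, r2, dx, dt):
    """Quadrature approximation of the penalty term P_k in (7).

    u_grid : array of state values u(x_i, t_j; v) on a uniform grid
    lam, B : callables lambda(u, v), B(u, v), vectorized in u
    """
    relu = lambda z: np.maximum(z, 0.0)          # max{ . ; 0 }
    lam_u, B_u = lam(u_grid, v), B(u_grid, v)
    Z = relu(nu0 - lam_u) ** 2 + relu(lam_u - mu0) ** 2
    Y = relu(nu0 - B_u) ** 2 + relu(B_u - mu0) ** 2
    Q1 = relu(r1 - u_grid) ** 2
    Q2 = relu(u_grid - r2) ** 2
    return A_k * np.sum(Z + Y + Q1 + Q2) * dx * dt

# Schematically, the penalized objective of (7) is then
#   Phi = cost_J_alpha(...) + penalty_P_k(...),
# with A_k increased between outer iterations of the penalty method.
```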

3. Well-posedness of the problem. Optimal control problems governed by differential equations do not always have a solution [13]. In this section we prove the existence and uniqueness of a solution of problem (1)–(5).

Lemma 3.1. Under the above assumptions, for every solution of the reduced problem (1)–(5) the following estimate is valid:

(8) $\|\delta u\|_{V_2^{1,0}(\Omega)} \le C\Big[\Big\|\delta\lambda\,\dfrac{\partial u}{\partial x}\Big\|^2_{L_2(\Omega)} + \Big\|\delta B\,\dfrac{\partial u}{\partial x}\Big\|^2_{L_2(\Omega)} + \|\delta f\|^2_{L_2(\Omega)}\Big]^{1/2},$

where $C \ge 0$ is a constant not depending on $\delta v$.

Proof. Set $\delta u(x,t) = u(x,t;v+\delta v) - u(x,t;v)$, $u = u(x,t;v)$, $\bar u = u(x,t;v+\delta v)$. From (6) it follows that

(9) $\displaystyle\int_0^l\int_0^T \Big[-\delta u\frac{\partial\eta}{\partial t} + \bar\lambda\frac{\partial\delta u}{\partial x}\frac{\partial\eta}{\partial x} + \frac{\partial\lambda(u+\theta_1\delta u, v+\delta v)}{\partial u}\frac{\partial u}{\partial x}\frac{\partial\eta}{\partial x}\delta u + \delta\lambda\frac{\partial u}{\partial x}\frac{\partial\eta}{\partial x}\Big]\,dx\,dt$
$\displaystyle\quad + \int_0^l\int_0^T \Big[\bar B\frac{\partial\delta u}{\partial x}\eta + \frac{\partial B(u+\theta_2\delta u, v+\delta v)}{\partial u}\frac{\partial u}{\partial x}\eta\,\delta u + \delta B\frac{\partial u}{\partial x}\eta\Big]\,dx\,dt$
$\displaystyle\quad - \int_0^l\int_0^T \Big[\frac{\partial f(x,t,u+\theta_3\delta u, v+\delta v)}{\partial u}\delta u\,\eta + \delta f\,\eta\Big]\,dx\,dt = 0$

for all $\eta = \eta(x,t) \in W_2^{1,1}(\Omega)$ with $\eta(x,T) = 0$. Here $\theta_1, \theta_2, \theta_3 \in (0,1)$ are some numbers and

$\delta f = f(x,t,u,v+\delta v) - f(x,t,u,v),$
$\bar\lambda = \lambda(u+\delta u, v+\delta v), \qquad \delta\lambda = \lambda(u,v+\delta v) - \lambda(u,v),$
$\bar B = B(u+\delta u, v+\delta v), \qquad \delta B = B(u,v+\delta v) - B(u,v).$

Let $\eta_h(x,t) = h^{-1}\int_{t-h}^{t}\eta(x,\tau)\,d\tau$, $0 < h$, where $\eta(x,t) = \delta u(x,t)$ for $(x,t) \in \Omega_{t_1}$ and $\eta(x,t) = 0$ for $t > t_1$ (with $t_1 \le T - h$), and $\Omega_{t_1} = D \times (0,t_1]$. In identity (9) substitute $\eta_h(x,t)$ for $\eta(x,t)$. Following the method of [7, pp. 166–168]

we obtain

(10) $\displaystyle\frac12\int_D \delta u^2(x,t_1)\,dx + \int_{\Omega_{t_1}} \Big[\bar\lambda\Big(\frac{\partial\delta u}{\partial x}\Big)^2 + \frac{\partial\lambda(u+\theta_1\delta u, v+\delta v)}{\partial u}\frac{\partial u}{\partial x}\frac{\partial\delta u}{\partial x}\delta u + \delta\lambda\frac{\partial u}{\partial x}\frac{\partial\delta u}{\partial x}\Big]\,dx\,dt$
$\displaystyle\quad + \int_{\Omega_{t_1}} \Big[\bar B\frac{\partial\delta u}{\partial x}\delta u + \frac{\partial B(u+\theta_2\delta u, v+\delta v)}{\partial u}\frac{\partial u}{\partial x}(\delta u)^2 + \delta B\frac{\partial u}{\partial x}\delta u\Big]\,dx\,dt$
$\displaystyle\quad - \int_{\Omega_{t_1}} \Big[\frac{\partial f(x,t,u+\theta_3\delta u, v+\delta v)}{\partial u}(\delta u)^2 + \delta f\,\delta u\Big]\,dx\,dt = 0.$

Hence, from the above assumptions and applying the Cauchy–Bunyakovskii inequality, we have

(11) $\displaystyle\frac12\int_D \delta u^2(x,t_1)\,dx + \nu_0\int_{\Omega_{t_1}} \Big(\frac{\partial\delta u}{\partial x}\Big)^2 dx\,dt$
$\displaystyle\quad \le (C_3 + C_4)\int_{\Omega_{t_1}} \delta u^2\,dx\,dt + (C_1 + C_2)\Big(\int_{\Omega_{t_1}} \delta u^2\,dx\,dt\Big)^{1/2}\Big(\int_{\Omega_{t_1}} \Big(\frac{\partial\delta u}{\partial x}\Big)^2 dx\,dt\Big)^{1/2}$
$\displaystyle\quad + \Big(\int_{\Omega_{t_1}} \Big(\delta B\frac{\partial u}{\partial x}\Big)^2 dx\,dt\Big)^{1/2}\Big(\int_{\Omega_{t_1}} \delta u^2\,dx\,dt\Big)^{1/2} + \Big(\int_{\Omega_{t_1}} (\delta f)^2\,dx\,dt\Big)^{1/2}\Big(\int_{\Omega_{t_1}} \delta u^2\,dx\,dt\Big)^{1/2}$
$\displaystyle\quad + \Big(\int_{\Omega_{t_1}} \Big(\delta\lambda\frac{\partial u}{\partial x}\Big)^2 dx\,dt\Big)^{1/2}\Big(\int_{\Omega_{t_1}} \Big(\frac{\partial\delta u}{\partial x}\Big)^2 dx\,dt\Big)^{1/2},$

where $C_1$, $C_2$, $C_3$ and $C_4$ are positive constants not depending on $\delta v$.


Take $\varepsilon_1 = 2C_1/\nu_0$, $\varepsilon_2 = 2C_2/\nu_0$ and apply the Cauchy inequality with $\varepsilon$ ($|ab| \le \frac{\varepsilon}{2}|a|^2 + \frac{1}{2\varepsilon}|b|^2$) to the second and third summands on the right-hand side of (11); multiplying both sides by two, we obtain

(12) $\displaystyle\|\delta u(x,t_1)\|^2_{L_2(D)} + \nu_0\Big\|\frac{\partial\delta u}{\partial x}\Big\|^2_{L_2(\Omega_{t_1})} \le 2\Big(\frac{C_2^2}{\nu_0} + C_3 + C_4 + \frac{C_1^2}{\nu_0}\Big)\|\delta u\|^2_{L_2(\Omega_{t_1})}$
$\displaystyle\quad + 2\Big(\int_{\Omega_{t_1}} \Big(\delta B\frac{\partial u}{\partial x}\Big)^2 dx\,dt\Big)^{1/2}\Big(\int_{\Omega_{t_1}} \delta u^2\,dx\,dt\Big)^{1/2} + 2\Big(\int_{\Omega_{t_1}} (\delta f)^2\,dx\,dt\Big)^{1/2}\Big(\int_{\Omega_{t_1}} \delta u^2\,dx\,dt\Big)^{1/2}$
$\displaystyle\quad + 2\Big(\int_{\Omega_{t_1}} \Big(\delta\lambda\frac{\partial u}{\partial x}\Big)^2 dx\,dt\Big)^{1/2}\Big(\int_{\Omega_{t_1}} \Big(\frac{\partial\delta u}{\partial x}\Big)^2 dx\,dt\Big)^{1/2}.$

Applying Cauchy's inequality with $\varepsilon$ to the last three summands on the right side of (12) and taking $\varepsilon = \nu_0/2$, we obtain

(13) $\displaystyle\|\delta u(x,t_1)\|^2_{L_2(D)} + \frac{\nu_0}{2}\Big\|\frac{\partial\delta u}{\partial x}\Big\|^2_{L_2(\Omega_{t_1})} \le 2\Big(\frac{C_1^2 + C_2^2 + \nu_0^2}{\nu_0} + C_3 + C_4\Big)\|\delta u\|^2_{L_2(\Omega_{t_1})}$
$\displaystyle\quad + \frac{2}{\nu_0}\Big\|\delta\lambda\frac{\partial u}{\partial x}\Big\|^2_{L_2(\Omega_{t_1})} + \frac{2}{\nu_0}\Big\|\delta B\frac{\partial u}{\partial x}\Big\|^2_{L_2(\Omega_{t_1})} + \frac{2}{\nu_0}\|\delta f\|^2_{L_2(\Omega_{t_1})}.$

Now we set
$y(t_1) = \|\delta u(x,t_1)\|^2_{L_2(D)}, \qquad M = \Big\|\delta\lambda\dfrac{\partial u}{\partial x}\Big\|^2_{L_2(\Omega_{t_1})} + \Big\|\delta B\dfrac{\partial u}{\partial x}\Big\|^2_{L_2(\Omega_{t_1})} + \|\delta f\|^2_{L_2(\Omega_{t_1})}.$
Then inequality (13) yields the two inequalities

(14) $y(t_1) \le C_5\displaystyle\int_0^{t_1} y(t)\,dt + \dfrac{2M}{\nu_0},$

(15) $\displaystyle\Big\|\frac{\partial\delta u}{\partial x}\Big\|^2_{L_2(\Omega_{t_1})} \le \frac{2C_5}{\nu_0}\|\delta u\|^2_{L_2(\Omega_{t_1})} + \frac{4M}{\nu_0^2},$

where $C_5 = (2C_2^2 + 2C_1^2)/\nu_0 + 2C_3 + 2C_4 + 2\nu_0$ is a positive constant not depending on $\delta v$.


From the known estimate [6, pp. 166–167] it follows that

(16) $y(t_1) \le C_6 M,$

where $C_6$ is a positive constant not depending on $\delta v$. Consequently,

(17) $\displaystyle\max_{0 \le t \le t_1}\|\delta u(x,t)\|_{L_2(D)} \le C_6 M^{1/2}.$

Similarly we obtain

(18) $\displaystyle\Big\|\frac{\partial\delta u}{\partial x}\Big\|_{L_2(\Omega_{t_1})} \le C_7 M^{1/2},$

where $C_7$ is a positive constant not depending on $\delta v$.

If we combine the estimates for $\delta u$ and $\partial\delta u/\partial x$, then we obtain

(19) $\displaystyle\|\delta u\|_{V_2^{1,0}(\Omega_{t_1})} = \max_{0 \le t \le t_1}\|\delta u(x,t)\|_{L_2(D)} + \Big\|\frac{\partial\delta u}{\partial x}\Big\|_{L_2(\Omega_{t_1})} \le C_8 M^{1/2},$

where $C_8$ is a positive constant not depending on $\delta v$. Lemma 3.1 is proved.

Corollary 3.1. Under the above assumptions the right side of estimate (8) converges to zero as $\|\delta v\|_{E^N} \to 0$; therefore $\|\delta u\|_{V_2^{1,0}(\Omega)} \to 0$ as $\|\delta v\|_{E^N} \to 0$.

Hence from the trace theorem [10] we get

(20) $\|\delta u(0,t)\|_{L_2(0,T)} \to 0, \qquad \|\delta u(l,t)\|_{L_2(0,T)} \to 0 \quad \text{as } \|\delta v\|_{E^N} \to 0.$

Now we consider the function $J_0(u,v)$ of the form
$J_0(u,v) = \beta_0\displaystyle\int_0^T [u(0,t) - f_0(t)]^2\,dt + \beta_1\int_0^T [u(l,t) - f_1(t)]^2\,dt.$

Lemma 3.2. The function $J_0(u,v)$ is continuous on $V$.

Proof. Let $\delta v = (\delta v_1,\ldots,\delta v_N)$ be an increment of the control at an element $v \in V$ such that $v + \delta v \in V$. For the increment of $J_0(u,v)$ we have

(21) $\displaystyle\delta J_0(u,v) = 2\beta_0\int_0^T [u(0,t) - f_0(t)]\delta u(0,t)\,dt + 2\beta_1\int_0^T [u(l,t) - f_1(t)]\delta u(l,t)\,dt$
$\displaystyle\quad + \beta_0\int_0^T [\delta u(0,t)]^2\,dt + \beta_1\int_0^T [\delta u(l,t)]^2\,dt.$

Applying the Cauchy–Bunyakovskii inequality, we obtain

(22) $\displaystyle|\delta J_0(u,v)| \le 2\beta_0\|u(0,t) - f_0(t)\|_{L_2(0,T)}\|\delta u(0,t)\|_{L_2(0,T)} + 2\beta_1\|u(l,t) - f_1(t)\|_{L_2(0,T)}\|\delta u(l,t)\|_{L_2(0,T)}$
$\displaystyle\quad + \beta_0\|\delta u(0,t)\|^2_{L_2(0,T)} + \beta_1\|\delta u(l,t)\|^2_{L_2(0,T)}.$

An application of Corollary 3.1 completes the proof.

Theorem 3.1. For any $\alpha \ge 0$ problem (1)–(5) has at least one solution.

Proof. The set $V$ is closed and bounded in $E^N$. Since $J_0(u,v)$ is continuous on $V$ by Lemma 3.2, so is
$J_\alpha(u,v) = J_0(u,v) + \alpha\|v - \omega\|^2_{E^N}.$
Then from the Weierstrass theorem [5] it follows that problem (1)–(5) has at least one solution.

Theorem 3.2. For $\alpha > 0$ and almost all $\omega \in E^N$ problem (1)–(5) has a unique solution.

Proof. The functions $J_0(u,v)$ and $J_\alpha(u,v)$, $\alpha > 0$, are continuous on $V$. Moreover, since $E^N$ is a uniformly convex space, a theorem of [4] yields the existence of a dense subset $K$ of $E^N$ such that for any $\omega \in K$ and $\alpha > 0$ problem (1)–(5) has a unique solution. Consequently, for almost all $\omega \in E^N$ and $\alpha > 0$ problem (1)–(5) has a unique solution.

4. Adjoint problem and gradient formulae

4.1. The adjoint problem. We now derive the adjoint problem for the system (1)–(3). The Lagrangian function $L(x,t,u,v,\Theta)$ for the optimal control problem is defined as

(23) $\displaystyle L(x,t,u,v,\Theta) = \beta_0\int_0^T [u(0,t) - f_0(t)]^2\,dt + \beta_1\int_0^T [u(l,t) - f_1(t)]^2\,dt + \alpha\|v - \omega\|^2_{E^N}$
$\displaystyle\quad + A_k\int_0^l\int_0^T [Z(u,v) + Y(u,v) + Q_1(u) + Q_2(u)]\,dx\,dt$
$\displaystyle\quad + \int_0^l\int_0^T \Theta\Big[\frac{\partial u}{\partial t} - \frac{\partial}{\partial x}\Big(\lambda(u,v)\frac{\partial u}{\partial x}\Big) + B(u,v)\frac{\partial u}{\partial x} - f(x,t,u,v)\Big]\,dx\,dt.$


The first variation of the Lagrangian is

(24) $\displaystyle\delta L(x,t,u,v,\Theta) = 2\beta_0\int_0^T [u(0,t) - f_0(t)]\delta u(0,t)\,dt + 2\beta_1\int_0^T [u(l,t) - f_1(t)]\delta u(l,t)\,dt$
$\displaystyle\quad + \beta_0\int_0^T [\delta u(0,t)]^2\,dt + \beta_1\int_0^T [\delta u(l,t)]^2\,dt + 2\alpha\langle v - \omega, \delta v\rangle_{E^N} + \alpha\|\delta v\|^2_{E^N}$
$\displaystyle\quad + A_k\int_0^l\int_0^T \Big[\frac{\partial Z(u,v)}{\partial u} + \frac{\partial Y(u,v)}{\partial u} + \frac{\partial Q_1(u)}{\partial u} + \frac{\partial Q_2(u)}{\partial u}\Big]\delta u(x,t)\,dx\,dt$
$\displaystyle\quad + \int_0^l\int_0^T \Theta\Big[\frac{\partial\delta u}{\partial t} - \frac{\partial}{\partial x}\Big(\bar\lambda\frac{\partial\delta u}{\partial x}\Big) - \frac{\partial}{\partial x}\Big(\frac{\partial\lambda}{\partial u}\frac{\partial u}{\partial x}\delta u\Big) - \frac{\partial}{\partial x}\Big((\bar\lambda - \lambda'')\frac{\partial u}{\partial x}\Big)$
$\displaystyle\qquad\quad + B(u,v)\frac{\partial\delta u}{\partial x} + \frac{\partial B}{\partial u}\frac{\partial u}{\partial x}\delta u + \{f(x,t,u+\delta u,v+\delta v) - f(x,t,u,v)\}\Big]\,dx\,dt,$

where $\bar\lambda = \lambda(u+\delta u, v+\delta v)$ and $\lambda'' = \lambda(u+\delta u, v)$.

Integrating (24) by parts we obtain

(25) $\displaystyle\delta L(x,t,u,v,\Theta) = 2\beta_0\int_0^T [u(0,t) - f_0(t)]\delta u(0,t)\,dt + 2\beta_1\int_0^T [u(l,t) - f_1(t)]\delta u(l,t)\,dt$
$\displaystyle\quad + \beta_0\int_0^T [\delta u(0,t)]^2\,dt + \beta_1\int_0^T [\delta u(l,t)]^2\,dt + 2\alpha\langle v - \omega, \delta v\rangle_{E^N} + \alpha\|\delta v\|^2_{E^N}$
$\displaystyle\quad + A_k\int_0^l\int_0^T \Big[\frac{\partial Z(u,v)}{\partial u} + \frac{\partial Y(u,v)}{\partial u} + \frac{\partial Q_1(u)}{\partial u} + \frac{\partial Q_2(u)}{\partial u}\Big]\delta u(x,t)\,dx\,dt$
$\displaystyle\quad + \int_0^l\int_0^T \Big[-\frac{\partial\Theta}{\partial t} - \frac{\partial}{\partial x}\Big(\lambda\frac{\partial\Theta}{\partial x}\Big) + \frac{\partial\lambda}{\partial u}\frac{\partial u}{\partial x}\frac{\partial\Theta}{\partial x} + \Big(\frac{\partial B}{\partial u}\frac{\partial u}{\partial x}\Theta + \frac{\partial(B\Theta)}{\partial x}\Big)\Big]\delta u(x,t)\,dx\,dt$
$\displaystyle\quad + \int_0^l\int_0^T \frac{\partial f}{\partial u}\Theta\,\delta u(x,t)\,dx\,dt + \int_0^l (\Theta\,\delta u)\big|_{t=T}\,dx + \int_0^T \Big(\lambda\frac{\partial\Theta}{\partial x}\delta u\Big)\Big|_{x=l}\,dt$
$\displaystyle\quad + \int_0^T \Big(\lambda\frac{\partial\Theta}{\partial x}\delta u\Big)\Big|_{x=0}\,dt + \int_0^T (B\Theta\,\delta u)\big|_{x=l}\,dt + \int_0^T (B\Theta\,\delta u)\big|_{x=0}\,dt.$


Setting the variation of the Lagrangian equal to zero (the first order necessary condition for minimizing $L(x,t,u,v,\Theta)$) and requiring that (25) hold for arbitrary $\delta u(x,t)$ [11], we obtain the adjoint problem:

(26) $\Theta_t + (\lambda(u,v)\Theta_x)_x - \lambda_u(u,v)\Theta_x u_x - [B_u u_x\Theta + (B\Theta)_x] - f_u\Theta = A_k[Z_u(u,v) + Y_u(u,v) + Q_{2u} + Q_{1u}], \quad (x,t) \in \Omega,$

(27) $\Theta(x,T) = 0, \quad x \in D,$

(28) $(\lambda\Theta_x + B\Theta)|_{x=0} = 2\beta_0[u(0,t) - f_0(t)], \qquad (\lambda\Theta_x + B\Theta)|_{x=l} = -2\beta_1[u(l,t) - f_1(t)], \quad t \in [0,T],$

where $u = u(x,t)$ is the solution of problem (1)–(3) corresponding to $v \in V$.

Definition 3. A solution of the adjoint problem (26)–(28) corresponding to $v \in V$ is a function $\Theta(x,t) \in V_2^{1,0}(\Omega)$ such that the following integral identity is satisfied:

(29) $\displaystyle\int_0^l\int_0^T [\Theta\gamma_t + \lambda(u,v)\Theta_x\gamma_x + \lambda_u(u,v)\Theta_x u_x\gamma]\,dx\,dt + \int_0^l\int_0^T [B_u u_x\Theta + (B\Theta)_x + f_u(x,t,u,v)\Theta]\gamma(x,t)\,dx\,dt$
$\displaystyle\quad = -A_k\int_0^l\int_0^T [Z_u(u,v) + Y_u(u,v) + Q_{2u} + Q_{1u}]\gamma(x,t)\,dx\,dt$

for all $\gamma = \gamma(x,t) \in W_2^{1,1}(\Omega)$ with $\gamma(x,0) = 0$.

From the above assumptions and the results of [7] it follows that for every $v \in V$ a solution of the adjoint problem (26)–(28) exists, it is unique, and $|\Theta_x| \le C_9$ for almost all $(x,t) \in \Omega$ and all $v \in V$, where $C_9$ is a certain constant.
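Since the adjoint problem carries the terminal condition (27) rather than an initial one, it is integrated backward in time in practice. A routine reformulation (not stated in the paper, but standard) is to set $\Psi(x,s) = \Theta(x,T-s)$; then $\Theta_t = -\Psi_s$ and (26)–(27) become the forward problem

$\Psi_s = (\lambda(u,v)\Psi_x)_x - \lambda_u(u,v)\Psi_x u_x - [B_u u_x\Psi + (B\Psi)_x] - f_u\Psi - A_k[Z_u(u,v) + Y_u(u,v) + Q_{2u} + Q_{1u}], \qquad \Psi(x,0) = 0,$

with the coefficients evaluated along the state $u(x,T-s)$ and the boundary conditions (28) read at time $T-s$, so that the same numerical machinery as for (1)–(3) applies.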

4.2. Gradient formulae for $\Phi(v)$. Sufficient differentiability conditions for $\Phi(v)$ and its gradient formulae will be obtained by defining the Hamiltonian function [2] $H(u,\Theta,v)$ as

(30) $\displaystyle H(u,\Theta,v) \equiv -\int_0^l\int_0^T \big[\lambda(u,v)\Theta_x u_x + B(u,v)u_x\Theta - f(x,t,u,v)\Theta + A_k\{Z(u,v) + Y(u,v)\}\big]\,dx\,dt - \alpha\|v - \omega\|^2_{E^N}.$

Theorem 4.1. Assume that:

(i) The functions $\lambda(u,v)$, $B(u,v)$, $f(x,t,u,v)$ satisfy a Lipschitz condition with respect to $v$.

(ii) The first derivatives of $\lambda(u,v)$, $B(u,v)$, $f(x,t,u,v)$ with respect to $v$ are continuous, and for any $v \in V$ with $\|v\|_{E^N} \le R$ the functions $\lambda_v(u,v)$, $B_v(u,v)$, $f_v(x,t,u,v)$ belong to $L_\infty(\Omega)$.

(iii) The operators
$\displaystyle\int_0^l\int_0^T \lambda_v(u,v)\,dx\,dt, \qquad \int_0^l\int_0^T B_v(u,v)\,dx\,dt \qquad\text{and}\qquad \int_0^l\int_0^T f_v(x,t,u,v)\,dx\,dt$
are bounded in $E^N$.

Then the function $\Phi(v)$ is differentiable and its gradient is

(31) $\displaystyle\frac{\partial\Phi(v)}{\partial v} = -\frac{\partial H}{\partial v} \equiv \Big(-\frac{\partial H}{\partial v_1}, \ldots, -\frac{\partial H}{\partial v_N}\Big).$

Proof. Suppose that $v \equiv (v_1,\ldots,v_N)$, $\delta v \equiv (\delta v_1,\ldots,\delta v_N)$, $\delta v \in E^N$, $v + \delta v \in V$, and set $\delta u \equiv u(x,t;v+\delta v) - u(x,t;v)$. The increment of $\Phi(v)$ can be expressed as

(32) $\displaystyle\delta\Phi(v) = \Phi(v+\delta v) - \Phi(v)$
$\displaystyle\quad = 2\beta_0\int_0^T [u(0,t) - f_0(t)]\delta u(0,t)\,dt + 2\beta_1\int_0^T [u(l,t) - f_1(t)]\delta u(l,t)\,dt$
$\displaystyle\quad + A_k\int_0^l\int_0^T [Z_u(u,v) + Y_u(u,v) + Q_{1u}(u) + Q_{2u}(u)]\delta u(x,t)\,dx\,dt$
$\displaystyle\quad + A_k\int_0^l\int_0^T [Z(u,v+\delta v) - Z(u,v) + Y(u,v+\delta v) - Y(u,v)]\,dx\,dt + 2\alpha\langle v - \omega, \delta v\rangle_{E^N} + R_1(\delta v),$

where

(33) $\displaystyle R_1(\delta v) = \beta_0\int_0^T [\delta u(0,t)]^2\,dt + \beta_1\int_0^T [\delta u(l,t)]^2\,dt + \alpha\|\delta v\|^2_{E^N}.$

Using the estimate (8), we get the inequality $|R_1(\delta v)| \le C_{10}\|\delta v\|_{E^N}$, where $C_{10}$ is a constant not depending on $\delta v$.

If we put $\gamma = \delta u(x,t)$ in (29) and $\eta = \Theta(x,t)$ in (9) and subtract the resulting relations, we obtain

(34) $\displaystyle 2\beta_0\int_0^T [u(0,t) - f_0(t)]\delta u(0,t)\,dt + 2\beta_1\int_0^T [u(l,t) - f_1(t)]\delta u(l,t)\,dt$
$\displaystyle\quad + A_k\int_0^l\int_0^T [Z_u(u,v) + Y_u(u,v) + Q_{1u}(u) + Q_{2u}(u)]\delta u(x,t)\,dx\,dt$
$\displaystyle\quad = \int_0^l\int_0^T [\delta\lambda\,u_x\Theta_x + \delta B\,u_x\Theta - \delta f\,\Theta]\,dx\,dt + R_2(\delta v),$

where

(35) $\displaystyle R_2(\delta v) = \int_0^l\int_0^T \Big[\bar\lambda\frac{\partial\delta u}{\partial x}\frac{\partial\Theta}{\partial x} + \Big(\frac{\partial\lambda(u+\theta_1\delta u, v+\delta v)}{\partial u} - \frac{\partial\lambda(u,v)}{\partial u}\Big)\frac{\partial u}{\partial x}\frac{\partial\Theta}{\partial x}\delta u\Big]\,dx\,dt$
$\displaystyle\quad + \int_0^l\int_0^T \Big[\bar B\,\Theta\frac{\partial\delta u}{\partial x} + \Big(\frac{\partial B(u+\theta_2\delta u, v+\delta v)}{\partial u} - \frac{\partial B(u,v)}{\partial u}\Big)\Theta\frac{\partial u}{\partial x}\delta u\Big]\,dx\,dt$
$\displaystyle\quad + \int_0^l\int_0^T \Big[\frac{\partial f(x,t,u+\theta_3\delta u, v+\delta v)}{\partial u} - \frac{\partial f(x,t,u,v)}{\partial u}\Big]\delta u(x,t)\Theta(x,t)\,dx\,dt,$

and $\theta_i \in (0,1)$, $i = 1,2,3$.

By assumption (i), $R_2(\delta v)$ is estimated as $|R_2(\delta v)| \le C_{11}\|\delta v\|_{E^N}$, where $C_{11}$ is a constant independent of $\delta v$. Using the above assumptions, we have
$Z(u,v+\delta v) - Z(u,v) = \langle Z_v(u,v), \delta v\rangle_{E^N} + O(\|\delta v\|_{E^N}),$
$Y(u,v+\delta v) - Y(u,v) = \langle Y_v(u,v), \delta v\rangle_{E^N} + O(\|\delta v\|_{E^N}),$
$\lambda(u,v+\delta v) - \lambda(u,v) = \langle\lambda_v(u,v), \delta v\rangle_{E^N} + O(\|\delta v\|_{E^N}),$
$B(u,v+\delta v) - B(u,v) = \langle B_v(u,v), \delta v\rangle_{E^N} + O(\|\delta v\|_{E^N}),$
$f(x,t,u,v+\delta v) - f(x,t,u,v) = \langle f_v(x,t,u,v), \delta v\rangle_{E^N} + O(\|\delta v\|_{E^N}).$

By substituting the last five expansions in (32) and (34), we obtain

(36) $\displaystyle\delta\Phi(v) = \int_0^l\int_0^T \big\langle \lambda_v(u,v)u_x\Theta_x + \{B_v(u,v)u_x - f_v(x,t,u,v)\}\Theta + A_k\{Z_v(u,v) + Y_v(u,v)\},\ \delta v\big\rangle_{E^N}\,dx\,dt + 2\alpha\langle v - \omega, \delta v\rangle_{E^N} + R_3(\delta v),$

where $R_3(\delta v) = R_1(\delta v) + R_2(\delta v) + O(\|\delta v\|_{E^N})$.

From the formula for $R_3(\delta v)$ we have

(37) $|R_3(\delta v)| \le C_{12}\|\delta v\|_{E^N},$

where $C_{12}$ is a constant independent of $\delta v$. From (36) and (37), using the function $H(u,\Theta,v)$, we have

(38) $\displaystyle\delta\Phi(v) = \Big\langle -\frac{\partial H(u,\Theta,v)}{\partial v},\ \delta v\Big\rangle_{E^N} + O(\|\delta v\|_{E^N}),$


which shows the differentiability of Φ(v) and also gives the gradient formulae for Φ(v). Theorem 4.1 is proved.
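Formula (31) is what makes the penalty approach computationally usable: each gradient evaluation of $\Phi$ requires one solve of the state problem (1)–(3) and one solve of the adjoint problem (26)–(28), after which $\partial\Phi/\partial v = -\partial H/\partial v$ is just an $N$-dimensional vector. The following Python sketch outlines a projected-gradient loop over the ball $V = \{\|v\|_{E^N} \le R\}$ with an increasing penalty weight $A_k$. The helpers `solve_state`, `solve_adjoint` and `grad_H_v` are hypothetical placeholders for discretizations of (1)–(3), (26)–(28) and the $v$-derivative of (30); the step-size rule is one simple choice, not the authors' algorithm.

```python
import numpy as np

def project_to_ball(v, R):
    """Project v onto V = {v in E^N : ||v|| <= R}."""
    norm = np.linalg.norm(v)
    return v if norm <= R else (R / norm) * v

def minimize_Phi(v0, R, solve_state, solve_adjoint, grad_H_v,
                 A_list=(1e1, 1e2, 1e3), steps=50, lr=1e-2):
    """Projected gradient descent on Phi(v) using the adjoint gradient (31).

    solve_state(v)             -> state u on a grid (problem (1)-(3))
    solve_adjoint(u, v, A_k)   -> adjoint Theta on the grid (problem (26)-(28))
    grad_H_v(u, Theta, v, A_k) -> the vector dH/dv from (30), including the -2*alpha*(v-omega) part
    """
    v = np.array(v0, dtype=float)
    for A_k in A_list:                     # outer loop: increase the penalty weight A_k
        for _ in range(steps):             # inner loop: projected gradient steps
            u = solve_state(v)
            Theta = solve_adjoint(u, v, A_k)
            grad_Phi = -grad_H_v(u, Theta, v, A_k)   # formula (31)
            v = project_to_ball(v - lr * grad_Phi, R)
    return v
```

A useful sanity check (again, not from the paper) is to compare the components of $-\partial H/\partial v$ against finite-difference quotients $[\Phi(v+\varepsilon e_i) - \Phi(v)]/\varepsilon$ for a few coordinate directions $e_i$.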

References

[1] M. Bergounioux and F. Tröltzsch, Optimality conditions and generalized bang-bang principle for a state constrained semilinear parabolic problem, Numer. Funct. Anal. Optim. 17 (1996), 517–536.
[2] E. R. Pinch, Optimal Control and Calculus of Variations, Oxford Sci. Publ., London, 1993.
[3] M. H. Farag, Application of the exterior penalty method for solving constrained optimal control problem, Math. Phys. Soc. Egypt, 1995.
[4] M. Goebel, On existence of optimal control, Math. Nachr. 93 (1979), 67–73.
[5] W. Krabs, Optimization and Approximation, Wiley, New York, 1979.
[6] O. A. Ladyzhenskaya, V. A. Solonnikov and N. N. Ural'tseva, Linear and Quasilinear Parabolic Equations, Nauka, Moscow, 1976 (in Russian).
[7] O. A. Ladyzhenskaya, Boundary Value Problems of Mathematical Physics, Nauka, Moscow, 1973 (in Russian).
[8] J.-L. Lions, Optimal Control of Systems Governed by Partial Differential Equations, Mir, Moscow, 1972 (in Russian).
[9] K. A. Lourie, Optimal Control in Problems of Mathematical Physics, Nauka, Moscow, 1975 (in Russian).
[10] V. P. Mikhailov, Partial Differential Equations, Nauka, Moscow, 1983 (in Russian).
[11] J. P. Raymond, Nonlinear boundary control of semilinear parabolic equations with pointwise state constraints, Discrete Contin. Dynam. Systems 3 (1997), 341–370.
[12] J. P. Raymond and F. Tröltzsch, Second order sufficient optimality conditions for nonlinear parabolic control problems with constraints, preprint, Fak. Math., Tech. Univ. Chemnitz, 1998.
[13] A. N. Tikhonov and V. Ya. Arsenin, Methods for the Solution of Ill-Posed Problems, Nauka, Moscow, 1974 (in Russian).
[14] F. Tröltzsch, On the Lagrange–Newton–SQP method for the optimal control of semilinear parabolic equations, preprint, Fak. Math., Tech. Univ. Chemnitz, 1998.
[15] T. Tsachev, Optimal control of linear parabolic equation: The constrained right-hand side as control function, Numer. Funct. Anal. Optim. 13 (1992), 369–380.
[16] A.-Q. Xing, The exact penalty function method in constrained optimal control problems, J. Math. Anal. Appl. 186 (1994), 514–522.

S. H. Farag
Mathematics Department
Faculty of Science, Minia University
Minia, Egypt

M. H. Farag
Mathematics Department
Faculty of Education
P.O. Box 14, Ibri 516, Sultanate of Oman
E-mail: farag5358@yahoo.com

Received on 29.4.1999;

revised version on 11.10.1999
