
M. H. FARAG (Minia)

THE GRADIENT PROJECTION METHOD FOR SOLVING AN OPTIMAL CONTROL PROBLEM

Abstract. An optimal control problem described by a parabolic equation is considered. The gradient projection method is applied to solve it, and the convergence of the projection algorithm is investigated.

1991 Mathematics Subject Classification: 49J20, 49M07, 65L10, 65K10.

Key words and phrases: optimal control, gradient methods, boundary value problems, distributed parameter systems.

1. Introduction. The theory of optimal control of systems with distributed parameters is one of the leading branches of optimization theory and has wide applications in various practical fields. Optimal control problems of this type have been studied by many authors [1, 2, 6, 7]; it has been shown [4, 9, 10] that such problems arise in many physical applications such as heat conduction, filtration and diffusion.

2. Statement of the problem and definitions. Let it be required to minimize the function

(1)  f(v) = ∫_0^l |u(x,T;v) − g(x)|^2 dx + β ∫_0^T |v_1(t)|^2 dt,

provided that u(x, t; v) is a solution of the boundary value problem

(2)  u_t = a^2 u_{xx} + B(x,t)u + v_2(x,t),   (x,t) ∈ Ω = [0 < x < l, 0 < t ≤ T],
(3)  u(x,0) = φ(x),   0 ≤ x ≤ l,
(4)  u_x(0,t) = 0,   u_x(l,t) = ν[v_1(t) − u(l,t)],   0 < t ≤ T,

where a^2, l, ν, T, β are positive numbers, v_1(t) is the temperature of the external medium, v_2(x,t) is the density of heat sources, and the control v is in



V = {v = (v_1(t), v_2(x,t)) : v_1(t) ∈ L_2[0,T], v_{1min} ≤ v_1(t) ≤ v_{1max}; v_2(x,t) ∈ L_2(Ω), ∫_0^l ∫_0^T |v_2(x,t)|^2 dx dt ≤ R^2},

where v_{1min} < v_{1max}; R > 0 is a given number; g(x), φ(x) ∈ L_2[0,l] and B(x,t) ∈ L_2(Ω) are given functions, and H = L_2[0,T] × L_2(Ω).
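For concreteness, evaluating f(v) numerically amounts to one forward solve of the state problem (2)–(4) followed by the two integrals in (1). The sketch below is not part of the original paper: it uses an explicit finite-difference scheme on a uniform grid, and the grid sizes, the ghost-point treatment of the boundary conditions, the stability check and the callables `v1`, `v2`, `B`, `phi`, `g` are illustrative assumptions.

```python
import numpy as np

def solve_state(v1, v2, a2, nu, B, phi, l, T, nx=101, nt=2000):
    """Explicit finite-difference sketch of the state problem (2)-(4):
    u_t = a2*u_xx + B*u + v2,  u(x,0) = phi(x),
    u_x(0,t) = 0,  u_x(l,t) = nu*(v1(t) - u(l,t))."""
    dx, dt = l / (nx - 1), T / nt
    assert a2 * dt / dx**2 <= 0.5, "explicit scheme: refine the time grid"
    x = np.linspace(0.0, l, nx)
    u = phi(x).astype(float)
    for n in range(nt):
        t = n * dt
        un = u.copy()
        # interior update of u_t = a2*u_xx + B*u + v2
        un[1:-1] = u[1:-1] + dt * (a2 * (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
                                   + B(x[1:-1], t) * u[1:-1] + v2(x[1:-1], t))
        un[0] = un[1]                                        # u_x(0,t) = 0
        un[-1] = (un[-2] + dx * nu * v1(t)) / (1 + dx * nu)  # Robin condition at x = l
        u = un
    return x, u                                              # grid and u(x, T)

def cost(v1, v2, g, beta, a2, nu, B, phi, l, T, nx=101, nt=2000):
    """Discrete analogue of the functional f(v) in (1)."""
    x, uT = solve_state(v1, v2, a2, nu, B, phi, l, T, nx, nt)
    t = np.linspace(0.0, T, nt + 1)
    return np.trapz((uT - g(x))**2, x) + beta * np.trapz(v1(t)**2, t)
```

Here `v1`, `v2`, `B`, `phi`, `g` are assumed to be vectorized functions supplied by the user; any stable discretization of (2)–(4) could be substituted.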

Definition 1. The problem of finding a function u = u(x,t;v) satisfying conditions (2)–(4) for a given v ∈ V is called the reduced problem.

Definition 2. The solution of the reduced problem (2)–(4) corresponding to v ∈ V is a function u(x,t) ∈ H^{1,0}(Ω) satisfying the integral identity

(5)  ∫_0^l ∫_0^T [−uη_t + a^2 u_x η_x + B(x,t)uη − v_2(x,t)η] dx dt
     = ∫_0^l φ(x)η(x,0) dx + a^2 ν ∫_0^T [v_1(t) − u(l,t)]η(l,t) dt

for all η = η(x,t) ∈ H^1(Ω) with η(x,T) = 0.

Equations (1)–(4) are the mathematical formulation of the optimal control problem for a linear parabolic equation with controls in the boundary conditions and in the right-hand side of equation (2). Optimal control problems for linear and nonlinear parabolic equations have been widely considered in the literature (see for instance [4, 8, 18]); they were studied by Madatov [11] and Mokrane [12], where the existence, uniqueness and regularity of the solution were proved. In addition, Farag [3] and Phillipson and Mitter [13] have obtained numerical results for the heat equation with strong nonlinearity.

3. The gradient of the function. The principal result of this section is Theorem 3.1. Its proof is prepared by two lemmas:

Lemma 3.1. Let δu(x, t) be the generalized solution of the boundary value problem

(6)  δu_t − a^2 δu_{xx} − B(x,t)δu − δv_2(x,t) = 0,   (x,t) ∈ Ω,
(7)  δu(x,0) = 0,   0 ≤ x ≤ l,
(8)  δu_x(0,t) = 0,   δu_x(l,t) = ν[δv_1(t) − δu(l,t)],   0 < t ≤ T.

Then

(9)  ∫_0^l |δu(x,T)|^2 dx ≤ C [ ∫_0^T |δv_1(t)|^2 dt + ∫_0^l ∫_0^T |δv_2(x,t)|^2 dx dt ] = C ∥δv∥_H^2,

where C > 0 is a constant independent of the choice of δv ∈ V.


Proof. We multiply (6) by δu and integrate over the rectangle Ω. Using the conditions (7) and (8), we obtain

(10)  (1/2) ∫_0^l |δu(x,T)|^2 dx + a^2 ν ∫_0^T |δu(l,t)|^2 dt + a^2 ∫_0^l ∫_0^T |δu_x|^2 dx dt
      = a^2 ν ∫_0^T δu(l,t) δv_1(t) dt + ∫_0^l ∫_0^T δu δv_2 dx dt.

Applying the inequality ab ≤ (ε/2)a^2 + (1/(2ε))b^2, ε > 0, we obtain

(11)  (1/2) ∫_0^l |δu(x,T)|^2 dx + a^2 ν ∫_0^T |δu(l,t)|^2 dt + a^2 ∫_0^l ∫_0^T |δu_x|^2 dx dt
      ≤ (a^2 νε_1/2) ∫_0^T |δu(l,t)|^2 dt + (a^2 ν/(2ε_1)) ∫_0^T |δv_1(t)|^2 dt
      + (ε_2/2) ∫_0^l ∫_0^T |δu(x,t)|^2 dx dt + (1/(2ε_2)) ∫_0^l ∫_0^T |δv_2(x,t)|^2 dx dt.

Since

|δu(x,t)|^2 = | ∫_x^l δu_x(θ,t) dθ − δu(l,t) |^2 ≤ 2 | ∫_x^l δu_x(θ,t) dθ |^2 + 2|δu(l,t)|^2
            ≤ 2l ∫_0^l |δu_x(x,t)|^2 dx + 2|δu(l,t)|^2,

we have

(12)  ∫_0^l ∫_0^T |δu(x,t)|^2 dx dt ≤ 2l^2 ∫_0^l ∫_0^T |δu_x|^2 dx dt + 2l ∫_0^T |δu(l,t)|^2 dt.

From (11), (12) and by reducing these terms we obtain

(13)  (1/2) ∫_0^l |δu(x,T)|^2 dx + (a^2 ν − a^2 νε_1/2 − lε_2) ∫_0^T |δu(l,t)|^2 dt
      + (a^2 − l^2 ε_2) ∫_0^l ∫_0^T |δu_x|^2 dx dt
      ≤ (a^2 ν/(2ε_1)) ∫_0^T |δv_1(t)|^2 dt + (1/(2ε_2)) ∫_0^l ∫_0^T |δv_2(x,t)|^2 dx dt.

Letting ε_2 = a^2 ε_1 and 0 < ε_1 < min[1/l^2; 2ν/(ν + 2l)], from (13) we obtain (9) with C = max[a^2 ν/ε_1; 1/(a^2 ε_1)]. The lemma is proved.


Lemma 3.2. Let λ(x, t; v) = λ(x, t) be the generalized solution of the conjugate boundary value problem

(14)  λ_t = −a^2 λ_{xx} − B(x,t)λ,   (x,t) ∈ Ω,
(15)  λ(x,T) = 2[u(x,T;v) − g(x)],   0 ≤ x ≤ l,
(16)  λ_x(0,t) = 0,   λ_x(l,t) = −νλ(l,t),   0 < t < T.

Then

(17)  2 ∫_0^l [u(x,T;v) − g(x)] δu(x,T) dx
      = ∫_0^T a^2 ν λ(l,t;v) δv_1(t) dt + ∫_0^l ∫_0^T λ(x,t;v) δv_2(x,t) dx dt.

Proof. Applying the conditions (6)–(8) and (14)–(16), we obtain

(18)  2 ∫_0^l [u(x,T;v) − g(x)] δu(x,T) dx = ∫_0^l λ(x,T) δu(x,T) dx
      = ∫_0^l ∫_0^T [λ_t δu + λ δu_t] dx dt
      = ∫_0^l ∫_0^T [−a^2 λ_{xx} δu + a^2 λ δu_{xx} + λ δv_2] dx dt
      = ∫_0^T a^2 ν λ(l,t;v) δv_1(t) dt + ∫_0^l ∫_0^T λ(x,t;v) δv_2(x,t) dx dt.

The equality (17) is thus obtained. The lemma is proved.

Definition 3. The solution of the conjugate boundary value problem (14)–(16) corresponding to v ∈ V is a function λ(x,t) ∈ H^{1,0}(Ω) satisfying the integral identity

(19)  ∫_0^l ∫_0^T [−λξ_t + a^2 λ_{xx} ξ + B(x,t)λξ] dx dt = −2 ∫_0^l [u(x,T;v) − g(x)] ξ(x,T) dx

for all ξ = ξ(x,t) ∈ H^1(Ω) with ξ(x,0) = 0.


Theorem 3.1. The function (1) is differentiable in H and its gradient at v ∈ V is given by

(20)  f_v(v) = ∂f/∂v = −∂ℜ/∂v ≡ (−∂ℜ/∂v_1, −∂ℜ/∂v_2),

where ℜ is defined by

ℜ(x, t, λ, v_1, v_2) ≡ −[a^2 ν v_1 λ(l,t;v) + βv_1^2 + v_2 λ(x,t;v)].

Proof. Consider the increment of the function (1):

(21)  δf(v) = f(v + δv) − f(v)
      = 2 ∫_0^l [u(x,T;v) − g(x)] δu(x,T) dx + 2β ∫_0^T v_1(t) δv_1(t) dt
      + ∫_0^l |δu(x,T)|^2 dx + β ∫_0^T |δv_1(t)|^2 dt,

where v ∈ V, v + δv ∈ V, δu(x,t) ≡ u(x,t;v+δv) − u(x,t;v), u ≡ u(x,t;v).

By substituting equality (17) and estimate (9) in (21), it follows that the function (1) is differentiable in H and its gradient is given by the expression (20). The theorem is proved.
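Theorem 3.1 makes the gradient computable at the price of a single backward solve of the adjoint problem (14)–(16): reading (17) and (21) together, the components can be taken as f_{v_1} = a^2 ν λ(l,t;v) + 2βv_1(t) and f_{v_2} = λ(x,t;v). The following is a minimal sketch under the same explicit finite-difference discretization assumed in the earlier snippet; the marching scheme, the grid parameters and this explicit reading of (20) are our assumptions, not formulas stated verbatim in the paper.

```python
import numpy as np

def solve_adjoint(uT, g, a2, nu, B, l, T, nx, nt):
    """Backward-in-time explicit sketch of the adjoint problem (14)-(16)."""
    dx, dt = l / (nx - 1), T / nt
    x = np.linspace(0.0, l, nx)
    lam = 2.0 * (uT - g(x))                  # terminal condition (15)
    hist = np.empty((nt + 1, nx))
    hist[nt] = lam
    for n in range(nt, 0, -1):
        t = n * dt
        ln = lam.copy()
        # lambda_t = -a2*lambda_xx - B*lambda, marched from t = T down to t = 0
        ln[1:-1] = lam[1:-1] + dt * (a2 * (lam[2:] - 2 * lam[1:-1] + lam[:-2]) / dx**2
                                     + B(x[1:-1], t) * lam[1:-1])
        ln[0] = ln[1]                         # lambda_x(0,t) = 0
        ln[-1] = ln[-2] / (1 + dx * nu)       # lambda_x(l,t) = -nu*lambda(l,t)
        lam = ln
        hist[n - 1] = lam
    return hist                               # lambda on the full space-time grid

def gradient(v1_vals, lam, a2, nu, beta):
    """Gradient components read off from (20)-(21) (our interpretation)."""
    grad_v1 = a2 * nu * lam[:, -1] + 2.0 * beta * v1_vals   # w.r.t. v1(t)
    grad_v2 = lam                                           # w.r.t. v2(x,t)
    return grad_v1, grad_v2
```

Here `uT` is the terminal state u(x,T;v) produced by the forward solver and `v1_vals` is the discretized control v_1 on the time grid.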

4. The gradient projection method. One of the first authors to use projection methods for solving constrained problems was J. B. Rosen [16, 17]. Many projection algorithms were described by Polak [14] and Pshenichnyĭ and Danilin [15]. Having the gradient of the function (1), we can use the gradient projection method for solving the problem (1)–(4). According to this method we construct a sequence {v^k = (v_1^k(t), v_2^k(x,t))} by setting

(22)  v_1^{k+1} =
        v_1^k − γ_k f_v(v_1^k)   if v_{1min} ≤ Z_1(v_1^k) ≤ v_{1max},
        v_{1min}                 if Z_1(v_1^k) < v_{1min},
        v_{1max}                 if Z_1(v_1^k) > v_{1max},

(23)  v_2^{k+1} =
        v_2^k − γ_k f_v(v_2^k)                       if Z_2(v_2^k) ≤ R^2,
        R[v_2^k − γ_k f_v(v_2^k)] / √(Z_2(v_2^k))    if Z_2(v_2^k) > R^2,

where Z_1(v_1^k) = v_1^k − γ_k f_v(v_1^k) and Z_2(v_2^k) = ∫_0^l ∫_0^T |v_2^k − γ_k f_v(v_2^k)|^2 dx dt.
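In other words, (22) clips the trial point for v_1 pointwise onto [v_{1min}, v_{1max}], while (23) projects the trial point for v_2 radially onto the L_2(Ω) ball of radius R. A short sketch, using the same discrete representation of the controls as the earlier snippets (the grid spacings `dx`, `dt` used to approximate the double integral Z_2 are assumptions):

```python
import numpy as np

def project_step(v1, v2, grad_v1, grad_v2, gamma, v1min, v1max, R, dx, dt):
    """One gradient-projection update in the spirit of (22)-(23)."""
    # (22): clip the trial point for v1 onto the box [v1min, v1max]
    v1_next = np.clip(v1 - gamma * grad_v1, v1min, v1max)

    # (23): project the trial point for v2 onto the L2(Omega) ball of radius R
    z2 = v2 - gamma * grad_v2
    Z2 = np.sum(z2**2) * dx * dt            # discrete double integral Z_2
    v2_next = z2 if Z2 <= R**2 else (R / np.sqrt(Z2)) * z2
    return v1_next, v2_next
```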

The values γ_k ≥ 0 in (22) and (23) may be selected in one of the following ways:

(i) γ_k is defined by the exact line search

(24)  γ_k = arg min_{γ ≥ 0} f(v^k − γ f_v(v^k)).


(ii) If the gradient f_v(v) satisfies the condition

(25)  ∥f_v(v) − f_v(w)∥_H ≤ L∥v − w∥_H

for any v, w ∈ V, where L = const > 0, then γ_k may be found from the conditions

(26)  0 < c_1 ≤ γ_k ≤ 2/(L + 2c_2),

where c_1, c_2 > 0 are parameters chosen in the numerical implementation.

(iii) The parameter γ_k ∈ [0,1] can be chosen from the monotonicity condition f(v^{k+1}) < f(v^k).

(iv) γ_k can be chosen from the condition

(27)  f(v^k) − f(v^k − γ_k f_v(v^k)) ≥ εγ_k ∥f_v(v^k)∥^2,   ε > 0.
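Rule (iv) lends itself to a simple backtracking search: start from a trial γ and halve it until (27) is satisfied. The helper below is a sketch rather than the paper's procedure; the starting value, the halving factor, the cap on the number of trials and the callable `cost_of` (a discrete evaluation of f such as the `cost` function sketched earlier) are assumptions.

```python
import numpy as np

def choose_gamma_rule_iv(cost_of, v, grad, eps=1e-4, gamma0=1.0, max_halvings=30):
    """Pick gamma_k by halving until condition (27) holds:
    f(v) - f(v - gamma*grad) >= eps * gamma * ||grad||^2."""
    f_v = cost_of(v)
    grad_norm2 = float(np.dot(grad, grad))
    gamma = gamma0
    for _ in range(max_halvings):
        if f_v - cost_of(v - gamma * grad) >= eps * gamma * grad_norm2:
            return gamma
        gamma *= 0.5
    return gamma        # smallest trial value if (27) was never met
```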

Theorem 4.1. Let V be a closed convex subset of H, and let f ∈ C^{1,1}(V) with f_* = inf_V f(v) > −∞. Let {v^k} be the sequence of controls generated by the projection algorithm formulated in (22)–(27) for an arbitrary initial approximation v^0 ∈ V. Then the sequence {f(v^k)} decreases and lim_{k→∞} ∥v^k − v^{k+1}∥ = 0. Moreover, if f is convex in H and the set M(v^0) = {v ∈ V : f(v) ≤ f(v^0)} is bounded, then the sequence {v^k} minimizes the function f(v) in V and converges to a minimizer v_* weakly in H, and it also satisfies the estimate

(28)  0 ≤ f(v^k) − f_* ≤ c_3/k,   k = 1, 2, ...;   c_3 = const ≥ 0.

If f is also strongly convex in V, then {v^k} converges to the unique minimum control v_* such that

(29)  ∥v^k − v_*∥^2 ≤ c_4/k,   k = 1, 2, ...;   c_4 = const ≥ 0.

The proof directly follows from that of Theorem 5.2.1 of [19].
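For orientation, the whole method can be assembled from the pieces sketched above into one loop; the stopping test mirrors the property lim_{k→∞} ∥v^k − v^{k+1}∥ = 0 in Theorem 4.1. The callables `eval_grad`, `project` and `choose_gamma` are assumed wrappers around the earlier snippets, not functions from the paper.

```python
import numpy as np

def gradient_projection(v1, v2, eval_grad, project, choose_gamma,
                        max_iter=200, tol=1e-8):
    """Outline of the iteration (22)-(23) with a step-size rule from (24)-(27)."""
    for _ in range(max_iter):
        g1, g2 = eval_grad(v1, v2)                        # adjoint-based gradient (20)
        gamma = choose_gamma(v1, v2, g1, g2)              # any of the rules (i)-(iv)
        v1_new, v2_new = project(v1, v2, g1, g2, gamma)   # projection step (22)-(23)
        move = np.sqrt(np.sum((v1_new - v1)**2) + np.sum((v2_new - v2)**2))
        v1, v2 = v1_new, v2_new
        if move < tol:                                    # ||v^k - v^(k+1)|| small
            break
    return v1, v2
```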

References

[1] A. G. Butkovskiĭ, Optimal Control Theory for Systems with Distributed Parameters, Nauka, Moscow, 1965 (in Russian).
[2] Yu. V. Egorov, On some optimal control problems, Zh. Vychisl. Mat. i Mat. Fiz. 3 (1963), 887–904 (in Russian).
[3] M. H. Farag, A numerical solution to a nonlinear problem of the identification of the characteristics of a mathematical model of heat exchange, in: Mathematical Modeling and Automated Systems, A. D. Iskenderov (ed.), Bakin. Gos. Univ., Baku, 1990, 23–30 (in Russian).
[4] M. H. Farag and S. H. Farag, An existence and uniqueness theorem for one optimal control problem, Period. Math. Hungar. 30 (1995), 61–65.
[5] A. Friedman, Partial Differential Equations of Parabolic Type, Prentice-Hall, Englewood Cliffs, N.J., 1964.
[6] A. D. Iskenderov, On a certain inverse problem for quasilinear parabolic equations, Differentsial'nye Uravneniya 10 (1974), 890–898 (in Russian).
[7] A. D. Iskenderov and R. K. Tagiev, Optimization problems with controls in coefficients of parabolic equations, ibid. 19 (1983), 1324–1334 (in Russian).
[8] J.-L. Lions, Control problems in systems described by partial differential equations, in: Mathematical Theory of Control, A. V. Balakrishnan and L. W. Neustadt (eds.), Academic Press, New York and London, 1969, 251–271.
[9] —, Optimal Control of Systems Described by Partial Differential Equations, Mir, Moscow, 1972 (in Russian).
[10] K. A. Lurie, Optimal Control in Problems of Mathematical Physics, Nauka, Moscow, 1975 (in Russian).
[11] M. D. Madatov, Regularization of one class of optimal control problems, in: Approximate Methods and Computer, A. D. Iskenderov (ed.), Bakin. Gos. Univ., Baku, 1982, 78–80 (in Russian).
[12] A. Mokrane, An existence result via penalty method for some nonlinear parabolic unilateral problems, Boll. Un. Mat. Ital. B 8 (1994), 405–417.
[13] G. A. Phillipson and S. K. Mitter, Numerical solution of a distributed identification problem via a direct method, in: Computing Methods in Optimization Problems—2, L. A. Zadeh, L. W. Neustadt and A. V. Balakrishnan (eds.), Academic Press, New York, 1969, 305–315.
[14] E. Polak, Computational Methods in Optimization, Academic Press, New York, 1971.
[15] B. N. Pshenichnyĭ and Yu. M. Danilin, Numerical Methods in Extremal Problems, Mir, Moscow, 1982.
[16] J. B. Rosen, The gradient projection method for nonlinear programming. Part I: Linear constraints, SIAM J. Appl. Math. 8 (1960), 181–217.
[17] —, The gradient projection method for nonlinear programming. Part II: Nonlinear constraints, ibid. 9 (1961), 514–532.
[18] Ts. Tsachev, Optimal control of linear parabolic equation: The constrained right-hand side as control function, Numer. Funct. Anal. Optim. 13 (1992), 369–380.
[19] F. P. Vasil'ev, Numerical Methods for Solving Extremal Problems, Nauka, Moscow, 1988 (in Russian).

M. H. Farag

Department of Mathematics, Faculty of Science
Minia University
Minia, Egypt

Received on 18.8.1995;

revised version on 16.2.1996
