SUFFICIENT OPTIMALITY CONDITIONS FOR MULTIVARIABLE CONTROL PROBLEMS

Andrzej Nowakowski
Faculty of Mathematics
University of Łódź
Banacha 22, 90–238 Łódź, Poland
e-mail: annowako@math.uni.lodz.pl

Abstract

We study optimal control problems for partial differential equations (focusing on multidimensional equations) with control functions in the Dirichlet boundary conditions under pointwise control constraints; assuming only weak hypotheses, we also admit pointwise state constraints.

Keywords: sufficient optimality condition, wave equations, parabolic equation, elliptic equation, Dirichlet boundary controls, dual dynamic programming.

2000 Mathematics Subject Classification: 49K20, 49J20, Secondary: 93C20, 35L20.

We study optimal control problems (P) for partial differential equations with controls in the state equation and in Dirichlet boundary conditions:

minimize J(x, u, v) = ∫_{[0,T]×Ω} L(t, z, x(t, z), u(t, z)) dt dz + ∫_Σ h(t, s, v(t, s)) dt ds + ∫_Ω l(x(T, z)) dz

subject to

A(t)x(t, z) = f(t, z, x(t, z), u(t, z))  a.e. on (0, T) × Ω,   (1)

x(0, z) = ϕ(0, z),  x_t(0, z) = ψ(0, z)  on Ω,   (2)

x(t, z) = v(t, z)  on (0, T) × Γ,   (3)

u(t, z) ∈ U  a.e. on (0, T) × Ω,   (4)

v(t, z) ∈ V  on (0, T) × Γ,   (5)

where Ω is a given bounded domain of R^n with boundary Γ = ∂Ω of class C^2, Σ = (0, T) × Γ; U and V are given nonempty sets in R^m and R, with V closed;

L, f : [0, T] × Ω̄ × R × R^m → R, l : R → R, h : [0, T] × Ω̄ × R^m → R and ϕ, ψ : R^{n+1} → R are given functions, ϕ(0, ·) ∈ L^2(Ω), ψ(0, ·) ∈ H^1(Ω); x : [0, T] × Ω → R, x ∈ W^{2,2}((0, T) × Ω) ∩ C([0, T]; L^2(Ω)), and u : [0, T] × Ω → R^m, v : (0, T) × Γ → R are Lebesgue measurable functions with values in suitable sets, and A(t) has one of the following three forms:

A(t) = d/dt + ∆_z  (parabolic case),
A(t) = d^2/dt^2 − ∆_z  (wave case),
A(t) = ∆_z  (elliptic case).

Of course, according to the form of A(t) we need to change in a suitable way the functional J, e.g., in the third case it has the form

J(x, u, v) = ∫_Ω L(z, x(z), u(z)) dz + ∫_{∂Ω} h(s, v(s)) ds

and conditions (1)–(5) become

∆_z x(z) = f(z, x(z), u(z))  a.e. on Ω,
x(z) = v(z)  on Γ,
u(z) ∈ U  a.e. on Ω,
v(z) ∈ V  on Γ,

and suitable spaces in which we consider such problems; in the third case it is the space W^{2,2}(Ω). We assume that the functions L, f, h, l are lower semicontinuous in their domains of definition. Assuming only the lower semicontinuity of these functions, we admit that the state x may satisfy some pointwise state constraints, e.g., that x(t, z) ∈ C for a.e. (t, z) ∈ [0, T] × Ω, with C a closed set in R. We call a trio x(t, z), u(t, z), v(t, s) admissible if it satisfies (1)–(5) and L(t, z, x(t, z), u(t, z)), h(t, s, v(t, s)) are summable; then the corresponding trajectory x(t, z) is said to be admissible.
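For orientation, the elliptic variant of (P) can be discretized in one space dimension. The following sketch is purely illustrative and uses data of our own choosing (f(z, x, u) = u, L = x^2 + u^2, h(v) = v^2, Ω = (0, 1)), not the paper's general setting:

```python
import numpy as np

# Toy 1D analogue of the elliptic case of problem (P):
#   minimize J = int_0^1 (x^2 + u^2) dz + v0^2 + v1^2
#   subject to x''(z) = u(z) on (0,1), x(0) = v0, x(1) = v1
# (illustrative data; the paper treats general L, f, h on Omega in R^n).

def solve_state(u, v0, v1):
    """Finite-difference solution of x'' = u with Dirichlet data v0, v1."""
    n = len(u)                     # number of interior grid points
    h = 1.0 / (n + 1)
    # discrete Laplacian: x'' ~ (x_{i-1} - 2 x_i + x_{i+1}) / h^2
    A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h**2
    b = u.astype(float).copy()
    b[0] -= v0 / h**2              # the boundary control enters the rhs
    b[-1] -= v1 / h**2
    return np.linalg.solve(A, b)

def cost(u, v0, v1):
    h = 1.0 / (len(u) + 1)
    x = solve_state(u, v0, v1)
    return h * np.sum(x**2 + u**2) + v0**2 + v1**2

# with u = 0 the state is harmonic, hence linear in z; for v0 = v1 = 1, x = 1
x = solve_state(np.zeros(50), 1.0, 1.0)
J = cost(np.zeros(50), 1.0, 1.0)
```

Here the boundary values v0, v1 play the role of the Dirichlet boundary control v, and pointwise constraints u ∈ U, v ∈ V could be imposed by restricting the admissible arguments to the given sets.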

It is well known that optimal control problems with pointwise state constraints belong to one of the most challenging and difficult classes in control theory. Quite recently, such problems for parabolic equations have attracted growing interest; see [1, 2, 7, 9, 16, 17] and the references therein.

Much less has been done for hyperbolic systems. Some control problems for the wave equation in the presence of state constraints are considered in [8, 18, 19, 23, 24] for distributed controls. There are only a few results [18, 19] on boundary control problems for the wave equation and/or for other partial differential equations of hyperbolic type. Note that there are essential differences between parabolic and hyperbolic systems. Generally, hyperbolic equations exhibit less regularity. This is why in this paper we assume that system (1)–(5) admits at least one solution belonging to W^{2,2}((0, T) × Ω) ∩ C([0, T]; L^2(Ω)).

The aim of the paper is to present sufficient optimality conditions for problem (P) directly in terms of dynamic programming conditions. In the literature, there is no work studying problem (P) directly by a dynamic programming method. The only results known to the author, for the parabolic (also abstract) case (see, e.g., [3]–[11], [15, 20] and the literature therein), treat problem (P) first as an abstract problem with an abstract evolution equation (1) and then derive from abstract Hamilton-Jacobi equations suitable sufficient optimality conditions for problem (P). We refer the reader to [8, 14] and their references for more discussion of the important differences between parabolic and hyperbolic systems.

The fact that we take into account the boundary control (5) makes the problem essentially more difficult. In fact, we need to develop a new duality, whose ideas are described in the next section. We propose an almost direct method to study (P) by a dual dynamic programming approach, following the method described in [21] for the one-dimensional case and in [12, 13] for the multidimensional case. We move all notions of dynamic programming to a dual space (the space of multipliers) and then develop a dual dynamic approach together with a dual Hamilton-Jacobi equation and, as a consequence, sufficient optimality conditions for (P). We also define an optimal dual feedback control and formulate sufficient conditions for optimality in terms of it. Such an approach allows us to weaken significantly the assumptions on the data.

1. A Dual Dynamic Programming

In this section, we describe the idea of a dual dynamic approach to optimal control problems governed by hyperbolic equations. Let us recall what dynamic programming means. We have an initial condition (t_0, x_0(t_0, ·)), z ∈ Ω, and assume that for it we have an optimal solution (x̄, ū, v̄); then by necessary optimality conditions (see, e.g., [18]) there exists a function p̄(t, z) = (y^0, y(t, z)) on (0, T) × Ω (a conjugate function), being a solution to the corresponding adjoint system. This p = (y^0, y) plays the role of the multipliers from the classical Lagrange problem with constraints (with the multiplier y^0 standing by the functional and y corresponding to the constraint). If we perturb (t_0, x_0), then, assuming that an optimal solution for each perturbed problem exists, we also have a conjugate function corresponding to it. Therefore, making perturbations of our initial conditions, we obtain two sets of functions: optimal trajectories x̄ and the conjugate functions p̄ corresponding to them. The graphs of those sets of functions cover some set, say X, in the state space (t, z, x) (in the classical calculus of variations it is named the "field of extremals") and some set, say P, in the conjugate space (t, z, p) (in classical mechanics it is named the "space of momenta"). In the classical dynamic programming approach (see, e.g., [3]) we explore the state space (t, z, x), i.e., the set X; in the dual dynamic programming approach (see [21] for the one-dimensional case and [12] for the multidimensional case) we explore the conjugate space (the dual space) (t, z, p), i.e., the set P. It is worth noting that in elliptic control optimization problems we have no possibility to perturb the problems; however, dual dynamic programming is still applicable (see [13]). It is natural that if we want to explore the dual space (t, z, p), then we need a mapping between the set P and the set X, P ∋ (t, z, p) → (t, z, x̃(t, z, p)) ∈ X, in order to be able, after considerations in P, to formulate conditions for optimality in our original problem as well as for the optimal solution x̄.
Of course, such a mapping should have the property that for each admissible function x(t, z) lying in X there has to be a function p(t, z) lying in P such that x(t, z) = x̃(t, z, p(t, z)). Hence, we perform all our investigations in the dual space (t, z, p), i.e., most of our notions concerning dynamic programming are defined in the dual space, and thus also the dynamic programming equation, which now becomes a dual dynamic programming equation.

Therefore, let P ⊂ R^{n+3} be a set of variables (t, z, p) = (t, z, y^0, y), (t, z) ∈ [0, T] × Ω̄, y^0 ≤ 0, y ∈ R. Let x̃ : P → R be such a function of the variables (t, z, p) that for each admissible trajectory x(t, z) there exists a function p(t, z) = (y^0, y(t, z)), p ∈ W^{2,2}([0, T] × Ω̄) ∩ C([0, T]; L^2(Ω)), (t, z, p(t, z)) ∈ P, such that

x(t, z) = x̃(t, z, p(t, z)) for (t, z) ∈ [0, T] × Ω̄.   (6)


Now, let us introduce an auxiliary function V(t, z, p) : P → R of class C^3 such that the following two conditions are satisfied:

V(t, z, p) = y^0 V_{y^0}(t, z, p) + y V_y(t, z, p) = p V_p(t, z, p), for (t, z) ∈ (0, T) × Ω, (t, z, p) ∈ P,   (7)

∇_z V(t, z, p)ν(z) = y^0 ∇_z V_{y^0}(t, z, p)ν(z), for (t, z) ∈ [0, T] × ∂Ω, (t, z, p) ∈ P,   (8)

where ν(·) is the exterior unit normal vector to ∂Ω and ∇_z V(t, z, p) means "∇" of the function z → V(t, z, p). Condition (7) is a generalization of the transversality condition known in classical mechanics as orthogonality of the momentum to the wave front. Condition (8) has the same meaning but is taken on the boundary. Similarly as in classical dynamic programming, define at (t, p(·)), where p(z) = (y^0, y(z)) is any function with p ∈ W^{2,2}(Ω), (t, z, p(z)) ∈ P, a dual value function S_D by the formula

S_D(t, p(·)) := inf { −y^0 ∫_{[t,T]×Ω} L(τ, z, x(τ, z), u(τ, z)) dτ dz − y^0 ∫_Ω l(x(T, z)) dz − y^0 ∫_{[t,T]×∂Ω} h(τ, s, v(τ, s)) dτ ds }   (9)

where the infimum is taken over all admissible trios x(τ, ·), u(τ, ·), v(τ, ·), τ ∈ [t, T ] such that

x(t, z) = x̃(t, z, p(z)) for z ∈ Ω,   (10)

x̃(t, z, p(z)) = v(t, z) for z ∈ ∂Ω,   (11)

i.e., whose trajectories start at (t, x̃(t, ·, p(·))). Then, integrating (7) over Ω, for any function p(z) = (y^0, y(z)), p ∈ W^{2,2}(Ω), (t, z, p(z)) ∈ P, such that x(·, ·) satisfying x(t, z) = x̃(t, z, p(z)) for z ∈ Ω is an admissible trajectory, we also have the equalities:

∫_Ω V(t, z, p(z)) dz + ∫_{∂Ω} ∇_z V(t, z, p(z))ν(z) dz = −∫_Ω y(z) x̃(t, z, p(z)) dz − S_D(t, p(·)),   (12)


with

∫_Ω y^0 V_{y^0}(t, z, p(z)) dz + y^0 ∫_{∂Ω} ∇_z V_{y^0}(t, z, p(z))ν(z) dz = −S_D(t, p(·)),   (13)

and assuming

x̃(t, z, p(z)) = −V_y(t, z, p(z)), for (t, z) ∈ (0, T) × Ω, (t, z, p(z)) ∈ P.

The function V(t, z, p) satisfies the second order partial differential system

A(t)V(t, z, p) + H(t, z, −V_y(t, z, p), p) = 0, (t, z) ∈ (0, T) × Ω, (t, z, p) ∈ P,   (14)

∇_z V(t, z, p(t, z))ν(z) + H_Σ(t, z, p) = 0, (t, z) ∈ (0, T) × ∂Ω, (t, z, p) ∈ P,

where

H(t, z, r, p) = y^0 L(t, z, r, u(t, z, p)) + y f(t, z, r, u(t, z, p)),   (15)

H_Σ(t, z, p) = y^0 h(t, z, v(t, z, p)),   (16)

u(t, z, p), v(t, z, p) are optimal dual feedback controls, respectively, on (0, T) × Ω and (0, T) × ∂Ω, and the dual second order partial differential system of multidimensional dynamic programming (DSPDEMDP) reads

sup {A(t)V(t, z, p) + y^0 L(t, z, −V_y(t, z, p), u) + y f(t, z, −V_y(t, z, p), u) : u ∈ U} = 0, (t, z) ∈ (0, T) × Ω, (t, z, p) ∈ P,

sup {∇_z V(t, z, p)ν(z) + y^0 h(t, z, v) : v ∈ V} = 0, (t, z) ∈ (0, T) × ∂Ω, (t, z, p) ∈ P.   (17)
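For fixed (t, z, p), the interior equation in (17) selects u as a pointwise maximizer of y^0 L(t, z, −V_y, u) + y f(t, z, −V_y, u) over U (the A(t)V term does not depend on u). A sketch with hypothetical data L = u^2, f = r + u, U = [−1, 1], which are our own illustrative choices:

```python
import numpy as np

# Pointwise maximization defining a candidate dual feedback u(t, z, p):
# maximize y0 * L(t, z, r, u) + y * f(t, z, r, u) over u in U,
# with r = -V_y(t, z, p). Hypothetical data: L = u^2, f = r + u.

def dual_feedback(y0, y, r, U_grid):
    """Brute-force argmax of the dual Hamiltonian over a grid of U."""
    vals = y0 * U_grid**2 + y * (r + U_grid)
    return U_grid[np.argmax(vals)]

U_grid = np.linspace(-1.0, 1.0, 2001)
# for y0 = -1, y = 0.5 the analytic maximizer of -u^2 + 0.5 u is u = 0.25
u_star = dual_feedback(-1.0, 0.5, 0.0, U_grid)
```

For nonsmooth or nonconvex data only this pointwise search changes; the structure of (17) is unaffected.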

Remark. We would like to stress that the duality sketched in this section is not duality in the sense of convex optimization. It is a new nonconvex duality, described for the first time in [21] and developed next in [12, 13], for which we do not have the relation sup(D) ≤ inf(P) (D denotes a dual problem, P a primal one). Instead, we have other relations, namely (7) and (12), (13), which are generalizations of transversality conditions from classical mechanics. If we find a solution to (17), then checking the relation (7) for concrete problems is not very difficult.
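Checking (7) for a concrete V amounts, by Euler's theorem, to verifying that V(t, z, ·) is positively homogeneous of degree one in p = (y^0, y). A small numerical illustration with a hypothetical V of that form (our own example):

```python
import numpy as np

# Condition (7), V = y0 * V_{y0} + y * V_y = p . V_p, holds exactly when
# V(t, z, .) is positively homogeneous of degree 1 in p (Euler's theorem).
# Hypothetical example: V(t, z, p) = |p| * (1 + t + z).

def V(t, z, y0, y):
    return np.sqrt(y0**2 + y**2) * (1.0 + t + z)

def grad_p(t, z, y0, y, eps=1e-6):
    """Central finite-difference gradient of V with respect to (y0, y)."""
    dVdy0 = (V(t, z, y0 + eps, y) - V(t, z, y0 - eps, y)) / (2 * eps)
    dVdy = (V(t, z, y0, y + eps) - V(t, z, y0, y - eps)) / (2 * eps)
    return dVdy0, dVdy

t, z, y0, y = 0.3, 0.7, -1.0, 2.0
dVdy0, dVdy = grad_p(t, z, y0, y)
lhs = V(t, z, y0, y)               # V
rhs = y0 * dVdy0 + y * dVdy        # p . V_p
```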

2. A verification theorem

The most important conclusion of dynamic programming is a verification theorem. We present it in a dual form, in accordance with the dual dynamic programming approach described in the previous section.

Theorem 1. Let x̄(t, z), ū(t, z), (t, z) ∈ (0, T) × Ω, v̄(t, z), (t, z) ∈ (0, T) × ∂Ω, be an admissible trio. Assume that there exists a C^3 solution V(t, z, p) of DSPDEMDP (17) on P such that (7), (8) hold. Let further p̄(t, z) = (y^0, ȳ(t, z)), p̄ ∈ W^{2,2}([0, T] × Ω) ∩ C([0, T]; L^2(Ω)), p̄ ∈ L^2([0, T] × ∂Ω), (t, z, p̄(t, z)) ∈ P, be such a function that x̄(t, z) = −V_y(t, z, p̄(t, z)) for (t, z) ∈ (0, T) × Ω. Suppose that V(t, z, p) satisfies the boundary condition, for (T, z, p) ∈ P,

y^0 ∫_Ω (d/dt)V_{y^0}(T, z, p) dz = y^0 ∫_Ω l(−V_y(T, z, p)) dz, in the wave case,   (18)

y^0 ∫_Ω V_{y^0}(T, z, p) dz = y^0 ∫_Ω l(−V_y(T, z, p)) dz, in the parabolic case,

and none of them in the elliptic case.

Moreover, assume that

A(t)V(t, z, p̄(t, z)) + y^0 L(t, z, −V_y(t, z, p̄(t, z)), ū(t, z)) + ȳ(t, z) f(t, z, −V_y(t, z, p̄(t, z)), ū(t, z)) = 0, for (t, z) ∈ (0, T) × Ω,   (19)

∇_z V(t, z, p̄(t, z))ν(z) + y^0 h(t, z, v̄(t, z)) = 0, for (t, z) ∈ (0, T) × ∂Ω.

Then x̄(t, z), ū(t, z), (t, z) ∈ (0, T) × Ω, v̄(t, z), (t, z) ∈ (0, T) × ∂Ω, is an optimal trio relative to all admissible trios x(t, z), u(t, z), (t, z) ∈ (0, T) × Ω, v(t, z), (t, z) ∈ (0, T) × ∂Ω, for which there exists a function p(t, z) = (y^0, y(t, z)), p ∈ W^{2,2}([0, T] × Ω) ∩ C([0, T]; L^2(Ω)), (t, z, p(t, z)) ∈ P, such that x(t, z) = −V_y(t, z, p(t, z)) for (t, z) ∈ (0, T) × Ω and

y(0, z) = ȳ(0, z) for z ∈ Ω,   (20)

and, additionally in the parabolic case,

y(t, z) = ȳ(t, z) for (t, z) ∈ [0, T] × ∂Ω,   (21)

and none of them in the elliptic case.

Proof. Let x(t, z), u(t, z), (t, z) ∈ (0, T) × Ω, v(t, z), (t, z) ∈ (0, T) × ∂Ω, be an admissible trio for which there exists a function p(t, z) = (y^0, y(t, z)), p ∈ W^{2,2}([0, T] × Ω) ∩ C([0, T]; L^2(Ω)), p ∈ L^2([0, T] × ∂Ω), (t, z, p(t, z)) ∈ P, such that x(t, z) = −V_y(t, z, p(t, z)) for (t, z) ∈ (0, T) × Ω, v(t, z) = −V_y(t, z, p(t, z)) for (t, z) ∈ (0, T) × ∂Ω, and (20), (21) are satisfied. From the transversality conditions (7), (8), we obtain that for (t, z) ∈ (0, T) × Ω,

A(t)V(t, z, p(t, z)) = y^0 A(t)V_{y^0}(t, z, p(t, z)) + y(t, z) A(t)V_y(t, z, p(t, z)),   (22)

and for (t, z) ∈ (0, T) × ∂Ω,

∇_z V(t, z, p(t, z))ν(z) = y^0 ∇_z V_{y^0}(t, z, p(t, z))ν(z).

Since x(t, z) = −V_y(t, z, p(t, z)) for (t, z) ∈ (0, T) × Ω̄, (1) shows that for (t, z) ∈ (0, T) × Ω,

A(t)V_y(t, z, p(t, z)) = −f(t, z, −V_y(t, z, p(t, z)), u(t, z)),   (23)

and the boundary condition (3) shows that for (t, z) ∈ (0, T) × ∂Ω,

−V_y(t, z, p(t, z)) = v(t, z).

Now define a function W(t, z, p(t, z)) on P by the following requirement: for (t, z) ∈ (0, T) × Ω,

W(t, z, p(t, z)) = y^0 A(t)V_{y^0}(t, z, p(t, z)) + y^0 L(t, z, −V_y(t, z, p(t, z)), u(t, z)),   (24)

and for (t, z) ∈ (0, T) × ∂Ω,

W(t, z, p(t, z)) = y^0 ∇_z V_{y^0}(t, z, p(t, z))ν(z) + y^0 h(t, z, v(t, z)).


We conclude from (22) that for (t, z) ∈ (0, T) × Ω,

W(t, z, p(t, z)) = A(t)V(t, z, p(t, z)) + y^0 L(t, z, −V_y(t, z, p(t, z)), u(t, z)) + y(t, z) f(t, z, −V_y(t, z, p(t, z)), u(t, z)),   (25)

and for (t, z) ∈ (0, T) × ∂Ω,

W(t, z, p(t, z)) = ∇_z V(t, z, p(t, z))ν(z) + y^0 h(t, z, v(t, z));

hence, by (17) and (25),

W(t, z, p(t, z)) ≤ 0 for (t, z) ∈ (0, T) × Ω,   (26)

W(t, z, p(t, z)) ≤ 0 for (t, z) ∈ (0, T) × ∂Ω,   (27)

and finally, after integrating (26) and applying (24), that

y^0 ∫_{[0,T]×Ω} A(t)V_{y^0}(t, z, p(t, z)) dt dz ≤ −y^0 ∫_{[0,T]×Ω} L(t, z, x(t, z), u(t, z)) dt dz.   (28)

Similarly, on the set (0, T) × ∂Ω we have

y^0 ∫_{[0,T]×∂Ω} ∇_z V_{y^0}(t, z, p(t, z))ν(z) dt dz ≤ −y^0 ∫_{[0,T]×∂Ω} h(t, z, v(t, z)) dt dz.   (29)

According to the definition of A(t) and (18), (20) we have respectively


y^0 ∫_{[0,T]×Ω} A(t)V_{y^0}(t, z, p(t, z)) dt dz =

in the parabolic case:
y^0 ∫_Ω [l(−V_y(T, z, p(T, z))) − V_{y^0}(0, z, y^0, y(0, z))] dz + y^0 ∫_{[0,T]} [ ∫_{∂Ω} ∇_z V_{y^0}(t, z, y^0, y(t, z))ν(z) dz ] dt,

in the wave case:
y^0 ∫_Ω [l(−V_y(T, z, p(T, z))) − (d/dt)V_{y^0}(0, z, y^0, y(0, z))] dz − y^0 ∫_{[0,T]} [ ∫_{∂Ω} ∇_z V_{y^0}(t, z, y^0, y(t, z))ν(z) dz ] dt,

in the elliptic case:
−y^0 ∫_{∂Ω} ∇V_{y^0}(s, p(s))ν(s) ds.   (30)

So by (30) and (29) we get, in the parabolic case:

−y^0 ∫_Ω V_{y^0}(0, z, y^0, y(0, z)) dz ≤ −y^0 ∫_{[0,T]×Ω} L(t, z, x(t, z), u(t, z)) dt dz − y^0 ∫_Ω l(x(T, z)) dz − y^0 ∫_{[0,T]×∂Ω} h(t, z, v(t, z)) dt dz.

In the wave case:

−y^0 ∫_Ω (d/dt)V_{y^0}(0, z, y^0, y(0, z)) dz ≤ −y^0 ∫_{[0,T]×Ω} L(t, z, x(t, z), u(t, z)) dt dz − y^0 ∫_Ω l(x(T, z)) dz − y^0 ∫_{[0,T]×∂Ω} h(t, z, v(t, z)) dt dz.   (31)

In the elliptic case:

0 ≤ −y^0 ∫_Ω L(z, x(z), u(z)) dz − y^0 ∫_{∂Ω} h(z, v(z)) dz.


In the same manner, applying (19) and (25), we have

W(t, z, p̄(t, z)) = 0 for (t, z) ∈ (0, T) × Ω̄,   (32)

and further we have, in the parabolic case:

−y^0 ∫_Ω V_{y^0}(0, z, y^0, ȳ(0, z)) dz = −y^0 ∫_{[0,T]×Ω} L(t, z, x̄(t, z), ū(t, z)) dt dz − y^0 ∫_Ω l(x̄(T, z)) dz − y^0 ∫_{[0,T]×∂Ω} h(t, z, v̄(t, z)) dt dz,

and in the wave case:

−y^0 ∫_Ω (d/dt)V_{y^0}(0, z, y^0, ȳ(0, z)) dz = −y^0 ∫_{[0,T]×Ω} L(t, z, x̄(t, z), ū(t, z)) dt dz − y^0 ∫_Ω l(x̄(T, z)) dz − y^0 ∫_{[0,T]×∂Ω} h(t, z, v̄(t, z)) dt dz,   (33)

and in the elliptic case:

0 = −y^0 ∫_Ω L(z, x̄(z), ū(z)) dz − y^0 ∫_{∂Ω} h(z, v̄(z)) dz.

Combining (31) with (33) gives the assertion of the theorem, e.g., in the wave case:

−y^0 ∫_{[0,T]×Ω} L(t, z, x̄(t, z), ū(t, z)) dt dz − y^0 ∫_Ω l(x̄(T, z)) dz − y^0 ∫_{[0,T]×∂Ω} h(t, z, v̄(t, z)) dt dz

≤ −y^0 ∫_{[0,T]×Ω} L(t, z, x(t, z), u(t, z)) dt dz − y^0 ∫_Ω l(x(T, z)) dz − y^0 ∫_{[0,T]×∂Ω} h(t, z, v(t, z)) dt dz,   (34)

which completes the proof.


3. An optimal dual feedback control

It often occurs in practice that a feedback control is more important for engineers than a value function. It turns out that the dual dynamic programming approach also allows us to investigate a kind of feedback control which we name a dual feedback control. Surprisingly, it can have better properties than the classical one: now our state equation depends only on the parameter p and not additionally on the state in the feedback function, which in the classical case makes the state equation difficult to solve.

Definition 1. A pair of functions u = u(t, z, p), from the set P of points (t, z, p) = (t, z, y^0, y), (t, z) ∈ (0, T) × Ω, y^0 ≤ 0, y ∈ R, into U, and v(t, z, p), from the subset of P of those points (t, z, p) = (t, z, y^0, y), (t, z) ∈ (0, T) × ∂Ω, (t, z, p) ∈ P, into V, is called a dual feedback control if there is a solution x(t, z, p), (t, z, p) ∈ P, of the partial differential equation

A(t)x(t, z, p) = f(t, z, x(t, z, p), u(t, z, p))   (35)

satisfying the boundary condition

x(t, z, p) = v(t, z, p) on (0, T) × Γ, (t, z, p) ∈ P,

such that for each admissible trajectory x(t, z), (t, z) ∈ [0, T] × Ω, there exists a function p(t, z) = (y^0, y(t, z)), p ∈ W^{2,2}([0, T] × Ω) ∩ C([0, T]; L^2(Ω)), p ∈ L^2([0, T] × ∂Ω), (t, z, p(t, z)) ∈ P, such that (6) holds.
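Once a dual function p(t, z) is fixed, the feedback u(t, z, p(t, z)) becomes an ordinary function of (t, z), so (35) is a standard boundary value problem in x alone. A 1D elliptic sketch with hypothetical data of our own choosing (f(z, x, u) = x + u, L = u^2, y^0 = −1, so the interior maximization in (17) yields u = y/2):

```python
import numpy as np

# Closed-loop form of (35) in a 1D elliptic toy setting (hypothetical data:
# f(z, x, u) = x + u, L = u^2, y0 = -1, so maximizing y0*u^2 + y*(x + u)
# over u gives the dual feedback u(z, p) = y(z)/2). Given the dual function
# y(z), u is a plain function of z and (35) is a linear two-point BVP.

def solve_closed_loop(y, v0, v1):
    """Finite-difference solve of x'' - x = u, u = y/2, x(0)=v0, x(1)=v1."""
    n = len(y)
    h = 1.0 / (n + 1)
    u = y / 2.0                                   # dual feedback u(z, p)
    A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h**2 - np.eye(n)
    b = u.copy()
    b[0] -= v0 / h**2                             # Dirichlet boundary control
    b[-1] -= v1 / h**2
    return np.linalg.solve(A, b)

x0 = solve_closed_loop(np.zeros(100), 0.0, 0.0)   # y = 0: u = 0, so x = 0
x1 = solve_closed_loop(np.ones(100), 0.0, 0.0)    # constant y: symmetric state
```

Note that the right-hand side depends only on the dual function y(z), never on the unknown state inside the feedback, which is the point made above.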

Definition 2. A dual feedback control (u(t, z, p), v(t, z, p)) is called an optimal dual feedback control if there exist a function x(t, z, p), (t, z, p) ∈ P, corresponding to (u(t, z, p), v(t, z, p)) as in Definition 1, and a function p̄(t, z) = (y^0, ȳ(t, z)), p̄ ∈ W^{2,2}([0, T] × Ω) ∩ C([0, T]; L^2(Ω)), p̄ ∈ L^2([0, T] × ∂Ω), (t, z, p̄(t, z)) ∈ P, such that, for

S_D(t, p̄(t, ·)) = −y^0 ∫_{[t,T]×Ω} L(τ, z, x(τ, z, p̄(τ, z)), u(τ, z, p̄(τ, z))) dτ dz − y^0 ∫_Ω l(x(T, z, p̄(T, z))) dz − y^0 ∫_{[t,T]×∂Ω} h(τ, s, v(τ, s, p̄(τ, s))) dτ ds,   (36)

defining V_{y^0}(t, z, p̄(t, z)) by

∫_Ω y^0 V_{y^0}(t, z, p̄(t, z)) dz + y^0 ∫_{∂Ω} ∇_z V_{y^0}(t, z, p̄(t, z))ν(z) dz = −S_D(t, p̄(t, ·)),

and for

V_y(t, z, p) = −x(t, z, p) for (t, z) ∈ (0, T) × Ω, (t, z, p) ∈ P,   (37)

there is V(t, z, p) satisfying (7).

The next theorem is nothing more than the above verification theorem formulated in terms of a dual feedback control.

Theorem 2. Let (u(t, z, p), v(t, z, p)) be a dual feedback control in P. Suppose that there exists a C^3 solution V(t, z, p) of (17) on P such that (7) and (18) hold. Let p̄(t, z) = (y^0, ȳ(t, z)), p̄ ∈ W^{2,2}([0, T] × Ω) ∩ C([0, T]; L^2(Ω)), p̄ ∈ L^2([0, T] × ∂Ω), (t, z, p̄(t, z)) ∈ P, be such a function that x̄(t, z) = x(t, z, p̄(t, z)), ū(t, z) = u(t, z, p̄(t, z)), (t, z) ∈ (0, T) × Ω, v̄(t, z) = v(t, z, p̄(t, z)), (t, z) ∈ (0, T) × ∂Ω, is an admissible trio, where x(t, z, p), (t, z, p) ∈ P, corresponds to u(t, z, p) and v(t, z, p) as in Definition 1.

Assume further that:

V_y(t, z, p) = −x(t, z, p) for (t, z) ∈ [0, T] × Ω, (t, z, p) ∈ P,   (38)

y^0 ∫_Ω V_{y^0}(t, z, p̄(t, z)) dz + y^0 ∫_{[0,T]} [ ∫_{∂Ω} ∇_z V_{y^0}(t, z, y^0, ȳ(t, z))ν(z) dz ] dt
= −y^0 ∫_{[0,T]×Ω} L(t, z, x(t, z, p̄(t, z)), u(t, z, p̄(t, z))) dt dz − y^0 ∫_Ω l(x(T, z, p̄(T, z))) dz − y^0 ∫_{[t,T]×∂Ω} h(τ, s, v(τ, s, p̄(τ, s))) dτ ds.   (39)

Then (u(t, z, p), v(t, z, p)) is an optimal dual feedback control.


Proof. Take any function p(t, z) = (y^0, y(t, z)), p ∈ W^{2,2}([0, T] × Ω) ∩ C([0, T]; L^2(Ω)), p ∈ L^2([0, T] × ∂Ω), (t, z, p(t, z)) ∈ P, such that x(t, z) = x(t, z, p(t, z)), u(t, z) = u(t, z, p(t, z)), (t, z) ∈ (0, T) × Ω, v(t, z) = v(t, z, p(t, z)), (t, z) ∈ [0, T] × ∂Ω, is an admissible trio and (20), (21) hold. By (38), it follows that x(t, z) = −V_y(t, z, p(t, z)) for (t, z) ∈ (0, T) × Ω.

As in the proof of Theorem 1, equation (39) gives

−y^0 ∫_{[0,T]×Ω} L(t, z, x(t, z, p̄(t, z)), u(t, z, p̄(t, z))) dt dz − y^0 ∫_Ω l(x(T, z, p̄(T, z))) dz − y^0 ∫_{[t,T]×∂Ω} h(τ, s, v(τ, s, p̄(τ, s))) dτ ds

≤ −y^0 ∫_{[0,T]×Ω} L(t, z, x(t, z, p(t, z)), u(t, z, p(t, z))) dt dz − y^0 ∫_Ω l(x(T, z, p(T, z))) dz − y^0 ∫_{[t,T]×∂Ω} h(τ, s, v(τ, s, p(τ, s))) dτ ds.   (40)

We conclude from (40) that

S_D(t, p̄(t, ·)) = −y^0 ∫_{[t,T]×Ω} L(τ, z, x(τ, z, p̄(τ, z)), u(τ, z, p̄(τ, z))) dτ dz − y^0 ∫_Ω l(x(T, z, p̄(T, z))) dz − y^0 ∫_{[t,T]×∂Ω} h(τ, s, v(τ, s, p̄(τ, s))) dτ ds,   (41)

and this suffices to show that (u(t, z, p), v(t, z, p)) is an optimal dual feedback control, by Theorem 1 and Definition 2.

References

[1] N. Arada and J.P. Raymond, Optimality conditions for state-constrained Dirichlet boundary control problems, J. Optim. Theory Appl. 102 (1999), 51–68.

[2] N. Arada and J.P. Raymond, Dirichlet boundary control of semilinear parabolic equations, Part 2: Problems with pointwise state constraints, Appl. Math. Optim. 45 (2002), 145–167.

[3] V. Barbu, Analysis and control of nonlinear infinite dimensional systems, Academic Press, Boston 1993.

[4] V. Barbu, The dynamic programming equation for the time-optimal control problem in infinite dimensions, SIAM J. Control Optim. 29 (1991), 445–456.

[5] V. Barbu and G. Da Prato, Hamilton-Jacobi equations in Hilbert spaces, Pitman Advanced Publishing Program, Boston 1983.

[6] P. Cannarsa and O. Carja, On the Bellman equation for the minimum time problem in infinite dimensions, SIAM J. Control Optim. 43 (2004), 532–548.

[7] E. Casas, Pontryagin's principle for state-constrained boundary control problems of semilinear parabolic equations, SIAM J. Control Optim. 35 (1997), 1297–1327.

[8] H.O. Fattorini, Infinite-Dimensional Optimization and Control Theory, Cambridge University Press, Cambridge, 1999.

[9] H.O. Fattorini and T. Murphy, Optimal control for nonlinear parabolic boundary control systems: the Dirichlet boundary conditions, Diff. Integ. Equ. 7 (1994), 1367–1388.

[10] A.V. Fursikov, Optimal control of distributed systems. Theory and applications, American Mathematical Society, Providence, RI 2000.

[11] F. Gozzi and M.E. Tessitore, Optimality conditions for Dirichlet boundary control problems of parabolic type, J. Math. Systems Estim. Control 8 (1998).

[12] E. Galewska and A. Nowakowski, Multidimensional Dual Dynamic Programming, Journal of Optimization Theory and Applications 124 (2005), 175–186.

[13] E. Galewska and A. Nowakowski, A dual dynamic programming for multidimensional elliptic optimal control problems, Numerical Functional Analysis and Optimization, accepted in July 2005.

[14] I. Lasiecka, J.L. Lions and R. Triggiani, Nonhomogeneous boundary value problems for second order hyperbolic operators, J. Math. Pures Appl. 65 (1986), 149–192.

[15] X. Li and J. Yong, Optimal control theory for infinite dimensional systems, Birkhäuser, Boston 1994.

[16] B.S. Mordukhovich and K. Zhang, Minimax control of parabolic systems with Dirichlet boundary conditions and state constraints, Appl. Math. Optim. 36 (1997), 323–360.

[17] B.S. Mordukhovich and K. Zhang, Dirichlet boundary control of parabolic systems with pointwise state constraints, in: Control and Estimation of Distributed Parameter Systems (Vorau, 1996), International Series of Numerical Mathematics, Vol. 126, Birkhäuser, Basel (1998), 223–236.


[18] B.S. Mordukhovich and J.P. Raymond, Dirichlet boundary control of hyperbolic equations in the presence of state constraints, Appl. Math. Optim. 49 (2004), 145–157.

[19] B.S. Mordukhovich and J.P. Raymond, Neumann boundary control of hyperbolic equations with pointwise state constraints, SIAM J. Control Optim. 43 (2004), 1354–1372.

[20] P. Neittaanmaki and D. Tiba, Optimal control of nonlinear parabolic systems, Marcel Dekker, New York 1994.

[21] A. Nowakowski, The dual dynamic programming, Proc. Amer. Math. Soc. 116 (1992), 1089–1096.

[22] J.P. Raymond, Nonlinear boundary control of semilinear parabolic problems with pointwise state constraints, Discrete Contin. Dynam. Systems 3 (1997), 341–370.

[23] L.W. White, Control of a hyperbolic problem with pointwise stress constraints, J. Optim. Theory Appl. 41 (1983), 359–369.

[24] L.W. White, Distributed control of a hyperbolic problem with control and stress constraints, J. Math. Anal. Appl. 106 (1985), 41–53.

Received 19 November 2005
