CONTROLLABILITY, OBSERVABILITY AND OPTIMAL CONTROL OF CONTINUOUS-TIME 2-D SYSTEMS

GERHARD JANK

Lehrstuhl II für Mathematik, RWTH Aachen, D-52056 Aachen, Germany
e-mail: jank@math2.rwth-aachen.de

We consider linear 2-D systems of Fornasini-Marchesini type in the continuous-time case with non-constant coefficients.

Using an explicit representation of the solutions by utilizing the Riemann kernel of the equation under consideration, we obtain controllability and observability criteria in the case of the inhomogeneous equation, where control is obtained by choosing the inhomogeneity appropriately, but also for the homogeneous equation, where control is obtained by steering with Goursat data. The optimal control problem with a quadratic cost functional is also solved.

Keywords: 2-D continuous-time systems, controllability, observability, optimal control, quadratic cost

1. Introduction

We study controllability and observability properties of the linear system described by the following hyperbolic system:

$$\frac{\partial^2}{\partial s\,\partial t}x = A_0(s,t)x + A_1(s,t)\frac{\partial}{\partial s}x + A_2(s,t)\frac{\partial}{\partial t}x + B(s,t)u, \qquad (1)$$

where $x(s,t)\in\mathbb{R}^n$, $u(s,t)\in\mathbb{R}^m$, together with $A_0(s,t), A_1(s,t), A_2(s,t)\in\mathbb{R}^{n\times n}$ and $B(s,t)\in\mathbb{R}^{n\times m}$; $A_0$ and $B$ are assumed to be piecewise continuous on some interval $I := [S_0,S_1]\times[T_0,T_1]$, while $A_1$, $A_2$ are assumed to be continuously differentiable on the same interval. Hereby we call a function in $I$ piecewise continuous if there exists a rectangular subdivision $R_1,\dots,R_N$ of $I$ such that the restriction of the function to each open rectangle $R_i$, $i=1,\dots,N$, has a continuous extension to its closure. This linear 2-D system is considered together with the initial-boundary or Goursat conditions

$$x(s,t_0) = x_1(s)\in\mathbb{R}^n,\quad (s,t_0)\in I, \qquad x(s_0,t) = x_2(t)\in\mathbb{R}^n,\quad (s_0,t)\in I, \qquad x_1(s_0) = x_2(t_0), \qquad (2)$$

where $x_1$, $x_2$ are piecewise continuously differentiable.

For applications in image processing, see, e.g., (Jain and Jain, 1977; 1978). For this type of control problem a Pontryagin maximum principle has also been developed (see, e.g., (Wolfersdorf, 1978) and the references therein), with applications in the optimization of quasi-stationary chemical reactors.

The main aim of this paper is to obtain conditions on the parameters of the system (1), (2) for unconstrained controllability. Therefore, analogously to (Sontag, 1998, p. 83), we start with the following definition:

Definition 1. The system (1), (2) is said to be (completely) controllable in a given interval $J = [s_0,s_1]\times[t_0,t_1]$ if for any initial-boundary condition (2) and any $x\in\mathbb{R}^n$ there exists a piecewise continuous function $u$ in $J$, $u(s,t)\in\mathbb{R}^m$, such that $x(s_1,t_1) = x$.

There are several papers concerning the controllability of systems of type (1), (2). Here we only make a reference to (Bergmann et al., 1989; Pulvirenti and Santagati, 1975; Kaczorek, 1995; 1996). In (Pulvirenti and Santagati, 1975) the scalar case is treated; in (Bergmann et al., 1989) and (Kaczorek, 1995) the case of constant coefficients is studied. In (Kaczorek, 1996) the general case with non-constant coefficients is treated and a necessary and sufficient controllability criterion is obtained by demanding the so-called Gramian matrix to be positive definite (see Theorem 4). In the present paper we formulate controllability conditions for the case of non-constant coefficients by making use of the solutions of the adjoint equation, which are similar to the one-dimensional situation. This then also gives rise to observability results. Besides, we also study control by Goursat data and solve an optimal control problem. The controllability problem for the system (1), (2) under certain restrictions on the steering function $u$ is treated in (Gyurkovics and Jank, 2001).

In order to find conditions for controllability and observability as in the classical one-dimensional case, we recall a representation formula for solutions of (1), (2) obtained with the matrix Riemann function for (1). This formula yields an operator, mapping any admissible function $u$ to a solution $x$ of (1) and (2).

Although this representation formula is used, e.g., in (Bergmann et al., 1989; Kaczorek, 1995; 1996), a proof is available only for the scalar case in (Pulvirenti and Santagati, 1975). In principle, one can deduce the desired representation also from (Vekua, 1967, p. 15). However, the presentation there is oriented towards the representation of solutions of elliptic differential equations with analytic coefficients using a complex transformation into a formal hyperbolic system.

Since we consider neither analytic coefficients nor elliptic equations, for the reader's convenience we shortly recall that representation theory, which is based on a method introduced by Riemann. Readers not interested in the construction of solutions to (1), (2) can directly start with Theorem 3.

The paper is organized as follows: After the introductory section, in Section 2 we briefly recall the representation theory for solutions of equation (1) using the matrix Riemann kernel function. Then in Section 3 we obtain controllability and observability results for systems with non-constant coefficients using solutions of the adjoint equation. In Sections 4 and 5 we briefly address the issue of control of the system by initial boundary values and optimal control, respectively.

In the next section we introduce the Riemann kernel function for equations of type (1).

2. Riemann Kernel and a Representation Formula

Before introducing the Riemann kernel function, we prove a lemma concerning the solvability of an integral equation of Volterra type. Let us first define the set of matrix-valued functions

$$S^{n\times k}(J) := \Big\{ U : J\to\mathbb{R}^{n\times k} \;\Big|\; U(s,t),\ \frac{\partial}{\partial s}U(s,t),\ \frac{\partial}{\partial t}U(s,t),\ \frac{\partial^2}{\partial s\,\partial t}U(s,t) \text{ are piecewise continuous in } J \Big\}, \qquad J\subset I.$$

Lemma 1. Let $A(s,t,\xi,\eta)\in S^{n\times n}(I)$. Then the following integral equation of Volterra type:

$$R_0(s,t,\sigma,\tau) - \int_\sigma^s\!\!\int_\tau^t R_0(\xi,\eta,\sigma,\tau)A(s,t,\xi,\eta)\,d\xi\,d\eta = I_n \qquad (3)$$

has a unique continuous solution such that $\frac{\partial^2}{\partial s\,\partial t}R_0(s,t,\sigma,\tau)$ is piecewise continuous and $\frac{\partial}{\partial s}R_0$ and $\frac{\partial}{\partial t}R_0$ are continuous in $I\times I$.

Proof. The operator $T$ defined by

$$(TF)(s,t,\sigma,\tau) := \int_\sigma^s\!\!\int_\tau^t F(\xi,\eta)A(s,t,\xi,\eta)\,d\xi\,d\eta$$

maps any matrix-valued function $F(\xi,\eta)\in\mathbb{R}^{n\times n}$ continuous in $I$ to $\mathbb{R}^{n\times n}$-valued functions continuous in $I\times I$. There exists a constant $C>0$ such that

$$\|(TF)(s,t,\sigma,\tau)\| \le \max_{(s,t,\xi,\eta)\in I\times I}\|A(s,t,\xi,\eta)\| \max_{(\xi,\eta)\in I}\|F(\xi,\eta)\|\,|s-\sigma|\,|t-\tau| \le C|s-\sigma|\,|t-\tau|\max_{(\xi,\eta)\in I}\|F(\xi,\eta)\|, \qquad (4)$$

and for $k = 2,3,\dots$ we have

$$\|(T^kF)(s,t,\sigma,\tau)\| \le \frac{C^k}{(k!)^2}|s-\sigma|^k|t-\tau|^k\max_{(\xi,\eta)\in I}\|F(\xi,\eta)\|.$$

Equation (3) can now be written as

$$R_0 = I_n + TR_0. \qquad (5)$$

Then it follows from the Picard iteration (Arnol'd, 1980, p. 212), setting $R_0^{(0)} = I_n$, $R_0^{(k)} = I_n + TR_0^{(k-1)}$, $k = 1,2,\dots$, that

$$R_0^{(l)} = \sum_{k=0}^{l} T^kI_n, \qquad (6)$$

where $T^0 = \mathrm{id}$ and, for $k = 1,2,\dots$, $T^k = TT^{k-1}$. Here $\mathrm{id}$ denotes the identity mapping from the space of matrix-valued functions $F(\xi,\eta)\in\mathbb{R}^{n\times n}$ continuous in $I$ to $\mathbb{R}^{n\times n}$-valued functions continuous in $I\times I$.

For $\varepsilon > 0$ and $m$, $l$ sufficiently large, $m > l$, together with (4), (6) we obtain the estimate

$$\|R_0^{(m)} - R_0^{(l)}\| = \Big\|\sum_{k=l+1}^{m} T^kI_n\Big\| \le \sum_{k=l+1}^{m} \frac{C^k|s-\sigma|^k|t-\tau|^k}{(k!)^2} < \varepsilon.$$

Hence we infer the uniform convergence of $R_0^{(l)}$ towards $R_0$ and that $R_0$ is continuous on $I\times I$ and solves (3).

The uniqueness can also be concluded in the usual way, since any difference $\Delta$ of two solutions of equation (3) solves the homogeneous equation $\Delta = T\Delta$, and hence also $\Delta = T^k\Delta$, $k = 2,3,\dots$. From this, together with (4), we see that $\Delta = 0$. Furthermore, from (5), (6) we obtain

$$\begin{aligned}
\frac{\partial^2}{\partial s\,\partial t}R_0^{(l)}(s,t,\sigma,\tau) ={}& R_0^{(l-1)}(s,t,\sigma,\tau)A(s,t,s,t) + \int_\tau^t R_0^{(l-1)}(s,\eta,\sigma,\tau)\frac{\partial}{\partial t}A(s,t,s,\eta)\,d\eta \\
& + \int_\sigma^s R_0^{(l-1)}(\xi,t,\sigma,\tau)\frac{\partial}{\partial s}A(s,t,\xi,t)\,d\xi + \int_\sigma^s\!\!\int_\tau^t R_0^{(l-1)}(\xi,\eta,\sigma,\tau)\frac{\partial^2}{\partial s\,\partial t}A(s,t,\xi,\eta)\,d\xi\,d\eta.
\end{aligned}$$

Taking the limit as $l\to\infty$, we infer that $\frac{\partial^2}{\partial s\,\partial t}R_0$ is also piecewise continuous in $I\times I$. In a similar way, it can be seen that $\frac{\partial}{\partial s}R_0$ and $\frac{\partial}{\partial t}R_0$ are continuous.
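The Picard construction above can be illustrated numerically. The sketch below is an assumption-laden toy case: scalar ($n = 1$) with a constant kernel $A(s,t,\xi,\eta)\equiv a$, which Lemma 1 does not require. For this kernel one checks by induction that $T^k 1 = a^k(s-\sigma)^k(t-\tau)^k/(k!)^2$, exactly the terms bounded by the estimate (4), so the partial sums (6) can be evaluated in closed form:

```python
import math

# Partial sums R0^{(l)} of the Picard series (6) for the Volterra
# equation (3), specialised to a scalar constant kernel a (an
# illustrative assumption; the lemma allows matrix-valued kernels).

def T_power_term(a, k, s, t, sigma, tau):
    """k-th Picard term T^k 1 = a^k (s-sigma)^k (t-tau)^k / (k!)^2."""
    return (a * (s - sigma) * (t - tau)) ** k / math.factorial(k) ** 2

def R0(a, s, t, sigma, tau, terms=40):
    """Partial sum R0^{(l)}; converges uniformly by the (4)-type bound."""
    return sum(T_power_term(a, k, s, t, sigma, tau) for k in range(terms))

# The (k!)^(-2) decay makes the tail negligible very quickly, mirroring
# the uniform-convergence argument in the proof of Lemma 1:
print(abs(R0(2.0, 1.0, 1.0, 0.0, 0.0, terms=40)
          - R0(2.0, 1.0, 1.0, 0.0, 0.0, terms=80)))
```

For $a = 0$ the series collapses to $R_0 \equiv 1 = I_n$, as (3) then demands.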

Now we are ready to introduce the Riemann kernel function.

Theorem 1. (Riemann kernel function) Let $A_0(s,t)\in\mathbb{R}^{n\times n}$ be piecewise continuous and $A_1(s,t), A_2(s,t)\in\mathbb{R}^{n\times n}$ be continuously differentiable on $I$. Then the following integral equation of Volterra type:

$$R(s,t,\sigma,\tau) + \int_\sigma^s R(\xi,t,\sigma,\tau)A_2(\xi,t)\,d\xi + \int_\tau^t R(s,\eta,\sigma,\tau)A_1(s,\eta)\,d\eta - \int_\sigma^s\!\!\int_\tau^t R(\xi,\eta,\sigma,\tau)A_0(\xi,\eta)\,d\xi\,d\eta = I_n \qquad (7)$$

has a unique continuous solution $R(s,t,\sigma,\tau)\in\mathbb{R}^{n\times n}$ such that $\frac{\partial^2}{\partial s\,\partial t}R(s,t,\sigma,\tau)$ is piecewise continuous and $\frac{\partial}{\partial s}R(s,t,\sigma,\tau)$, as well as $\frac{\partial}{\partial t}R(s,t,\sigma,\tau)$, is continuous in $I\times I$.

This matrix-valued function $R(s,t,\sigma,\tau)\in\mathbb{R}^{n\times n}$ is called the matrix Riemann function or the matrix Riemann kernel of the equation

$$\frac{\partial^2}{\partial s\,\partial t}x - A_1\frac{\partial}{\partial s}x - A_2\frac{\partial}{\partial t}x - A_0x = 0. \qquad (8)$$

Proof of Theorem 1. Iterating equation (7) in a similar way as we have done with (3) yields a sequence of matrix-valued functions $R^{(i)}(s,t,\sigma,\tau)$, but it seems to require an enormous effort to get appropriate estimates in order to obtain convergence. Therefore we follow the way proposed in (Vekua, 1967). We introduce the following integral equations:

$$R_1(s,t,\xi) = A_2(\xi,t) - \int_\xi^s R_1(\xi_1,t,\xi)A_2(\xi_1,t)\,d\xi_1, \qquad R_2(s,t,\eta) = A_1(s,\eta) - \int_\eta^t R_2(s,\eta_1,\eta)A_1(s,\eta_1)\,d\eta_1. \qquad (9)$$

Defining the integral operators

$$(T_1R_1)(s,t,\xi) := -\int_\xi^s R_1(\xi_1,t,\xi)A_2(\xi_1,t)\,d\xi_1$$

and

$$(T_2R_2)(s,t,\eta) := -\int_\eta^t R_2(s,\eta_1,\eta)A_1(s,\eta_1)\,d\eta_1,$$

which map continuously differentiable functions into continuously differentiable functions, equations (9) can now formally be written as $R_1 = A_2 + T_1R_1$ and $R_2 = A_1 + T_2R_2$.

Iterating (9) in a similar way as we did before, i.e., calculating successively $R_1^{(0)} = A_2$, $R_1^{(l)} = A_2 + T_1R_1^{(l-1)}$, $R_2^{(0)} = A_1$, $R_2^{(l)} = A_1 + T_2R_2^{(l-1)}$, $l = 1,2,\dots$, we obtain

$$R_1(s,t,\xi) = \sum_{j=0}^{\infty}(T_1^jA_2)(s,t,\xi), \qquad R_2(s,t,\eta) = \sum_{j=0}^{\infty}(T_2^jA_1)(s,t,\eta). \qquad (10)$$

The uniform convergence is obvious since, as has been done before in the proof of Lemma 1, we have performed a simple Picard iteration; hence $R_1$, $R_2$ are continuously differentiable functions and the unique solutions of (9). With these solutions we now determine a matrix-valued function $R_0(s,t,\sigma,\tau)$ such that

$$R(s,t,\sigma,\tau) = R_0(s,t,\sigma,\tau) - \int_\sigma^s R_0(\xi,t,\sigma,\tau)R_1(s,t,\xi)\,d\xi - \int_\tau^t R_0(s,\eta,\sigma,\tau)R_2(s,t,\eta)\,d\eta. \qquad (11)$$

Inserting (11) into (7), while suppressing $\sigma$ and $\tau$, yields

$$\begin{aligned}
I_n ={}& R_0(s,t) - \int_\sigma^s R_0(\xi,t)R_1(s,t,\xi)\,d\xi - \int_\tau^t R_0(s,\eta)R_2(s,t,\eta)\,d\eta \\
& + \int_\sigma^s \Big[R_0(\xi,t) - \int_\sigma^\xi R_0(\xi_1,t)R_1(\xi,t,\xi_1)\,d\xi_1 - \int_\tau^t R_0(\xi,\eta)R_2(\xi,t,\eta)\,d\eta\Big]A_2(\xi,t)\,d\xi \\
& + \int_\tau^t \Big[R_0(s,\eta) - \int_\sigma^s R_0(\xi,\eta)R_1(s,\eta,\xi)\,d\xi - \int_\tau^\eta R_0(s,\eta_1)R_2(s,\eta,\eta_1)\,d\eta_1\Big]A_1(s,\eta)\,d\eta \\
& - \int_\sigma^s\!\!\int_\tau^t \Big[R_0(\xi,\eta) - \int_\sigma^\xi R_0(\xi_1,\eta)R_1(\xi,\eta,\xi_1)\,d\xi_1 - \int_\tau^\eta R_0(\xi,\eta_1)R_2(\xi,\eta,\eta_1)\,d\eta_1\Big]A_0(\xi,\eta)\,d\xi\,d\eta.
\end{aligned}$$

Together with (9) and (10), we then obtain, after a short calculation,

$$\begin{aligned}
I_n ={}& R_0(s,t) - \int_\sigma^s\!\!\int_\tau^t R_0(\xi,\eta)R_2(\xi,t,\eta)A_2(\xi,t)\,d\xi\,d\eta - \int_\sigma^s\!\!\int_\tau^t R_0(\xi,\eta)R_1(s,\eta,\xi)A_1(s,\eta)\,d\xi\,d\eta \\
& - \int_\sigma^s\!\!\int_\tau^t \Big[R_0(\xi,\eta) - \int_\sigma^\xi R_0(\xi_1,\eta)R_1(\xi,\eta,\xi_1)\,d\xi_1 - \int_\tau^\eta R_0(\xi,\eta_1)R_2(\xi,\eta,\eta_1)\,d\eta_1\Big]A_0(\xi,\eta)\,d\xi\,d\eta.
\end{aligned}$$

Interchanging the order of integration in the last two lines, we finally obtain the following integral equation of Volterra type for $R_0$:

$$R_0(s,t,\sigma,\tau) - \int_\sigma^s\!\!\int_\tau^t R_0(\xi,\eta,\sigma,\tau)A(s,t,\xi,\eta)\,d\xi\,d\eta = I_n, \qquad (12)$$

where

$$A(s,t,\xi,\eta) := R_2(\xi,t,\eta)A_2(\xi,t) + R_1(s,\eta,\xi)A_1(s,\eta) + A_0(\xi,\eta) - \int_\xi^s R_1(\xi_1,\eta,\xi)A_0(\xi_1,\eta)\,d\xi_1 - \int_\eta^t R_2(\xi,\eta_1,\eta)A_0(\xi,\eta_1)\,d\eta_1,$$

and, together with $\frac{\partial^2}{\partial s\,\partial t}A = 0$, we have $A\in S^{n\times n}(I)$.

In order to establish Theorem 1, we need the existence and uniqueness of the solution of this latter integral equation.

Setting $A$ in (3) as defined in (12) and using Lemma 1, we infer that $R$ as defined in (11) is a continuous solution of (7). Since $R_0$, $R_1$, and $R_2$ are unique, we also obtain the uniqueness of $R$. Moreover, since $R_1$ and $R_2$ are continuously differentiable, $\frac{\partial}{\partial s}R_0$ and $\frac{\partial}{\partial t}R_0$ are continuous and $\frac{\partial^2}{\partial s\,\partial t}R_0$ is piecewise continuous, we infer from (11) that $\frac{\partial^2}{\partial s\,\partial t}R$ is piecewise continuous as well.

The next step now is to prove some important properties of the matrix Riemann function.

Theorem 2. The matrix Riemann function is a solution of the differential equation

$$\frac{\partial^2}{\partial s\,\partial t}R(s,t,\sigma,\tau) + \frac{\partial}{\partial s}\big(R(s,t,\sigma,\tau)A_1(s,t)\big) + \frac{\partial}{\partial t}\big(R(s,t,\sigma,\tau)A_2(s,t)\big) - R(s,t,\sigma,\tau)A_0(s,t) = 0, \qquad (13)$$

with $\det R(s,t,s,\tau)\ne 0$, $\det R(s,t,\sigma,t)\ne 0$, and

(i) $\frac{\partial}{\partial s}R(s,t,\sigma,t) + R(s,t,\sigma,t)A_2(s,t) = 0$,
(ii) $\frac{\partial}{\partial t}R(s,t,s,\tau) + R(s,t,s,\tau)A_1(s,t) = 0$,
(iii) $\frac{\partial}{\partial \xi}R(s,t,\xi,t) - A_2(\xi,t)R(s,t,\xi,t) = 0$,
(iv) $\frac{\partial}{\partial \eta}R(s,t,s,\eta) - A_1(s,\eta)R(s,t,s,\eta) = 0$,
(v) $R(s,t,s,t) = I_n$.

Proof. That $R$ is a solution of the adjoint differential equation (13) results from differentiating (7). Item (i) follows by setting $t = \tau$ in (7) and differentiating the result with respect to $s$. Analogously, we obtain (ii) and (v).

With the well-known Abel-Jacobi-Liouville formula (Gantmacher, 1986, p. 470), we obtain from, e.g., (ii) and (v),

$$\det R(s,t,s,\tau) = e^{-\int_\tau^t \operatorname{trace}A_1(s,\eta)\,d\eta},$$

which yields the desired property. Analogously, from (i) and (v) we obtain the second determinant. The remaining two items require some more effort. For any continuous matrix-valued function $X(s,t)\in\mathbb{R}^{n\times n}$, differentiable with respect to $t$ for all $s\in I$, together with (ii) we deduce that

$$\frac{\partial}{\partial t}\big(R(s,t,s,\tau)X(s,t)\big) - R(s,t,s,\tau)\Big(\frac{\partial}{\partial t}X(s,t) - A_1(s,t)X(s,t)\Big) = \Big(\frac{\partial}{\partial t}R(s,t,s,\tau) + R(s,t,s,\tau)A_1(s,t)\Big)X(s,t) = 0.$$

Replacing therein $t$ by $\eta$ and integrating the result with respect to $\eta$ from $\tau$ to $t$, together with (v), yields

$$R(s,t,s,\tau)X(s,t) - X(s,\tau) = \int_\tau^t R(s,\eta,s,\tau)\Big[\frac{\partial}{\partial \eta}X(s,\eta) - A_1(s,\eta)X(s,\eta)\Big]\,d\eta.$$

Setting now $X(s,\tau) := R(s,t,s,\tau)$ and applying again (v) yields

$$0 = \int_\tau^t R(s,\eta,s,\tau)\Big[\frac{\partial}{\partial \eta}R(s,t,s,\eta) - A_1(s,\eta)R(s,t,s,\eta)\Big]\,d\eta.$$

Since $t,\tau\in I$ are arbitrary, we infer that necessarily

$$R(s,\eta,s,\tau)\Big[\frac{\partial}{\partial \eta}R(s,t,s,\eta) - A_1(s,\eta)R(s,t,s,\eta)\Big] = 0$$

for all $\eta\in I$. Since $R(s,t,s,\eta)$ is invertible, we obtain (iv). Analogously, we get (iii).

Having introduced the matrix Riemann function, we can now use it to obtain a general representation formula for all solutions to (1).

We start deriving an important identity.

Lemma 2. Let $U(s,t)\in S^{n\times k}(I)$ and let the matrices $A_0$, $A_1$, $A_2$ be defined as before. Then with

$$F(U) := \frac{\partial^2}{\partial s\,\partial t}U - A_1\frac{\partial}{\partial s}U - A_2\frac{\partial}{\partial t}U - A_0U$$

and $R$ as the matrix Riemann function, we obtain the identity

$$\frac{\partial^2}{\partial s\,\partial t}(RU) - RF(U) = \frac{\partial}{\partial s}\Big(\Big(\frac{\partial}{\partial t}R + RA_1\Big)U\Big) + \frac{\partial}{\partial t}\Big(\Big(\frac{\partial}{\partial s}R + RA_2\Big)U\Big). \qquad (14)$$

Proof. It is easy to check that the left-hand side, together with (13) and the definition of $F(U)$, yields

$$\frac{\partial^2}{\partial s\,\partial t}(RU) - RF(U) = 2\frac{\partial^2}{\partial s\,\partial t}R\,U + \frac{\partial}{\partial s}R\frac{\partial}{\partial t}U + \frac{\partial}{\partial t}R\frac{\partial}{\partial s}U + RA_1\frac{\partial}{\partial s}U + RA_2\frac{\partial}{\partial t}U + \Big(\frac{\partial}{\partial t}(RA_2)\Big)U + \Big(\frac{\partial}{\partial s}(RA_1)\Big)U,$$

and this exactly equals the term on the right-hand side of (14).

The next identity yields an integrated version of (14).

Lemma 3. Let $U(\sigma,\tau)\in S^{n\times k}(J)$ and let $R(s,t,\sigma,\tau)$ be the matrix Riemann kernel of the differential equation (8). Then we obtain the identity

$$\begin{aligned}
U(s,t) ={}& R(s_0,t_0,s,t)U(s_0,t_0) + \int_{t_0}^t R(s_0,\tau,s,t)\Big[\frac{\partial}{\partial \tau}U(s_0,\tau) - A_1(s_0,\tau)U(s_0,\tau)\Big]\,d\tau \\
& + \int_{s_0}^s R(\sigma,t_0,s,t)\Big[\frac{\partial}{\partial \sigma}U(\sigma,t_0) - A_2(\sigma,t_0)U(\sigma,t_0)\Big]\,d\sigma + \int_{s_0}^s\!\!\int_{t_0}^t R(\sigma,\tau,s,t)F(U)(\sigma,\tau)\,d\sigma\,d\tau, \qquad (15)
\end{aligned}$$

where $(s_0,t_0)\in J$.

Proof. Interchanging the first pair of variables $(s,t)$ with the second pair $(\sigma,\tau)$ and integrating the identity (14) from $s_0$ to $s$ with respect to $\sigma$ and also from $t_0$ to $t$ with respect to $\tau$ yields for the left-hand side of (14)

$$\begin{aligned}
& \int_{s_0}^s\!\!\int_{t_0}^t \frac{\partial^2}{\partial \sigma\,\partial \tau}(RU)\,d\sigma\,d\tau - \int_{s_0}^s\!\!\int_{t_0}^t R(\sigma,\tau,s,t)F(U)(\sigma,\tau)\,d\sigma\,d\tau \\
={}& \int_{s_0}^s \frac{\partial}{\partial \sigma}\big(R(\sigma,t,s,t)U(\sigma,t)\big)\,d\sigma - \int_{s_0}^s \frac{\partial}{\partial \sigma}\big(R(\sigma,t_0,s,t)U(\sigma,t_0)\big)\,d\sigma - \int_{s_0}^s\!\!\int_{t_0}^t R(\sigma,\tau,s,t)F(U)(\sigma,\tau)\,d\sigma\,d\tau \\
={}& R(s,t,s,t)U(s,t) - R(s_0,t,s,t)U(s_0,t) - R(s,t_0,s,t)U(s,t_0) + R(s_0,t_0,s,t)U(s_0,t_0) - \int_{s_0}^s\!\!\int_{t_0}^t R(\sigma,\tau,s,t)F(U)(\sigma,\tau)\,d\sigma\,d\tau.
\end{aligned}$$

For the right-hand side, we obtain

$$\begin{aligned}
& \int_{t_0}^t \Big(\frac{\partial}{\partial \tau}R(s,\tau,s,t) + R(s,\tau,s,t)A_1(s,\tau)\Big)U(s,\tau)\,d\tau - \int_{t_0}^t \Big(\frac{\partial}{\partial \tau}R(s_0,\tau,s,t) + R(s_0,\tau,s,t)A_1(s_0,\tau)\Big)U(s_0,\tau)\,d\tau \\
& + \int_{s_0}^s \Big(\frac{\partial}{\partial \sigma}R(\sigma,t,s,t) + R(\sigma,t,s,t)A_2(\sigma,t)\Big)U(\sigma,t)\,d\sigma - \int_{s_0}^s \Big(\frac{\partial}{\partial \sigma}R(\sigma,t_0,s,t) + R(\sigma,t_0,s,t)A_2(\sigma,t_0)\Big)U(\sigma,t_0)\,d\sigma.
\end{aligned}$$

Using now the properties of the Riemann kernel as stated in Theorem 2, we obtain

$$\begin{aligned}
& U(s,t) - R(s_0,t,s,t)U(s_0,t) - R(s,t_0,s,t)U(s,t_0) + R(s_0,t_0,s,t)U(s_0,t_0) - \int_{s_0}^s\!\!\int_{t_0}^t R(\sigma,\tau,s,t)F(U)(\sigma,\tau)\,d\sigma\,d\tau \\
={}& -\int_{t_0}^t \frac{\partial}{\partial \tau}R(s_0,\tau,s,t)U(s_0,\tau)\,d\tau - \int_{t_0}^t R(s_0,\tau,s,t)A_1(s_0,\tau)U(s_0,\tau)\,d\tau \\
& - \int_{s_0}^s \frac{\partial}{\partial \sigma}R(\sigma,t_0,s,t)U(\sigma,t_0)\,d\sigma - \int_{s_0}^s R(\sigma,t_0,s,t)A_2(\sigma,t_0)U(\sigma,t_0)\,d\sigma \\
={}& -R(s_0,\tau,s,t)U(s_0,\tau)\big|_{\tau=t_0}^{t} + \int_{t_0}^t R(s_0,\tau,s,t)\Big(\frac{\partial}{\partial \tau}U(s_0,\tau) - A_1(s_0,\tau)U(s_0,\tau)\Big)\,d\tau \\
& - R(\sigma,t_0,s,t)U(\sigma,t_0)\big|_{\sigma=s_0}^{s} + \int_{s_0}^s R(\sigma,t_0,s,t)\Big(\frac{\partial}{\partial \sigma}U(\sigma,t_0) - A_2(\sigma,t_0)U(\sigma,t_0)\Big)\,d\sigma.
\end{aligned}$$

This immediately yields the desired identity (15).

Remark 1. Notice that from Lemma 3 we can conclude that the matrix Riemann function, with respect to the second pair of variables, is a solution of the homogeneous equation (1) in $I$, i.e.,

$$\frac{\partial^2}{\partial \sigma\,\partial \tau}R(s,t,\sigma,\tau) = A_0(\sigma,\tau)R(s,t,\sigma,\tau) + A_1(\sigma,\tau)\frac{\partial}{\partial \sigma}R(s,t,\sigma,\tau) + A_2(\sigma,\tau)\frac{\partial}{\partial \tau}R(s,t,\sigma,\tau).$$

Proof. Calculating $\frac{\partial^2}{\partial \sigma\,\partial \tau}R$, $\frac{\partial}{\partial \sigma}R$, $\frac{\partial}{\partial \tau}R$ from (7) and defining

$$\varphi(s,t,\sigma,\tau) := \frac{\partial^2}{\partial \sigma\,\partial \tau}R(s,t,\sigma,\tau) - A_1(\sigma,\tau)\frac{\partial}{\partial \sigma}R(s,t,\sigma,\tau) - A_2(\sigma,\tau)\frac{\partial}{\partial \tau}R(s,t,\sigma,\tau) - A_0(\sigma,\tau)R(s,t,\sigma,\tau)$$

yields, together with Theorem 2, (iii), (iv) and (v),

$$\varphi(s,t,\sigma,\tau) = -\int_\sigma^s \varphi(\xi,t,\sigma,\tau)A_2(\xi,t)\,d\xi - \int_\tau^t \varphi(s,\eta,\sigma,\tau)A_1(s,\eta)\,d\eta + \int_\sigma^s\!\!\int_\tau^t \varphi(\xi,\eta,\sigma,\tau)A_0(\xi,\eta)\,d\xi\,d\eta.$$

With $R_1$, $R_2$ and (10), we perform again the transformation (11):

$$\varphi(s,t,\sigma,\tau) = \varphi_0(s,t,\sigma,\tau) - \int_\sigma^s \varphi_0(\xi,t,\sigma,\tau)R_1(s,t,\xi)\,d\xi - \int_\tau^t \varphi_0(s,\eta,\sigma,\tau)R_2(s,t,\eta)\,d\eta,$$

which yields the integral equation for $\varphi_0$: $\varphi_0(s,t,\sigma,\tau) = (T\varphi_0)(s,t,\sigma,\tau)$. By iterating this equation we obtain $\varphi_0(s,t,\sigma,\tau) = (T^n\varphi_0)(s,t,\sigma,\tau)$ for all $n\in\mathbb{N}$. Using the estimate (4) for $T$, we see that $\varphi_0 = 0$ and hence $\varphi = 0$.

Now we can obtain a representation formula in much the same way as for the one-dimensional continuous-time case. This formula will then also enable us to derive similar controllability criteria.

Theorem 3. (i) Let $u$ be a piecewise continuous function, $u(s,t)\in\mathbb{R}^m$, and let $x\in S^{n\times 1}(J)$ be a solution of (1) in $J$. Then

$$\begin{aligned}
x(s,t) ={}& R(s_0,t_0,s,t)x_1(s_0) + \int_{s_0}^s R(\sigma,t_0,s,t)\big(x_1'(\sigma) - A_2(\sigma,t_0)x_1(\sigma)\big)\,d\sigma \\
& + \int_{t_0}^t R(s_0,\tau,s,t)\big(x_2'(\tau) - A_1(s_0,\tau)x_2(\tau)\big)\,d\tau + \int_{s_0}^s\!\!\int_{t_0}^t R(\sigma,\tau,s,t)B(\sigma,\tau)u(\sigma,\tau)\,d\sigma\,d\tau, \qquad (16)
\end{aligned}$$

where $(s_0,t_0)\in J$, $x_1(\sigma) := x(\sigma,t_0)$, $x_2(\tau) := x(s_0,\tau)$.

(ii) For any piecewise continuously differentiable functions $x_1$ (resp. $x_2$) in $[s_0,s_1]$ (resp. in $[t_0,t_1]$), with $x_1(s_0) = x_2(t_0)$, and a piecewise continuous function $u$, $u(s,t)\in\mathbb{R}^m$, in $J$, $x$ in (16) is a solution of the differential equation (1) (i.e., $x\in S^{n\times 1}(J)$ and fulfils (1) a.e.) with the initial-boundary values (2).

Proof. If $x\in S^{n\times 1}(J)$ is a solution of (1), then from the identity (15), setting $F(x) = Bu$, we infer the representation (16).

To show (ii), we first prove by direct computation that $x$ as represented by (16) is in $S^{n\times 1}$ and fulfils (1). There we have to use Remark 1 and properties (iii), (iv) and (v) of Theorem 2.

It remains to prove that $x(s,t)$ also has the desired boundary values. From (16) we obtain

$$x(s_0,t) = R(s_0,t_0,s_0,t)x_1(s_0) + \int_{t_0}^t R(s_0,\tau,s_0,t)\big(x_2'(\tau) - A_1(s_0,\tau)x_2(\tau)\big)\,d\tau,$$

which yields, after partial integration,

$$x(s_0,t) = R(s_0,t_0,s_0,t)x_1(s_0) + R(s_0,\tau,s_0,t)x_2(\tau)\big|_{\tau=t_0}^{t} - \int_{t_0}^t \Big[\frac{\partial}{\partial \tau}R(s_0,\tau,s_0,t) + R(s_0,\tau,s_0,t)A_1(s_0,\tau)\Big]x_2(\tau)\,d\tau.$$

Using then (ii) and (v) from Theorem 2, we finally get $x(s_0,t) = x_2(t)$. Analogously, we obtain $x(s,t_0) = x_1(s)$.

Notice that (16) remains true if $u$ is in the space of square integrable functions $L_2(J)$ and $x$, $x_1$, $x_2$ are in some appropriate Sobolev space, since the representation operator is continuous. We used piecewise continuous functions having in mind only technical applications.

In some particular situations it is possible to apply a simple transformation of (1) in order to obtain a simpler form.

Remark 2. Let $A_1(s,t), A_2(s,t)\in\mathbb{R}^{n\times n}$ be piecewise continuously differentiable on $I$ such that the integrability conditions

$$\frac{\partial A_1}{\partial s}(s,t) = \frac{\partial A_2}{\partial t}(s,t), \qquad A_1(s,t)A_2(\sigma,t) = A_2(\sigma,t)A_1(s,t) \qquad (17)$$

hold for all $\sigma, s\in[S_0,S_1]$ and $t\in[T_0,T_1]$.

If $V(s,t,s_0,t_0)$, $(s,t,s_0,t_0)\in I$, is the solution of

$$\frac{\partial}{\partial s}V = A_2(s,t)V, \qquad \frac{\partial}{\partial t}V = A_1(s,t)V \qquad (18)$$

with $V(s_0,t_0,s_0,t_0) = I_n$, then by the transformation

$$x = Vy \qquad (19)$$

eqn. (1) with boundary values (2) is equivalent to

$$\frac{\partial^2}{\partial s\,\partial t}y + V^{-1}(s,t)\Big(\frac{\partial}{\partial t}A_2(s,t) - A_2(s,t)A_1(s,t) - A_0(s,t)\Big)V(s,t)y = V^{-1}(s,t)B(s,t)u(s,t) \qquad (20)$$

and

$$y(s,t_0) = V^{-1}(s,t_0)x_1(s), \qquad y(s_0,t) = V^{-1}(s_0,t)x_2(t), \qquad y(s_0,t_0) = x_1(s_0) = x_2(t_0). \qquad (21)$$

Proof. First, notice that the required solution of (18) under the assumption of (17) can be written as

$$V(s,t,s_0,t_0) = \exp\Big(\int_{s_0}^s A_2(\sigma,t)\,d\sigma\Big)\exp\Big(\int_{t_0}^t A_1(s_0,\tau)\,d\tau\Big). \qquad (22)$$

Inserting (19) into (1) together with (17) and (18), we obtain

$$V\frac{\partial^2}{\partial s\,\partial t}y + \Big(\frac{\partial}{\partial t}A_2 - A_2A_1 - A_0\Big)Vy = V\frac{\partial^2}{\partial s\,\partial t}y + \Big(\frac{\partial}{\partial s}A_1 - A_1A_2 - A_0\Big)Vy = Bu,$$

and hence (20). Clearly, from (19) we have (21). Since $V(s,t)\in\mathbb{R}^{n\times n}$ is regular in $I$, we conclude the equivalence.

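As a quick numerical sanity check of (18) and (22), the sketch below takes the special case $A_1 = \alpha_1 I_n$, $A_2 = \alpha_2 I_n$ with constant scalars $\alpha_1, \alpha_2$ (so that the conditions (17) hold trivially; the constants and evaluation points are illustrative choices, not taken from the paper). Here $V$ reduces to the scalar factor $e^{\alpha_2(s-s_0)+\alpha_1(t-t_0)}$, and both equations of (18) are verified by central finite differences:

```python
import math

# Special case of (22): A1 = a1*I, A2 = a2*I with constant scalars, so
# V(s,t,s0,t0) = exp(a2*(s-s0) + a1*(t-t0)) (times the identity matrix).
a1, a2, s0, t0 = 0.3, -0.7, 0.0, 0.0

def V(s, t):
    return math.exp(a2 * (s - s0) + a1 * (t - t0))

# Verify the defining PDEs (18) by central finite differences.
h = 1e-6
s, t = 0.8, 1.3
dV_ds = (V(s + h, t) - V(s - h, t)) / (2 * h)
dV_dt = (V(s, t + h) - V(s, t - h)) / (2 * h)

print(abs(dV_ds - a2 * V(s, t)))  # ~0: dV/ds = A2 V
print(abs(dV_dt - a1 * V(s, t)))  # ~0: dV/dt = A1 V
```

This is exactly the transformation used later in the proof of Corollary 1.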

3. Controllability and Observability

The first controllability condition can now be obtained similarly to the one-dimensional case by using the representation formula (16).

Theorem 4. Let $A_0$ be piecewise continuous and $A_1$, $A_2$ be continuously differentiable in $I$. Let furthermore $R$ denote the matrix Riemann function of (8). The system (1) together with (2) is completely controllable in $J$ if and only if

$$W = W(s_0,t_0,s_1,t_1) := \int_{s_0}^{s_1}\!\!\int_{t_0}^{t_1} R(\sigma,\tau,s_1,t_1)B(\sigma,\tau)B^T(\sigma,\tau)R^T(\sigma,\tau,s_1,t_1)\,d\sigma\,d\tau > 0. \qquad (23)$$

Proof. From the representation (16) we conclude that for the control

$$u(\sigma,\tau) := B^T(\sigma,\tau)R^T(\sigma,\tau,s_1,t_1)z, \qquad z\in\mathbb{R}^n,$$

we have

$$\begin{aligned}
x(s_1,t_1) ={}& R(s_0,t_0,s_1,t_1)x_1(s_0) + \int_{s_0}^{s_1} R(\sigma,t_0,s_1,t_1)\big(x_1'(\sigma) - A_2(\sigma,t_0)x_1(\sigma)\big)\,d\sigma \\
& + \int_{t_0}^{t_1} R(s_0,\tau,s_1,t_1)\big(x_2'(\tau) - A_1(s_0,\tau)x_2(\tau)\big)\,d\tau + Wz.
\end{aligned}$$

If $W > 0$, then with

$$z = W^{-1}\Big(x(s_1,t_1) - R(s_0,t_0,s_1,t_1)x_1(s_0) - \int_{s_0}^{s_1} R(\sigma,t_0,s_1,t_1)\big(x_1'(\sigma) - A_2(\sigma,t_0)x_1(\sigma)\big)\,d\sigma - \int_{t_0}^{t_1} R(s_0,\tau,s_1,t_1)\big(x_2'(\tau) - A_1(s_0,\tau)x_2(\tau)\big)\,d\tau\Big) \in \mathbb{R}^n$$

we see that the control $u = B^TR^Tz$ steers the system from $x(s_0,t_0) = x_1(s_0) = x_2(t_0)$ to $x(s_1,t_1)$ for any given $x(s_1,t_1)$ and any boundary functions $x_1(s)$, $x_2(t)$. Hence the system is completely controllable.

If, on the other hand, the system is supposed to be controllable, then for any given $x_1(s)$, $x_2(t)$ and $x(s_1,t_1)\in\mathbb{R}^n$ or, equivalently, for any given $\tilde x\in\mathbb{R}^n$ with

$$\tilde x = x(s_1,t_1) - R(s_0,t_0,s_1,t_1)x_1(s_0) - \int_{s_0}^{s_1} R(\sigma,t_0,s_1,t_1)\big(x_1'(\sigma) - A_2(\sigma,t_0)x_1(\sigma)\big)\,d\sigma - \int_{t_0}^{t_1} R(s_0,\tau,s_1,t_1)\big(x_2'(\tau) - A_1(s_0,\tau)x_2(\tau)\big)\,d\tau,$$

there exists an admissible control $u$ steering the system to $x(s_1,t_1)$, which is then equivalent to

$$\tilde x = \int_{s_0}^{s_1}\!\!\int_{t_0}^{t_1} R(\sigma,\tau,s_1,t_1)B(\sigma,\tau)u(\sigma,\tau)\,d\sigma\,d\tau.$$

Since $W$ is a symmetric and positive semi-definite matrix, all we have to prove is that $W$ is regular, i.e., that the kernel of $W$ contains only the zero vector. If $\tilde x\ne 0$ were in the kernel of $W$ (i.e., $W\tilde x = 0$), then

$$\int_{s_0}^{s_1}\!\!\int_{t_0}^{t_1} \tilde x^T R(\sigma,\tau,s_1,t_1)B(\sigma,\tau)B^T(\sigma,\tau)R^T(\sigma,\tau,s_1,t_1)\tilde x\,d\sigma\,d\tau = \int_{s_0}^{s_1}\!\!\int_{t_0}^{t_1} |B^TR^T\tilde x|^2\,d\sigma\,d\tau = 0,$$

and therefore $B^TR^T\tilde x = 0$ a.e. From controllability and our consideration above we obtained a representation for $\tilde x$ using an appropriate control function $u$. Together with this definition of $\tilde x$, we get

$$|\tilde x|^2 = \tilde x^T\tilde x = \int_{s_0}^{s_1}\!\!\int_{t_0}^{t_1} \tilde x^T RBu\,d\sigma\,d\tau = 0,$$

a contradiction. This means $\operatorname{Im}W = \mathbb{R}^n$ and, since in general $W\ge 0$, we have $W > 0$.
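The Gramian criterion (23) can be tested numerically once the Riemann kernel is available. The sketch below uses an illustrative constant-coefficient example (my choice, not from the paper): $A_1 = A_2 = 0$ and a nilpotent $A_0 = \begin{pmatrix}0&1\\0&0\end{pmatrix}$ with $B = (0,1)^T$, for which the kernel series $R(\sigma,\tau,s_1,t_1) = \sum_j A_0^j(\sigma-s_1)^j(\tau-t_1)^j/(j!)^2$ (cf. Theorem 5 below) terminates after two terms, so $RB = \big((\sigma-s_1)(\tau-t_1),\,1\big)^T$. It then checks $W(0,0,1,1) > 0$ via the Sylvester criterion:

```python
# Midpoint-rule approximation of the 2x2 Gramian W(0,0,s1,t1) in (23)
# for the illustrative data A0 = [[0,1],[0,0]], B = (0,1)^T, A1 = A2 = 0.
# Since A0^2 = 0, the kernel series has only two terms and
# R(sigma,tau,s1,t1) B = ((sigma-s1)*(tau-t1), 1)^T.

def gramian(s1, t1, grid=400):
    hs, ht = s1 / grid, t1 / grid
    w00 = w01 = w11 = 0.0
    for i in range(grid):
        for j in range(grid):
            sig, tau = (i + 0.5) * hs, (j + 0.5) * ht
            w = (sig - s1) * (tau - t1)        # first component of R*B
            w00 += w * w * hs * ht
            w01 += w * hs * ht
            w11 += hs * ht
    return [[w00, w01], [w01, w11]]

M = gramian(1.0, 1.0)
# Sylvester criterion for positive definiteness of the 2x2 matrix W:
print(M[0][0] > 0 and M[0][0] * M[1][1] - M[0][1] ** 2 > 0)
```

The exact entries here are $W_{11} = 1/9$, $W_{12} = 1/4$, $W_{22} = 1$, so $\det W = 7/144 > 0$, matching the Kalman rank test of the next theorem for this pair $(A_0, B)$.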

As a first example, we study the controllability of the system (1) in the case of constant coefficients.

Theorem 5. (Kalman controllability) The system

$$\frac{\partial^2}{\partial s\,\partial t}x = A_0x + Bu, \qquad x(s,0) = x_1(s), \quad x(0,t) = x_2(t), \quad x_1(0) = x_2(0),$$

where $A_0\in\mathbb{R}^{n\times n}$, $B\in\mathbb{R}^{n\times m}$ and $x_1$, $x_2$ are piecewise continuously differentiable for $s,t > 0$, is completely controllable in $[0,\infty)\times[0,\infty)$, i.e., completely controllable for all $s_1, t_1 > 0$, if and only if

$$\operatorname{rank}(B, A_0B, A_0^2B, \dots, A_0^{n-1}B) = n. \qquad (24)$$


Proof. First, from (7) we determine the Riemann kernel for the equation in Theorem 5, i.e., for a constant coefficient $A_0$. Iterating equation (7) with $R^{(0)} = I_n$ and

$$R^{(m+1)}(s,t,\sigma,\tau) := I_n + \int_\sigma^s\!\!\int_\tau^t A_0R^{(m)}(\xi,\eta,\sigma,\tau)\,d\xi\,d\eta, \qquad m = 0,1,\dots,$$

yields

$$R^{(m)} = \sum_{j=0}^{m}\frac{1}{(j!)^2}A_0^j(s-\sigma)^j(t-\tau)^j.$$

After showing the convergence, we obtain the Riemann kernel of (7)

$$R(s,t,\sigma,\tau) = \sum_{j=0}^{\infty}\frac{1}{(j!)^2}A_0^j(s-\sigma)^j(t-\tau)^j.$$

We now infer controllability from Theorem 4. The system is not controllable if and only if there exist $(s_1,t_1)\in[0,\infty)\times[0,\infty)$ and $x\in\mathbb{R}^n\setminus\{0\}$ such that $x^TW(0,0,s_1,t_1)x = 0$. Since $x^TRBB^TR^Tx\ge 0$, this is equivalent to $B^TR^Tx = 0$, i.e.,

$$\sum_{j=0}^{\infty}\frac{1}{(j!)^2}(\sigma-s_1)^j(\tau-t_1)^jB^T(A_0^T)^jx = 0.$$

This implies

$$B^T(A_0^T)^jx = 0 \quad\text{for all } j = 0,1,2,\dots.$$

Together with the Cayley-Hamilton theorem, this is equivalent to

$$\operatorname{rank}\begin{pmatrix} B^T \\ B^TA_0^T \\ \vdots \\ B^T(A_0^T)^{n-1} \end{pmatrix} < n.$$

Hence we obtain a contradiction. Since the maximal rank of the above matrix equals $n$, the theorem is proved.
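The rank test (24) is straightforward to implement. The sketch below (with illustrative matrices of my own choosing, not taken from the paper) builds the controllability matrix $(B, A_0B, \dots, A_0^{n-1}B)$ and computes its rank by Gaussian elimination:

```python
# Kalman rank test (24): rank(B, A0 B, ..., A0^{n-1} B) == n.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def rank(M, eps=1e-10):
    """Rank via Gaussian elimination with partial pivot search."""
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if abs(M[i][c]) > eps), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and abs(M[i][c]) > eps:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def kalman_controllable(A0, B):
    n = len(A0)
    K = [row[:] for row in B]          # start with the columns of B
    AjB = B
    for _ in range(1, n):
        AjB = mat_mul(A0, AjB)         # next power A0^j B
        for i in range(n):
            K[i] = K[i] + AjB[i]       # append its columns
    return rank(K) == n

A0 = [[0.0, 1.0], [0.0, 0.0]]
print(kalman_controllable(A0, [[0.0], [1.0]]))  # True: (B, A0 B) has rank 2
print(kalman_controllable(A0, [[1.0], [0.0]]))  # False: A0 B = 0 here
```

By Cayley-Hamilton, powers beyond $A_0^{n-1}$ add nothing, which is why the loop stops at $n-1$.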

Corollary 1. Let $A_0\in\mathbb{R}^{n\times n}$, $B\in\mathbb{R}^{n\times m}$, $A_1 = \alpha_1I_n$, $A_2 = \alpha_2I_n$, $\alpha_1,\alpha_2\in\mathbb{R}$, $A := \alpha_1\alpha_2I_n + A_0$. The system

$$\frac{\partial^2}{\partial s\,\partial t}x = A_0x + A_1\frac{\partial}{\partial s}x + A_2\frac{\partial}{\partial t}x + Bu, \qquad x(s,0) = x_1(s), \quad x(0,t) = x_2(t),$$

where $x_1$, $x_2$ are piecewise continuously differentiable for $s,t > 0$, is completely controllable in $[0,\infty)\times[0,\infty)$ if and only if

$$\operatorname{rank}(B, AB, A^2B, \dots, A^{n-1}B) = n. \qquad (25)$$

Proof. From Remark 2 we infer that there exists a transformation matrix $V(s,t,0,0) = \exp(\alpha_1t + \alpha_2s)I_n$ transforming (1) into (20), which is in this case

$$\frac{\partial^2}{\partial s\,\partial t}y - (\alpha_1\alpha_2I_n + A_0)y = \frac{\partial^2}{\partial s\,\partial t}y - Ay = V^{-1}(s,t)Bu(s,t). \qquad (26)$$

This equation is completely controllable if and only if the original system (1) is completely controllable.

The matrix Riemann function of (26) is again

$$R(s,t,\sigma,\tau) = \sum_{j=0}^{\infty}\frac{1}{(j!)^2}A^j(s-\sigma)^j(t-\tau)^j. \qquad (27)$$

For $s_1,t_1 > 0$ we infer from Theorem 4 the complete controllability of (26) if and only if

$$\begin{aligned}
W(0,0,s_1,t_1) &= \int_0^{s_1}\!\!\int_0^{t_1} R(\sigma,\tau,s_1,t_1)V^{-1}(\sigma,\tau)BB^T(V^T)^{-1}(\sigma,\tau)R^T(\sigma,\tau)\,d\sigma\,d\tau \\
&= \int_0^{s_1}\!\!\int_0^{t_1} e^{-2(\alpha_1\tau+\alpha_2\sigma)}R(\sigma,\tau,s_1,t_1)BB^TR^T(\sigma,\tau,s_1,t_1)\,d\sigma\,d\tau > 0.
\end{aligned}$$

As before, non-controllability is equivalent to the existence of $x\in\mathbb{R}^n\setminus\{0\}$ such that

$$e^{-(\alpha_1\tau+\alpha_2\sigma)}B^TR^Tx = 0,$$

which, together with (27), finally yields the desired result.

More general controllability criteria in the case of constant coefficients are derived in (Gyurkovics and Jank, 2001; Kaczorek, 1996).

Next we derive general controllability criteria in the case of non-constant coefficients.

Theorem 6. Let $R$ be the matrix Riemann function of (8) and, furthermore, let $J$ be an interval such that $R(s_0,t_0,s_1,t_1)\ne 0$. If there exists $y_0\in\mathbb{R}^{1\times n}\setminus\{0\}$ such that with $y(s,t) = y(s,t,s_1,t_1) := y_0R(s,t,s_1,t_1)$,

$$y_0R(s_0,t_0,s_1,t_1) = y(s_0,t_0,s_1,t_1) \ne 0,$$

and

$$y(s,t)B(s,t) = 0 \quad\text{for all } (s,t)\in J, \qquad (28)$$

then the system (1) is not completely controllable in $J$.


Proof. Notice that with a solution $x\in S^{n\times 1}$ of (1), by premultiplying the identity (15) from the left by $y(s,t)$ and observing that $y(s_1,t_1) = y_0$, we see that

$$\begin{aligned}
y_0x(s_1,t_1) ={}& y(s_0,t_0,s_1,t_1)x(s_0,t_0) + \int_{t_0}^{t_1} y(s_0,\tau,s_1,t_1)\Big[\frac{\partial}{\partial \tau}x(s_0,\tau) - A_1(s_0,\tau)x(s_0,\tau)\Big]\,d\tau \\
& + \int_{s_0}^{s_1} y(\sigma,t_0,s_1,t_1)\Big[\frac{\partial}{\partial \sigma}x(\sigma,t_0) - A_2(\sigma,t_0)x(\sigma,t_0)\Big]\,d\sigma + \int_{s_0}^{s_1}\!\!\int_{t_0}^{t_1} y(\sigma,\tau,s_1,t_1)B(\sigma,\tau)u(\sigma,\tau)\,d\sigma\,d\tau. \qquad (29)
\end{aligned}$$

If we now assume that the solution $x$ of (1) fulfils the following boundary conditions:

$$x(s_0,\tau) = x_2(\tau) := e^{\int_{t_0}^{\tau}A_1(s_0,\eta)\,d\eta}x_0 \quad\text{and}\quad x(\sigma,t_0) = x_1(\sigma) := e^{\int_{s_0}^{\sigma}A_2(\xi,t_0)\,d\xi}x_0,$$

where $x_1(s_0) = x_2(t_0) = x_0$, then together with $yB = 0$ we obtain

$$y_0x(s_1,t_1) = y(s_0,t_0,s_1,t_1)x_0 = y_0R(s_0,t_0,s_1,t_1)x_0$$

for arbitrary $x_0\in\mathbb{R}^n$. Choosing $x_0\in\mathbb{R}^n$ such that $y(s_0,t_0,s_1,t_1)x_0\ne 0$ yields a contradiction if we intend to steer the system to $x(s_1,t_1) = 0$. Hence the system is not completely controllable.

We say that y ∈ S^{1×n}(I) is a solution of the adjoint differential equation to (8) if it fulfils

∂²y/(∂s∂t) + ∂/∂s (y(s, t) A_1(s, t)) + ∂/∂t (y(s, t) A_2(s, t)) − y(s, t) A_0(s, t) = 0    (30)

a.e. Hence if, e.g., y_0 ∈ R^{1×n}, then y(s, t) = y(s, t, σ, τ) := y_0 R(s, t, σ, τ) is a solution of the adjoint equation (30). These solutions of the adjoint equation can now be used to obtain sufficient conditions for complete controllability in the case of non-constant coefficients.
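That y = y_0 R solves (30) can be spot-checked numerically. In an illustrative constant-coefficient case with A_1 = A_2 = 0 (an assumption, with the telegraph-type series kernel R = Σ A_0^k w^k/(k!)², w = (σ − s)(τ − t), used by analogy with the scalar case), (30) reduces to ∂²y/(∂s∂t) = y A_0, and a central mixed difference reproduces it on a grid:

```python
import math
import numpy as np

# Assumed toy setting: A1 = A2 = 0, constant nilpotent A0; then the
# adjoint equation (30) reduces to d2y/dsdt = y A0.
A0 = np.array([[0.0, 1.0], [0.0, 0.0]])
y0 = np.array([1.0, 2.0])
sig, tau = 0.3, 0.7                 # fixed second argument pair of R

def y(s, t):
    """y(s,t) = y0 R(s,t,sig,tau) with the assumed series kernel."""
    w, R, Ak = (sig - s) * (tau - t), np.zeros_like(A0), np.eye(2)
    for k in range(10):
        R += Ak * w**k / math.factorial(k)**2
        Ak = Ak @ A0
    return y0 @ R

# Central mixed difference approximating d2y/dsdt at an interior point.
h = 1e-3
s, t = 0.1, 0.2
mixed = (y(s+h, t+h) - y(s+h, t-h) - y(s-h, t+h) + y(s-h, t-h)) / (4*h*h)
err = np.linalg.norm(mixed - y(s, t) @ A0)
print(err)   # ~0: y = y0 R satisfies the reduced adjoint equation
```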

Theorem 7. Let (1) be defined in the interval I = [S_0, S_1] × [T_0, T_1] and let R denote the matrix Riemann function of (1) or (8), respectively. If for all nontrivial solutions y of the adjoint equation (30) of the form y(s, t) = y_0 R(s, t, s_0, t_0), y_0 ∈ R^{1×n} \ {0}, we have

yB ≢ 0 on I ∩ [s_0, ∞) × [t_0, ∞),    (31)

for s_0 > S_0, t_0 > T_0, then there exist s_1 > s_0, t_1 > t_0 such that the system (1) is completely controllable in J = [s_0, s_1] × [t_0, t_1].

Proof. First we prove that for all s_0 > S_0, t_0 > T_0 there exist s_1 > s_0, t_1 > t_0 such that yB ≢ 0 on J for all nontrivial solutions y of the adjoint equation (30) that can be represented in the form y(s, t) = y_0 R(s, t, s_0, t_0), where y_0 ∈ R^{1×n}.

Assume that this is wrong. Then there exist sequences s_ν → S_1, t_ν → T_1 as ν → ∞ and nontrivial solutions y_ν(s, t) = y_{0,ν} R(s, t, s_0, t_0) of the adjoint equation with y_ν B ≡ 0 on [s_0, s_ν] × [t_0, t_ν]. Without loss of generality we assume |y_ν(s_0, t_0)| = 1 and also (by taking a subsequence if necessary)

lim_{ν→∞} y_ν(s_0, t_0) = x_0 ∈ R^n.

Since y_ν(s, t) = y_{0,ν} R(s, t, s_0, t_0), we conclude that

lim_{ν→∞} y_ν(s, t) = (lim_{ν→∞} y_{0,ν}) R(s, t, s_0, t_0) = x_0 R(s, t, s_0, t_0) =: y_0(s, t).

Hence y_0(s, t) is a nontrivial solution of the adjoint equation, since x_0 ≠ 0. On the other hand, we have

y_0(s, t) B(s, t) = lim_{ν→∞} y_ν(s, t) B(s, t) = 0

for s > s_0, t > t_0. This contradicts our assumption.

In the next step we prove

W(s_0, t_0, s_1, t_1) > 0,

which, by Theorem 4, yields complete controllability. In general, W(s_0, t_0, s_1, t_1) ≥ 0 and we have to show that W is regular. Assume there is some a ∈ R^{1×n} \ {0} such that aW = 0. Hence

a W a^T = ∫_{s_0}^{s_1} ∫_{t_0}^{t_1} a R(σ, τ, s_1, t_1) B(σ, τ) B^T(σ, τ) R^T(σ, τ, s_1, t_1) a^T dσ dτ

= ∫_{s_0}^{s_1} ∫_{t_0}^{t_1} y(σ, τ) B(σ, τ) B^T(σ, τ) y^T(σ, τ) dσ dτ = 0.

Therefore

a R(σ, τ, s_1, t_1) B(σ, τ) = 0 on [s_0, s_1] × [t_0, t_1].

This is again a contradiction. Thus W > 0, and by Theorem 4 the system is completely controllable on [s_0, s_1] × [t_0, t_1].
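The Gramian argument can be reproduced numerically. The sketch below is an illustration under assumed toy data (A_1 = A_2 = 0, constant nilpotent A_0, and the telegraph-type series kernel R = Σ A_0^k w^k/(k!)² adopted by analogy with the scalar case): it approximates W(s_0, t_0, s_1, t_1) = ∫∫ R B B^T R^T dσ dτ by a midpoint Riemann sum with B = I and confirms that W is symmetric positive definite:

```python
import math
import numpy as np

# Assumed toy setting (not from the paper): with B = I the controllability
# Gramian W should come out symmetric positive definite.
A0 = np.array([[0.0, 1.0], [0.0, 0.0]])
B  = np.eye(2)
s0, t0, s1, t1 = 0.0, 0.0, 1.0, 1.0

def riemann(s, t):
    """Assumed series kernel R(s,t,s1,t1) = sum_k A0^k w^k / (k!)^2."""
    w, R, Ak = (s1 - s) * (t1 - t), np.zeros_like(A0), np.eye(2)
    for k in range(10):
        R += Ak * w**k / math.factorial(k)**2
        Ak = Ak @ A0
    return R

# Midpoint rule on an N x N grid over [s0,s1] x [t0,t1].
N = 60
sig = np.linspace(s0, s1, N, endpoint=False) + (s1 - s0) / (2 * N)
tau = np.linspace(t0, t1, N, endpoint=False) + (t1 - t0) / (2 * N)
dA = ((s1 - s0) / N) * ((t1 - t0) / N)

W = sum(riemann(s, t) @ B @ B.T @ riemann(s, t).T * dA
        for s in sig for t in tau)
print(np.allclose(W, W.T), np.linalg.eigvalsh(W).min() > 0)  # True True
```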

Since Theorem 7 makes use only of particular solutions of the adjoint differential equation (30), there is a stronger sufficient controllability condition applying condition (31) to all nontrivial solutions of the adjoint differential equation. Hence we obtain a controllability condition closer to the one-dimensional case.

Corollary 2. Under the assumptions of Theorem 7, if condition (31) holds for all nontrivial solutions of the adjoint differential equation (30), then there exist s_1 > s_0 and t_1 > t_0 such that the system (1) is completely controllable in J = [s_0, s_1] × [t_0, t_1].

Next we discuss the observability of the system (1), (2) together with a linear output. We shall introduce a notion of observability analogous to that given in (Sontag, 1998, p. 263).

Definition 2. Let C be piecewise continuous on the interval I, C(s, t) ∈ R^{k×n}. Then we define the linear output of the system (1), (2) by

y(s, t) = C(s, t)x(s, t), y(s, t) ∈ R^k,    (32)

where x(s, t) is a solution of (1), (2). Suppose that for all (s_1, t_1) ∈ I, for all controls u ∈ S^{m×1}(I ∩ (−∞, s_1] × (−∞, t_1]), and for any two trajectories x, x̃ of (1) belonging to the same input u, from

C(s, t)x(s, t) = C(s, t)x̃(s, t), (s, t) ∈ I ∩ (−∞, s_1] × (−∞, t_1],

it follows necessarily that

x(s, t) = x̃(s, t) in I ∩ (−∞, s_1] × (−∞, t_1].

Then the system (1), (2) with the output (32) is said to be observable in I.

Remark 3. Write x̂ = x − x̃. Then observability is equivalent to the condition that

C(s, t)x̂(s, t) = 0, (s, t) ∈ I ∩ (−∞, s_1] × (−∞, t_1]

implies

x̂(s, t) = 0 in I ∩ (−∞, s_1] × (−∞, t_1],

where x̂ is any solution of the homogeneous system (1), i.e. with u = 0. Hence observability is equivalent to the condition that for all (s_1, t_1) ∈ I and for all nontrivial solutions x̂ of (8) there holds

x̂^T C^T ≢ 0 in I ∩ (−∞, s_1] × (−∞, t_1].

Comparing this last remark with the controllability criterion obtained in Theorem 7 and Corollary 2 yields a necessary criterion for observability.

Theorem 8. If the system (1), (2) with the output (32) is observable in I, then the system of type (30)

∂²x/(∂s∂t) − A_1^T(−s, −t) ∂x/∂s − A_2^T(−s, −t) ∂x/∂t − (A_0^T(−s, −t) + ∂/∂s A_1^T(−s, −t) + ∂/∂t A_2^T(−s, −t)) x = C^T(−s, −t) v    (33)

is completely controllable in −I.

Proof. From Remark 3 we infer that for any nontrivial solution x̂ of (8) there holds x̂^T C^T ≢ 0 in I ∩ (−∞, s_1] × (−∞, t_1]. Then y(s, t) := x̂^T(−s, −t) is a nontrivial solution of

∂²y/(∂s∂t) + ∂/∂s (y A_1^T(−s, −t)) + ∂/∂t (y A_2^T(−s, −t)) − y (A_0^T(−s, −t) + ∂/∂s A_1^T(−s, −t) + ∂/∂t A_2^T(−s, −t)) = 0.

Since this is the adjoint homogeneous differential equation of (33), using Corollary 1 we infer the controllability of (33).

4. Initial-Boundary-Value Control

In this section we briefly indicate that for equations of type (1) there is also a possibility of steering the system by its initial-boundary values.

First we define the following operators, mapping the set of piecewise continuous functions into itself:

L_11 u := ∫_{s_0}^{s_1} R(σ, t_0, s_1, t) u(σ) dσ,

L_12 u := ∫_{t_0}^{t} R(s_0, τ, s_1, t) u(τ) dτ,

L_21 u := ∫_{s_0}^{s} R(σ, t_0, s, t_1) u(σ) dσ,

L_22 u := ∫_{t_0}^{t_1} R(s_0, τ, s, t_1) u(τ) dτ.    (34)

Theorem 9. Consider the homogeneous system

∂²x/(∂s∂t) = A_0(s, t)x + A_1(s, t) ∂x/∂s + A_2(s, t) ∂x/∂t    (35)

and let, furthermore, two piecewise continuous functions ϕ_1(t), ϕ_2(s), t_0 ≤ t ≤ t_1, s_0 ≤ s ≤ s_1, and a vector x_0 ∈ R^n be given. If the operator

( L_11  L_12 )
( L_21  L_22 ),    (36)

with L_11, L_12, L_21, L_22 as defined in (34), is invertible, then there exist initial-boundary values

x_1(σ) = x(σ, t_0), x_2(τ) = x(s_0, τ)    (37)

with x_1(s_0) = x_2(t_0) = x_0 such that the associated solution of (35) given by (16) fulfils

x(s_1, t) = ϕ_1(t), x(s, t_1) = ϕ_2(s).    (38)

Proof. From the representation formula (16), with u = 0, we see that

x(s, t) = R(s_0, t_0, s, t) x_1(s_0)

+ ∫_{s_0}^{s} R(σ, t_0, s, t) (x_1'(σ) − A_2(σ, t_0) x_1(σ)) dσ

+ ∫_{t_0}^{t} R(s_0, τ, s, t) (x_2'(τ) − A_1(s_0, τ) x_2(τ)) dτ.    (39)

Now let x_1 and x_2 be determined as the solutions of the following differential equations:

x_1'(σ) − A_2(σ, t_0) x_1(σ) = u_1(σ), x_1(s_0) = x_0,

x_2'(τ) − A_1(s_0, τ) x_2(τ) = u_2(τ), x_2(t_0) = x_0,    (40)

with u_1, u_2 to be determined next.
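Once u_1 and u_2 are known, (40) is a pair of ordinary linear initial-value problems along the boundary. A minimal numerical sketch (A_2 and u_1 below are hypothetical stand-ins for illustration, not data from the paper) integrates one of them with a fixed-step RK4 scheme:

```python
import numpy as np

# Sketch: integrate x1'(s) = A2(s, t0) x1(s) + u1(s), x1(s0) = x0, by RK4.
def solve_boundary_ode(A2, u1, x0, s0, s1, steps=2000):
    h = (s1 - s0) / steps
    x, s = np.asarray(x0, dtype=float), s0
    f = lambda s, x: A2(s) @ x + u1(s)
    for _ in range(steps):
        k1 = f(s, x)
        k2 = f(s + h / 2, x + h / 2 * k1)
        k3 = f(s + h / 2, x + h / 2 * k2)
        k4 = f(s + h, x + h * k3)
        x = x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        s += h
    return x

# Sanity check against the closed form for constant A2 = a*I, u1 = 0,
# where x1(s1) = exp(a (s1 - s0)) x0.
a, x0 = 0.5, np.array([1.0, -1.0])
xs1 = solve_boundary_ode(lambda s: a * np.eye(2), lambda s: np.zeros(2),
                         x0, 0.0, 1.0)
print(np.linalg.norm(xs1 - np.exp(a) * x0))   # ~0
```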

From (40), (39), (34) and using x(s_1, t) = ϕ_1(t), x(s, t_1) = ϕ_2(s), x(s_0, t_0) = x_0, we obtain the integral equation

( ϕ_1(t) )   ( R(s_0, t_0, s_1, t) x_0 )   ( L_11  L_12 ) ( u_1 )
( ϕ_2(s) ) − ( R(s_0, t_0, s, t_1) x_0 ) = ( L_21  L_22 ) ( u_2 ).    (41)

If (41) is solvable, then it determines u_1, u_2 in terms of the initial point x_0 = x(s_0, t_0) and the prescribed terminal data x(s_1, t), x(s, t_1). So (40) determines the Goursat data needed to steer the system to the prescribed terminal data.

5. Optimal Control

In (Kaczorek, 1995; 1996), among other results, a solution to the minimum energy control problem for (1) was obtained. We present a solution to the optimal control problem for (1) where we relax the condition to meet exactly a predefined endpoint x(s_1, t_1) and, furthermore, impose quadratic costs also on the state. Therefore, in this section we study an optimal control problem associated with (1) and with a quadratic performance criterion. Here we prefer a Hilbert space approach where, beside the spaces already used, we introduce the Hilbert space H_k(J), k ∈ N, of all R^k-valued square integrable functions in J, with the scalar product

⟨x, y⟩ = x^T(s_1, t_1) y(s_1, t_1) + ∫_{s_0}^{s_1} ∫_{t_0}^{t_1} x^T(σ, τ) y(σ, τ) dσ dτ for all x, y ∈ H_k(J).
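A grid-based sketch of this scalar product may be helpful (an illustration; the sampling and trapezoidal quadrature are implementation choices, not part of the paper):

```python
import numpy as np

# <x, y> = x^T(s1,t1) y(s1,t1) + double integral of x^T y over J,
# approximated with trapezoidal weights on a uniform (Ns, Nt) grid.
# X, Y have shape (Ns, Nt, k): sampled R^k-valued functions on J.
def scalar_product(X, Y, ds, dt):
    endpoint = float(X[-1, -1] @ Y[-1, -1])      # corner term at (s1, t1)
    F = np.einsum('ijk,ijk->ij', X, Y)           # pointwise x^T y
    ws = np.ones(F.shape[0]); ws[[0, -1]] = 0.5  # trapezoid weights in s
    wt = np.ones(F.shape[1]); wt[[0, -1]] = 0.5  # trapezoid weights in t
    return endpoint + ds * dt * float(ws @ F @ wt)

# Constant functions x = y = e1 on J = [0,1]^2: <x, y> = 1 + 1 = 2.
Ns = Nt = 11
X = np.zeros((Ns, Nt, 2)); X[..., 0] = 1.0
val = scalar_product(X, X, 1 / (Ns - 1), 1 / (Nt - 1))
print(val)   # ≈ 2.0
```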

Definition 3. (i) Let R be the matrix Riemann function of (1). Then

R̃ : R^n → H_n(J), x_0 ↦ R(s_0, t_0, ·, ·) x_0.

(ii) Let R be the matrix Riemann function of (1) and let B, as in (1), be piecewise continuous on J. Then

L : H_m(J) → H_n(J), u ↦ L(u) = ∫_{s_0}^{(·)} ∫_{t_0}^{(·)} R(σ, τ, ·, ·) B(σ, τ) u(σ, τ) dσ dτ.

(iii) Let Q(s, t) ∈ R^{n×n} be piecewise continuous in J and K_1 ∈ R^{n×n}. Then

Q̃ : H_n(J) → H_n(J), x(·, ·) ↦ ( (s, t) ↦ { Q(s, t)x(s, t) if (s, t) ≠ (s_1, t_1); K_1 x(s, t) if (s, t) = (s_1, t_1) } ).

(iv) Let T(s, t) ∈ R^{m×m} be piecewise continuous in J. Then

T̃ : H_m(J) → H_m(J), x(·, ·) ↦ ( (s, t) ↦ { T(s, t)x(s, t) if (s, t) ≠ (s_1, t_1); 0 if (s, t) = (s_1, t_1) } ).

Moreover, for the matrix Riemann function R and any piecewise continuously differentiable function x_1(σ) ∈ R^n in [s_0, s_1] we set

Θ_1(·, ·) := ∫_{s_0}^{(·)} R(σ, t_0, ·, ·) (x_1'(σ) − A_2(σ, t_0) x_1(σ)) dσ ∈ H_n(J).    (42)
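The input-to-state operator L of Definition 3(ii) is a running double integral, which is easy to sketch on a grid. In the simplifying (assumed) case A_0 = A_1 = A_2 = 0, the Riemann kernel reduces to the identity, and cumulative sums give L(u) at all grid points at once:

```python
import numpy as np

# Sketch of L(u)(s,t) = double integral of B u over [s0,s] x [t0,t],
# assuming A0 = A1 = A2 = 0 so that R is the identity; the left-endpoint
# quadrature below is an implementation choice for illustration only.
def L_operator(BU, ds, dt):
    # BU: samples of B(sigma,tau) u(sigma,tau), shape (Ns, Nt) or (Ns, Nt, n)
    return np.cumsum(np.cumsum(BU, axis=0), axis=1) * ds * dt

# For B u = 1 on [0,1]^2, L(u)(s,t) = s*t; check the far corner value.
Ns = Nt = 100
val = L_operator(np.ones((Ns, Nt)), 1 / Ns, 1 / Nt)[-1, -1]
print(val)   # ≈ 1.0
```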
