Int. J. Appl. Math. Comput. Sci., 2006, Vol. 16, No. 4, 431–440

A QUADRATIC OPTIMAL CONTROL PROBLEM FOR A CLASS OF LINEAR DISCRETE DISTRIBUTED SYSTEMS

MOSTAFA RACHIK, MUSTAPHA LHOUS, OUAFA EL KAHLAOUI, ELHOUSSINE LABRIJI, HASNA JOURHMANE

Department of Mathematics and Computer Sciences, Faculty of Sciences Ben M'sik, Commandant Idriss El Harti, 20450, Casablanca, Morocco
e-mail: rachik@math.net

A linear quadratic optimal control problem for a class of discrete distributed systems is analyzed. To solve this problem, we introduce an adequate topology and establish that the optimal control can be determined through the inversion of an appropriate isomorphism. An example and a numerical approach are given.

Keywords: discrete distributed system, Hilbert uniqueness method, linear system, optimal control.

1. Introduction

Most of the systems encountered in practice are continuous in time (Athans and Falb, 1966; Curtain and Pritchard, 1978; Curtain and Zwart, 1995; Kalman, 1960; Lasiecka and Triggiani, 2000). However, the analysis and control of a continuous system with a computer requires sampling, and thus a discretization of the system considered. The importance of discrete systems lies in the fact that they are present in a large number of fields, such as engineering, economics, biomathematics, etc. Recourse to discrete models is often preferred by engineers since, on the one hand, some mathematical complexities such as the choice of a function space and the regularity of the solution are avoided and, on the other hand, they are better adapted to computer processing.

Let us start with a continuous distributed system:

    x(t) = S(t)x_0 + ∫_0^t S(t−r)Bu(r) dr,   t ∈ [0, T],   (1)

where x_0, x(t) ∈ X, (S(t))_{t≥0} is a strongly continuous semigroup on X, B ∈ L(U, X), and X, U are Hilbert spaces. (X and U can be finite dimensional, and then the system is lumped.)
One of the discretization procedures used most often (Lee et al., 1972; Ogata, 1995; Rabah and Malabre, 1999) consists in partitioning the time horizon [0, T] using the instants t_0 = 0, t_1 = δ, t_2 = 2δ, ..., t_N = Nδ, where δ = T/N and N ∈ N*, δ being the sampling period. Then we assume that the control u is constant over each interval [t_i, t_{i+1}[, i.e.,

    u(t) = u_i,   ∀t ∈ [t_i, t_{i+1}[.   (2)

Setting x(t_i) = x_i, we get

    x_{i+1} = x(t_{i+1}) = S(t_{i+1})x_0 + ∫_0^{t_{i+1}} S(t_{i+1}−r)Bu(r) dr
            = S(δ)S(t_i)x_0 + ∫_0^{t_i} S(δ)S(t_i−r)Bu(r) dr + ∫_{t_i}^{t_{i+1}} S(t_{i+1}−r)Bu(r) dr
            = S(δ)[S(t_i)x_0 + ∫_0^{t_i} S(t_i−r)Bu(r) dr] + ∫_{t_i}^{t_{i+1}} S(t_{i+1}−r)Bu(r) dr,

and then

    x_{i+1} = S(δ)x_i + ∫_{t_i}^{t_{i+1}} S(t_{i+1}−r)Bu(r) dr.   (3)

Using the hypothesis (2), we deduce that

    x_{i+1} = S(δ)x_i + (∫_{t_i}^{t_{i+1}} S(t_{i+1}−r)B dr) u_i = S(δ)x_i + (∫_0^δ S(τ)B dτ) u_i,

where τ = t_{i+1} − r. Then

    x_{i+1} = Φx_i + B̄u_i,   (4)

with Φ = S(δ) and B̄ = ∫_0^δ S(τ)B dτ.
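For a finite-dimensional analogue of (1), the sampled pair (Φ, B̄) of (4) can be computed directly. The sketch below is illustrative and not taken from the paper: the matrices A and B and the sampling period δ are hypothetical choices, and A is taken nilpotent so that S(t) = e^{tA} = I + tA holds exactly.

```python
import numpy as np

# Hypothetical finite-dimensional system x'(t) = A x(t) + B u(t):
# its semigroup is S(t) = e^{tA}, so the sampled model (4) uses
# Phi = S(delta) and Bbar = int_0^delta S(tau) B dtau.
A = np.array([[0.0, 1.0], [0.0, 0.0]])   # nilpotent: A @ A = 0, so e^{tA} = I + tA
B = np.array([[0.0], [1.0]])
delta = 0.1                              # sampling period (illustrative)

def S(t):
    """Semigroup S(t) = e^{tA}; exact here because A is nilpotent."""
    return np.eye(2) + t * A

Phi = S(delta)

# Bbar by the trapezoidal rule applied to tau -> S(tau) @ B.
taus = np.linspace(0.0, delta, 1001)
vals = np.stack([S(t) @ B for t in taus])
h = taus[1] - taus[0]
Bbar = h * (vals[0] / 2 + vals[1:-1].sum(axis=0) + vals[-1] / 2)

# For this A the integral has the closed form delta*B + (delta^2/2)*A@B,
# and the trapezoidal rule is exact since the integrand is linear in tau.
Bbar_exact = delta * B + (delta ** 2 / 2) * (A @ B)
```

With (Φ, B̄) in hand, the recursion x_{i+1} = Φx_i + B̄u_i of (4) can be iterated directly.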

The discrete version (4) has been the subject of numerous works (Chraibi et al., 2000; Dorato, 1993; Faradzhev et al., 1986; Halkin, 1964; Klamka, 1995; 2002; Lee et al., 1972; Lun'kov, 1980; Weiss, 1972). Our contribution in this context consists in studying the quadratic control problem for a linear discrete system. It is true that we are not the first to have examined this problem. Lee et al. (1972) demonstrated that the optimal control and the optimal cost can be obtained using a discrete Riccati equation. Zabczyk (1974) proved that the optimum can be computed using Lagrange multipliers. In (Karrakchou and Rachik, 1995; Karrakchou et al., 1998), the Hilbert uniqueness method (HUM), set forth by Lions (1988a; 1988b), was used to prove that the optimum comes from solving an algebraic linear equation.

The originality of our work consists in adopting the discretization scheme described previously, but in the absence of the hypothesis (2), i.e., we assume that the control u(·) is not necessarily constant on the time interval [t_i, t_{i+1}[. In fact, when the difference between two consecutive sampling instants is quite large (for example, if the system (1) is a compartmental model (Daley and Gani, 2001; Jolivet, 1983) describing the evolution of a long or chronic illness, it is natural that the difference between two measurements, t_{i+1} − t_i, can be as long as several months), it does not make sense to suppose that u(t) is constant between the instants t_i and t_{i+1}. In order to overcome this obstacle, we reconsider the discretization of the system (1), but without the hypothesis (2), which yields the difference equation

    x_{i+1} = Φx_i + ∫_{t_i}^{t_{i+1}} B_i(θ)u(θ) dθ,   (5)

with Φ = S(δ) and B_i(θ) = S(t_{i+1} − θ)B. To be more precise and to place our problem in a more general mathematical framework, in this paper we consider the discrete-time system

    x_{i+1} = Φx_i + ∫_{t_i}^{t_{i+1}} B_i(θ)u(θ) dθ,   i ≥ 0,   x_0 ∈ X,   (6)
where x_i ∈ X is the state variable and u(θ) ∈ U. Using a technique similar to the HUM (Lions, 1988a; 1988b; El Jai and Bel Fekih, 1990; El Jai and Berrahmoune, 1991), we introduce a suitable topology to prove that the optimal control and the optimal cost stem from the inversion of a coercive isomorphism, and thus from an algebraic equation that is easy to solve by classical numerical methods.

To motivate the problem discussed in this paper, consider the temperature distribution in an industrial oven whose simplified mathematical model is

    ∂T/∂t(x,t) = α ∂²T/∂x²(x,t) + Σ_{i=1}^p g_i X_{ω_i} u_i(t),   ∀t ≥ 0,   (7)

where T(·,t) is the temperature profile at the time t. We suppose that the system is controlled by a variable control u(t) = (u_1(t), ..., u_p(t))ᵀ, where u_i(t) acts on the zone ω_i ⊂ ]0,1[ according to a spatial distribution g_i ∈ L²(ω_i). The associated initial condition is

    T(x, 0) = T_0(x),   ∀x ∈ [0,1],

and the boundary condition is homogeneous, i.e.,

    T(0,t) = T(1,t) = 0,   ∀t ≥ 0.

Equation (7) can be written as

    ∂T/∂t(x,t) = AT(x,t) + Bu(t),   ∀t ≥ 0,   (8)

where A is the operator ∂²/∂x² whose domain D(A) and spectrum σ(A) are respectively given by

    D(A) = {f ∈ L²(0,1) / f″ ∈ L²(0,1) and f(0) = f(1) = 0}

and

    σ(A) = {λ_n = −n²π² / n ∈ N*},

while the associated eigenfunctions are

    φ_n(x) = √2 sin(nπx),   n = 1, 2, ...,

(φ_n)_{n≥1} being an orthonormal basis of L²(0,1). The bounded operator B is such that

    B: R^p → L²(0,1),   (u_1, ..., u_p)ᵀ ↦ Σ_{i=1}^p g_i X_{ω_i} u_i.

It is known that the mild solution of Eqn. (8) is

    x(t) = S(t)x_0 + ∫_0^t S(t−r)Bu(r) dr,   t ∈ [0,T],

where x(t) ∈ X = L²(0,1), and (S(t))_{t≥0} is the strongly continuous semigroup generated by the operator A. Then the discretization of our system without the hypothesis (2) leads to the difference equation

    x_{i+1} = Φx_i + ∫_{t_i}^{t_{i+1}} B_i(θ)u(θ) dθ.   (9)
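The eigenfunction expansion above makes the heat semigroup easy to evaluate numerically: S(t)x = Σ_n e^{−n²π²t}⟨x, φ_n⟩φ_n. A minimal illustrative sketch; the grid size, truncation order and sample profile below are our own choices, not the paper's.

```python
import numpy as np

# Spatial grid on [0, 1] and truncation order for the eigenexpansion.
xs = np.linspace(0.0, 1.0, 201)
dx = xs[1] - xs[0]
N_modes = 50   # illustrative truncation

def phi(m):
    """Eigenfunction phi_m(x) = sqrt(2) sin(m pi x), sampled on xs."""
    return np.sqrt(2.0) * np.sin(m * np.pi * xs)

def semigroup(t, x0):
    """Apply S(t) via the truncated expansion
    S(t)x = sum_m exp(-m^2 pi^2 t) <x, phi_m> phi_m."""
    out = np.zeros_like(xs)
    for m in range(1, N_modes + 1):
        coeff = np.sum(x0 * phi(m)) * dx   # <x0, phi_m> by a Riemann sum
        out += np.exp(-m ** 2 * np.pi ** 2 * t) * coeff * phi(m)
    return out

x0 = phi(1) + 0.5 * phi(3)   # sample initial profile
xT = semigroup(0.1, x0)      # profile after t = 0.1
```

Each mode decays at its own rate e^{−m²π²t}, so high modes vanish almost immediately, which is exactly why the closed-form Galerkin entries in Example 1 below involve only exponentials of t_k.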

The corresponding output is supposed to be a sequence of measurements taken at the instants t_0 = 0, t_1 = δ, ..., t_N = Nδ = T, i.e.,

    y(t_i) = T(·, t_i).

The control strategy consists in determining a minimum-norm control allowing us to minimize the differences ‖y(t_N) − y_d‖ and (‖y(t_i) − r_i‖)_{1≤i≤N−1}, where y_d is a desired state and (r_i)_{1≤i≤N−1} is a given desired trajectory. Mathematically, solving this problem amounts to the minimization of the quadratic criterion

    J(u) = ⟨y_N − y_d, G(y_N − y_d)⟩ + Σ_{i=1}^{N−1} ⟨y_i − r_i, M(y_i − r_i)⟩ + ∫_0^T ⟨u(θ), Ru(θ)⟩ dθ.

M, R and G are selected to weigh the relative importance of the performance measures associated with the vectors (y_i)_i, the control variable u and the final output y_N, respectively.

2. Some Useful Properties

In this section, we shall develop an optimality system to characterize an optimal control u*. For this purpose, observe that the state (x_k)_{1≤k≤N} can be written as follows:

    x_k = Φ^k x_0 + Σ_{j=1}^k ∫_{t_{j−1}}^{t_j} Φ^{k−j} B_{j−1}(θ)u(θ) dθ,   k = 1, ..., N.   (10)

If we introduce the bounded operator H defined by

    H: L²(0,T;U) → l²(1, 2, ..., N; X),   u ↦ Hu = ((Hu)_i)_{1≤i≤N},

where

    (Hu)_k = Σ_{j=1}^k ∫_{t_{j−1}}^{t_j} Φ^{k−j} B_{j−1}(θ)u(θ) dθ,   ∀k = 1, 2, ..., N,

then from (10) we establish that

    x_k = Φ^k x_0 + (Hu)_k,   ∀k = 1, 2, ..., N.   (11)

The adjoint operator H* is such that

    ⟨Hu, (x_1, ..., x_N)⟩_{l²(1,...,N;X)} = Σ_{k=1}^N ⟨(Hu)_k, x_k⟩
        = Σ_{k=1}^N Σ_{i=1}^k ∫_{t_{i−1}}^{t_i} ⟨Φ^{k−i} B_{i−1}(θ)u(θ), x_k⟩ dθ
        = Σ_{i=1}^N ∫_{t_{i−1}}^{t_i} ⟨u(θ), Σ_{k=i}^N B*_{i−1}(θ)Φ*^{k−i} x_k⟩ dθ.

Setting

    f(θ) = Σ_{k=i}^N B*_{i−1}(θ)Φ*^{k−i} x_k,   ∀θ ∈ [t_{i−1}, t_i[,   i = 1, 2, ..., N,

we have

    ⟨Hu, (x_1, ..., x_N)⟩_{l²(1,...,N;X)} = ∫_0^T ⟨u(θ), f(θ)⟩ dθ.

Then H*(x_1, x_2, ..., x_N) ∈ L²(0,T;U) is given by

    H*(x_1, x_2, ..., x_N)(θ) = Σ_{k=i}^N B*_{i−1}(θ)Φ*^{k−i} x_k,   θ ∈ [t_{i−1}, t_i[ for i = 1, 2, ..., N.   (12)

Consider the operator L defined by
    L: L²(0,T;U) → X,   u ↦ Lu = (Hu)_N = Σ_{k=1}^N ∫_{t_{k−1}}^{t_k} Φ^{N−k} B_{k−1}(θ)u(θ) dθ,

so that x_N = Φ^N x_0 + Lu. The adjoint operator L*: X → L²(0,T;U) is such that

    ⟨Lu, x⟩ = Σ_{k=1}^N ⟨∫_{t_{k−1}}^{t_k} Φ^{N−k} B_{k−1}(θ)u(θ) dθ, x⟩
            = Σ_{k=1}^N ∫_{t_{k−1}}^{t_k} ⟨u(θ), B*_{k−1}(θ)Φ*^{N−k} x⟩ dθ
            = ∫_0^T ⟨u(θ), g(θ)⟩ dθ,

and therefore

    (L*x)(θ) = B*_{k−1}(θ)Φ*^{N−k} x,   ∀θ ∈ [t_{k−1}, t_k[,   k = 1, ..., N.

3. Optimality System

Knowing that the functional to minimize in L²(0,T;U) is

    J(u) = ⟨x_N − x_d, G(x_N − x_d)⟩ + Σ_{i=1}^{N−1} ⟨x_i − r_i, M(x_i − r_i)⟩ + ∫_0^T ⟨u(θ), Ru(θ)⟩ dθ,   (13)

we use the technical results established in the previous section to deduce that

    ⟨x_N − x_d, G(x_N − x_d)⟩ = ⟨Φ^N x_0 − x_d + Lu, G(Φ^N x_0 − x_d) + GLu⟩
        = ⟨Φ^N x_0 − x_d, G(Φ^N x_0 − x_d)⟩ + ⟨u, L*GLu⟩ + 2⟨Lu, G(Φ^N x_0 − x_d)⟩,

and, for i = 1, ..., N−1, we have

    ⟨x_i − r_i, M(x_i − r_i)⟩ = ⟨Φ^i x_0 − r_i + (Hu)_i, M(Φ^i x_0 − r_i) + M(Hu)_i⟩
        = ⟨Φ^i x_0 − r_i, M(Φ^i x_0 − r_i)⟩ + ⟨(Hu)_i, M(Hu)_i⟩ + 2⟨(Hu)_i, M(Φ^i x_0 − r_i)⟩.

We easily deduce that the functional J can be written as

    J(u) = const + J*(u),

where

    const = ⟨Φ^N x_0 − x_d, G(Φ^N x_0 − x_d)⟩ + Σ_{i=1}^{N−1} ⟨Φ^i x_0 − r_i, M(Φ^i x_0 − r_i)⟩

and

    J*(u) = 2[⟨Lu, G(Φ^N x_0 − x_d)⟩ + Σ_{i=1}^{N−1} ⟨(Hu)_i, M(Φ^i x_0 − r_i)⟩]
            + ⟨u, L*GLu⟩ + Σ_{i=1}^{N−1} ⟨(Hu)_i, M(Hu)_i⟩ + ⟨u, R̄u⟩.

Consider the sequence (a_k)_{1≤k≤N} and the operators D and R̄ described by

    a_k = M(Φ^k x_0 − r_k),   k = 1, 2, ..., N−1,
    a_N = G(Φ^N x_0 − x_d),

    D: F = l²(1, 2, ..., N; X) → F,   (x_1, x_2, ..., x_N) ↦ (Mx_1, Mx_2, ..., Mx_{N−1}, Gx_N),

and

    R̄: L²(0,T;U) → L²(0,T;U),   u ↦ R̄u,   where (R̄u)(θ) = Ru(θ).

It is easy to see that

    D* = D and D ≥ 0,   R̄* = R̄,   (R̄)^{−1} = R^{−1},   and ⟨R̄u, u⟩ ≥ α‖u‖²_{L²(0,T;U)}.   (14)

Moreover, since Lu = (Hu)_N, the cost functional J* can be written as

    J*(u) = 2⟨Hu, (a_1, a_2, ..., a_N)⟩ + ⟨Hu, DHu⟩ + ⟨u, R̄u⟩
          = 2⟨u, H*(a_1, a_2, ..., a_N)⟩ + ⟨u, (H*DH + R̄)u⟩
          = 2l(u) + B(u, u),

where l is the linear form

    l: L²(0,T;U) → R,   u ↦ ⟨u, H*(a_1, ..., a_N)⟩,

and B(·,·) is the symmetric bilinear form

    B: L²(0,T;U) × L²(0,T;U) → R,   (u, v) ↦ B(u, v) = ⟨u, (H*DH + R̄)v⟩.

We have B(u, u) = ⟨Hu, DHu⟩ + ⟨u, R̄u⟩, so B(u, u) ≥ ⟨u, R̄u⟩ because D ≥ 0.
From (14) we deduce that B(u, u) ≥ α‖u‖². Thus J* is the sum of a continuous linear form l and a bilinear, continuous, symmetric and coercive form B(·,·). From the Lax–Milgram theorem (Brezis, 1987; Ciarlet, 1988; Lions, 1968), it follows that the minimization of J* has a unique solution u* in L²(0,T;U). Furthermore, u* is characterized by

    B(u*, v) = −l(v),   ∀v ∈ L²(0,T;U),

i.e.,

    ⟨(H*DH + R̄)u*, v⟩ = −⟨H*(a_1, ..., a_N), v⟩.

Thus

    H*DHu* + R̄u* = −H*(a_1, ..., a_N).

The optimal control u* is characterized by

    u* = −(R̄)^{−1}[H*DHu* + H*(a_1, a_2, ..., a_N)].

Accordingly,

    u* = −(R̄)^{−1}H*[(M(Hu*)_1, ..., M(Hu*)_{N−1}, G(Hu*)_N)
         + (M(Φx_0 − r_1), ..., M(Φ^{N−1}x_0 − r_{N−1}), G(Φ^N x_0 − x_d))],

which gives

    u* = −R̄^{−1}H*[(Mx^{u*}_1, Mx^{u*}_2, ..., Mx^{u*}_{N−1}, Gx^{u*}_N) − (Mr_1, Mr_2, ..., Mr_{N−1}, Gx_d)].

Hence, for θ ∈ [t_{i−1}, t_i[, i = 1, 2, ..., N−1, we have

    u*(θ) = −R^{−1}B*_{i−1}(θ)[Σ_{k=i}^{N−1} Φ*^{k−i} M(x^{u*}_k − r_k) + Φ*^{N−i} G(x^{u*}_N − x_d)],

and

    u*(θ) = −R^{−1}B*_{N−1}(θ)G(x^{u*}_N − x_d)   if θ ∈ [t_{N−1}, t_N[.

Consider the signal (p_i)_{1≤i≤N} defined by

    p_i = Σ_{k=i}^{N−1} Φ*^{k−i} M(x_k − r_k) + Φ*^{N−i} G(x_N − x_d),   i = 1, 2, ..., N−1,
    p_N = G(x_N − x_d).

We have

    Φ*p_{i+1} = Σ_{k=i+1}^{N−1} Φ*^{k−i} M(x_k − r_k) + Φ*^{N−i} G(x_N − x_d) = p_i − M(x_i − r_i),   i = 1, 2, ..., N−2,

and thus the signal p_i satisfies the following difference equation:

    p_i = Φ*p_{i+1} + M(x_i − r_i),   i = 1, 2, ..., N−1,
    p_N = G(x_N − x_d).

Finally, we deduce the following optimality system:

    u*(θ) = −R^{−1}B*_{i−1}(θ)p_i,   θ ∈ [t_{i−1}, t_i[,   i = 1, 2, ..., N,
    p_i = Φ*p_{i+1} + M(x^{u*}_i − r_i),   i = 1, 2, ..., N−1,
    p_N = G(x^{u*}_N − x_d),
    x^{u*}_{i+1} = Φx^{u*}_i + ∫_{t_i}^{t_{i+1}} B_i(θ)u*(θ) dθ,   i = 0, 1, ..., N−1.   (15)

4. Convenient Topology

In this section, we develop a technique similar to the HUM. Indeed, let f = (x_1, x_2, ..., x_N) ∈ F = l²(1, 2, ..., N; X) and let the signal z^f = (z^f_1, ..., z^f_N) be described by the difference equation

    z^f_i = Φ*^{N−i} G^{1/2} x_N + Σ_{k=i}^{N−1} Φ*^{k−i} M^{1/2} x_k,   i = 1, 2, ..., N−1,
    z^f_N = G^{1/2} x_N.   (16)

We define the following functional on l²(1, ..., N; X):

    |f|² = ‖f‖² + Σ_{j=1}^N ∫_{t_{j−1}}^{t_j} ⟨B*_{j−1}(θ)z^f_j, R^{−1}B*_{j−1}(θ)z^f_j⟩ dθ.   (17)

Lemma 1. |·| is a norm on F equivalent to the usual one.

Proof. From the linearity of the map f ↦ z^f_j, it is easy to deduce that |·| is a norm on the space F. The equivalence is then immediate.

For f = (x_1, x_2, ..., x_N) ∈ F, we define Ψ^f = (Ψ^f_i)_{0≤i≤N} by

    Ψ^f_{i+1} = ΦΨ^f_i + ∫_{t_i}^{t_{i+1}} B_i(θ)u^f(θ) dθ,   i = 0, 1, ..., N−1,
    Ψ^f_0 = 0,   (18)

where

    u^f(θ) = R^{−1}B*_{i−1}(θ)z^f_i,   θ ∈ [t_{i−1}, t_i[,   i = 1, 2, ..., N.

Remark 1. We can easily see that

    Ψ^f_k = Σ_{j=1}^k Φ^{k−j} (∫_{t_{j−1}}^{t_j} B_{j−1}(θ)R^{−1}B*_{j−1}(θ) dθ) z^f_j,   k = 1, ..., N.

Define the operator Λ by

    Λ: F → F,   f ↦ Λf = f + (M^{1/2}Ψ^f_1, ..., M^{1/2}Ψ^f_{N−1}, G^{1/2}Ψ^f_N).

Lemma 2. The operator Λ is bounded and self-adjoint, and we have ⟨Λf, f⟩ = |f|².

Proof. Setting g = (y_1, y_2, ..., y_N), we have

    ⟨Λf, g⟩ = ⟨f + (M^{1/2}Ψ^f_1, ..., M^{1/2}Ψ^f_{N−1}, G^{1/2}Ψ^f_N), g⟩
            = ⟨f, g⟩ + Σ_{i=1}^{N−1} ⟨M^{1/2}Ψ^f_i, y_i⟩ + ⟨G^{1/2}Ψ^f_N, y_N⟩.

If we define P_i = M^{1/2} for all i ∈ {1, ..., N−1} and P_N = G^{1/2}, then

    ⟨Λf, g⟩ = ⟨f, g⟩ + Σ_{i=1}^N ⟨Ψ^f_i, P_i y_i⟩
            = ⟨f, g⟩ + Σ_{i=1}^N Σ_{j=1}^i ⟨Φ^{i−j}(∫_{t_{j−1}}^{t_j} B_{j−1}(θ)R^{−1}B*_{j−1}(θ) dθ) z^f_j, P_i y_i⟩
            = ⟨f, g⟩ + Σ_{i=1}^N Σ_{j=1}^i ∫_{t_{j−1}}^{t_j} ⟨z^f_j, B_{j−1}(θ)R^{−1}B*_{j−1}(θ)Φ*^{i−j} P_i y_i⟩ dθ
            = ⟨f, g⟩ + Σ_{j=1}^{N−1} ⟨z^f_j, (∫_{t_{j−1}}^{t_j} B_{j−1}(θ)R^{−1}B*_{j−1}(θ) dθ)(Σ_{i=j}^{N−1} Φ*^{i−j}M^{1/2}y_i + Φ*^{N−j}G^{1/2}y_N)⟩
              + ⟨z^f_N, (∫_{t_{N−1}}^{t_N} B_{N−1}(θ)R^{−1}B*_{N−1}(θ) dθ) G^{1/2}y_N⟩
            = ⟨f, g⟩ + Σ_{j=1}^N ∫_{t_{j−1}}^{t_j} ⟨B*_{j−1}(θ)z^f_j, R^{−1}B*_{j−1}(θ)z^g_j⟩ dθ
            = ⟨f, Λg⟩,

and

    ⟨Λf, f⟩ = ‖f‖² + Σ_{j=1}^N ∫_{t_{j−1}}^{t_j} ⟨B*_{j−1}(θ)z^f_j, R^{−1}B*_{j−1}(θ)z^f_j⟩ dθ = |f|².

Remark 2. As a consequence of Lemma 2, we easily deduce that Λ is an isomorphism.

Finally, we state the fundamental result of this section.

Theorem 1. The optimal control u* minimizing the functional (13) in L²(0,T;U) is

    u*(θ) = R^{−1}B*_{i−1}(θ)z^f_i,   θ ∈ [t_{i−1}, t_i[,   i = 1, 2, ..., N,   (19)

where z^f_i is the solution of the difference equation

    z^f_i = Φ*^{N−i}G^{1/2}f_N + Σ_{k=i}^{N−1} Φ*^{k−i}M^{1/2}f_k,   i = 1, 2, ..., N−1,
    z^f_N = G^{1/2}f_N,   (20)

and f = (f_1, ..., f_{N−1}, f_N) is the unique solution of the algebraic equation

    Λf = −(M^{1/2}(Φx_0 − r_1), ..., M^{1/2}(Φ^{N−1}x_0 − r_{N−1}), G^{1/2}(Φ^N x_0 − x_d)).   (21)

Moreover, the optimal cost is

    J(u*) = |f|².   (22)

Proof. Since the operator Λ ∈ L(F) constitutes an isomorphism, Eqn. (21) possesses a unique solution f = (f_1, ..., f_{N−1}, f_N).
Using the optimality system and the definition

    z^f_i = Φ*^{N−i}G^{1/2}f_N + Σ_{k=i}^{N−1} Φ*^{k−i}M^{1/2}f_k,   i = 1, 2, ..., N−1,
    z^f_N = G^{1/2}f_N,

it is sufficient to establish that

    f_i = −M^{1/2}(x^u_i − r_i),   i = 1, ..., N−1,
    f_N = −G^{1/2}(x^u_N − x_d),

where

    x^u_k = Φ^k x_0 + Σ_{j=1}^k Φ^{k−j} ∫_{t_{j−1}}^{t_j} B_{j−1}(θ)u(θ) dθ.

But

    Λf = −(M^{1/2}(Φx_0 − r_1), ..., M^{1/2}(Φ^{N−1}x_0 − r_{N−1}), G^{1/2}(Φ^N x_0 − x_d)),

which implies

    f_k = −M^{1/2}(Ψ^f_k + Φ^k x_0 − r_k),   k = 1, ..., N−1,
    f_N = −G^{1/2}(Ψ^f_N + Φ^N x_0 − x_d).

If we replace Ψ^f by its value given by (18), we get

    f_k = −M^{1/2}(Φ^k x_0 + Σ_{j=1}^k Φ^{k−j} ∫_{t_{j−1}}^{t_j} B_{j−1}(θ)R^{−1}B*_{j−1}(θ)z^f_j dθ − r_k),   k = 1, ..., N−1,
    f_N = −G^{1/2}(Φ^N x_0 + Σ_{j=1}^N Φ^{N−j} ∫_{t_{j−1}}^{t_j} B_{j−1}(θ)R^{−1}B*_{j−1}(θ)z^f_j dθ − x_d),

where, under the integrals, R^{−1}B*_{j−1}(θ)z^f_j = u*(θ). That gives

    f_k = −M^{1/2}(x^{u*}_k − r_k),   k = 1, ..., N−1,
    f_N = −G^{1/2}(x^{u*}_N − x_d).

So u* is the optimum of J. Moreover,

    J(u*) = ⟨x^{u*}_N − x_d, G(x^{u*}_N − x_d)⟩ + Σ_{k=1}^{N−1} ⟨x^{u*}_k − r_k, M(x^{u*}_k − r_k)⟩ + ∫_0^T ⟨u*(t), Ru*(t)⟩ dt
          = ‖G^{1/2}(x^{u*}_N − x_d)‖² + Σ_{k=1}^{N−1} ‖M^{1/2}(x^{u*}_k − r_k)‖² + ‖u*‖²
          = ‖f_N‖² + Σ_{k=1}^{N−1} ‖f_k‖² + Σ_{i=1}^N ∫_{t_{i−1}}^{t_i} ⟨R^{−1}B*_{i−1}(θ)z^f_i, B*_{i−1}(θ)z^f_i⟩ dθ
          = ‖f‖² + Σ_{i=1}^N ∫_{t_{i−1}}^{t_i} ⟨B*_{i−1}(θ)z^f_i, R^{−1}B*_{i−1}(θ)z^f_i⟩ dθ
          = |f|².

Remark 3. (i) In order to obtain the minimizing control u*, one has to solve the infinite dimensional algebraic equation (21). However, in general, we do not know an explicit form of the operator Λ^{−1}. Since the bilinear continuous form

    F × F → R,   (x, y) ↦ ⟨x, Λy⟩_F

is coercive, the Galerkin method can be applied to approximate the solution f of (21) and, consequently, the optimal control u*.

(ii) If in the functional J we have M = 0, then, setting F = X, it suffices to consider Γ instead of Λ, where Γ ∈ L(X) is the operator defined by

    Γf = f + G^{1/2}Ψ^f_N,

with

    Ψ^f_N = Σ_{j=1}^N Φ^{N−j} (∫_{t_{j−1}}^{t_j} B_{j−1}(θ)R^{−1}B*_{j−1}(θ) dθ) z^f_j,
    z^f_j = Φ*^{N−j}G^{1/2}f,   j = 1, 2, ..., N.

Then, for M = 0, from Theorem 1 it follows that the minimizing control u* is

    u*(θ) = R^{−1}B*_{i−1}(θ)z^f_i

for θ ∈ [t_{i−1}, t_i[ and i = 1, 2, ..., N, where

    z^f_i = Φ*^{N−i}G^{1/2}f,   i = 1, 2, ..., N,

and f constitutes the unique solution of the algebraic equation

    Γf = −G^{1/2}(Φ^N x_0 − x_d).

Moreover, using the Galerkin method, an approximate control sequence is given by

    u*_n(θ) = R^{−1}B*_{i−1}(θ)z^{f_n}_i,   θ ∈ [t_{i−1}, t_i[,   i = 1, 2, ..., N,
    z^{f_n}_i = Φ*^{N−i}G^{1/2}f_n,   i = 1, 2, ..., N,
    f_n = Σ_{i=1}^n f_{n,i}φ_i,

where (φ_n)_n is an orthonormal basis of X, and the vector (f_{n,1}, f_{n,2}, ..., f_{n,n})ᵀ

is the unique solution of the matrix equation

    Γ_n (f_{n,1}, f_{n,2}, ..., f_{n,n})ᵀ = (−⟨G^{1/2}(Φ^N x_0 − x_d), φ_1⟩, ..., −⟨G^{1/2}(Φ^N x_0 − x_d), φ_n⟩)ᵀ,

with Γ_n = (⟨Γφ_i, φ_j⟩)_{1≤i,j≤n}.

Example 1. Consider the system (7) defined in Section 1, and the cost functional

    J(u) = ‖x_N − x_d‖² + ∫_0^T |u(θ)|² dθ,

where U = R, X = L²(0,1), R = 1, M = 0, G = I, x_0 = 0, x_d = γφ_1 and B_i u = u·X_{w_i}, with w_i = [a_i, b_i] ⊂ [0,1]. This means that we suppose that the activity of the control u on the parabolic system (7) is restricted to the zone [a_i, b_i] (the action support of the control u changes at each instant i), where

    φ_n(θ) = √2 sin(nπθ)

is an orthonormal basis of X. Since

    S(t)x = Σ_{n=1}^∞ e^{−n²π²t} ⟨x, φ_n⟩_X φ_n,

the operator Φ is such that

    Φx = Σ_{n=1}^∞ e^{−n²π²δ} ⟨x, φ_n⟩_X φ_n,

and, since B_i(θ) = S((i+1)δ − θ)B_i,

    B_i(θ)u = S(t_{i+1} − θ)B_i u = Σ_{n=1}^∞ e^{−n²π²(t_{i+1}−θ)} ⟨B_i u, φ_n⟩ φ_n = u·Σ_{n=1}^∞ e^{−n²π²(t_{i+1}−θ)} (∫_{w_i} φ_n(x) dx) φ_n.

Consequently, setting

    α_i(n) = ∫_{w_i} φ_n(x) dx = √2 ∫_{a_i}^{b_i} sin(nπx) dx = (√2/(nπ))(cos(nπa_i) − cos(nπb_i)),

we have

    B_i(θ)u = u·Σ_{n=1}^∞ α_i(n)e^{−n²π²(t_{i+1}−θ)} φ_n.

Therefore

    ⟨B_i(θ)u, x⟩ = u·Σ_{n=1}^∞ α_i(n)e^{−n²π²(t_{i+1}−θ)} ⟨x, φ_n⟩,

and hence

    B*_i(θ)x = Σ_{n=1}^∞ α_i(n)e^{−n²π²(t_{i+1}−θ)} ⟨x, φ_n⟩.

According to Theorem 1, the solution to the optimal control problem is as follows:

    u*(θ) = B*_{i−1}(θ)z^f_i = Σ_{n=1}^∞ α_{i−1}(n)e^{−n²π²(t_i−θ)} ⟨z^f_i, φ_n⟩

for θ ∈ [t_{i−1}, t_i[ and i = 1, 2, ..., N, where

    z^f_i = Φ^{N−i}f = (S(δ))^{N−i}f = S((N−i)δ)f = S(t_{N−i})f = Σ_{k=1}^∞ e^{−k²π²t_{N−i}} ⟨f, φ_k⟩ φ_k.

Then

    ⟨z^f_i, φ_n⟩ = e^{−n²π²t_{N−i}} ⟨f, φ_n⟩,

and hence

    u*(θ) = Σ_{n=1}^∞ α_{i−1}(n)e^{−n²π²(T−θ)} ⟨f, φ_n⟩   (23)

for θ ∈ [t_{i−1}, t_i[ and i = 1, 2, ..., N, where f is the unique solution of the algebraic equation

    Γf = x_d,
which is equivalent to the infinite linear system

    A (f_1, f_2, ...)ᵀ = (γ, 0, ...)ᵀ,   (24)

A being the infinite matrix A = (⟨Γφ_i, φ_j⟩)_{1≤i,j<∞},
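An n×n truncation of (24) can be assembled and solved numerically. The sketch below is illustrative: the horizon T, step count N, truncation order n, gain γ and the control windows w_i = [a_i, b_i] are our own choices, and the entries are computed from the rearrangement ⟨Γφ_i, φ_j⟩ = δ_ij + Σ_k e^{−(i²+j²)π² t_{N−k}} α_{k−1}(i)α_{k−1}(j)(1 − e^{−(i²+j²)π²δ})/((i²+j²)π²), which is term-by-term equivalent to the closed-form entries of A.

```python
import numpy as np

# Illustrative data in the style of Example 1: X = L^2(0,1), R = 1, M = 0,
# G = I, x_0 = 0, x_d = gamma*phi_1, control windows w_0..w_{N-1}.
T, N, n = 1.0, 4, 8
delta = T / N
gamma = 1.0
windows = [(0.1 + 0.05 * i, 0.4 + 0.05 * i) for i in range(N)]

def alpha(i, m):
    """alpha_i(m) = integral of sqrt(2) sin(m pi x) over w_i = [a_i, b_i]."""
    a, b = windows[i]
    return np.sqrt(2.0) / (m * np.pi) * (np.cos(m * np.pi * a) - np.cos(m * np.pi * b))

# Assemble Gam = (<Gamma phi_i, phi_j>)_{1<=i,j<=n}.
Gam = np.eye(n)
for i in range(1, n + 1):
    for j in range(1, n + 1):
        mu = (i * i + j * j) * np.pi ** 2
        s = sum(np.exp(-mu * (N - k) * delta) * alpha(k - 1, i) * alpha(k - 1, j)
                for k in range(1, N + 1)) * (1.0 - np.exp(-mu * delta)) / mu
        Gam[i - 1, j - 1] += s

rhs = np.zeros(n); rhs[0] = gamma        # coordinates of x_d = gamma*phi_1
f = np.linalg.solve(Gam, rhs)            # Galerkin coefficients f_{n,1..n}

def u_star_n(theta):
    """Approximate optimal control at theta in [0, T], as in (25)."""
    i = min(int(theta // delta), N - 1)  # theta lies in [t_i, t_{i+1}[
    return sum(alpha(i, k) * np.exp(-k ** 2 * np.pi ** 2 * (T - theta)) * f[k - 1]
               for k in range(1, n + 1))
```

Since Γ = I plus a nonnegative self-adjoint operator, the truncated matrix is symmetric with all eigenvalues at least 1, so the solve is always well conditioned.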

with

    ⟨Γφ_i, φ_j⟩ =
        1 + e^{−2i²π²T} Σ_{k=1}^N α_{k−1}(i)² (e^{2i²π²t_k} − e^{2i²π²t_{k−1}})/(2i²π²)   if i = j,
        e^{−i²π²T} e^{−j²π²T} Σ_{k=1}^N α_{k−1}(i)α_{k−1}(j) (e^{(i²+j²)π²t_k} − e^{(i²+j²)π²t_{k−1}})/((i²+j²)π²)   if i ≠ j.

As was mentioned in Remark 3, f = Σ_{i=1}^∞ f_i φ_i is such that f = lim_{n→∞} f^n, where f^n = Σ_{i=1}^n f^n_i φ_i, with (f^n_1, f^n_2, ..., f^n_n)ᵀ being the unique solution of the algebraic equation

    A_n (f^n_1, f^n_2, ..., f^n_n)ᵀ = (γ, 0, ..., 0)ᵀ,

A_n being the symmetric and positive definite matrix given by A_n = (⟨Γφ_i, φ_j⟩)_{1≤i,j≤n}. Concluding, in accordance with (23), the optimal control u* can be approximated by the sequence (u*_n)_{n≥1} defined as follows:

    u*_n(θ) = Σ_{k=1}^n α_{i−1}(k)e^{−k²π²(T−θ)} f^n_k   (25)

for θ ∈ [t_{i−1}, t_i[ and i = 1, 2, ..., N.

Remark 4. Consider the above example with Bu = u·X_w, where w = [a, b] ⊂ [0,1]. (Here, in contrast to Example 1, we suppose that the action support of the control u is independent of i.) Then we have

    B_i(θ)u = u·Σ_{n=1}^∞ α(n)e^{−n²π²(t_{i+1}−θ)} φ_n,

with

    α(n) = ∫_w φ_n(x) dx = √2 ∫_a^b sin(nπx) dx.

On the other hand,

    B*_i(θ)x = Σ_{n=1}^∞ α(n)e^{−n²π²(t_{i+1}−θ)} ⟨x, φ_n⟩.

Hence

    u*(θ) = Σ_{n=1}^∞ α(n)e^{−n²π²(T−θ)} ⟨f, φ_n⟩,   θ ∈ [0, T],   (26)

where f is the unique solution of the algebraic equation Γf = x_d, with

    ⟨Γφ_i, φ_j⟩ =
        1 + α(i)² (1 − e^{−2i²π²T})/(2i²π²)   if i = j,
        α(i)α(j) (1 − e^{−(i²+j²)π²T})/((i²+j²)π²)   if i ≠ j.

As in the previous example, using the Galerkin method, the optimal control u* can be approximated by the sequence (u*_n)_{n≥1} given by

    u*_n(θ) = Σ_{k=1}^n α(k)e^{−k²π²(T−θ)} f^n_k,   θ ∈ [0, T].

5. Conclusion

The passage from the continuous version of a linear system

    ẋ(t) = Ax(t) + Bu(t)   (27)

to its discrete counterpart

    x_{i+1} = Φx_i + Ψu_i   (28)

is generally based on the assumption that

    u(s) = u(t_i),   ∀s ∈ [t_i, t_{i+1}[,   (29)

where t_i and t_{i+1} are two consecutive sampling instants. The approximation of the continuous system (27) by the difference equation (28) is often justified by the choice of a rather small sampling period. In this paper, we have studied the linear quadratic control problem associated with a linear system having a discrete state variable and a continuous control variable. Such a system can be regarded as a sampled version of the continuous system (27) in the absence of the assumption (29) (when the time interval [t_i, t_{i+1}[ is rather large or

when variations in the control variable u(·) are very fast, it makes no sense to adopt the hypothesis (29)). To solve the problem, we introduced an adequate Hilbertian structure and proved that the optimum and the optimal cost stem from an algebraic linear infinite dimensional equation which is easily solvable by the classical Galerkin method. As a natural continuation of this work, and inspired by (Rachik et al., 2003), we are going to investigate the linear quadratic control problem considered for an infinite time horizon.

References

Athans M. and Falb P.L. (1966): Optimal Control. — New York: McGraw Hill.

Brezis H. (1987): Analyse Fonctionnelle. — Paris: Masson.

Chraibi L., Karrakchou J., Ouansafi A. and Rachik M. (2000): Exact controllability and optimal control for distributed systems with a discrete delayed control. — J. Franklin Instit., Vol. 337, No. 5, pp. 499–514.

Ciarlet P.G. (1988): Introduction à l'Analyse Numérique Matricielle et à l'Optimisation. — Paris: Masson.

Curtain R.F. and Pritchard A.J. (1978): Infinite Dimensional Linear Systems Theory. — Berlin: Springer.

Curtain R. and Zwart H. (1995): An Introduction to Infinite-Dimensional Linear Systems Theory. — New York: Springer.

Daley D.I. and Gani J. (2001): Epidemic Modelling. An Introduction. — Cambridge: Cambridge University Press.

Dorato P. (1993): Theoretical development in discrete control. — Automatica, Vol. 19, No. 4, pp. 385–400.

El Jai A. and Bel Fekih A. (1990): Exacte contrôlabilité et contrôle optimal des systèmes paraboliques. — Revue APII Control Systems Analysis, No. 24, pp. 357–376.

El Jai A. and Berrahmoune L. (1991): Controllability of damped flexible systems. — Proc. IFAC Symp. Distributed Parameter Systems, Perpignan, France, pp. 97–102.

Faradzhev R.G., Phat Vu Ngoc and Shapiro A.V. (1986): Controllability theory of discrete dynamic systems. — Autom. Remote Contr., Vol. 47, pp. 1–20.

Halkin H. (1964): Optimal control for systems described by difference equations, In: Advances in Control Systems, Vol. 1 (C.T. Leondes, Ed.). — New York: Academic Press, pp. 173–196.

Jolivet E. (1983): Introduction aux Modèles Mathématiques en Biologie. — Paris: Masson.

Kalman R.E. (1960): Contributions to the theory of optimal control. — Bol. Soc. Mat. Mexicana, Vol. 2, No. 5, pp. 102–119.

Karrakchou L., Rabah R. and Rachik M. (1998): Optimal control of discrete distributed systems with delays in state and control: State space theory and HUM approaches. — Syst. Anal. Modell. Simul., Vol. 30, pp. 225–245.

Karrakchou L. and Rachik M. (1995): Optimal control of discrete distributed systems with delays in the control: The finite horizon case. — Arch. Contr. Sci., Vol. 4(XL), Nos. 1–2, pp. 37–53.

Klamka J. (1995): Constrained controllability of nonlinear discrete systems. — IMA J. Appl. Math. Contr. Inf., Vol. 12, No. 2, pp. 245–252.

Klamka J. (2002): Controllability of nonlinear discrete systems. — Int. J. Appl. Math. Comput. Sci., Vol. 12, No. 2, pp. 173–180.

Lasiecka I. and Triggiani R. (2000): Control Theory of Partial Differential Equations: Continuous and Approximation Theories. Part I: Abstract Parabolic Systems. — Cambridge: Cambridge University Press.

Lee K.Y., Chow S. and Barr R.O. (1972): On the control of discrete-time distributed parameter systems. — SIAM J. Contr., Vol. 10, No. 2, pp. 361–376.

Lions J.L. (1968): Contrôle Optimal des Systèmes Gouvernés par des Équations aux Dérivées Partielles. — Paris: Dunod.

Lions J.L. (1988a): Exact controllability, stabilization and perturbation for distributed systems. — SIAM Rev., Vol. 30, No. 1, pp. 1–68.

Lions J.L. (1988b): Contrôlabilité Exacte, Stabilisation et Perturbations des Systèmes Distribués, Vol. 1. — Paris: Masson.

Lun'kov V.A. (1980): Controllability and stabilization of linear discrete systems. — Differents. Uravn., Vol. 16, No. 4, pp. 753–754 (in Russian).

Ogata K. (1995): Discrete-Time Control Systems. — Englewood Cliffs: Prentice-Hall.

Rabah R. and Malabre M. (1999): Structure at infinity for linear infinite dimensional systems. — Int. Report, Institut de Recherche en Cybernétique de Nantes.

Rachik M., Lhous M. and Tridane A. (2003): Controllability and optimal control problem for linear time-varying discrete distributed systems. — Syst. Anal. Modell. Simul., Vol. 43, No. 2, pp. 137–164.

Weiss L. (1972): Controllability, realisation and stability of discrete-time systems. — SIAM J. Contr., Vol. 10, No. 2, pp. 230–262.

Zabczyk J. (1974): Remarks on the control of discrete-time distributed parameter systems. — SIAM J. Contr., Vol. 12, No. 4, pp. 721–735.

Received: 12 February 2006
Revised: 28 June 2006
Re-revised: 16 October 2006

(14)

Cytaty

Powiązane dokumenty

This paper is concerned with the linear programming (LP) approach to deterministic, finite-horizon OCPs with value function J ∗ (t, x)—when the initial data is (t, x) [see (2.3)]...

This paper presents an optimal control problem governed by a quasi- linear parabolic equation with additional constraints.. The optimal control problem is converted to an

Optimal control problems for linear and nonlinear parbolic equations have been widely considered in the literature (see for instance [4, 8, 18]), and were studied by Madatov [11]

With reference to the work of Verriest and Lewis (1991) on continuous finite-dimensional systems, the linear quadratic minimum-time problem is considered for discrete

In the present paper, some results concerning the continuous dependence of optimal solutions and optimal values on data for an optimal control problem associated with a

A method for constructing -value functions for the Bolza problem of optimal control class probably it is even a discontinuous function, and thus it does not fulfil

More precisely, we show that two submanifolds of type number greater than one having the same affine connections and second fundamental forms are affinely equivalent.. The type

(2006a): Realization problem for positive multivari- able discrete-time linear systems with delays in the state vector and inputs. (2006b) A realization problem for