A. ABKOWICZ and C. BREZINSKI (Lille)

ACCELERATION PROPERTIES OF THE HYBRID PROCEDURE FOR SOLVING LINEAR SYSTEMS

Abstract. The aim of this paper is to discuss the acceleration properties of the hybrid procedure for solving a system of linear equations. These properties are studied in a general case and in two particular cases which are illustrated by numerical examples.

1. The hybrid procedure. Let us consider the system of linear equations

(1) Ax = b,

where A ∈ ℝ^{m×m} and x, b ∈ ℝ^m. We denote by x̃ the solution of (1).

Let G = ZᵀZ be a symmetric positive definite matrix. The G-inner product and the corresponding G-norm are respectively defined by (x, y)_G = (x, Gy) and ‖x‖_G = √(x, x)_G. The corresponding G-matrix norm is given by

‖A‖_G = sup_{x≠0} ‖Ax‖_G/‖x‖_G = √(ϱ((ZAZ⁻¹)ᵀ ZAZ⁻¹)),

where ϱ(·) denotes the spectral radius.

We shall also use the notation x ⊥_G y if (x, y)_G = 0. For simplicity, the subscript G will be suppressed when unnecessary.

Let us now assume that two iterative methods for solving the system (1) are used simultaneously. Their iterates are denoted respectively by x′_n and x″_n and the corresponding residual vectors by r′_n = b − Ax′_n and r″_n = b − Ax″_n. The hybrid procedure defined in [1] consists of constructing a new iterate x_n and a new residual r_n = b − Ax_n by

(2) x_n = α_n x′_n + (1 − α_n)x″_n,  r_n = α_n r′_n + (1 − α_n)r″_n,

1991 Mathematics Subject Classification: 65F10, 65B05.

Key words and phrases: acceleration, linear equations, iterative methods.


with

α_n = −(r′_n − r″_n, r″_n)/(r′_n − r″_n, r′_n − r″_n).

From the definition of r_n, we see that

‖r_n‖ = min_α ‖αr′_n + (1 − α)r″_n‖.

We have

(r_n, r′_n) = (r_n, r″_n) = (r_n, r_n)

and, setting p_n = r′_n − r″_n, (2) can be written as

(3) r_n = r″_n − [(p_n, r″_n)/(p_n, p_n)] p_n = r′_n − [(p_n, r′_n)/(p_n, p_n)] p_n.

It is easy to check that

(4) (r_n, r_n) = [(r′_n, r′_n)(r″_n, r″_n) − (r′_n, r″_n)²]/(r′_n − r″_n, r′_n − r″_n)
(5)           = (r″_n, r″_n) − (p_n, r″_n)²/(p_n, p_n)
(6)           = (r′_n, r′_n) − (p_n, r′_n)²/(p_n, p_n).
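The one-dimensional minimization behind (2)–(3) is easy to check numerically. The following sketch (with G = I and randomly generated residuals, both hypothetical data, not taken from the paper) computes α_n from the formula above and verifies the minimization property of r_n:

```python
import numpy as np

def hybrid_step(r1, r2):
    """One hybrid step (G = I): combine r1 = b - Ax'_n and r2 = b - Ax''_n
    so that the norm of r = alpha*r1 + (1-alpha)*r2 is minimal."""
    p = r1 - r2                                   # p_n = r'_n - r''_n
    alpha = -np.dot(p, r2) / np.dot(p, p)         # coefficient of formula (2)
    r = r2 - (np.dot(p, r2) / np.dot(p, p)) * p   # formula (3)
    return alpha, r

rng = np.random.default_rng(0)
r1, r2 = rng.standard_normal(5), rng.standard_normal(5)
alpha, r = hybrid_step(r1, r2)

# r is orthogonal to p_n, hence (r, r) = (r, r1) = (r, r2) ...
p = r1 - r2
assert abs(np.dot(r, p)) < 1e-12
# ... and no other combination alpha*r1 + (1-alpha)*r2 has a smaller norm:
for a in np.linspace(-2.0, 3.0, 101):
    assert np.linalg.norm(r) <= np.linalg.norm(a * r1 + (1 - a) * r2) + 1e-12
```

In particular ‖r_n‖ never exceeds min(‖r′_n‖, ‖r″_n‖), since α = 1 and α = 0 are among the candidates.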

2. Properties of the hybrid procedure. We now study the acceleration properties of the hybrid procedure.

2.1. Asymptotic behavior of the hybrid procedure. Let θ_n be the angle between Zr′_n and Zr″_n. Using the relation (r′_n, r″_n) = ‖r′_n‖‖r″_n‖ cos θ_n we have

α_n = −(‖r′_n‖‖r″_n‖ cos θ_n − ‖r″_n‖²)/(‖r′_n‖² − 2‖r′_n‖‖r″_n‖ cos θ_n + ‖r″_n‖²)

and

‖r_n‖² = ‖r′_n‖²‖r″_n‖²(1 − cos²θ_n)/(‖r′_n‖² − 2‖r′_n‖‖r″_n‖ cos θ_n + ‖r″_n‖²).

Setting ϱ_n = ‖r′_n‖/‖r″_n‖ we obtain

α_n = −(ϱ_n cos θ_n − 1)/(ϱ_n² − 2ϱ_n cos θ_n + 1),

(7)  ‖r_n‖²/‖r′_n‖² = (1 − cos²θ_n)/(ϱ_n² − 2ϱ_n cos θ_n + 1)
(8)                = 1 − (ϱ_n − cos θ_n)²/[(ϱ_n − cos θ_n)² + sin²θ_n]
(9)                = sin²θ_n/[(ϱ_n − cos θ_n)² + sin²θ_n]
(10)               = sin²θ_n/(ϱ_n² − 2ϱ_n cos θ_n + 1).

From these relations, we immediately obtain

Theorem 2.1. Suppose that the limit lim_{n→∞} θ_n = θ exists.

1. If lim_{n→∞} ϱ_n = 0 then lim_{n→∞} α_n = 1.
2. If lim_{n→∞} ϱ_n = 1 and θ ≠ 0, π then lim_{n→∞} α_n = 1/2.
3. If lim_{n→∞} ϱ_n = ∞ then lim_{n→∞} α_n = 0.

This theorem shows that the hybrid procedure asymptotically selects the better of the two methods.
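The closed forms above and the three limits of Theorem 2.1 can be illustrated with a small numerical sketch (the values of ϱ and θ below are arbitrary choices, purely for illustration):

```python
import numpy as np

def alpha_ratio(rho, theta):
    """alpha_n and ||r_n||^2/||r'_n||^2 from formulas (7)-(10), as functions
    of rho_n = ||r'_n||/||r''_n|| and the angle theta_n between Zr'_n, Zr''_n."""
    d = rho**2 - 2.0 * rho * np.cos(theta) + 1.0
    alpha = -(rho * np.cos(theta) - 1.0) / d
    ratio = np.sin(theta)**2 / d
    return alpha, ratio

theta = 1.0  # any fixed angle different from 0 and pi
# the three cases of Theorem 2.1: rho -> 0, 1, infinity
for rho, expected_alpha in [(1e-9, 1.0), (1.0, 0.5), (1e9, 0.0)]:
    alpha, ratio = alpha_ratio(rho, theta)
    assert abs(alpha - expected_alpha) < 1e-6
    assert 0.0 <= ratio <= 1.0   # consistent with Theorem 2.2
```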

Let us now consider the convergence behavior of ‖r_n‖/‖r′_n‖. From (10), we immediately have

Theorem 2.2. If the limits lim_{n→∞} ϱ_n = ϱ and lim_{n→∞} θ_n = θ exist and ϱ² − 2ϱ cos θ + 1 ≠ 0, then

lim_{n→∞} ‖r_n‖²/‖r′_n‖² = sin²θ/(ϱ² − 2ϱ cos θ + 1) ≤ 1.

Remark 1. Obviously if ϱ ≤ 1, we also have lim_{n→∞} ‖r_n‖²/‖r″_n‖² ≤ 1. Thus lim_{n→∞} ‖r_n‖/min(‖r′_n‖, ‖r″_n‖) exists and is not greater than 1.

Similar results can be obtained by considering the ratio ‖r_n‖²/‖r″_n‖². It must also be noticed that ‖r_n‖²/‖r′_n‖² tends to 1 if and only if ϱ = cos θ. This result comes out directly from (8) and we also get

Theorem 2.3. A necessary and sufficient condition for the existence of an N such that

0 ≤ ‖r_n‖²/‖r′_n‖² < 1 for all n ≥ N

is that (r′_n − r″_n, r′_n) ≠ 0 for all n ≥ N.

Proof. Suppose that (r′_n − r″_n, r′_n) ≠ 0 for all n ≥ N. Then

ϱ_n − cos θ_n = ‖r′_n‖/‖r″_n‖ − (r′_n, r″_n)/(‖r′_n‖‖r″_n‖) = (r′_n, r′_n − r″_n)/(‖r′_n‖‖r″_n‖) ≠ 0

and, from (8), it follows that ‖r_n‖²/‖r′_n‖² < 1. The reverse implication is proved similarly.

Let us now study some cases where (r_n) converges to zero faster than (r′_n) and (r″_n). From (9), we have

Theorem 2.4. If there are ϱ and N such that 0 ≤ ϱ_n ≤ ϱ < 1 for all n ≥ N, then a necessary and sufficient condition for

lim_{n→∞} ‖r_n‖/‖r′_n‖ = 0

to hold is that (θ_n) tends to 0 or π.

Proof. First let us prove the sufficiency. Suppose that (θ_n) tends to 0 or π. Then, since ϱ_n ≤ ϱ < 1, from (9) we have lim_{n→∞} ‖r_n‖/‖r′_n‖ = 0.

To prove the necessity, suppose that lim_{n→∞} ‖r_n‖/‖r′_n‖ = 0. The condition ϱ_n ≤ ϱ < 1 implies that sin θ_n tends to 0, which ends the proof.

Remark 2. Since ϱ_n < 1 we have ‖r′_n‖ < ‖r″_n‖ for all n ≥ N and so

lim_{n→∞} ‖r_n‖/min(‖r′_n‖, ‖r″_n‖) = 0.

Let us now study the case where (ϱ_n) tends to 1. From (10), we first have

Theorem 2.5. If lim_{n→∞} ϱ_n = 1, then a sufficient condition for

lim_{n→∞} ‖r_n‖/‖r′_n‖ = 0

to hold is that (θ_n) tends to π.

Remark 3. Since lim_{n→∞} ϱ_n = 1, it follows that

lim_{n→∞} ‖r_n‖/min(‖r′_n‖, ‖r″_n‖) = 0.

Another result in the case where (ϱ_n) tends to 1 is given by

Theorem 2.6. If ‖r′_n‖/‖r″_n‖ = 1 + a_n with lim_{n→∞} a_n = 0, then a sufficient condition for

lim_{n→∞} ‖r_n‖/‖r′_n‖ = 0

to hold is that θ_n = o(a_n).

Proof. We have

cos θ_n = 1 − θ_n²/2 + O(θ_n⁴),  sin θ_n = θ_n + O(θ_n³).

Substituting into formula (10), we get

‖r_n‖²/‖r′_n‖² = sin²θ_n/(ϱ_n² − 2ϱ_n cos θ_n + 1)
  = [θ_n(1 + O(θ_n²))]²/[(1 + a_n)² − 2(1 + a_n)(1 − θ_n²/2 + O(θ_n⁴)) + 1]
  = θ_n²(1 + O(θ_n²))/[a_n² + θ_n² + a_n θ_n² + (1 + a_n)O(θ_n⁴)]
  = (1 + O(θ_n²))/[(a_n/θ_n)² + 1 + a_n + (1 + a_n)O(θ_n²)]

and, since θ_n = o(a_n) implies (a_n/θ_n)² → ∞, the result follows.

Remark 4. Since lim_{n→∞} ϱ_n = 1, the conclusion of Remark 3 still holds.


Another presentation consists in considering the angle ϑ_n between Zr′_n and Zp_n. From (6) we have ‖r_n‖² = ‖r′_n‖² sin²ϑ_n. Directly from this equation we obtain

Theorem 2.7. If there exists ϑ ≠ π/2 such that lim_{n→∞} ϑ_n = ϑ then lim_{n→∞} ‖r_n‖/‖r′_n‖ = |sin ϑ| < 1.

Also, we have

Theorem 2.8. lim_{n→∞} ‖r_n‖/‖r′_n‖ = 0 if and only if (ϑ_n) tends to 0 or π.

These results are simpler than the preceding ones, in particular those of Theorems 2.2–2.4.

Remark 5. Similarly, if we denote by ϕ_n the angle between Zr″_n and Zp_n, we have ‖r_n‖² = ‖r″_n‖² sin²ϕ_n. Obviously θ_n = ϕ_n − ϑ_n.

2.2. Geometric behavior of the hybrid procedure. A sphere in ℝ^m with respect to the G-norm will be denoted by

Υ_G(q, r) = {y ∈ ℝ^m : ‖y − q‖_G = r}.

We have the following properties:

Property 1. r_n ∈ Υ_G(r′_n/2, ‖r′_n‖_G/2) ∩ Υ_G(r″_n/2, ‖r″_n‖_G/2).

Proof. By definition, we have (r_n, r_n) = (r_n, r′_n) = (r_n, r″_n). Computing ‖r_n − r′_n/2‖² we get

‖r_n − r′_n/2‖² = ‖r_n‖² − (r_n, r′_n) + ¼‖r′_n‖² = ¼‖r′_n‖².

In the same way, we can prove that ‖r_n − r″_n/2‖² = ¼‖r″_n‖² and the result follows.

Let us denote by e_n = x̃ − x_n, e′_n = x̃ − x′_n, e″_n = x̃ − x″_n the error vectors corresponding respectively to x_n, x′_n, x″_n. Using the relation r_n = Ae_n and the preceding property we have

Property 2. e_n ∈ Υ_{AᵀGA}(e′_n/2, ‖e′_n‖_{AᵀGA}/2) ∩ Υ_{AᵀGA}(e″_n/2, ‖e″_n‖_{AᵀGA}/2).

The hybrid procedure is a projection method because there exists a matrix ℘_n ∈ ℝ^{m×m} such that

r_n = ℘_n r′_n = ℘_n r″_n  with  ℘_n = I − p_n p_nᵀ G/(p_nᵀ G p_n).

It is easy to see that ℘_n² = ℘_n and (G℘_n)ᵀ = G℘_n. So ℘_n is a G-orthogonal projection. By the definition of ℘_n we get

℘_n v = v if v ⊥_G p_n,  ℘_n v = 0 if v ∈ span{p_n}.
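These properties of the projection matrix are easy to verify numerically. The sketch below (with an arbitrary nonsingular Z and a random vector playing the role of p_n, both hypothetical choices) checks idempotency, G-symmetry, and the action on span{p_n} and on its G-orthogonal complement:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 6
Z = rng.standard_normal((m, m)) + m * np.eye(m)   # any nonsingular Z
G = Z.T @ Z                                        # G = Z^T Z, s.p.d.
p = rng.standard_normal(m)                         # plays the role of p_n

# P_n = I - p_n p_n^T G / (p_n^T G p_n)
P = np.eye(m) - np.outer(p, G @ p) / (p @ G @ p)

assert np.allclose(P @ P, P)             # idempotent: P_n^2 = P_n
assert np.allclose((G @ P).T, G @ P)     # (G P_n)^T = G P_n: G-orthogonal
assert np.allclose(P @ p, 0)             # P_n v = 0 for v in span{p_n}

v = rng.standard_normal(m)
v = v - (p @ G @ v) / (p @ G @ p) * p    # make v G-orthogonal to p_n
assert np.allclose(P @ v, v)             # P_n v = v when v is G-orthogonal to p_n
```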


The above results can be considered as a generalization of the results given in [8].

3. Applications. It seems quite difficult to obtain more theoretical results on the convergence of the hybrid procedure in the general case. So, (r′_n) being an arbitrary sequence of residual vectors, we shall assume that we are in one of the following particular cases:

(i) r″_n = Br_{n−1},  (ii) r″_n = Br′_n.

Such a situation arises, for example, if we consider a splitting of the matrix A,

A = M − N,

and if x″_n is obtained from y (equal to x_{n−1} or x′_n) by x″_n = M⁻¹Ny + M⁻¹b.

In this case the associated residual has the form

r″_n = b − Ax″_n = b − A(M⁻¹Ny + M⁻¹b) = b − (M − N)(M⁻¹Ny + M⁻¹b) = NM⁻¹(b − Ay).

Thus we have B = NM⁻¹ with y = x_{n−1} (case (i)) and y = x′_n (case (ii)).

It must be noticed that B = I − AM⁻¹. This situation also holds if B = I − AC with C an arbitrary matrix. In this case, we have

x_n = α_n x′_n + (1 − α_n)(y + C(b − Ay)).

(We are indebted to one of the referees for this remark.)

3.1. Case (i). Let r_n be computed by the hybrid procedure from r″_n = Br_{n−1} and r′_n. We have

r_n = α_n r′_n + (1 − α_n)Br_{n−1}

and we get

Lemma 3.1. Let r_0 = r′_0. Then, for all n ≥ 1,

(H1(n))  r_n = Σ_{i=0}^n a_i^{(n)} B^{n−i} r′_i

with

(H2(n))  Σ_{i=0}^n a_i^{(n)} = 1.

Proof. Since a_0^{(0)} = 1, H1(0) and H2(0) are true. Suppose that H1(n−1) and H2(n−1) hold. From the definition of r_n and from H1(n−1), we get

r_n = α_n r′_n + (1 − α_n)Br_{n−1} = α_n r′_n + (1 − α_n)B Σ_{i=0}^{n−1} a_i^{(n−1)} B^{n−1−i} r′_i
    = α_n r′_n + Σ_{i=0}^{n−1} (1 − α_n)a_i^{(n−1)} B^{n−i} r′_i = Σ_{i=0}^{n} a_i^{(n)} B^{n−i} r′_i,

where the a_i^{(n)} are given by

a_i^{(n)} = (1 − α_n)a_i^{(n−1)}, i = 0, …, n − 1,  a_n^{(n)} = α_n.

Thus H1(n) is true, and H2(n) is obviously satisfied.

Remark 6. When r′_n is computed by a polynomial method of the form r′_n = P_n(B)r′_0, then r_n = Q_n(B)r′_0 with Q_n given by Q_n(t) = α_n P_n(t) + (1 − α_n)tQ_{n−1}(t).

Let us now prove other results. We have

Theorem 3.2. Let γ be an eigenvector of B. If r_n = c_n γ + a_n with

lim_{n→∞} (γ, r′_n)/‖r′_n‖ = ‖γ‖  and  lim_{n→∞} ‖a_n‖/c_n = 0,

then lim_{n→∞} ‖r_n‖/‖r′_n‖ = 0.

Proof. Let θ_n be the angle between ZBr_{n−1} and Zr′_n. We have

(Br_{n−1}, r′_n)² = c_{n−1}²λ²(γ, r′_n)² + (Ba_{n−1}, r′_n)² + 2c_{n−1}λ(γ, r′_n)(Ba_{n−1}, r′_n),
‖Br_{n−1}‖² = c_{n−1}²λ²‖γ‖² + ‖Ba_{n−1}‖² + 2c_{n−1}λ(γ, Ba_{n−1}),

where λ is the eigenvalue of B corresponding to γ. Thus

lim_{n→∞} cos²θ_n = lim_{n→∞} (Br_{n−1}, r′_n)²/(‖Br_{n−1}‖² ‖r′_n‖²)
 = lim_{n→∞} [λ²(γ, r′_n)²/‖r′_n‖² + (Ba_{n−1}, r′_n)²/(c_{n−1}²‖r′_n‖²) + 2λ(γ, r′_n)(Ba_{n−1}, r′_n)/(c_{n−1}‖r′_n‖²)]
            / [λ²‖γ‖² + ‖Ba_{n−1}‖²/c_{n−1}² + 2λ(γ, Ba_{n−1})/c_{n−1}]
 = 1

and the result follows from Theorem 2.4.

From the minimization property of r_n we have

‖r_n‖ ≤ ‖Br_{n−1}‖ ≤ ‖B‖‖r_{n−1}‖,

and thus ‖r_n‖ ≤ ‖r_{n−1}‖ if ‖B‖ ≤ 1.
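This bound can be observed on a small example. In the sketch below, the matrix A is a hypothetical random test matrix scaled so that ‖B‖ < 1 (it is not one of the paper's examples), the first method is the plain iteration r′_{n+1} = Br′_n, and the second residual is r″_n = Br_{n−1} as in case (i):

```python
import numpy as np

rng = np.random.default_rng(2)
m = 50
A = rng.random((m, m)) / m + np.eye(m)   # hypothetical test matrix, ||I - A|| < 1
B = np.eye(m) - A
x_true = np.ones(m)
b = A @ x_true

r1 = b.copy()           # r'_0 = b - A*0
r = r1.copy()           # hybrid residual, r_0 = r'_0
norms = [np.linalg.norm(r)]
for n in range(30):
    r1 = B @ r1                      # r'_{n+1} = B r'_n
    r2 = B @ r                       # r''_{n+1} = B r_n  (case (i))
    p = r1 - r2
    alpha = -np.dot(p, r2) / max(np.dot(p, p), 1e-300)  # guard against p ~ 0
    r = alpha * r1 + (1 - alpha) * r2
    norms.append(np.linalg.norm(r))

# since ||B|| < 1 here, ||r_n|| <= ||B|| * ||r_{n-1}|| at every step
normB = np.linalg.norm(B, 2)
assert normB < 1
assert all(norms[i + 1] <= normB * norms[i] + 1e-10 for i in range(30))
```

The inequality holds because α = 0 (i.e. r_n = Br_{n−1}) is among the combinations over which the hybrid step minimizes.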


In particular, consider a splitting A = M − N of the matrix A. Premultiplying the system (1) by M⁻¹ we get a new system of the form

A^(M) x = b^(M)

with

B^(M) = M⁻¹N,  A^(M) = I − B^(M),  b^(M) = M⁻¹b.

Applying the method described above to this new system, we get ‖r_n‖ ≤ ‖B^(M)‖‖r_{n−1}‖.

Thus, a good choice of B^(M) is equivalent to a good choice of the preconditioner M.

When B = I, the method is called the Minimal Residual Smoothing (MRS) algorithm. It was introduced in [6, 7] and applied to some well known methods. For more details, see [2, 9–12].

We now apply it to an error-minimization method [4]. Set e′_n = x̃ − x′_n and e_n = x̃ − x_n. Let ϕ be any norm on ℝ^m. For any x ∈ ℝ^m we denote by z(x) a vector such that

(z(x), x) = ϕ(x).

This is called a decomposition of the norm ϕ. Such decompositions were introduced by Gastinel [3] for the case of the l₁-norm.

Let x′_0 be a given vector. The Transformed Norm Decomposition Method (TNDE) [4] is defined by

r′_0 = b − Ax′_0,  p_0 = Aᵀz_0,

and for n = 0, 1, …,

x′_{n+1} = x′_n − β_n p_n,  r′_{n+1} = r′_n + β_n Ap_n,  p_{n+1} = Aᵀz_{n+1} + Σ_{i=0}^n γ_{n+1}^{(i)} p_i,

where z_i is such that (z_i, r′_i) = ϕ(r′_i). The coefficients β_n and γ_{n+1}^{(i)} are given by

β_n = (p_n, e′_n)/(p_n, p_n) = −ϕ(r′_n)/(p_n, p_n)

and

γ_{n+1}^{(i)} = −(p_i, Aᵀz_{n+1})/(p_i, p_i),  i = 0, …, n.

The sequence (e′_n = x̃ − x′_n) has the following properties:

1. e′_n ∈ V_n = e′_0 + span{p_0, …, p_n},
2. V_{n−1} ⊂ V_n,
3. ‖e′_n‖ = min_{e∈V_n} ‖e‖,
4. ‖e′_n‖ ≤ ‖e′_{n−1}‖.


If the MRS is applied to the TNDE, that is, to the sequence (r′_n) defined above with r_0 = r′_0, then we have

Theorem 3.3. If α_n ∈ ]0, 1[ then ‖e_n‖ ≤ ‖e_{n−1}‖.

Proof. We have

e_{n−1} = α_{n−1}e′_{n−1} + (1 − α_{n−1})e_{n−2}.

It follows that e_{n−1} = Σ_{i=0}^{n−1} a_i^{(n−1)} e′_i ∈ V_{n−1} for all n. Thus, from property 3,

‖e′_{n−1}‖ ≤ ‖e_{n−1}‖.

Using property 4, we get

‖e_n‖ ≤ α_n‖e′_n‖ + (1 − α_n)‖e_{n−1}‖ ≤ α_n‖e′_{n−1}‖ + (1 − α_n)‖e_{n−1}‖
      ≤ α_n‖e_{n−1}‖ + (1 − α_n)‖e_{n−1}‖ = ‖e_{n−1}‖

and the result follows.

3.2. Case (ii). Suppose now that r_n is given by

r_n = α_n r′_n + (1 − α_n)Br′_n

with B = I − AM⁻¹. Then r_n can be written as

r_n = r′_n − [(AM⁻¹r′_n, r′_n)/(AM⁻¹r′_n, AM⁻¹r′_n)] AM⁻¹r′_n.
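For M = I this closed form is exactly one step of the Minimal Residual method applied to r′_n. The following sketch checks the algebraic identity between the two-term form α_n r′_n + (1 − α_n)Br′_n and the closed form (the matrix and residual are hypothetical random data):

```python
import numpy as np

rng = np.random.default_rng(3)
m = 8
A = rng.standard_normal((m, m)) + m * np.eye(m)   # hypothetical nonsingular matrix
r1 = rng.standard_normal(m)                        # r'_n; M = I, so AM^{-1} = A

# hybrid step of case (ii): r''_n = B r'_n with B = I - A
r2 = r1 - A @ r1
p = r1 - r2                          # p_n = A r'_n
alpha = -np.dot(p, r2) / np.dot(p, p)
r_hybrid = alpha * r1 + (1 - alpha) * r2

# closed form from the text: r_n = r'_n - (Ar'_n, r'_n)/(Ar'_n, Ar'_n) Ar'_n
Ar = A @ r1
r_mr = r1 - (np.dot(Ar, r1) / np.dot(Ar, Ar)) * Ar   # one Minimal Residual step

assert np.allclose(r_hybrid, r_mr)
```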

Remark 7. If r′_n = r_{n−1} and M = I, then the hybrid procedure is identical to the Minimal Residual method.

Definition 1. Consider two vector sequences (u_n), (v_n) ⊂ ℝ^m such that lim_{n→∞} u_n = u and lim_{n→∞} v_n = v. We say that (u_n) converges with the same speed as (v_n) if, for every ε > 0, there exists N such that for all n ≥ N there are M_n ∈ ℝ^{m×m} and a_n ∈ ℝ^m with ‖a_n‖ ≤ ε such that

• v_{n+1} = M_n v_n,
• u_{n+1} = M_n u_n + a_n.

Lemma 3.4. Suppose that there exists N such that for all n ≥ N there is M_n ∈ ℝ^{m×m} such that r′_{n+1} = M_n r′_n and AM⁻¹M_n = M_n AM⁻¹. If lim_{n→∞} α_n = α exists and if there is K such that ‖M_n‖ < K for all n, then (r_n) converges with the same speed as (r′_n).

Proof. If (α_n) converges, then there is a sequence (ε_n) with lim_{n→∞} ε_n = 0 such that α_{n+1} = α_n + ε_n for all n. Setting a_n = ε_n AM⁻¹M_n r′_n, we get from the definition

r_{n+1} = r′_{n+1} − (1 − α_{n+1})AM⁻¹r′_{n+1} = M_n r′_n − (1 − α_n − ε_n)AM⁻¹M_n r′_n
        = M_n(r′_n − (1 − α_n)AM⁻¹r′_n) + ε_n AM⁻¹M_n r′_n = M_n r_n + a_n.

Obviously lim_{n→∞} a_n = 0 and the result follows.


We now assume that r′_n = c_n γ + a_n, where c_n ∈ ℝ, a_n ∈ ℝ^m, and that γ is an eigenvector of B. In this case we get

Lemma 3.5. Let γ be an eigenvector of B. If r′_n = c_n γ + a_n, then there are K ∈ ℝ and M_n ∈ ℝ^{m×m} such that for all n, ‖M_n‖ ≤ K and r_n = M_n a_n.

Proof. We know that

r_n = ℘_n r′_n = c_n ℘_n γ + ℘_n a_n

with ℘_n = I − p_n p_nᵀ G/(p_nᵀ G p_n), where p_n = AM⁻¹r′_n. Premultiplying p_n by ℘_n we get

0 = ℘_n p_n = (1 − λ)c_n ℘_n γ + ℘_n AM⁻¹a_n,

where λ is the eigenvalue of B corresponding to γ. Thus, since A is assumed to be regular,

c_n ℘_n γ = −[1/(1 − λ)] ℘_n AM⁻¹a_n.

Setting

M_n = ℘_n (I − [1/(1 − λ)] AM⁻¹),

we get r_n = M_n a_n. The matrix ℘_n is a G-orthogonal projection and thus ‖℘_n‖_G = 1. It follows that

‖M_n‖ ≤ 1 + [1/|1 − λ|] ‖AM⁻¹‖,

which ends the proof.

Remark 8. As a consequence of Lemma 3.5 we have ‖r_n‖ = O(‖a_n‖).

From Theorem 2.8, we easily get

Theorem 3.6. Let γ be an eigenvector of B with the corresponding eigenvalue λ. If r′_n = c_n γ + a_n with lim_{n→∞} ‖a_n‖/c_n = 0, then

lim_{n→∞} α_n = −λ/(1 − λ)  and  lim_{n→∞} ‖r_n‖/‖r′_n‖ = 0.

Proof. We have

r′_n = c_n γ + a_n,  Br′_n = λc_n γ + Ba_n,  AM⁻¹r′_n = (1 − λ)c_n γ + AM⁻¹a_n.

Hence, dividing numerator and denominator by c_n²,

lim_{n→∞} α_n = −lim_{n→∞} (Br′_n, AM⁻¹r′_n)/(AM⁻¹r′_n, AM⁻¹r′_n)
 = −lim_{n→∞} [λ(1 − λ)(γ, γ) + λ(γ, AM⁻¹a_n)/c_n + (1 − λ)(Ba_n, γ)/c_n + (Ba_n, AM⁻¹a_n)/c_n²]
             / [(1 − λ)²(γ, γ) + 2(1 − λ)(γ, AM⁻¹a_n)/c_n + (AM⁻¹a_n, AM⁻¹a_n)/c_n²]
 = −λ/(1 − λ),

since all the terms containing a_n tend to zero. Let ϑ_n be the angle between Zr′_n and ZAM⁻¹r′_n. Replacing r′_n and AM⁻¹r′_n by their expressions above and dividing again by the appropriate powers of c_n, we also get

lim_{n→∞} cos²ϑ_n = lim_{n→∞} (r′_n, AM⁻¹r′_n)²/(‖r′_n‖² ‖AM⁻¹r′_n‖²)
 = lim_{n→∞} [(1 − λ)(γ, γ) + (γ, AM⁻¹a_n)/c_n + (1 − λ)(a_n, γ)/c_n + (a_n, AM⁻¹a_n)/c_n²]²
            / {[(γ, γ) + 2(γ, a_n)/c_n + (a_n, a_n)/c_n²] [(1 − λ)²(γ, γ) + 2(1 − λ)(γ, AM⁻¹a_n)/c_n + (AM⁻¹a_n, AM⁻¹a_n)/c_n²]}
 = 1

and the result follows by Theorem 2.8.

The conditions of Lemma 3.5 and Theorem 3.6 seem difficult to check in practice. We now give an example where these results can be applied.

Example. Let {λ_i}_{i=1}^m be the eigenvalues of B = I − A with the corresponding eigenvectors {γ_i}_{i=1}^m. Suppose that |λ_1| ≥ … ≥ |λ_m| and that the eigenvectors form a basis of ℝ^m. Let r′_n be such that r′_n = Br′_{n−1} and let r_n be obtained by the hybrid procedure from r′_n and r′_{n+1}. Let r′_0 = Σ_{i=1}^m d_i γ_i. Thus

r′_n = Σ_{i=1}^m d_i λ_i^n γ_i = d_1 λ_1^n γ_1 + Σ_{i=2}^m d_i λ_i^n γ_i.

Setting

c_n = d_1 λ_1^n,  a_n = Σ_{i=2}^m d_i λ_i^n γ_i,

we get from Remark 8 and Theorem 3.6

Theorem 3.7. If r′_n = Br′_{n−1}, r′_0 = r_0 and if r_n is obtained by the hybrid procedure from r′_n and r′_{n+1}, then ‖r_n‖ = O(|λ_2|^n). Moreover, if |λ_2| < |λ_1|, then lim_{n→∞} ‖r_n‖/‖r′_n‖ = 0.

Remark 9. This theorem holds even if |λ_2| < 1 < |λ_1|.

Remark 10. Since (α_n) converges, Lemma 3.4 shows that (r_n) converges with the same speed as (r′_n). In this case, the iterations will be stopped when |α_n + λ_1/(1 − λ_1)| ≤ ε, where ε is an arbitrary threshold. Of course, the value of λ_1 is usually unknown and this test cannot be used in practice. Thus the iterations will be stopped when |α_n − α_{n−1}| ≤ ε. However, it must be noticed that, due to a possible stagnation of the method, this test does not guarantee that the iterates are close to the limit.
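The limit α_n → −λ_1/(1 − λ_1) that this stopping test monitors can be observed on a toy example. The diagonal matrix below is a hypothetical choice with a well-separated dominant eigenvalue, not one of the paper's test problems:

```python
import numpy as np

rng = np.random.default_rng(4)
m = 20
lam = np.concatenate(([0.7], np.linspace(0.3, -0.3, m - 1)))  # spectrum of B
B = np.diag(lam)                       # hypothetical diagonal test case
lam1 = 0.7                             # dominant eigenvalue; |lambda_2| = 0.3

r1 = rng.standard_normal(m)            # r'_0, generic initial residual
alphas = []
for n in range(40):
    r1_next = B @ r1                   # r'_{n+1} = B r'_n
    p = r1 - r1_next                   # p_n = A M^{-1} r'_n with M = I, A = I - B
    alphas.append(-np.dot(p, r1_next) / np.dot(p, p))   # alpha_n of case (ii)
    r1 = r1_next

# alpha_n tends to -lambda_1/(1 - lambda_1), as stated in Theorem 3.6
assert abs(alphas[-1] + lam1 / (1 - lam1)) < 1e-10
```

In practice λ_1 is unknown, which is why the computable test |α_n − α_{n−1}| ≤ ε is used instead.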

4. Numerical examples. In all the examples we take G = I, M = I, B = I − A and x_0 = 0. The right-hand side is computed so that the solution is x̃ = [1, …, 1]ᵀ. Each figure shows log‖r′_n‖ and log‖r_n‖ as a function of the number n of iterations; the lowest curve always corresponds to the hybrid procedure.

Let {λ_i}_{i=1}^m be the set of eigenvalues of B. The elements of the matrix A ∈ ℝ^{100×100} were randomly chosen in [0, 1]. The values of ‖B‖ and |λ_i| (i = 1, …, 100) were computed with Matlab with a precision of 10⁻²⁰.

4.1. Case (i). Let r′_n be obtained by the norm decomposition method of Gastinel [3] with ϕ_1(r) = Σ_{i=1}^m |r_i|. This method is as follows: for n = 0, 1, …,

x′_{n+1} = x′_n − α_n Aᵀz_n,  r′_{n+1} = r′_n + α_n AAᵀz_n,

where z_n = sgn(r′_n). Thus (z_n, r′_n) = ϕ_1(r′_n) and

α_n = −ϕ_1(r′_n)/(Aᵀz_n, Aᵀz_n).
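The method just described can be sketched as follows. The test matrix below is a hypothetical well-conditioned perturbation of the identity, chosen only so that convergence is guaranteed; it is not one of the paper's 100×100 examples:

```python
import numpy as np

def gastinel_l1(A, b, x0, iters):
    """Gastinel's norm-decomposition method for phi_1(r) = sum_i |r_i|:
    z_n = sgn(r'_n), step along A^T z_n, as described in the text."""
    x, r = x0.copy(), b - A @ x0
    for _ in range(iters):
        phi = np.abs(r).sum()            # phi_1(r'_n) = (z_n, r'_n)
        if phi == 0.0:                   # exact solution reached
            break
        z = np.sign(r)
        q = A.T @ z
        a = -phi / np.dot(q, q)          # alpha_n = -phi_1(r'_n)/(A^T z_n, A^T z_n)
        x = x - a * q                    # x'_{n+1} = x'_n - alpha_n A^T z_n
        r = r + a * (A @ q)              # r'_{n+1} = r'_n + alpha_n A A^T z_n
    return x, r

rng = np.random.default_rng(5)
m = 10
A = np.eye(m) + 0.1 * rng.standard_normal((m, m)) / np.sqrt(m)  # hypothetical matrix
x_true = np.ones(m)
b = A @ x_true
x, r = gastinel_l1(A, b, np.zeros(m), 1000)
assert np.linalg.norm(r) < 1e-6 * np.linalg.norm(b)
```

Each step decreases ‖e′_n‖, which is what makes the method a natural candidate for the smoothing of Section 3.1.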


Let r_n be computed by the hybrid procedure from r′_n and Br_{n−1}.

        Example 1   Example 2   Example 3
‖B‖     0.998605    0.663839    1.485374
|λ_1|   0.030418    0.661562    0.078104
|λ_2|   0.030372    0.040093    0.046249

For each example |λ_2| < |λ_1| and thus condition 2 of Theorem 3.2 is satisfied. We did not check condition 1, but the numerical results show that, in this case, the convergence of Gastinel's method has been accelerated.


4.2. Case (ii). Let r′_n be such that r′_n = Br′_{n−1} and let r_n be computed by the hybrid procedure from r′_n and r′_{n+1}.

        Example 4   Example 5   Example 6
‖B‖     6.296298    3.273282    6.457731
|λ_1|   6.274695    1.158723    0.822448
|λ_2|   0.380272    0.099341    0.195185

Let N be the index such that |α_N + λ_1/(1 − λ_1)| ≤ 10⁻²⁰. We get

            Example 4   Example 5   Example 6
N           12          > 35        > 55
log‖r′_N‖   20.992904   —           —
log‖r_N‖    −9.771103   —           —


For each example we have |λ_2| < |λ_1|. Thus the conditions of Theorem 3.7 are satisfied and we get lim_{n→∞} ‖r_n‖/‖r′_n‖ = 0 even if lim_{n→∞} ‖r′_n‖ = ∞ (see Examples 4 and 5). For Example 4 we get, at iteration 12, |α_12 + λ_1/(1 − λ_1)| ≤ 10⁻²⁰. Moreover, we also have |α_n + λ_1/(1 − λ_1)| ≤ 10⁻²⁰ for n ∈ [12, 20], and thus we see that (r_n) converges with the same speed as (r′_n). We can also remark that, since the sequence (r′_n) diverges, so does (r_n) (from iteration 12), and thus it is better to stop the iterations at n = 12.

References

[1] C. Brezinski and M. Redivo Zaglia, Hybrid procedures for solving linear systems, Numer. Math. 67 (1994), 1–19.
[2] R. W. Freund, A transpose-free quasi-minimal residual algorithm for non-Hermitian linear systems, SIAM J. Sci. Statist. Comput. 14 (1993), 470–482.
[3] N. Gastinel, Procédé itératif pour la résolution numérique d'un système d'équations linéaires, C. R. Acad. Sci. Paris 246 (1958), 2571–2574.
[4] K. Jbilou, Projection-minimization methods for nonsymmetric linear systems, Linear Algebra Appl. 229 (1995), 101–125.
[5] K. Jbilou, G-orthogonal projection methods for solving linear systems, to appear.
[6] W. Schönauer, Scientific Computing on Vector Computers, North-Holland, Amsterdam, 1987.
[7] W. Schönauer, H. Müller and E. Schnepf, Numerical tests with biconjugate gradient type methods, Z. Angew. Math. Mech. 65 (1985), T400–T402.
[8] R. Weiss, Convergence behavior of generalized conjugate gradient methods, Ph.D. Thesis, University of Karlsruhe, 1990.
[9] R. Weiss, Error-minimizing Krylov subspace methods, SIAM J. Sci. Statist. Comput. 15 (1994), 511–527.
[10] R. Weiss, Properties of generalized conjugate gradient methods, Numer. Linear Algebra Appl. 1 (1994), 45–63.
[11] R. Weiss and W. Schönauer, Accelerating generalized conjugate gradient methods by smoothing, in: Iterative Methods in Linear Algebra, R. Beauwens and P. de Groen (eds.), North-Holland, Amsterdam, 1992, 283–292.
[12] L. Zhou and H. F. Walker, Residual smoothing techniques for iterative methods, SIAM J. Sci. Statist. Comput. 15 (1994), 297–312.

Anna Abkowicz and Claude Brezinski
Laboratoire d'Analyse Numérique et d'Optimisation
UFR IEEA–M3
Université des Sciences et Technologies de Lille
F-59655 Villeneuve d'Ascq Cedex, France
E-mail: brezinsk@omega.univ-lille1.fr

Received on 24.1.1995
