
A. FDIL (Marrakech)

SOME RESULTS ON CONVERGENCE ACCELERATION FOR THE E-ALGORITHM

Abstract. Some new results on convergence acceleration for the E-algorithm, which is a general extrapolation method, are obtained. A technique for avoiding numerical instability is proposed. Some applications are given. Theoretical results are illustrated by numerical experiments.

1. Introduction. The E-algorithm [2], [7] is a general extrapolation process. It depends on some auxiliary sequences $(g_i(n))$, $i \ge 1$. For some choices of the auxiliary sequences, one obtains some known convergence acceleration methods (see [2], [3]). When applied to a sequence $(s_n)$ of complex numbers, the algorithm has the following rules:

$$E_0^{(n)} = s_n, \qquad g_{0,i}^{(n)} = g_i(n), \qquad n \ge 0,\ i \ge 1,$$
$$E_k^{(n)} = \frac{g_{k-1,k}^{(n+1)}\,E_{k-1}^{(n)} - g_{k-1,k}^{(n)}\,E_{k-1}^{(n+1)}}{g_{k-1,k}^{(n+1)} - g_{k-1,k}^{(n)}}, \qquad k \ge 1 \quad \text{(main rule)},$$
$$g_{k,j}^{(n)} = \frac{g_{k-1,k}^{(n+1)}\,g_{k-1,j}^{(n)} - g_{k-1,k}^{(n)}\,g_{k-1,j}^{(n+1)}}{g_{k-1,k}^{(n+1)} - g_{k-1,k}^{(n)}}, \qquad j > k \quad \text{(auxiliary rule)}.$$
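The rules above translate directly into a short program. The following Python sketch is illustrative only (it is not the author's code); the auxiliary sequences are assumed to be supplied as callables `g[i-1](n)` returning $g_i(n)$, and the denominators $g_{k-1,k}^{(n+1)} - g_{k-1,k}^{(n)}$ are assumed to be nonzero.

```python
def e_algorithm(s, g, K):
    """Sketch of the E-algorithm defined by the rules above (illustrative only).

    s : list of the first terms s_0, ..., s_N of the sequence to accelerate
    g : list of K callables, g[i-1](n) returning g_i(n)
    Returns E with E[k][n] = E_k^{(n)} for 0 <= k <= K, 0 <= n <= N - k.
    """
    N = len(s) - 1
    E = [list(s)]                                        # E_0^{(n)} = s_n
    # G[k][n][j - k - 1] stores g_{k,j}^{(n)} for j = k+1, ..., K
    G = [[[g[i](n) for i in range(K)] for n in range(N + 1)]]
    for k in range(1, K + 1):
        Ek, Gk = [], []
        for n in range(N + 1 - k):
            a = G[k - 1][n][0]                           # g_{k-1,k}^{(n)}
            b = G[k - 1][n + 1][0]                       # g_{k-1,k}^{(n+1)}
            d = b - a                                    # assumed nonzero
            Ek.append((b * E[k - 1][n] - a * E[k - 1][n + 1]) / d)        # main rule
            Gk.append([(b * G[k - 1][n][j] - a * G[k - 1][n + 1][j]) / d  # auxiliary rule
                       for j in range(1, K - k + 1)])
        E.append(Ek)
        G.append(Gk)
    return E
```

For instance, with $g_i(n) = \Delta s_{n+i-1}$ the columns $E_k^{(n)}$ reproduce the Shanks transformation [2].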

Some results on convergence acceleration for the E-algorithm were obtained by Brezinski [2] when the auxiliary sequences satisfy the following condition:

$$\forall i \ge 1,\qquad \frac{g_i(n+1)}{g_i(n)} \xrightarrow[n\to\infty]{} b_i \ne 1, \quad\text{and}\quad b_i \ne b_j \ \text{for } i \ne j.$$

This condition will be called Brezinski’s condition.

1991 Mathematics Subject Classification: Primary 65B05.

Key words and phrases: convergence acceleration, extrapolation, summation of series, numerical quadrature.


In this article, we give some results on convergence acceleration in three cases where Brezinski’s condition is not satisfied.

All the sequences considered are sequences of real numbers.

In Section 2, we consider the case where $g_i(n) = g(n)b_i^n$ for all $i \ge 1$, with $g(n+1)/g(n) \to 0$ as $n \to \infty$. Since $g_i(n+1)/g_i(n) \to 0$ as $n \to \infty$ for all $i \ge 1$, we cannot apply the results obtained by Brezinski [2].

In Section 3, we study the E-algorithm when the auxiliary sequences are such that, for each $i \ge 1$, $g_i(n)$ has an asymptotic expansion of the form
$$g_i(n) \approx \lambda_i^n n^{\theta_i}\Bigl(a_{i,0} + \frac{a_{i,1}}{n^{\alpha_{i,1}}} + \ldots + \frac{a_{i,j}}{n^{\alpha_{i,j}}} + \ldots\Bigr).$$

When the numbers $\lambda_i$ are close to $1$, the E-algorithm is numerically unstable. In order to avoid numerical instability, we propose to apply the E-algorithm with some subsequences of $(g_i(n))$ as auxiliary sequences. This technique is particularly useful when $(s_n)$ has the following asymptotic expansion:
$$s_n - s \approx a_1 g_1(n) + a_2 g_2(n) + \ldots + a_i g_i(n) + \ldots$$

By choosing the subsequences of the auxiliary sequences appropriately, we can apply the results of Section 2, where the auxiliary sequences converge superlinearly to $0$. This technique is illustrated by numerical examples in Section 4.

Section 4 is devoted to the application of the E-algorithm to the summation of series and the computation of integrals.

2. Superlinear convergence. Let us begin with the following notation and definitions. $\mathbb{N}$ denotes the set of positive integers and $\mathbb{R}$ the set of real numbers. If $(s_n)$ is a convergent sequence, we denote by $s$ its limit; $u_n = o(v_n)$ means that $u_n/v_n \to 0$ as $n \to \infty$.

Definitions. Let $(f_1(n)), (f_2(n)), \ldots, (f_i(n)), \ldots$ be sequences of real numbers.

• We say that $(f_1, f_2, \ldots, f_i, \ldots)$ is an asymptotic sequence if for each index $i$, $f_{i+1}(n) = o(f_i(n))$ as $n \to \infty$.

Let $(u_n)$ be a sequence of real numbers, and let $(f_1, f_2, \ldots, f_i, \ldots)$ be an asymptotic sequence.

• We say that $(u_n)$ has an asymptotic expansion with respect to the asymptotic sequence $(f_1, f_2, \ldots, f_i, \ldots)$ if there exist constants $a_i \in \mathbb{R}$, $i \ge 1$, such that for all $k \ge 1$,
$$u_n = \sum_{i=1}^{k} a_i f_i(n) + o(f_k(n)) \quad \text{as } n \to \infty.$$

Then we write
$$u_n \approx a_1 f_1(n) + \ldots + a_k f_k(n) + \ldots$$

Definitions. Let $A$ be a set of convergent sequences.

• We say that the E-algorithm is regular on $A$ if for all $(s_n) \in A$ and $k \ge 1$,
$$E_k^{(n)} - s = o(1) \quad \text{as } n \to \infty.$$

• We say that the E-algorithm is effective on $A$ if for all $(s_n) \in A$ and $k \ge 1$, either
$$\exists n_0,\ \forall n \ge n_0,\quad E_k^{(n)} = E_{k-1}^{(n)} = s, \qquad\text{or}\qquad E_k^{(n)} - s = o(E_{k-1}^{(n)} - s) \ \text{as } n \to \infty.$$

Let us now assume that the auxiliary sequences of the E-algorithm are such that
$$(\mathrm{H}_1)\qquad \forall i \ge 1,\quad g_i(n) = g(n)\,b_i^n,$$
where $g(n+1) = o(g(n))$, $b_1 = 1$, and $b_i \ne b_j$ for $i \ne j$. Then Brezinski's condition is not satisfied.
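As a concrete illustration (not taken from the paper), one may take $g(n) = 2^{-n^2}$ and $b_i = 1/i$:
$$g_i(n) = g(n)\,b_i^n = \frac{2^{-n^2}}{i^{\,n}}, \qquad \frac{g_i(n+1)}{g_i(n)} = \frac{2^{-(2n+1)}}{i} \xrightarrow[n\to\infty]{} 0 \quad\text{for every } i \ge 1,$$
so $(\mathrm{H}_1)$ holds ($b_1 = 1$ and the $b_i$ are pairwise distinct), while the limits appearing in Brezinski's condition are all equal to $0$ and hence not pairwise distinct.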

Lemma 1. For any $k \ge 0$ and $i > k$, we have

(i) $\displaystyle \forall n,\quad \frac{g_{k,i}^{(n+1)}}{g_{k,i}^{(n)}} = \frac{b_i}{b_{k+1}}\cdot\frac{g_{k,k+1}^{(n+1)}}{g_{k,k+1}^{(n)}}$;

(ii) $g_{k,i}^{(n+1)} = o(g_{k,i}^{(n)})$ as $n \to \infty$.

Proof. By induction on $k$.

Theorem 1. Let $(s_n)$ be a convergent sequence. If the auxiliary sequences $(g_i(n))$ of the E-algorithm satisfy $(\mathrm{H}_1)$, then $E_k^{(n)} \to s$ as $n \to \infty$, for each $k \ge 0$.

Proof (induction on $k$). For $k = 0$, the result is obvious. Assume that
$$(1)\qquad E_k^{(n)} - s = o(1) \quad \text{as } n \to \infty.$$
From the main rule of the E-algorithm, we get
$$(2)\qquad E_{k+1}^{(n)} - s = \frac{\dfrac{g_{k,k+1}^{(n+1)}}{g_{k,k+1}^{(n)}}\,(E_k^{(n)} - s) - (E_k^{(n+1)} - s)}{\dfrac{g_{k,k+1}^{(n+1)}}{g_{k,k+1}^{(n)}} - 1}.$$
From Lemma 1 we get
$$(3)\qquad g_{k,k+1}^{(n+1)} = o(g_{k,k+1}^{(n)}) \quad \text{as } n \to \infty.$$
From (1)–(3) we deduce that $E_{k+1}^{(n)} - s = o(1)$ as $n \to \infty$.

Theorem 2. Let $k \ge 1$. If the conditions of Theorem 1 are satisfied, then

(i) $E_k^{(n)} - s = o(E_{k-1}^{(n)} - s)$ as $n \to \infty$ iff $E_{k-1}^{(n+1)} - s = o(E_{k-1}^{(n)} - s)$ as $n \to \infty$;

(ii) $E_k^{(n)} - s = o(E_{k-1}^{(n+1)} - s)$ as $n \to \infty$ iff
$$\frac{(E_{k-1}^{(n+1)} - s)\,g_{k-1,k}^{(n)}}{(E_{k-1}^{(n)} - s)\,g_{k-1,k}^{(n+1)}} \xrightarrow[n\to\infty]{} 1.$$

Proof. (i) We have
$$(4)\qquad \frac{E_k^{(n)} - s}{E_{k-1}^{(n)} - s} = \frac{\dfrac{g_{k-1,k}^{(n+1)}}{g_{k-1,k}^{(n)}} - \dfrac{E_{k-1}^{(n+1)} - s}{E_{k-1}^{(n)} - s}}{\dfrac{g_{k-1,k}^{(n+1)}}{g_{k-1,k}^{(n)}} - 1}.$$
Lemma 1(ii) gives
$$(5)\qquad g_{k-1,k}^{(n+1)} = o(g_{k-1,k}^{(n)}) \quad \text{as } n \to \infty.$$
From (4), (5) we deduce that
$$E_k^{(n)} - s = o(E_{k-1}^{(n)} - s) \ \text{as } n \to \infty \iff E_{k-1}^{(n+1)} - s = o(E_{k-1}^{(n)} - s) \ \text{as } n \to \infty.$$

(ii) We have
$$(6)\qquad \frac{E_k^{(n)} - s}{E_{k-1}^{(n+1)} - s} = \frac{\dfrac{g_{k-1,k}^{(n+1)}}{g_{k-1,k}^{(n)}}\cdot\dfrac{E_{k-1}^{(n)} - s}{E_{k-1}^{(n+1)} - s} - 1}{\dfrac{g_{k-1,k}^{(n+1)}}{g_{k-1,k}^{(n)}} - 1}.$$
From (5), (6) we get
$$E_k^{(n)} - s = o(E_{k-1}^{(n+1)} - s) \ \text{as } n \to \infty \iff \frac{(E_{k-1}^{(n+1)} - s)\,g_{k-1,k}^{(n)}}{(E_{k-1}^{(n)} - s)\,g_{k-1,k}^{(n+1)}} \xrightarrow[n\to\infty]{} 1.$$

Property 1. If the auxiliary sequences $(g_i(n))$ satisfy $(\mathrm{H}_1)$ with $|b_{i+1}| < |b_i|$ for $i \ge 1$, then for $k \ge 0$ and $i > k$,
$$g_{k,i+1}^{(n)} = o(g_{k,i}^{(n)}) \quad \text{as } n \to \infty$$
(i.e. $(g_{k,k+1}, g_{k,k+2}, \ldots, g_{k,i}, \ldots)$ is an asymptotic sequence).

Proof. From Lemma 1 we get
$$\frac{g_{k,i}^{(n+1)}}{g_{k,i}^{(n)}} = \frac{b_i}{b_{k+1}}\cdot\frac{g_{k,k+1}^{(n+1)}}{g_{k,k+1}^{(n)}}, \qquad \frac{g_{k,i+1}^{(n+1)}}{g_{k,i+1}^{(n)}} = \frac{b_{i+1}}{b_{k+1}}\cdot\frac{g_{k,k+1}^{(n+1)}}{g_{k,k+1}^{(n)}}.$$
Thus
$$\frac{g_{k,i+1}^{(n+1)}}{g_{k,i+1}^{(n)}} = \frac{b_{i+1}}{b_i}\cdot\frac{g_{k,i}^{(n+1)}}{g_{k,i}^{(n)}}.$$
Consequently,
$$\frac{g_{k,i+1}^{(n)}}{g_{k,i}^{(n)}} = \Bigl(\frac{b_{i+1}}{b_i}\Bigr)^{n}\,\frac{g_{k,i+1}^{(0)}}{g_{k,i}^{(0)}}.$$
Since $|b_{i+1}| < |b_i|$, it follows that $g_{k,i+1}^{(n)} = o(g_{k,i}^{(n)})$ as $n \to \infty$.

Let us mention that, if the auxiliary sequences $(g_i(n))$ of the E-algorithm form an asymptotic sequence, then in general, for $k \ge 1$, $(g_{k,k+1}, g_{k,k+2}, \ldots, g_{k,i}, \ldots)$ is not an asymptotic sequence. For example, let
$$g_i(n) = (-1)^{n(i+1)}/(n+1)^i, \qquad n \ge 0,\ i \ge 1.$$
Then $(g_1, g_2, \ldots, g_i, \ldots)$ is an asymptotic sequence, but $(g_{1,2}, g_{1,3}, \ldots, g_{1,i}, \ldots)$ is not.
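This claim can be checked numerically. The sketch below (illustrative, not from the paper) evaluates $g_{1,j}^{(n)}$ directly from the auxiliary rule; numerically, the ratio $g_{1,4}^{(n)}/g_{1,3}^{(n)}$ appears to oscillate around $\pm 1$ rather than tend to $0$.

```python
# Numerical check of the example above (illustrative only): with
# g_i(n) = (-1)^{n(i+1)} / (n+1)^i, compute g_{1,j}^{(n)} by the auxiliary rule
# and inspect consecutive ratios.
def g(i, n):
    return (-1) ** (n * (i + 1)) / (n + 1) ** i

def g1(j, n):
    # auxiliary rule with k = 1: g_{1,j}^{(n)}
    a, b = g(1, n), g(1, n + 1)
    return (b * g(j, n) - a * g(j, n + 1)) / (b - a)

for n in range(6):
    # g_{1,3}^{(n)}/g_{1,2}^{(n)} tends to 0, but g_{1,4}^{(n)}/g_{1,3}^{(n)} does not
    print(n, g1(3, n) / g1(2, n), g1(4, n) / g1(3, n))
```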

Lemma 2. Let $(s_n)$ be a convergent sequence. If

(i) the auxiliary sequences $(g_i(n))$ satisfy $(\mathrm{H}_1)$ with $|b_{i+1}| < |b_i|$ for $i \ge 1$,

(ii) $s_n - s \approx a_1 g_1(n) + a_2 g_2(n) + \ldots + a_i g_i(n) + \ldots$,

then for each $k \ge 0$,
$$E_k^{(n)} - s \approx a_{k+1} g_{k,k+1}^{(n)} + \ldots + a_i g_{k,i}^{(n)} + \ldots$$

Proof. By induction on $k$.

Remark. If $a_j = 0$ for all $j > k$, then there exists $n_0$ such that $E_k^{(n)} = s$ for $n \ge n_0$; this result is a particular case of a general result given by Brezinski [2] for the kernel of step $k$ of the E-algorithm.

Theorem 3. Let $(s_n)$ be a convergent sequence. If the auxiliary sequences $(g_i(n))$ satisfy $(\mathrm{H}_1)$ with $|b_{i+1}| < |b_i|$ for $i \ge 1$, and if
$$s_n - s \approx a_1 g_1(n) + a_2 g_2(n) + \ldots + a_i g_i(n) + \ldots,$$
then for all $k \ge 1$:

(i) if $a_i = 0$ for all $i > k$, then there exists $n_0$ such that $E_k^{(n)} = s$ for all $n \ge n_0$;

(ii) if $a_k \ne 0$, then $E_k^{(n)} - s = o(E_{k-1}^{(n+1)} - s)$ as $n \to \infty$;

(iii) if $a_k = 0$ and there exists $i > k$ such that $a_i \ne 0$, then
$$E_k^{(n)} - s = o(E_{k-1}^{(n)} - s), \qquad E_k^{(n)} - s \ne o(E_{k-1}^{(n+1)} - s) \quad \text{as } n \to \infty.$$

Proof. (i) follows from the preceding remark.

(ii) Assume that $a_k \ne 0$. From Lemma 2, we get $E_{k-1}^{(n)} - s = a_k g_{k-1,k}^{(n)} + o(g_{k-1,k}^{(n)})$ as $n \to \infty$. Thus
$$\frac{(E_{k-1}^{(n+1)} - s)\,g_{k-1,k}^{(n)}}{(E_{k-1}^{(n)} - s)\,g_{k-1,k}^{(n+1)}} \xrightarrow[n\to\infty]{} 1,$$
and from Theorem 2 we deduce that $E_k^{(n)} - s = o(E_{k-1}^{(n+1)} - s)$ as $n \to \infty$.

(iii) Let $i_0 > k$ be the smallest integer such that $a_{i_0} \ne 0$. From Lemma 2 we get $E_{k-1}^{(n)} - s = a_{i_0} g_{k-1,i_0}^{(n)} + o(g_{k-1,i_0}^{(n)})$ as $n \to \infty$. Thus
$$(7)\qquad \frac{E_{k-1}^{(n+1)} - s}{E_{k-1}^{(n)} - s} = \frac{g_{k-1,i_0}^{(n+1)}}{g_{k-1,i_0}^{(n)}}\,(1 + o(1)) \quad \text{as } n \to \infty.$$
From Lemma 1, we get
$$(8)\qquad g_{k-1,i_0}^{(n+1)} = o(g_{k-1,i_0}^{(n)}) \quad \text{as } n \to \infty.$$
The relations (7), (8) and Theorem 2(i) give $E_k^{(n)} - s = o(E_{k-1}^{(n)} - s)$ as $n \to \infty$.

We have
$$(9)\qquad \frac{(E_{k-1}^{(n+1)} - s)\,g_{k-1,k}^{(n)}}{(E_{k-1}^{(n)} - s)\,g_{k-1,k}^{(n+1)}} = \frac{g_{k-1,i_0}^{(n+1)}}{g_{k-1,i_0}^{(n)}}\cdot\frac{g_{k-1,k}^{(n)}}{g_{k-1,k}^{(n+1)}}\,(1 + o(1)) \quad \text{as } n \to \infty.$$
From Lemma 1, we deduce that
$$(10)\qquad \frac{g_{k-1,i_0}^{(n+1)}}{g_{k-1,i_0}^{(n)}}\cdot\frac{g_{k-1,k}^{(n)}}{g_{k-1,k}^{(n+1)}} = \frac{b_{i_0}}{b_k}.$$
The relations (9), (10) show that the sequence $\bigl((E_{k-1}^{(n+1)} - s)\,g_{k-1,k}^{(n)}/((E_{k-1}^{(n)} - s)\,g_{k-1,k}^{(n+1)})\bigr)$ does not converge to $1$; then, from Theorem 2, we deduce that $E_k^{(n)} - s \ne o(E_{k-1}^{(n+1)} - s)$ as $n \to \infty$.

Let us now assume that the sequence $(s_n)$ satisfies
$$(11)\qquad s_n - s \approx g(n)\,(a_1 c_1^n + a_2 c_2^n + \ldots + a_i c_i^n + \ldots),$$
where $a_1 \ne 0$ and $|c_1| > |c_2| > \ldots > |c_i| > |c_{i+1}| > \ldots > 0$. Let $(r_{j,i}^{(n)})$, $j \ge 0$, $i \ge 1$, be the sequences obtained by applying the E-algorithm with the auxiliary sequences $g_k(n) = g(n)b_k^n$ to the sequence $(g(n)c_i^n)$. We have
$$r_{0,i}^{(n)} = g(n)c_i^n, \quad n \ge 0, \qquad r_{j,i}^{(n)} = \frac{g_{j-1,j}^{(n+1)}\,r_{j-1,i}^{(n)} - g_{j-1,j}^{(n)}\,r_{j-1,i}^{(n+1)}}{g_{j-1,j}^{(n+1)} - g_{j-1,j}^{(n)}}, \quad j \ge 1.$$

By induction on k, one can easily prove the following lemmas.

Lemma 3. Let $i \ge 1$. Then
$$\forall k \ge 0,\ \forall j > k,\ \forall n \ge 0, \qquad \frac{r_{k,i}^{(n+1)}}{r_{k,i}^{(n)}} = \frac{c_i}{b_j}\cdot\frac{g_{k,j}^{(n+1)}}{g_{k,j}^{(n)}}.$$

Lemma 4. If the auxiliary sequences $(g_i(n))$ satisfy $(\mathrm{H}_1)$, and if $(s_n)$ satisfies (11), then for all $k \ge 0$,
$$E_k^{(n)} - s \approx a_1 r_{k,1}^{(n)} + a_2 r_{k,2}^{(n)} + \ldots + a_i r_{k,i}^{(n)} + \ldots$$

Theorem 4. If the conditions of Lemma 4 are satisfied, then the E-algorithm is effective on $(s_n)$.

Proof. Let $k \ge 0$. Lemma 4 gives
$$(12)\qquad E_k^{(n)} - s \approx a_1 r_{k,1}^{(n)} + \ldots + a_i r_{k,i}^{(n)} + \ldots$$
From Lemma 3, we get
$$\frac{r_{k,1}^{(n+1)}}{r_{k,1}^{(n)}} = \frac{c_1}{b_{k+1}}\cdot\frac{g_{k,k+1}^{(n+1)}}{g_{k,k+1}^{(n)}}.$$
Then, from Lemma 1, we obtain
$$(13)\qquad r_{k,1}^{(n+1)} = o(r_{k,1}^{(n)}) \quad \text{as } n \to \infty.$$
From (12), (13) we deduce that $E_k^{(n+1)} - s = o(E_k^{(n)} - s)$ as $n \to \infty$, and from Theorem 2 we obtain $E_{k+1}^{(n)} - s = o(E_k^{(n)} - s)$ as $n \to \infty$. Consequently, the E-algorithm is effective on $(s_n)$.

3. Linear and logarithmic convergence. In this section, we shall study the E-algorithm when the auxiliary sequences are such that
$$(\mathrm{H}_2)\qquad \forall i \ge 1,\quad g_i(n) \approx \lambda_i^n n^{\theta_i}\Bigl(a_{i,0} + \frac{a_{i,1}}{n^{\alpha_{i,1}}} + \ldots + \frac{a_{i,j}}{n^{\alpha_{i,j}}} + \ldots\Bigr),$$
where $a_{i,0} \ne 0$, $0 < \alpha_{i,1} < \alpha_{i,2} < \ldots < \alpha_{i,j} < \ldots$, and $(\lambda_i, \theta_i) \ne (\lambda_j, \theta_j)$ for $i \ne j$.

Remark. If $\lambda_i \ne \lambda_j$ for $i \ne j$, then Brezinski's condition is satisfied.

3.1. Linear convergence. From the sequence $(\theta_i)$, we define the following double sequence: $\theta_{0,i} = \theta_i$ for $i \ge 1$, and for $k \ge 1$ and $i > k$,
$$\theta_{k,i} = \begin{cases} \theta_{k-1,i} - 1 & \text{if } \lambda_i = \lambda_k, \\ \theta_{k-1,i} & \text{otherwise.} \end{cases}$$

One can easily prove the following property:

Property 2. Let $k \ge 0$ and let $j > i > k$. If $\lambda_i = \lambda_j$, then

(i) $\theta_{k,i} \ne \theta_{k,j}$;

(ii) if $\theta_j < \theta_i$, then $\theta_{k,j} < \theta_{k,i}$.

Property 3. If the auxiliary sequences $(g_i(n))$ satisfy $(\mathrm{H}_2)$ with $\lambda_i \ne 1$ for $i \ge 1$, then for all $k \ge 0$ and $i > k$,
$$g_{k,i}^{(n)} \approx \lambda_i^n n^{\theta_{k,i}}\Bigl(a_{k,i,0} + \frac{a_{k,i,1}}{n^{\alpha_{k,i,1}}} + \ldots + \frac{a_{k,i,j}}{n^{\alpha_{k,i,j}}} + \ldots\Bigr),$$
where $a_{k,i,0} \ne 0$ and $0 < \alpha_{k,i,1} < \ldots < \alpha_{k,i,j} < \ldots$

Proof (induction on $k$). For $k = 0$, the property is true. Assume that it is true up to index $k$. Let $i > k + 1$. From the auxiliary rule of the E-algorithm we get
$$(14)\qquad \frac{g_{k+1,i}^{(n)}}{g_{k,i}^{(n)}} = \frac{\dfrac{g_{k,k+1}^{(n+1)}}{g_{k,k+1}^{(n)}} - \dfrac{g_{k,i}^{(n+1)}}{g_{k,i}^{(n)}}}{\dfrac{g_{k,k+1}^{(n+1)}}{g_{k,k+1}^{(n)}} - 1}.$$
From the induction assumption we get
$$(15)\qquad g_{k,i}^{(n)} \approx \lambda_i^n n^{\theta_{k,i}}\Bigl(a_{k,i,0} + \frac{a_{k,i,1}}{n^{\alpha_{k,i,1}}} + \ldots + \frac{a_{k,i,j}}{n^{\alpha_{k,i,j}}} + \ldots\Bigr),$$
where $a_{k,i,0} \ne 0$ and $0 < \alpha_{k,i,1} < \ldots < \alpha_{k,i,j} < \ldots$, and
$$(16)\qquad g_{k,k+1}^{(n)} \approx \lambda_{k+1}^n n^{\theta_{k,k+1}}\Bigl(a_{k,k+1,0} + \frac{a_{k,k+1,1}}{n^{\alpha_{k,k+1,1}}} + \ldots + \frac{a_{k,k+1,j}}{n^{\alpha_{k,k+1,j}}} + \ldots\Bigr),$$
where $a_{k,k+1,0} \ne 0$ and $0 < \alpha_{k,k+1,1} < \ldots < \alpha_{k,k+1,j} < \ldots$ Thus
$$(17)\qquad \frac{g_{k,i}^{(n+1)}}{g_{k,i}^{(n)}} \approx \lambda_i\Bigl(1 + \frac{\theta_{k,i}}{n} + \frac{d_{i,1}}{n^{p_{i,1}}} + \ldots + \frac{d_{i,j}}{n^{p_{i,j}}} + \ldots\Bigr),$$
where $1 < p_{i,1} < \ldots < p_{i,j} < \ldots$, and
$$(18)\qquad \frac{g_{k,k+1}^{(n+1)}}{g_{k,k+1}^{(n)}} \approx \lambda_{k+1}\Bigl(1 + \frac{\theta_{k,k+1}}{n} + \frac{d_{k+1,1}}{n^{p_{k+1,1}}} + \ldots + \frac{d_{k+1,j}}{n^{p_{k+1,j}}} + \ldots\Bigr),$$
where $1 < p_{k+1,1} < \ldots < p_{k+1,j} < \ldots$ From (14), (17), (18) we get
$$(19)\qquad \frac{g_{k+1,i}^{(n)}}{g_{k,i}^{(n)}} \approx \frac{(\lambda_{k+1} - \lambda_i) + \dfrac{\lambda_{k+1}\theta_{k,k+1} - \lambda_i\theta_{k,i}}{n} + \ldots}{(\lambda_{k+1} - 1) + \dfrac{\lambda_{k+1}\theta_{k,k+1}}{n} + \ldots}.$$

If $\lambda_i \ne \lambda_{k+1}$, then the relations (15), (19) give
$$g_{k+1,i}^{(n)} \approx \lambda_i^n n^{\theta_{k+1,i}}\Bigl(a_{k+1,i,0} + \frac{a_{k+1,i,1}}{n^{\alpha_{k+1,i,1}}} + \ldots + \frac{a_{k+1,i,j}}{n^{\alpha_{k+1,i,j}}} + \ldots\Bigr)$$
with $\theta_{k+1,i} = \theta_{k,i}$,
$$a_{k+1,i,0} = \frac{a_{k,i,0}(\lambda_{k+1} - \lambda_i)}{\lambda_{k+1} - 1} \ne 0, \quad\text{and}\quad 0 < \alpha_{k+1,i,1} < \ldots < \alpha_{k+1,i,j} < \ldots$$
If $\lambda_i = \lambda_{k+1}$, then, from Property 2, we get $\theta_{k,i} \ne \theta_{k,k+1}$, and from (15), (19) we deduce that
$$g_{k+1,i}^{(n)} \approx \lambda_i^n n^{\theta_{k+1,i}}\Bigl(a_{k+1,i,0} + \frac{a_{k+1,i,1}}{n^{\alpha_{k+1,i,1}}} + \ldots + \frac{a_{k+1,i,j}}{n^{\alpha_{k+1,i,j}}} + \ldots\Bigr)$$
with $\theta_{k+1,i} = \theta_{k,i} - 1$,
$$a_{k+1,i,0} = \frac{a_{k,i,0}(\theta_{k,k+1} - \theta_{k,i})\lambda_{k+1}}{\lambda_{k+1} - 1} \ne 0, \quad\text{and}\quad 0 < \alpha_{k+1,i,1} < \ldots < \alpha_{k+1,i,j} < \ldots$$
Thus the property is true for $k + 1$.

Property 4. If the condition of Property 3 is satisfied, then for all $k \ge 0$ and $i > k$,
$$g_{k,i}^{(n+1)}/g_{k,i}^{(n)} \xrightarrow[n\to\infty]{} \lambda_i.$$
Proof. This is obvious.

An immediate consequence of Property 4 is

Theorem 5. Let $(s_n)$ be a convergent sequence. If the condition of Property 3 is satisfied, then for all $k \ge 0$, $E_k^{(n)} \to s$ as $n \to \infty$.

Theorem 6. Let $(s_n)$ be a convergent sequence. If the condition of Property 3 is satisfied, then for all $k \ge 0$,
$$E_{k+1}^{(n)} - s = o(E_k^{(n)} - s) \ \text{as } n \to \infty \iff \frac{E_k^{(n+1)} - s}{E_k^{(n)} - s} \xrightarrow[n\to\infty]{} \lambda_{k+1}.$$

Proof. We have
$$(20)\qquad \frac{E_{k+1}^{(n)} - s}{E_k^{(n)} - s} = \frac{\dfrac{g_{k,k+1}^{(n+1)}}{g_{k,k+1}^{(n)}} - \dfrac{E_k^{(n+1)} - s}{E_k^{(n)} - s}}{\dfrac{g_{k,k+1}^{(n+1)}}{g_{k,k+1}^{(n)}} - 1}.$$
From Property 4, we get
$$(21)\qquad \frac{g_{k,k+1}^{(n+1)}}{g_{k,k+1}^{(n)}} \xrightarrow[n\to\infty]{} \lambda_{k+1}.$$
From (20), (21) we deduce that
$$E_{k+1}^{(n)} - s = o(E_k^{(n)} - s) \ \text{as } n \to \infty \iff \frac{E_k^{(n+1)} - s}{E_k^{(n)} - s} \xrightarrow[n\to\infty]{} \lambda_{k+1}.$$

Remark. Since $\lambda_{k+1} \ne 0$, it follows that $E_{k+1}^{(n)} - s = o(E_k^{(n)} - s)$ as $n \to \infty$ iff $E_{k+1}^{(n)} - s = o(E_k^{(n+1)} - s)$ as $n \to \infty$.

Let us now assume that the sequences $(\lambda_n)$, $(\theta_n)$ of the assumption $(\mathrm{H}_2)$ satisfy
$$(\mathrm{H}_3)\qquad \forall n \ge 1,\quad 0 < |\lambda_n| < 1;$$
$$(\mathrm{H}_4)\qquad \forall n > m,\quad \text{either } |\lambda_n| < |\lambda_m|, \ \text{or } \lambda_n = \lambda_m \ \text{and } \theta_n < \theta_m.$$

Property 5. If the auxiliary sequences $(g_i(n))$ satisfy $(\mathrm{H}_2)$ and if $(\mathrm{H}_3)$, $(\mathrm{H}_4)$ are satisfied, then for all $k \ge 0$ and $i > k$, $g_{k,i}^{(n)} = o(1)$ and $g_{k,i+1}^{(n)} = o(g_{k,i}^{(n)})$ as $n \to \infty$.

Proof. Let $k \ge 0$ and $i > k$. From Property 3, we get $g_{k,i}^{(n)} \approx \lambda_i^n n^{\theta_{k,i}}(a_{k,i,0} + o(1))$ as $n \to \infty$, with $a_{k,i,0} \ne 0$. Since $|\lambda_i| < 1$, it follows that $g_{k,i}^{(n)} = o(1)$ as $n \to \infty$.

From Property 3, we obtain
$$\frac{g_{k,i+1}^{(n)}}{g_{k,i}^{(n)}} = \Bigl(\frac{\lambda_{i+1}}{\lambda_i}\Bigr)^{n} n^{\theta_{k,i+1} - \theta_{k,i}}\Bigl(\frac{a_{k,i+1,0}}{a_{k,i,0}} + o(1)\Bigr) \quad \text{as } n \to \infty.$$
If $|\lambda_{i+1}| < |\lambda_i|$, then $g_{k,i+1}^{(n)} = o(g_{k,i}^{(n)})$ as $n \to \infty$. If $\lambda_{i+1} = \lambda_i$, then $\theta_{i+1} < \theta_i$, and from Property 2 we get $\theta_{k,i+1} < \theta_{k,i}$; thus $g_{k,i+1}^{(n)} = o(g_{k,i}^{(n)})$ as $n \to \infty$.

Remark. For each $k \ge 0$, $(g_{k,k+1}, g_{k,k+2}, \ldots, g_{k,i}, \ldots)$ is an asymptotic sequence.

Lemma 5. Let $(s_n)$ be a convergent sequence. If the conditions of Property 5 are satisfied and
$$s_n - s \approx a_1 g_1(n) + a_2 g_2(n) + \ldots + a_i g_i(n) + \ldots,$$
then for all $k \ge 0$,
$$E_k^{(n)} - s \approx a_{k+1} g_{k,k+1}^{(n)} + \ldots + a_i g_{k,i}^{(n)} + \ldots$$

Proof (by induction on $k$). For $k = 0$, this is obvious. Assume that
$$E_{k-1}^{(n)} - s \approx a_k g_{k-1,k}^{(n)} + \ldots + a_i g_{k-1,i}^{(n)} + \ldots$$
We shall prove that for all $p > k$,
$$E_k^{(n)} - s = a_{k+1} g_{k,k+1}^{(n)} + \ldots + a_p g_{k,p}^{(n)} + o(g_{k,p}^{(n)}) \quad \text{as } n \to \infty.$$
Let $p > k$. From the induction assumption we get
$$(22)\qquad E_{k-1}^{(n)} - s = a_k g_{k-1,k}^{(n)} + \ldots + a_p g_{k-1,p}^{(n)} + r_n$$
with
$$(23)\qquad r_n = o(g_{k-1,p}^{(n)}) \quad \text{as } n \to \infty.$$
We have
$$(24)\qquad E_k^{(n)} = \frac{g_{k-1,k}^{(n+1)}\,E_{k-1}^{(n)} - g_{k-1,k}^{(n)}\,E_{k-1}^{(n+1)}}{g_{k-1,k}^{(n+1)} - g_{k-1,k}^{(n)}}.$$
From (22)–(24) we deduce that
$$E_k^{(n)} = s + \sum_{i=k+1}^{p} a_i g_{k,i}^{(n)} + d_n \qquad\text{with}\qquad d_n = \frac{g_{k-1,k}^{(n+1)}\,r_n - g_{k-1,k}^{(n)}\,r_{n+1}}{g_{k-1,k}^{(n+1)} - g_{k-1,k}^{(n)}}.$$
We shall prove that $d_n = o(g_{k,p}^{(n)})$ as $n \to \infty$. If $a_i = 0$ for all $i > p$, then $d_n = 0$. Assume that the coefficients $a_i$, $i > p$, are not all $0$. Let $i_0 > p$ be the smallest index such that $a_{i_0} \ne 0$. We have $r_n = g_{k-1,i_0}^{(n)}(a_{i_0} + o(1))$ as $n \to \infty$, and so
$$(25)\qquad \frac{r_{n+1}}{r_n} = \lambda_{i_0}\Bigl(1 + \frac{a}{n} + o\Bigl(\frac{1}{n}\Bigr)\Bigr) \quad \text{as } n \to \infty.$$
We have
$$(26)\qquad \frac{d_n}{g_{k,p}^{(n)}} = \frac{\dfrac{g_{k-1,k}^{(n+1)}}{g_{k-1,k}^{(n)}} - \dfrac{r_{n+1}}{r_n}}{\dfrac{g_{k-1,k}^{(n+1)}}{g_{k-1,k}^{(n)}} - \dfrac{g_{k-1,p}^{(n+1)}}{g_{k-1,p}^{(n)}}} \cdot \frac{r_n}{g_{k-1,p}^{(n)}}.$$

From Property 3, we get
$$(27)\qquad \frac{g_{k-1,k}^{(n+1)}}{g_{k-1,k}^{(n)}} = \lambda_k\Bigl(1 + \frac{\theta_{k-1,k}}{n} + o\Bigl(\frac{1}{n}\Bigr)\Bigr) \quad \text{as } n \to \infty;$$
$$(28)\qquad \frac{g_{k-1,p}^{(n+1)}}{g_{k-1,p}^{(n)}} = \lambda_p\Bigl(1 + \frac{\theta_{k-1,p}}{n} + o\Bigl(\frac{1}{n}\Bigr)\Bigr) \quad \text{as } n \to \infty.$$
If $\lambda_k \ne \lambda_p$, then from (23), (25)–(28) we get $d_n = o(g_{k,p}^{(n)})$ as $n \to \infty$.

Assume that $\lambda_k = \lambda_p$. From Property 2 we deduce that $\theta_{k-1,k} \ne \theta_{k-1,p}$. Then, from (25)–(28) we obtain
$$(29)\qquad \frac{d_n}{g_{k,p}^{(n)}} = \frac{(\lambda_k - \lambda_{i_0}) + \dfrac{\lambda_k\theta_{k-1,k} - \lambda_{i_0}a}{n} + o\Bigl(\dfrac{1}{n}\Bigr)}{\lambda_k(\theta_{k-1,k} - \theta_{k-1,p}) + o(1)} \cdot \frac{n\,r_n}{g_{k-1,p}^{(n)}} \quad \text{as } n \to \infty.$$
If $\lambda_k = \lambda_{i_0}$, then from (23), (29) we get $d_n = o(g_{k,p}^{(n)})$ as $n \to \infty$.

Assume that $\lambda_k \ne \lambda_{i_0}$. We have $n\,r_n = \lambda_{i_0}^n n^{1+\theta_{k-1,i_0}}(a_{i_0}a_{k-1,i_0,0} + o(1))$ and $g_{k-1,p}^{(n)} = \lambda_p^n n^{\theta_{k-1,p}}(a_{k-1,p,0} + o(1))$ as $n \to \infty$. Thus
$$(30)\qquad n\,r_n = o(g_{k-1,p}^{(n)}) \quad \text{as } n \to \infty.$$
The relations (29), (30) show that $d_n = o(g_{k,p}^{(n)})$ as $n \to \infty$. Finally,
$$E_k^{(n)} = s + \sum_{i=k+1}^{p} a_i g_{k,i}^{(n)} + o(g_{k,p}^{(n)}) \quad \text{as } n \to \infty.$$

Theorem 7. Let $(s_n)$ be a convergent sequence. If $(\mathrm{H}_2)$–$(\mathrm{H}_4)$ are satisfied and
$$s_n - s \approx a_1 g_1(n) + a_2 g_2(n) + \ldots + a_i g_i(n) + \ldots,$$
then for all $k \ge 1$:

(i) if $a_i = 0$ for all $i > k$, then $E_k^{(n)} = s$;

(ii) if $a_{k+1} \ne 0$, then there exists $b_k \ne 0$ such that $E_k^{(n)} - s = \lambda_{k+1}^n n^{\theta_{k,k+1}}(b_k + o(1))$ as $n \to \infty$;

(iii) if $a_{k+1} = 0$, then $E_k^{(n)} - s = o(\lambda_{k+1}^n n^{\theta_{k,k+1}})$ as $n \to \infty$.

Proof. The results follow from Property 3 and from Lemma 5.

Remark. If $a_i \ne 0$ for all $i \ge 1$, then the E-algorithm is effective on $(s_n)$.

Let $0 < |\lambda| < 1$. Assume that the auxiliary sequences $(g_i(n))$ satisfy $(\mathrm{H}_2)$ and
$$(\mathrm{H}_5)\qquad \forall i \ge 1,\quad \lambda_i = \lambda, \quad\text{and}\quad \theta_1 > \theta_2 > \ldots > \theta_i > \theta_{i+1} > \ldots$$
One can easily check that $\theta_{k,i} = \theta_i - k$ for all $k \ge 1$ and $i > k$.

An immediate consequence of Theorem 7 is

Corollary 1. If $s_n - s \approx a_1 g_1(n) + \ldots + a_i g_i(n) + \ldots$, then for all $k \ge 1$, either

• $\exists n_0,\ \forall n \ge n_0,\ E_k^{(n)} = s$, or

• $E_k^{(n)} - s = \lambda^n n^{\theta_{k+1} - k}(b_k + o(1))$ as $n \to \infty$, with $b_k \ne 0$, if $a_{k+1} \ne 0$, or

• $E_k^{(n)} - s = o(\lambda^n n^{\theta_{k+1} - k})$ as $n \to \infty$, if $a_{k+1} = 0$.

Theorem 8. Let $k \ge 1$. Let $(s_n)$ be a convergent sequence. Assume that the conditions $(\mathrm{H}_2)$, $(\mathrm{H}_5)$ are satisfied. If
$$E_{k-1}^{(n)} - s \approx \lambda^n n^{\alpha}\Bigl(b_0 + \frac{b_1}{n^{\alpha_1}} + \ldots + \frac{b_i}{n^{\alpha_i}} + \ldots\Bigr)$$
with $b_0 \ne 0$ and $0 < \alpha_1 < \alpha_2 < \ldots < \alpha_i < \ldots$, then
$$E_k^{(n)} - s \approx \lambda^n n^{\beta}\Bigl(c_0 + \frac{c_1}{n^{\beta_1}} + \ldots + \frac{c_i}{n^{\beta_i}} + \ldots\Bigr)$$
with $\beta \le \alpha - 1$, $c_0 \ne 0$, and $0 < \beta_1 < \beta_2 < \ldots < \beta_i < \ldots$

Proof. We have
$$(31)\qquad E_k^{(n)} - s = \frac{\dfrac{g_{k-1,k}^{(n+1)}}{g_{k-1,k}^{(n)}} - \dfrac{E_{k-1}^{(n+1)} - s}{E_{k-1}^{(n)} - s}}{\dfrac{g_{k-1,k}^{(n+1)}}{g_{k-1,k}^{(n)}} - 1}\,(E_{k-1}^{(n)} - s).$$
From Property 3, we get
$$(32)\qquad \frac{g_{k-1,k}^{(n+1)}}{g_{k-1,k}^{(n)}} \approx \lambda + \frac{d_1}{n^{\gamma_1}} + \frac{d_2}{n^{\gamma_2}} + \ldots + \frac{d_i}{n^{\gamma_i}} + \ldots,$$
where $1 \le \gamma_1 < \gamma_2 < \ldots < \gamma_i < \ldots$ Moreover, we have
$$(33)\qquad \frac{E_{k-1}^{(n+1)} - s}{E_{k-1}^{(n)} - s} \approx \lambda + \frac{e_1}{n^{t_1}} + \frac{e_2}{n^{t_2}} + \ldots + \frac{e_i}{n^{t_i}} + \ldots,$$
where $1 \le t_1 < t_2 < \ldots < t_i < \ldots$ From (32), (33) we obtain
$$(34)\qquad \frac{g_{k-1,k}^{(n+1)}}{g_{k-1,k}^{(n)}} - \frac{E_{k-1}^{(n+1)} - s}{E_{k-1}^{(n)} - s} \approx \frac{f_1}{n^{p_1}} + \ldots + \frac{f_j}{n^{p_j}} + \ldots$$
with $f_1 \ne 0$ and $1 \le p_1 < p_2 < \ldots < p_j < \ldots$ Furthermore, we have
$$(35)\qquad E_{k-1}^{(n)} - s \approx \lambda^n n^{\alpha}\Bigl(b_0 + \frac{b_1}{n^{\alpha_1}} + \ldots + \frac{b_i}{n^{\alpha_i}} + \ldots\Bigr).$$
From (31), (32), (34), (35) we deduce that
$$E_k^{(n)} - s \approx \lambda^n n^{\beta}\Bigl(c_0 + \frac{c_1}{n^{\beta_1}} + \ldots + \frac{c_j}{n^{\beta_j}} + \ldots\Bigr),$$
where $\beta = \alpha - p_1$, $c_0 = f_1 b_0/(\lambda - 1)$, and $0 < \beta_1 < \beta_2 < \ldots < \beta_i < \ldots$


An immediate consequence of Theorem 8 is

Corollary 2. Let $(s_n)$ be a convergent sequence. If the conditions $(\mathrm{H}_2)$, $(\mathrm{H}_5)$ are satisfied and
$$(36)\qquad s_n - s \approx \lambda^n n^{\alpha_0}\Bigl(a_0 + \frac{a_1}{n^{\alpha_{0,1}}} + \ldots + \frac{a_i}{n^{\alpha_{0,i}}} + \ldots\Bigr)$$
with $a_0 \ne 0$ and $0 < \alpha_{0,1} < \alpha_{0,2} < \ldots < \alpha_{0,i} < \ldots$, then there exists a strictly decreasing sequence $(\alpha_k)$ such that $\alpha_k \le \alpha_0 - k$ for all $k \ge 1$, and
$$E_k^{(n)} - s \approx \lambda^n n^{\alpha_k}\Bigl(a_{k,0} + \frac{a_{k,1}}{n^{\alpha_{k,1}}} + \ldots + \frac{a_{k,i}}{n^{\alpha_{k,i}}} + \ldots\Bigr)$$
with $a_{k,0} \ne 0$ and $0 < \alpha_{k,1} < \ldots < \alpha_{k,i} < \ldots$

Remark. The E-algorithm is effective on $(s_n)$.

It was shown by Brezinski [2] that the E-algorithm includes the following sequence transformations: the Shanks transformation ($g_i(n) = \Delta s_{n+i-1}$) [1, 13], the process $p$ ($g_1(n) = \lambda^n n^{\theta}$, $\theta \in \mathbb{R}$, and $g_i(n) = \Delta s_{n+i-2}$ for $i \ge 2$), and Levin's transformations ($g_i(n) = g(n)\Delta s_{n-1}/n^{i-1}$, with $g(n) = 1$ (resp. $g(n) = n$, $g(n) = \Delta s_n\Delta s_{n-1}/\Delta^2 s_{n-1}$) for the transformation $T$ (resp. $U$, $V$)) [9].

If $(s_n)$ satisfies (36), then for each of these transformations one can easily prove that, for all $i \ge 1$,
$$g_i(n) \approx \lambda^n n^{\theta_i}\Bigl(b_{i,0} + \frac{b_{i,1}}{n^{\beta_{i,1}}} + \ldots + \frac{b_{i,j}}{n^{\beta_{i,j}}} + \ldots\Bigr).$$
Then, from the preceding remark, we deduce

Corollary 3. If $(s_n)$ satisfies (36), then the Shanks transformation, the process $p$, and Levin's transformations ($T$, $U$, and $V$) are effective on $(s_n)$.

Let us end this subsection with

Theorem 9. If $(s_n)$ satisfies (36), then the algorithm
$$E_0^{(n)} = s_n, \quad n \ge 0, \qquad E_{k+1}^{(n)} = \frac{\lambda E_k^{(n)} - E_k^{(n+1)}}{\lambda - 1}, \quad k \ge 0,$$
is effective on $(s_n)$.

Proof. By induction on $k$, we prove that for all $k \ge 0$,
$$E_k^{(n)} - s \approx \lambda^n n^{\alpha_k}\Bigl(a_{k,0} + \frac{a_{k,1}}{n^{\alpha_{k,1}}} + \ldots + \frac{a_{k,i}}{n^{\alpha_{k,i}}} + \ldots\Bigr)$$
with $\alpha_k < \alpha_{k-1}$ and $0 < \alpha_{k,1} < \alpha_{k,2} < \ldots < \alpha_{k,i} < \ldots$ The result then follows immediately.
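For illustration, the transformation of Theorem 9 is a simple one-parameter recursion. A minimal sketch, assuming $\lambda$ is known in advance (it is not the author's code), is:

```python
def theorem9_transform(s, lam, K):
    """Columns E_k^{(n)} of E_{k+1}^{(n)} = (lam*E_k^{(n)} - E_k^{(n+1)})/(lam - 1)."""
    E = [list(s)]                                          # E_0^{(n)} = s_n
    for _ in range(K):
        prev = E[-1]
        E.append([(lam * prev[n] - prev[n + 1]) / (lam - 1.0)
                  for n in range(len(prev) - 1)])
    return E
```

For the partial sums of a power series at a point $x$ with $0 < |x| < 1$ one would take `lam = x` (cf. Section 4.1).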


3.2. Logarithmic convergence. In this subsection, the auxiliary sequences $(g_i(n))$ of the E-algorithm are such that
$$\forall i \ge 1,\quad g_i(n) \approx n^{\theta_i}\Bigl(a_{i,0} + \frac{a_{i,1}}{n^{\alpha_{i,1}}} + \ldots + \frac{a_{i,j}}{n^{\alpha_{i,j}}} + \ldots\Bigr)$$
with $a_{i,0} \ne 0$ and $0 > \theta_1 > \theta_2 > \ldots > \theta_i > \ldots$

Lemma 6. For all $k \ge 0$ and $i > k$,
$$g_{k,i}^{(n)} \approx n^{\theta_i}\Bigl(a_{k,i,0} + \frac{a_{k,i,1}}{n^{\alpha_{k,i,1}}} + \ldots + \frac{a_{k,i,j}}{n^{\alpha_{k,i,j}}} + \ldots\Bigr),$$
where $a_{k,i,0} \ne 0$ and $0 < \alpha_{k,i,1} < \alpha_{k,i,2} < \ldots < \alpha_{k,i,j} < \ldots$

Proof. By induction on $k$.

By using Lemma 6, one can easily prove

Property 6. For all $k \ge 0$ and $i > k$, $g_{k,i}^{(n)} = o(1)$ and $g_{k,i+1}^{(n)} = o(g_{k,i}^{(n)})$ as $n \to \infty$, and
$$g_{k,i}^{(n+1)}/g_{k,i}^{(n)} \xrightarrow[n\to\infty]{} 1.$$

Remark. For each $k \ge 0$, $(g_{k,k+1}, g_{k,k+2}, \ldots, g_{k,i}, \ldots)$ is an asymptotic sequence.

Property 7. The E-algorithm is not regular on the set of convergent sequences.

Proof. This follows from the fact that
$$\forall k \ge 0,\qquad \frac{g_{k,k+1}^{(n+1)}}{g_{k,k+1}^{(n)}} \xrightarrow[n\to\infty]{} 1.$$

Lemma 7. If $s_n - s \approx a_1 g_1(n) + \ldots + a_i g_i(n) + \ldots$, then for all $k \ge 0$,
$$E_k^{(n)} - s \approx a_{k+1} g_{k,k+1}^{(n)} + \ldots + a_i g_{k,i}^{(n)} + \ldots$$

Proof. By induction on $k$.

Using Lemmas 6 and 7, one can easily prove

Theorem 10. If $s_n - s \approx a_1 g_1(n) + \ldots + a_i g_i(n) + \ldots$, then for all $k \ge 1$, either

• $\exists n_0,\ \forall n \ge n_0,\ E_k^{(n)} = s$, or

• $E_k^{(n)} - s = n^{\theta_{k+1}}(b_k + o(1))$ as $n \to \infty$, with $b_k \ne 0$, if $a_{k+1} \ne 0$, or

• $E_k^{(n)} - s = o(n^{\theta_{k+1}})$ as $n \to \infty$, if $a_{k+1} = 0$.

Remark. If $a_i \ne 0$ for all $i \ge 1$, then the E-algorithm is effective on $(s_n)$.

3.3. Numerical instability. Consider the E-algorithm of the preceding subsections. We have, for all $i \ge 1$,
$$g_i(n) \approx \lambda_i^n n^{\theta_i}\Bigl(a_{i,0} + \frac{a_{i,1}}{n^{\alpha_{i,1}}} + \ldots + \frac{a_{i,j}}{n^{\alpha_{i,j}}} + \ldots\Bigr) \qquad\text{and}\qquad \frac{g_{i-1,i}^{(n+1)}}{g_{i-1,i}^{(n)}} \xrightarrow[n\to\infty]{} \lambda_i.$$
When the numbers $\lambda_i$ are close to $1$, the E-algorithm is numerically unstable.

In practice, the properties of convergence acceleration are then lost, and a good approximate value of the limit of $(s_n)$ cannot be computed. In order to avoid numerical instability, we propose to use some subsequences of the auxiliary sequences $(g_i(n))$, $i \ge 1$.

For example, set $h_i(n) = g_i(2^n)$, $i \ge 1$. Then for all $i \ge 1$,
$$h_i(n) \approx \lambda_i^{2^n}\,2^{n\theta_i}\Bigl(a_{i,0} + a_{i,1}\Bigl(\frac{1}{2^{\alpha_{i,1}}}\Bigr)^{n} + \ldots\Bigr).$$
If $0 < |\lambda_i| < 1$ for all $i \ge 1$, then
$$\forall i \ge 1,\qquad \frac{h_i(n+1)}{h_i(n)} \xrightarrow[n\to\infty]{} 0.$$
If $\lambda_i = 1$ for all $i \ge 1$, then
$$\forall i \ge 1,\qquad \frac{h_i(n+1)}{h_i(n)} \xrightarrow[n\to\infty]{} 2^{\theta_i}$$
with $0 > \theta_1 > \theta_2 > \ldots > \theta_i > \ldots$ Consequently, the E-algorithm with $(h_i(n))$ as auxiliary sequences is more stable than the E-algorithm with $(g_i(n))$ as auxiliary sequences.

Let us mention that the E-algorithm with $(h_i(n))$ as auxiliary sequences must be applied to the subsequence $(s_{2^n})$ of $(s_n)$.
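A minimal sketch of this technique (an illustration only, not the author's code) is given below; it reuses the `e_algorithm` function sketched in the Introduction, and the names `s_fn` and `accelerate_subsequence` are hypothetical.

```python
def accelerate_subsequence(s_fn, g, K, N):
    """Apply the E-algorithm to (s_{2^n}) with h_i(n) = g_i(2^n) as auxiliary sequences.

    s_fn(m) returns s_m; g is a list of callables with g[i-1](m) = g_i(m).
    Relies on the e_algorithm sketch given in the Introduction.
    """
    s_sub = [s_fn(2 ** n) for n in range(N + 1)]           # s_{2^n}, n = 0, ..., N
    h = [(lambda n, gi=gi: gi(2 ** n)) for gi in g]        # h_i(n) = g_i(2^n)
    return e_algorithm(s_sub, h, K)
```

For instance, `accelerate_subsequence(s_fn, g, K, N)[K][0]` returns the accelerated value built from $s_1, s_2, s_4, \ldots, s_{2^N}$.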

Let us now compare the two algorithms in terms of the number of arithmetical operations.

It was shown by Brezinski [2] that the computation of $E_k^{(0)}$ from $s_0, \ldots, s_k$ by the E-algorithm needs approximately $\frac{5}{3}k^3$ arithmetical operations. Consequently, for computing an approximate value of the limit of $(s_n)$ from $s_0, \ldots, s_{2^n}$, the E-algorithm with $(g_i(n))$ as auxiliary sequences needs $O(2^{3n})$ arithmetical operations, while the E-algorithm with $(h_i(n))$ as auxiliary sequences applied to $(s_{2^n})$ needs only $O(n^3)$.

Some numerical examples illustrating this technique will be given in the following section.


4. Applications

4.1. Summation of series. Consider the power series $s(x) = \sum_{n=0}^{\infty} a_n x^n$, where
$$a_n \approx n^{\theta}\Bigl(\alpha_0 + \frac{\alpha_1}{n} + \ldots + \frac{\alpha_i}{n^i} + \ldots\Bigr)$$
with $\alpha_0 \ne 0$.

Let $s_n(x) = \sum_{k=0}^{n} a_k x^k$ for $n \ge 0$. From a result of Wimp ([14], p. 19), we deduce the following.

If $|x| \le 1$ and $x \ne 1$, then
$$s_n(x) - s(x) \approx x^n n^{\theta}\Bigl(\beta_0 + \frac{\beta_1}{n} + \ldots\Bigr) \quad\text{with } \beta_0 \ne 0.$$
If $x = 1$ and $\theta < -1$, then
$$s_n(1) - s(1) \approx n^{\theta+1}\Bigl(\gamma_0 + \frac{\gamma_1}{n} + \ldots\Bigr) \quad\text{with } \gamma_0 \ne 0.$$

1) Setting $g_i(n) = x^n n^{\theta-i+1}$ for $i \ge 1$, the auxiliary sequences satisfy the conditions of Section 3. Then the E-algorithm is effective on $(s_n(x))$.

2) Let us now consider the subsequence $(s_{2^n}(x))$ of $(s_n(x))$. We have
$$s_{2^n}(x) - s(x) \approx x^{2^n}\,2^{n\theta}\Bigl(\beta_0 + \beta_1\Bigl(\frac{1}{2}\Bigr)^{n} + \ldots + \beta_i\Bigl(\frac{1}{2^i}\Bigr)^{n} + \ldots\Bigr).$$
If $0 < |x| < 1$, then we can apply the results of Section 2 for the E-algorithm with the auxiliary sequences
$$h_i(n) = x^{2^n}\,2^{n\theta}\,b_i^n \qquad\text{with } b_i = 1/2^{i-1}.$$

3) Let $0 < |x| < 1$. Then $(s_n(x))$ satisfies (36), and from Corollary 3 and Theorem 9 we deduce that the Shanks transformation, the process $p$, Levin's transformations, and the sequence transformation given in Theorem 9 are all effective on $(s_n(x))$.

Let us now give some numerical examples. We begin with the following linearly convergent series:

$$\sum_{n=0}^{\infty} (n+1)x^n = \frac{1}{(x-1)^2}, \qquad x = 0.9,\ 0.99,\ 0.999.$$

The results obtained by applying the E-algorithm to $(s_n)$ (resp. $(s_{2^n})$) with $g_i(n) = x^n n^{\theta-i+1}$ (resp. $h_i(n) = g_i(2^n)$) as auxiliary sequences are summarized in Table 1 (resp. Table 2), where we indicate, at each step $n$, the number of exact digits of $E_n^{(0)}$.


TABLE 1

  n    x = 0.9   x = 0.99   x = 0.999
  2      13        13          11
  4      13        12           8
  8      12        12          10
 16       9         8           5
 32       6         0           0

TABLE 2

  n    x = 0.9   x = 0.99   x = 0.999
  1       0         0           0
  2      14        13          10
  3      15        13          11
  4      15        14          10
  5      15        15          11

The comparison of Tables 1 and 2 shows that the E-algorithm with $(h_i(n))$ as auxiliary sequences is more stable than the E-algorithm with $(g_i(n))$ as auxiliary sequences.
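The experiment can be reproduced along the following lines (an illustrative sketch reusing the `e_algorithm` and `accelerate_subsequence` functions sketched earlier; the shift from $n^{\theta-i+1}$ to $(n+1)^{\theta-i+1}$ is an assumption made only to avoid the index $n = 0$, and the printed errors are not claimed to match the tables digit for digit):

```python
x, theta, K = 0.99, 1.0, 8            # for sum (n+1)x^n one has a_n = n+1, hence theta = 1
s = [sum((k + 1) * x ** k for k in range(n + 1)) for n in range(2 * K + 1)]
limit = 1.0 / (x - 1.0) ** 2

g = [(lambda n, i=i: x ** n * (n + 1.0) ** (theta - i + 1)) for i in range(1, K + 1)]
print(abs(e_algorithm(s, g, K)[K][0] - limit))                     # Table 1 setting

s_fn = lambda m: sum((k + 1) * x ** k for k in range(m + 1))
print(abs(accelerate_subsequence(s_fn, g, 5, 5)[5][0] - limit))    # Table 2 setting (subsequence)
```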

Let us now consider the following logarithmically convergent series [4]:

1) $\displaystyle\sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6}$,

2) $\displaystyle\sum_{n=1}^{\infty} \frac{n^4 + n^2 + 1}{n^2(1 + n^4)} \cong 2.223411646515$,

3) $\displaystyle\sum_{n=1}^{\infty} \frac{1}{(n + e^{1/n})^{2}} \cong 1.71379673554030$,

4) $\displaystyle\sum_{n=1}^{\infty} \operatorname{Log}\Bigl(\frac{n+1}{n}\Bigr)\operatorname{Log}\Bigl(\frac{n+2}{n+1}\Bigr) \cong .68472478856$,

5) $\displaystyle 1 + \sum_{n=1}^{\infty}\Bigl(\frac{1}{n+1} + \operatorname{Log}\Bigl(\frac{n}{n+1}\Bigr)\Bigr) \cong .57721566490153286$.

The results obtained by applying the E-algorithm to $(s_n)$, with $(g_i(n))$, $i \ge 1$, as auxiliary sequences, are given in Table 3.

TABLE 3

           1)              2)              3)              4)              5)
  n   $s_n$ $E_n^{(0)}$  $s_n$ $E_n^{(0)}$  $s_n$ $E_n^{(0)}$  $s_n$ $E_n^{(0)}$  $s_n$ $E_n^{(0)}$
  2     1      1          0      1          0      1          1      2          1      3
  4     1      3          0      2          0      2          1      4          1      4
  8     1      6          1      5          0      5          1      8          1     10
 12     1     11          1      9          0      9          1     10          1     10
 14     1     11          1      9          0      9          2     10          1     11
 16     1      9          1      8          0      8          2      9          1      9
 32     2      4          1      5          1      3          2      4          2      5
 40     2      4          1      3          1      3          2      4          2      4
 50     2      2          2      4          1      3          2      3          2      3

Applying the E-algorithm to $(s_{2^n})$, with $(h_i(n))$ as auxiliary sequences, we obtain the results given in Table 4.

TABLE 4

           1)              2)              3)              4)              5)
  n   $s_{2^n}$ $E_n^{(0)}$  $s_{2^n}$ $E_n^{(0)}$  $s_{2^n}$ $E_n^{(0)}$  $s_{2^n}$ $E_n^{(0)}$  $s_{2^n}$ $E_n^{(0)}$
  1      1      1            0      1            0      0            1      1            1      1
  2      1      2            0      2            0      1            1      2            1      2
  3      1      4            1      3            0      1            1      2            1      3
  4      1      6            1      4            1      3            2      4            1      4
  5      2      7            1      5            1      4            2      4            2      5
  6      2      9            2      9            1      6            2      7            2      7
  7      2     11            2     10            1      8            2      8            2      9
  8      3     13            2     12            1     10            3     10            3     12
  9      3     15            3     14            1     11            3     12            3     15

The results of Tables 3 and 4 show that the E-algorithm with $(h_i(n))$ as auxiliary sequences is more effective than the E-algorithm with the auxiliary sequences $(g_i(n))$.
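As an illustration of the same technique on the logarithmic example 1) above, one may proceed as follows (a sketch only: the choice $g_i(m) = m^{-i}$, consistent with $s_m - \pi^2/6 \approx -1/m + 1/(2m^2) - \ldots$, is an assumption, and `e_algorithm` is the function sketched in the Introduction):

```python
import math

def partial_sum(m):                              # s_m = sum_{k=1}^m 1/k^2
    return sum(1.0 / k ** 2 for k in range(1, m + 1))

K, N = 5, 9
s_sub = [partial_sum(2 ** n) for n in range(1, N + 1)]           # s_2, s_4, ..., s_{2^N}
# h_i(n) = g_i(2^{n+1}) = 2^{-(n+1) i}, the subsequence of the assumed g_i(m) = m^{-i}
h = [(lambda n, i=i: float(2 ** (n + 1)) ** (-i)) for i in range(1, K + 1)]
print(abs(e_algorithm(s_sub, h, K)[K][0] - math.pi ** 2 / 6))    # error of the accelerated value
```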

4.2. Numerical quadrature. The numerical computation of the integral $s = \int_0^1 f(x)\,dx$ often leads to an asymptotic expansion of the form
$$T(h) - s \approx \sum_{j=1}^{\infty}\ \sum_{i=0}^{m_j} a_{j,i}\,(\operatorname{Log} h)^i\, h^{\gamma_j}$$
with $a_{j,m_j} \ne 0$, $m_j \in \mathbb{N}$, $0 < \gamma_1 < \gamma_2 < \ldots < \gamma_j < \ldots$, where $T(h)$ is an approximate value of $s$ obtained by some quadrature formula with steplength $h$ ($[0,1]$ is divided into $1/h$ subintervals of length $h$; see [5–8, 10–12]).

Let $\sigma \ge 2$, $\sigma \in \mathbb{N}$. For $h_n = 1/\sigma^n$, we have
$$T(h_n) - s \approx \sum_{j=1}^{\infty} \sigma^{-n\gamma_j}\, n^{m_j}\Bigl(a_{j,m_j} + \ldots + \frac{a_{j,1}}{n^{m_j-1}} + \frac{a_{j,0}}{n^{m_j}}\Bigr).$$
Let $(p_n)$ be the sequence defined by $p_0 = 0$ and $p_n = n + \sum_{i=1}^{n} m_i$ for $n \ge 1$. Let $j \ge 1$. Set $\theta_i = p_j - i$ and $\lambda_i = \sigma^{-\gamma_j}$ for $i = 1 + p_{j-1}, \ldots, p_j$. The sequence $s_n = T(h_n)$ has the asymptotic expansion
$$s_n - s \approx a_1 g_1(n) + \ldots + a_k g_k(n) + \ldots \qquad\text{with } g_i(n) = \lambda_i^n n^{\theta_i} \ \text{for } i \ge 1.$$

Thus, we can apply the results of Section 3.

Let us end this section with a numerical example. We have
$$s = \int_0^1 \frac{x \operatorname{Log} x}{x+1}\,dx = \frac{\pi^2}{12} - 1 \cong -.1775329665759.$$


Let $T(h)$ be the approximate value of $s$ obtained by the trapezoidal rule with steplength $h$. We have
$$T(h) - s \approx \sum_{i=2}^{\infty} h^i\,(a_{i,1} + a_{i,2}\operatorname{Log} h)$$
(see [5]).

For $\sigma = 2$, we have:

  n          $s_n$                   $E_n^{(0)}$
  2   −.1758293422171846    −.1775484527978632
  4   −.1773979134959518    −.1775329593577489
  6   −.1775227574385099    −.1775329674394418
  8   −.1775322183192145    −.1775329667708583
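The example can be reproduced with the following sketch (illustrative only; the pairing of the auxiliary sequences is read off the error expansion above with $\sigma = 2$, i.e. each power $h^i$, $i \ge 2$, contributes the two sequences $2^{-in}$ and $n\,2^{-in}$, and `e_algorithm` is the function sketched in the Introduction):

```python
import math

def f(t):                                        # integrand x*Log(x)/(x+1), extended by its limit 0 at x = 0
    return t * math.log(t) / (t + 1.0) if t > 0.0 else 0.0

def trapezoid(n_panels):                         # trapezoidal rule T(h) with h = 1/n_panels
    h = 1.0 / n_panels
    return h * (0.5 * f(0.0) + sum(f(k * h) for k in range(1, n_panels)) + 0.5 * f(1.0))

N, K = 8, 6
s = [trapezoid(2 ** n) for n in range(N + 1)]    # s_n = T(1/2^n)
g = []
for i in range(2, 2 + K // 2):
    g.append(lambda n, i=i: n * 2.0 ** (-i * n))     # the h^i * Log(h) term
    g.append(lambda n, i=i: 2.0 ** (-i * n))         # the h^i term
print(e_algorithm(s, g, K)[K][0], math.pi ** 2 / 12 - 1)   # accelerated value vs exact value
```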

Acknowledgements. I want to thank the referee for his valuable suggestions. The paper has also profited from discussions with A. Lembarki, Faculty of Semlalia.

References

[1] C. Brezinski, Algorithmes d'Accélération de la Convergence. Étude Numérique, Technip, Paris, 1978.

[2] —, A general extrapolation algorithm, Numer. Math. 35 (1980), 175–187.

[3] C. Brezinski and M. Redivo Zaglia, Extrapolation Methods, Theory and Practice, North-Holland, Amsterdam, 1991.

[4] W. F. Ford and D. A. Smith, Acceleration of linear and logarithmic convergence, SIAM J. Numer. Anal. 16 (1979), 223–240.

[5] L. Fox, Romberg integration for a class of singular integrands, Comput. J. 10 (1967), 87–93.

[6] T. Håvie, Error derivation in Romberg integration, BIT 12 (1972), 516–527.

[7] —, Generalized Neville type extrapolation schemes, ibid. 19 (1979), 204–213.

[8] D. C. Joyce, Survey of extrapolation processes in numerical analysis, SIAM Rev. 13 (1972), 435–487.

[9] D. Levin, Development of nonlinear transformations for improving convergence of sequences, Internat. J. Computer Math. 3 (1973), 371–388.

[10] J. N. Lyness, Applications of extrapolation techniques to multidimensional quadrature of some integrand functions with a singularity, J. Comput. Phys. 20 (1976), 346–364.

[11] J. N. Lyness and E. de Doncker-Kapenga, On quadrature error expansions, Part I, J. Comput. Appl. Math. 17 (1987), 131–149.

[12] J. N. Lyness and B. W. Ninham, Numerical quadrature and asymptotic expansions, Math. Comput. 21 (1967), 162–178.

[13] D. Shanks, Non-linear transformations of divergent and slowly convergent sequences, J. Math. Phys. 34 (1955), 1–42.

[14] J. Wimp, Sequence Transformations and their Applications, Academic Press, New York, 1984.

A. Fdil
Département de Mathématiques
E.N.S. de Marrakech
B.P. S 41
40000 Marrakech, Morocco

Received on 28.5.1996;
revised version on 7.11.1996
