
INSTITUTE OF MATHEMATICS POLISH ACADEMY OF SCIENCES

WARSZAWA 1994

ON THE CONVERGENCE RATE OF REGULARIZATION METHODS FOR ILL-POSED EXTREMAL PROBLEMS

M. KOVÁCS

Department of Computer Science, Loránd Eötvös University, Budapest 112, P.O. Box 157, H-1502 Hungary

F. P. VASIL'EV

Faculty of Computational Mathematics and Cybernetics, Moscow State University, GSP, Leninskie Gory, 119899 Moscow, Russia

1. Introduction. Let us consider the following mathematical programming problem:

(1) J(u) → inf, u ∈ U,

(2) U = {u ∈ U_0 : g_i(u) ≤ 0, i = 1, …, m; g_i(u) = 0, i = m+1, …, s},

where U_0 is a given set and J(u), g_1(u), …, g_s(u) are finite functions defined on U_0. It is known [7, 8, 10, 21] that problem (1)–(2) is generally unstable under perturbations of the functions J(u), g_i(u); therefore, to solve it, some regularization technique has to be used. In the following we will use classical regularization methods such as the method of stabilization, the method of residuals and the method of quasisolutions. These methods will be based on the extension of the feasible set, or on the penalty and barrier function methods. One can find applications of these methods to problem (1)–(2) in [7, 8, 10, 21, 9, 11, 2, 12, 22, 3, 13, 14, 24, 15], where the convergence rate of the objective function values has also been investigated.

In this paper, which is a continuation of the papers [8, 21], we survey the results published in [13, 14, 24, 15, 16, 23, 26, 4, 5, 17] on the convergence rate of minimizing sequences constructed by the regularization methods mentioned above.

1991 Mathematics Subject Classification: 65K05, 90C31.

The paper is in final form and no version of it will be published elsewhere.


We note here that the estimates of the convergence rate for the function values can be obtained under not very strong assumptions on problem (1)–(2); namely, it is usually supposed that:

Assumption 1. J* = inf_U J(u) > −∞, U* = {u ∈ U_0 : J(u) = J*} ≠ ∅, and there exist c_1 ≥ 0, …, c_s ≥ 0 such that

(3) J* ≤ J(u) + Σ_{i=1}^s c_i g_i^+(u) ≡ G_0(u),  u ∈ U_0,

where g_i^+(u) = max{g_i(u), 0} for i = 1, …, m and g_i^+(u) = |g_i(u)| for i = m+1, …, s.

Assumption 2. The set U_0 is exactly known, but instead of the exact functions J(u), g_i(u) we only have their approximations J_δ(u), g_{iδ}(u) such that

(4) max{|J_δ(u) − J(u)|; max_{1≤i≤s} |g_{iδ}(u) − g_i(u)|} ≤ δ(1 + Ω(u)),  u ∈ U_0, δ > 0,

where Ω(u) is a nonnegative function on U_0.

It is worth mentioning that condition (3) is satisfied if the Lagrangian function L(u, λ) = J(u) + Σ_{i=1}^s λ_i g_i(u), u ∈ U_0, λ ∈ Λ_0 = {λ ∈ R^s : λ_1 ≥ 0, …, λ_m ≥ 0}, has a saddle point (u*, λ*), i.e.

L(u*, λ) ≤ L(u*, λ*) ≤ L(u, λ*),  u ∈ U_0, λ ∈ Λ_0.

In this case any number c_i ≥ |λ_i*| can play the role of c_i in (3). Consequently, the class of problems satisfying condition (3) is rich enough (see e.g. [18]).
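To see why a saddle point yields (3) (a standard argument, sketched here for completeness): taking λ = 0 and λ = 2λ* in the left saddle inequality gives Σ_{i=1}^s λ_i* g_i(u*) = 0, hence L(u*, λ*) = J(u*) = J*; then for any u ∈ U_0,

```latex
J_* \;=\; L(u_*, \lambda^*) \;\le\; L(u, \lambda^*)
     \;=\; J(u) + \sum_{i=1}^{s} \lambda_i^*\, g_i(u)
     \;\le\; J(u) + \sum_{i=1}^{s} |\lambda_i^*|\, g_i^+(u),
```

since λ_i* ≥ 0 and g_i(u) ≤ g_i^+(u) for i = 1, …, m, while λ_i* g_i(u) ≤ |λ_i*||g_i(u)| = |λ_i*| g_i^+(u) for i = m+1, …, s; this is exactly (3) with c_i = |λ_i*|.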

The common structure of the mentioned regularization methods consists of the following steps: one constructs the set U*(δ) of optimal points for the problem given by the functions (J_δ(u), g_{1δ}(u), …, g_{sδ}(u), δ), for which the following estimates are computed:

(5) J(u) ≤ J* + β(δ),  Ω(u) ≤ Ω* + γ(δ),  max_{1≤i≤s} g_i^+(u) ≤ ρ(δ),  u ∈ U*(δ),

where Ω* = inf_{U*} Ω(u) and β(δ), γ(δ), ρ(δ) are positive functions of δ > 0. From (4) and (5) one can deduce the following estimate of the convergence rate for the objective function:

(6) −|c|_1 ρ(δ) ≤ J(u) − J* ≤ β(δ),  u ∈ U*(δ),  |c|_1 = Σ_{i=1}^s c_i.

It is obvious from (6) that the convergence condition is

(7) lim_{δ→0} β(δ) = lim_{δ→0} ρ(δ) = 0;

in that case

(8) lim_{δ→0} sup_{u∈U*(δ)} |J(u) − J*| = 0.

Conditions (7) are the consistency conditions for the parameters of the regularization methods.


In Sections 2–4 we expound this common structure for each regularization method. Assuming that Assumptions 1 and 2 are satisfied, we describe the construction of U*(δ) and the consistency conditions on the parameters; moreover, we give explicit formulas for the functions β(δ), γ(δ), ρ(δ) in (5).

In Section 5 we deal with the convergence rate of the minimizing sequences. To obtain this type of convergence rate, much stronger assumptions on problem (1)–(2) will be required. We formulate these assumptions and, under them, give estimates for the distance (in the norm metric) between the optimal solution and its approximation obtained by each regularization technique.

2. The methods of extended feasible set. Let us define

(9) W(δ) = {u ∈ U_0 : g_{iδ}^+(u) ≤ δ(1 + Ω(u)), i = 1, …, s}.

From (4) it follows that

g_{iδ}^+(u) ≤ g_i^+(u) + δ(1 + Ω(u)),  u ∈ U, i = 1, …, s,

therefore U ⊆ W(δ) for every δ > 0. Consequently, W(δ) is a nonempty set which is in fact an extension of U. Moreover, using (4), for every u ∈ W(δ) we have

(10) g_i^+(u) ≤ g_{iδ}^+(u) + δ(1 + Ω(u)) ≤ 2δ(1 + Ω(u)),  u ∈ W(δ), i = 1, …, s.

The method of stabilization based on the extended feasible set solves the problem

(11) t(u) = J_δ(u) + αΩ(u) → inf,  u ∈ W(δ),  α = α(δ) > 0.

Problem (11) is considered as an extremal problem of the first type [7, 8, 10, 21], i.e. we seek a point u = u(δ) from the set

(12) U*(δ) = {u ∈ W(δ) : t(u) ≤ t* + ε(δ)},  ε(δ) > 0,

where t* = inf_{W(δ)} t(u). Choose the parameter α(δ) consistently with the parameter δ > 0 in the sense that

(13) α(δ) > δ(1 + 2|c|_1),  δ > 0.

Then under Assumptions 1 and 2 the estimates (5) and (6) are satisfied with the parameters (cf. [10, 12, 15])

(14) γ(δ) = [2δ(1 + Ω*)(1 + |c|_1) + ε(δ)] / [α(δ) − δ(1 + 2|c|_1)],
     β(δ) = α(δ)Ω* + ε(δ) + 2δ(1 + Ω*) + δγ(δ),

(15) ρ(δ) = 2δ(1 + Ω* + γ(δ)),  δ > 0.
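As a numerical illustration of this scheme (the toy problem, discretization and the parameter choice α(δ) = √δ below are my own, not taken from the paper), the following sketch builds the extended feasible set (9) from perturbed data satisfying (4) and minimizes the Tikhonov function (11) over it; the computed points approach the exact solution as δ decreases.

```python
import numpy as np

# Toy instance: minimize J(u) = (u - 1)^2 subject to g_1(u) = u <= 0
# on U_0 = [-2, 2], so u* = 0 and J* = 1, with stabilizer Omega(u) = u^2.
# Only perturbed data J_delta, g_delta satisfying (4) are available.
def stabilized_solution(delta, alpha, seed=0):
    rng = np.random.default_rng(seed)
    u = np.linspace(-2.0, 2.0, 4001)                       # discretized U_0
    omega = u**2                                           # Omega(u)
    J_d = (u - 1.0)**2 + delta * (1 + omega) * rng.uniform(-1.0, 1.0, u.size)
    g_d = u + delta * (1 + omega) * rng.uniform(-1.0, 1.0, u.size)
    W = np.maximum(g_d, 0.0) <= delta * (1 + omega)        # extended set (9)
    t = J_d + alpha * omega                                # Tikhonov function (11)
    return u[W][np.argmin(t[W])]

# alpha(delta) = sqrt(delta) satisfies (13) for small delta;
# the computed points approach u* = 0 as delta decreases.
for d in [1e-1, 1e-2, 1e-3]:
    print(d, stabilized_solution(d, alpha=d**0.5))
```

Here the minimization over W(δ) is done by brute-force grid search, which is enough to see the qualitative behaviour predicted by (14)–(15).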

The method of residuals based on the extended feasible set defines a point u = u(δ) from the set

(16) U*(δ) = {u ∈ V(δ) : Ω(u) ≤ inf_{V(δ)} Ω(u) + ε(δ)},  ε(δ) > 0,

where

(17) V(δ) = {u ∈ W(δ) : J_δ(u) ≤ inf_{W(δ)} [J_δ(u) + θ(δ)Ω(u)] + σ(δ)},  θ(δ) > 0, σ(δ) > 0.

Let the parameters θ(δ), σ(δ) be made consistent with the parameter δ by choosing

(18) θ(δ) ≥ δ(1 + 2|c|_1),  σ(δ) ≥ δ(3 + Ω* + 2|c|_1).

Then under Assumptions 1 and 2 for problem (1)–(2) the estimates (5) and (6) hold [15] with parameters

(19) γ(δ) = ε(δ),  β(δ) = 2δ(1 + Ω*) + θ(δ)Ω* + σ(δ) + δε(δ),

and ρ(δ) is computed by the rule (15) with γ(δ) taken from (19).

The method of quasisolutions based on the extended feasible set defines a point u = u(δ) from the set

(20) U*(δ) = {u ∈ Q(δ) : J_δ(u) ≤ inf_{Q(δ)} J_δ(u) + ε(δ)},  ε(δ) > 0,

where

(21) Q(δ) = {u ∈ W(δ) : Ω(u) ≤ r(δ)},  r(δ) > ω = inf_U Ω(u).

If Assumptions 1 and 2 are satisfied then the estimates (5) and (6) can be obtained [15] with parameters

(22) γ(δ) = max{0; r(δ) − Ω*},
     β(δ) = max{0; J*(r(δ)) − J*} + ε(δ) + 2δ(1 + Ω* + γ(δ)),

where

(23) J*(r) = inf_{U(r)} J(u),  U(r) = {u ∈ U : Ω(u) ≤ r},

and with parameter ρ(δ) obtained by putting the value γ(δ) from (22) into (15).

3. The barrier function methods. Discussing these methods we will limit our investigations to the simplest generalized barrier function [2, 3, 5]

(24) B_δ(u) = Σ_{i=1}^s 1/[ν(δ) + δ(1 + Ω(u)) − g_{iδ}(u)] + Σ_{i=m+1}^s 1/[ν(δ) + δ(1 + Ω(u)) + g_{iδ}(u)]  for u ∈ W(δ), ν(δ) > 0,
     B_δ(u) = +∞  for u ∈ U_0 \ W(δ),

where W(δ) is defined as in (9).


If {u_k} ⊂ W(δ) and lim_{k→∞} (g_{iδ}^+(u_k) − δ(1 + Ω(u_k))) = 0 for some i, then lim_{k→∞} B_δ(u_k) ≥ 1/ν(δ). It is seen from here that when ν(δ) is small, the value lim_{k→∞} B_δ(u_k) will be large. Consequently, B_δ(u) characterizes to what extent we penalize the closeness of the point u to the boundary of the extended feasible set, i.e. to the set

Γ_{W(δ)} = {u ∈ U_0 : g_{iδ}^+(u) = δ(1 + Ω(u)) for at least one i, 1 ≤ i ≤ s}.

The cases of other simple and more general barrier functions are discussed in [2, 3, 5].
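A minimal computational sketch of the barrier (24) may help fix the notation; the toy constraints and helper names below are my own, with m = 1, s = 2 and exact data used in place of g_{iδ}:

```python
import math

# Toy instance: inequality g_1(u) = u - 2 <= 0 and equality g_2(u) = u - 1 = 0
# on U_0 = R, with Omega(u) = u^2; the data are taken exact here.
def barrier(u, delta, nu, omega=lambda u: u * u):
    g = [u - 2.0, u - 1.0]                       # g_{1,delta}(u), g_{2,delta}(u)
    slack = delta * (1.0 + omega(u))
    gplus = [max(g[0], 0.0), abs(g[1])]          # g^+ for inequality / equality
    if any(gp > slack for gp in gplus):
        return math.inf                          # B_delta(u) = +inf outside W(delta)
    b = sum(1.0 / (nu + slack - gi) for gi in g)     # first sum, i = 1, ..., s
    b += 1.0 / (nu + slack + g[1])                   # second sum, i = m+1, ..., s
    return b

# The barrier is finite inside W(delta) and grows as u approaches its boundary.
print(barrier(1.0, delta=0.1, nu=0.01))   # interior point: finite value
print(barrier(5.0, delta=0.1, nu=0.01))   # outside W(delta): inf
```

Note that all denominators are at least ν(δ) on W(δ), so the barrier is well defined there; the 2s − m summands in (24) are the 3 terms above.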

The method of stabilization based on the barrier function technique chooses a point u = u(δ) which minimizes the function

(25) t(u) = J_δ(u) + α(δ)Ω(u) + a(δ)B_δ(u),  α(δ) > 0, a(δ) > 0,

with accuracy ε(δ) > 0, i.e. this point is chosen from the set U*(δ) defined according to (12). The consistency of the parameters is characterized by the inequality

(26) α(δ) > δ(1 + 2|c|_1) + 2δ(a(δ)/ν²(δ))(2s − m).

If Assumptions 1 and 2 and condition (26) are satisfied then the estimates (5) and (6) hold for this method [2, 5] with parameters

(27) γ(δ) = {2δ(1 + Ω*)[1 + |c|_1 + 2(a(δ)/ν²(δ))(2s − m)] + ε(δ) + (a(δ)/ν(δ))(2s − m)} / {α(δ) − δ[1 + 2|c|_1 + 2(a(δ)/ν²(δ))(2s − m)]},

     β(δ) = α(δ)Ω* + ε(δ) + (a(δ)/ν(δ))(2s − m) + δ(1 + Ω*)[1 + 2(a(δ)/ν²(δ))(2s − m)] + δ(1 + Ω* + γ(δ))[1 + 2(a(δ)/ν²(δ))(2s − m)],

and ρ(δ) is defined by (15) with γ(δ) from (27).

The method of residuals based on the barrier function technique defines the set U*(δ) according to (16), but here

(28) V(δ) = {u ∈ W(δ) : J_δ(u) + a(δ)B_δ(u) ≤ inf_{W(δ)} [J_δ(u) + a(δ)B_δ(u) + θ(δ)Ω(u)] + σ(δ)},  σ(δ) > 0, θ(δ) > 0, a(δ) > 0.

Under Assumptions 1 and 2 and the consistency condition (18), we have the estimates (5) and (6) for the method defined by (16) and (28), where the exact values of the parameters are (cf. [5])

(29) γ(δ) = ε(δ),
     β(δ) = 2δ(1 + Ω*) + θ(δ)Ω* + σ(δ) + δε(δ) + (a(δ)/ν(δ))(2s − m),

while the parameter ρ(δ) is defined by (15) with γ(δ) taken from (29).

The method of quasisolutions based on the barrier function technique defines the set U*(δ) by

(30) U*(δ) = {u ∈ Q(δ) : J_δ(u) + a(δ)B_δ(u) ≤ inf_{Q(δ)} [J_δ(u) + a(δ)B_δ(u)] + ε(δ)},

where a(δ) > 0, ε(δ) > 0 and the set Q(δ) is given by (21). For this method we obtain the following result [5]: if problem (1)–(2) satisfies Assumptions 1 and 2 and condition (21) is also satisfied, then the estimates (5) and (6) are valid with parameters

(31) γ(δ) = max{0; r(δ) − Ω*},
     β(δ) = max{0; J*(r(δ)) − J*} + ε(δ) + 2δ(1 + Ω* + γ(δ)) + (a(δ)/ν(δ))(2s − m),

while ρ(δ) is computed by formula (15) with γ(δ) taken from (31) and J*(r) is defined in (23).

4. The penalty function methods. Analyzing the regularization methods in connection with the penalty function techniques we will limit our investigations to the very simple penalty function P(u) and its approximation P_δ(u) given by the formulas

(32) P(u) = Σ_{i=1}^s (g_i^+(u))^p,  P_δ(u) = Σ_{i=1}^s (g_{iδ}^+(u))^p,  u ∈ U_0, p ≥ 1.

More general cases are discussed in [22]. Since the deviation |P_δ(u) − P(u)| may be computed from (4) and (32), for the sake of simplicity we replace condition (4) in Assumption 2 with

(33) max{|J_δ(u) − J(u)|; |P_δ(u) − P(u)|} ≤ δ(1 + Ω(u)),  u ∈ U_0, δ > 0;

we will refer to the modified assumption as Assumption 2′.
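For concreteness, a small sketch of evaluating the penalty (32) at a single point (the helper and its argument convention are mine, not the paper's):

```python
# g_values holds (g_1(u), ..., g_s(u)) at one point u; the first m entries
# correspond to the inequality constraints, the rest to the equalities.
def penalty(g_values, m, p=2.0):
    total = 0.0
    for i, gi in enumerate(g_values):
        gplus = max(gi, 0.0) if i < m else abs(gi)   # g_i^+(u)
        total += gplus ** p                          # (g_i^+(u))^p
    return total

# A feasible point contributes zero penalty; violations grow like |g|^p.
print(penalty([-1.0, 0.0], m=1))          # 0.0
print(penalty([0.5, -0.3], m=1))          # 0.25 + 0.09, up to rounding
```

The approximation P_δ(u) is obtained by the same rule applied to the perturbed values g_{iδ}(u).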

The method of stabilization based on the penalty function technique defines the set U*(δ) by the rule (12), but here

(34) t(u) = J_δ(u) + α(δ)Ω(u) + A(δ)P_δ(u),  α(δ) > 0, A(δ) > 0,  t* = inf_{U_0} t(u).

For the consistency of the parameters the following is required in addition to (13):

(35) α(δ) > δ(1 + A(δ)),  δ > 0.

If Assumptions 1 and 2′ and conditions (13) and (35) are satisfied then the estimates (5) and (6) hold for the method defined by (12) and (34), where the parameters are given as follows (cf. [8, 11, 12, 22, 13]):

(36) γ(δ) = [2δ(1 + Ω*)(1 + A(δ)) + ε(δ)]/[α(δ) − δ(1 + A(δ))] + { 0 if p = 1;  M A(δ)^{−1/(p−1)}/[α(δ) − δ(1 + A(δ))] if p > 1 },

     β(δ) = α(δ)Ω* + ε(δ) + δ(1 + A(δ))(2 + 2Ω* + γ(δ)),

(37) ρ(δ) = β(δ)/(A(δ) − |c|)  if p = 1, provided inf_{δ>0} A(δ) > |c|;
     ρ(δ) = [(|c|/A(δ))^{p/(p−1)} + (p/(p−1))·β(δ)/A(δ)]^{1/p}  if p > 1,

where

     |c| = max_{1≤i≤s} |c_i| if p = 1,  |c| = (Σ_{i=1}^s |c_i|^{p/(p−1)})^{(p−1)/p} if p > 1;  M = (p − 1)(|c|/p)^{p/(p−1)}.

The method of residuals based on the penalty function technique defines the set U*(δ) by the rule (16), but in this formula V(δ) is the following set:

(38) V(δ) = {u ∈ U_0 : J_δ(u) + A(δ)P_δ(u) ≤ inf_{U_0} [J_δ(u) + A(δ)P_δ(u) + δ(1 + A(δ))Ω(u)] + σ(δ)},  σ(δ) > 0, A(δ) > 0.

If the parameters σ(δ), A(δ) are consistent in the sense that

(39) σ(δ) ≥ δ(1 + A(δ))(3 + Ω*) + M A(δ)^{−1/(p−1)},  δ > 0,

and Assumptions 1 and 2′ are satisfied, then the estimates (5) and (6) hold [14] with parameters

(40) γ(δ) = ε(δ),  β(δ) = δ(1 + A(δ))(3Ω* + 2 + ε(δ)) + σ(δ),

and ρ(δ) is computed by (37) with γ(δ) and β(δ) taken from (40).

The method of quasisolutions based on the penalty function technique defines the set U*(δ) as follows:

(41) U*(δ) = {u ∈ Q(δ) : J_δ(u) + A(δ)P_δ(u) ≤ inf_{Q(δ)} [J_δ(u) + A(δ)P_δ(u)] + ε(δ)},  ε(δ) > 0,

where

(42) Q(δ) = {u ∈ U_0 : Ω(u) ≤ r(δ)},  r(δ) > ω = inf_{U_0} Ω(u).

Under Assumptions 1 and 2′ the method defined by (41) and (42) satisfies the estimates (5) and (6) with parameters (cf. [24])

(43) γ(δ) = max{0; r(δ) − Ω*},
     β(δ) = max{0; J*(r(δ)) − J*} + ε(δ) + δ(1 + A(δ))(1 + Ω* + γ(δ)),

and ρ(δ) is defined by the rule (37), in which the parameters β(δ) and γ(δ) are taken from (43) and J*(r) is from (23).

5. Estimation of the convergence rate of the regularized minimizers.

In this section we show that, under stronger assumptions on problem (1)–(2), for each regularization method described above we may obtain an estimate of the convergence rate of the chosen minimizing sequences.

Assumption 3. The set U_0 is a convex closed subset of a reflexive Banach space B equipped with the norm ‖u‖; J(u), g_1^+(u), …, g_m^+(u) are convex and (in the strong topology) lower semicontinuous functions on U_0.

Theorem 1. Let Assumptions 1–3 be satisfied and let J(u) be a strictly uniformly convex function on U_0 with modulus of convexity ω_J(t). Let the set U*(δ) be defined by one of the described methods of regularization with consistently chosen parameters (see conditions (13), (18), (26), (35), (39)), and let u* denote the solution of problem (1)–(2). Then

(44) ‖u − u*‖ ≤ ω_J^{−1}(β(δ) + |c|_1 ρ(δ)),  u ∈ U*(δ),

where ω_J^{−1}(ξ) is the inverse function of ω_J(t) and the parameters β(δ), ρ(δ) are the values from (5) and (6) corresponding to the considered regularization method.

If the parameters β(δ), ρ(δ) satisfy condition (7), then

(45) lim_{δ→0} sup_{u∈U*(δ)} ‖u − u*‖ = 0.

If B = H is a Hilbert space and J(u) is strongly convex on U_0, i.e. ω_J(t) = γt², then the estimate (44) takes the particular form

‖u − u*‖ ≤ (1/√γ)(β(δ) + |c|_1 ρ(δ))^{1/2},  u ∈ U*(δ).

Proof. Under the conditions of the theorem the set U is convex and closed, and since J(u) is strictly uniformly convex, the solution of problem (1)–(2) is unique, i.e. U* = {u*} [10, 18]. Moreover, the function G_0(u) in (3) is also strictly uniformly convex with the same modulus of convexity ω_J as J(u), and it attains its infimum on U_0 at the point u*, while inf_{U_0} G_0(u) = G_0(u*) = J*. Then [10, 18]

ω_J(‖u − u*‖) ≤ G_0(u) − G_0(u*) = J(u) − J* + Σ_{i=1}^s c_i g_i^+(u),  u ∈ U_0.

Using the estimate (5) we hence obtain ω_J(‖u − u*‖) ≤ β(δ) + |c|_1 ρ(δ) for every u ∈ U*(δ). Taking into consideration the strict monotonicity of ω_J, this inequality is equivalent to (44).

Since lim_{ξ→+0} ω_J^{−1}(ξ) = 0 [18], we deduce that (45) is valid.

Another estimate for the convergence rate of minimizing sequences may be obtained under a weaker assumption on the objective function J(u) but a much stronger condition on the stabilizing function Ω(u). Namely, let us associate with (1)–(2) the problem of finding the Ω-normal solution u* ∈ U*:

(46) Ω(u) → inf,  u ∈ U*,
     U* = {u ∈ U_0 : g_i(u) ≤ 0, i = 1, …, m; g_i(u) = 0, i = m+1, …, s; J(u) − J* ≤ 0}.

Assumption 4. The function Ω(u) is strictly uniformly convex on U_0 with modulus of convexity ω_Ω(t), and for problem (46) there exist real numbers μ_0 ≥ 0, …, μ_s ≥ 0 such that

(47) Ω* ≤ Ω(u) + μ_0(J(u) − J*) + Σ_{i=1}^s μ_i g_i^+(u) ≡ G_1(u),  u ∈ U_0.

Condition (47) holds, for example, if the Lagrangian function of the extremal problem (46) has a saddle point; in this case any value not less than the modulus of the i-th Lagrange multiplier can play the role of μ_i in (47).

Theorem 2. Let Assumptions 1–4 be satisfied and let the set U*(δ) be defined by one of the described methods of regularization with consistently chosen parameters. Then

(48) ‖u − u*‖ ≤ ω_Ω^{−1}(μ_0 β(δ) + |μ|_1 ρ(δ) + γ(δ)),  u ∈ U*(δ),

where ω_Ω^{−1}(ξ) is the inverse function of ω_Ω(t), |μ|_1 = Σ_{i=1}^s μ_i, and β(δ), ρ(δ), γ(δ) are the values from (5) and (6) corresponding to the considered regularization method.

If the parameters β(δ), ρ(δ) satisfy condition (7) and

(49) lim_{δ→+0} γ(δ) = 0,

then (45) holds as well.


If B = H is a Hilbert space and Ω(u) = ‖u‖², then ω_Ω(t) = t² and the estimate (48) takes the form

‖u − u*‖ ≤ (μ_0 β(δ) + |μ|_1 ρ(δ) + γ(δ))^{1/2},  u ∈ U*(δ).
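As a numerical illustration of this Hilbert-space case (the construction below is mine, not from the paper): when J has a whole segment of minimizers, the stabilized problems (11) single out the Ω-normal solution of (46), in agreement with Theorem 2.

```python
import numpy as np

# Toy instance: J(u) = u_1 on U_0 = [0,1]^2 has the minimizer segment
# {0} x [0,1]; with Omega(u) = ||u||^2 the Omega-normal solution of (46)
# is u* = (0, 0).  No constraints, so W(delta) = U_0 here.
def regularized_argmin(delta, alpha, n=201, seed=1):
    xs = np.linspace(0.0, 1.0, n)
    X, Y = np.meshgrid(xs, xs)
    rng = np.random.default_rng(seed)
    omega = X**2 + Y**2
    J_d = X + delta * (1 + omega) * rng.uniform(-1.0, 1.0, X.shape)  # noise as in (4)
    k = np.argmin(J_d + alpha * omega)        # Tikhonov function (11), grid search
    return np.array([X.flat[k], Y.flat[k]])

for d in [1e-1, 1e-2, 1e-3]:
    u = regularized_argmin(d, alpha=d**0.5)
    print(d, u, np.linalg.norm(u))            # ||u - u*|| shrinks with delta
```

With α(δ) = √δ one has γ(δ) = O(√δ) for this method, so the bound (48) predicts an error of order δ^{1/4}, which the printed norms reflect qualitatively.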

Proof. Under the conditions of the theorem the set U* in (46) is a closed, convex set, and the strictly uniformly convex function Ω(u) has a unique minimum point u* on it [10, 18]. The function G_1(u) given by (47) is also strictly uniformly convex on U_0 with modulus of convexity ω_Ω(t). From (47) it follows that G_1(u) attains its infimum on U_0 at the point u*, while inf_{U_0} G_1(u) = G_1(u*) = Ω*. Consequently,

ω_Ω(‖u − u*‖) ≤ G_1(u) − G_1(u*) = Ω(u) − Ω* + μ_0(J(u) − J*) + Σ_{i=1}^s μ_i g_i^+(u),  u ∈ U_0.

From this, using (5), it follows that ω_Ω(‖u − u*‖) ≤ γ(δ) + μ_0 β(δ) + |μ|_1 ρ(δ), u ∈ U*(δ), which is equivalent to (48).

(45) can be proved as in the previous theorem.

Using the explicit formulas for β(δ), γ(δ), ρ(δ) we can express conditions (7) and (49) in terms of the parameters of the methods. For example, for the method given by (11) and (12) the conditions

sup_{δ>0} δ/α(δ) < 1/(1 + 2|c|_1),  sup_{δ>0} ε(δ)/α(δ) < ∞,  lim_{δ→0} (α(δ) + ε(δ)) = 0

guarantee (7), and to satisfy (49) it is enough to require that

lim_{δ→0} (δ + ε(δ))/α(δ) = 0.

We remark that the rate of convergence of the minimizing sequence can be obtained for all considered regularization methods, and also for their modifications, in the case when problem (1)–(2) has the property of so-called strong compatibility [18]. Some results in this field can be found in [13, 14, 24, 15, 16, 5]. Furthermore, we mention the papers [1, 25, 6, 20, 19], in which the regularization of linear programming problems is discussed and the rate of convergence is established without the assumption of uniform convexity of the objective and stabilizing functions.

References

[1] V. A. Bereznev, V. G. Karmanov and A. A. Tret'yakov, Stable methods of solving extremal problems with approximate information, preprint, Scientific Council for the Complex Problem "Cybernetics", Acad. Sci. USSR, 1987 (in Russian).

[2] M. Kovács, On the regularization of not well-posed extremal problems using a barrier function, in: Numerical Analysis, Computer Centers, Moscow Univ. and Budapest Univ., 1978, 62–78 (in Russian).

[3] M. Kovács, On the convergence of the method of generalized barrier functions, Vestnik Moskov. Univ. Ser. XV Vychisl. Mat. Kibernet. 1981 (1), 40–45 (in Russian).

[4] M. Kovács and F. P. Vasil'ev, On the convergence rate of the continuous version of the regularized gradient method, Optimization 18 (5) (1987), 689–696.

[5] —, —, Convergence rate for regularized barrier function methods, ibid. 22 (3) (1991), 427–438.

[6] V. V. Morozov and M. Yachimovich, An estimate of convergence rate of a regularization method for a linear programming problem, in: Computational Complexes and Modelling of Complex Systems, Izdat. Moskov. Gos. Univ., Moscow 1989, 134–138 (in Russian).

[7] A. N. Tikhonov and V. Ya. Arsenin, Methods for the Solution of Ill-Posed Problems, Nauka, Moscow 1986 (in Russian).

[8] A. N. Tikhonov and F. P. Vasil'ev, Methods of solution of ill-posed extremal problems, in: Mathematical Models and Numerical Methods, Banach Center Publ. 3, PWN, Warszawa 1978, 297–342 (in Russian).

[9] A. N. Tikhonov, F. P. Vasil'ev, M. M. Potapov and A. D. Yuriĭ, Regularization of minimization problems on sets given approximately, Vestnik Moskov. Univ. Ser. XV Vychisl. Mat. Kibernet. 1977 (1), 4–19 (in Russian).

[10] F. P. Vasil'ev, Methods of Solution of Extremal Problems, Nauka, Moscow 1981 (in Russian).

[11] —, On the regularization of ill-posed extremal problems, Dokl. Akad. Nauk SSSR 241 (5) (1978), 1001–1004 (in Russian).

[12] —, Regularization of ill-posed minimization in approximately specified sets, Zh. Vychisl. Mat. i Mat. Fiz. 20 (1) (1980) (in Russian).

[13] —, Regularization of unstable minimization problems, Trudy Mat. Inst. Steklov. 185 (1988), 60–65 (in Russian).

[14] —, A residual method for solving unstable minimization problems, Vestnik Moskov. Univ. Ser. XV Vychisl. Mat. Kibernet. 1987 (4), 6–10, 72 (in Russian).

[15] —, Regularization methods for unstable minimization problems, based on the set extension, ibid. 1990 (1), 3–16 (in Russian).

[16] —, An estimate of the rate of convergence of A. N. Tikhonov's regularization method for nonstable minimization problems, Dokl. Akad. Nauk SSSR 299 (4) (1988), 792–796 (in Russian).

[17] —, An estimate of the convergence rate of regularization methods for unstable minimization problems, in: Direct and Inverse Problems of Mathematical Physics, Izdat. Moskov. Gos. Univ., Moscow 1991, 115–122 (in Russian).

[18] —, Numerical Methods for Solving Extremal Problems, Nauka, Moscow 1988 (in Russian).

[19] —, An estimate for the convergence rate of the quasisolution method for a linear programming problem, Vestnik Moskov. Univ. Ser. XV Vychisl. Mat. Kibernet. 1991 (1), 16–22 (in Russian).

[20] F. P. Vasil'ev, A. Yu. Ivanitskiĭ and V. A. Morozov, An estimate for the rate of convergence of the residual method for problems of linear programming with approximate data, Zh. Vychisl. Mat. i Mat. Fiz. 30 (8) (1990), 1257–1262, 1279 (in Russian).

[21] F. P. Vasil'ev and M. Kovács, Regularization of ill-posed extremal problems with imprecisely given initial data, in: Computational Mathematics, Banach Center Publ. 13, PWN, Warszawa 1984, 297–341 (in Russian).

[22] —, —, Regularization of ill-posed extremal problems in connection with penalty functions of general type, in: Problems of Computational Mathematics and System Analysis, Computer Centers, Moscow Univ. and Budapest Univ., 1980, 19–41 (in Russian).

[23] F. P. Vasil'ev, M. Kovács, M. M. Potapov and Yu. N. Chekanov, An estimate of the convergence rate for a continuous analogue of the regularized gradient method for a linear programming problem, in: Numerical Methods for Solution of Boundary Value and Initial Value Problems for Differential Equations, Izdat. Moskov. Gos. Univ., Moscow 1986, 98–106 (in Russian).

[24] F. P. Vasil'ev and M. A. Kurzhanskiĭ, On the method of quasisolution for unstable problems of minimization with imprecisely defined initial data, Vestnik Moskov. Univ. Ser. XV Vychisl. Mat. Kibernet. 1989 (4), 13–18 (in Russian).

[25] F. P. Vasil'ev, V. V. Morozov and M. Yachimovich, An estimate for the rate of convergence of a regularization method for a linear programming problem, Zh. Vychisl. Mat. i Mat. Fiz. 29 (4) (1989), 631–635 (in Russian).

[26] F. P. Vasil'ev, M. M. Potapov and Yu. N. Chekanov, An estimate for the rate of convergence of A. N. Tikhonov's regularization method for a linear programming problem, in: Mathematical Models and Numerical Methods, Izdat. Moskov. Gos. Univ., Moscow 1987, 21–27 (in Russian).
