J. KOKO (Clermont-Ferrand)
A CONJUGATE GRADIENT METHOD WITH QUASI-NEWTON APPROXIMATION
Abstract. The conjugate gradient method of Liu and Storey is an efficient minimization algorithm which uses second-derivative information, without storing matrices, through a finite difference approximation. It is shown that the finite difference scheme can be removed by using a quasi-Newton approximation to compute the search direction, without loss of convergence. A conjugate gradient method based on the BFGS approximation is proposed and compared with existing methods of the same class.
1. Introduction. We are concerned with the unconstrained minimization problem
(P)   min f(x),   x ∈ R^n,
with f a twice continuously differentiable function. When the dimension of (P) is large, conjugate gradient (CG) methods are particularly useful thanks to their storage-saving properties. Classical conjugate gradient methods aim to solve (P) by a sequence of line searches
x_{k+1} = x_k + t_k d_k,   k = 1, 2, . . . ,
where t_k is the step length and the search direction d_k is of the form
d_k = -g_k + β_k d_{k-1}
with g_k = ∇f(x_k). There are many formulas for computing the coefficient β_k; they can be found in [9], [3], [12] and [8].
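To illustrate the classical scheme above, here is a minimal Python sketch of a nonlinear CG iteration. The Fletcher–Reeves choice of β_k, the Armijo backtracking line search, and the steepest-descent restart safeguard are illustrative assumptions, not details prescribed by the text:

```python
import numpy as np

def cg_minimize(f, grad, x0, iters=200, tol=1e-8):
    """Classical nonlinear CG: x_{k+1} = x_k + t_k d_k with
    d_k = -g_k + beta_k d_{k-1}, here using the Fletcher-Reeves
    choice beta_k = ||g_k||^2 / ||g_{k-1}||^2 (illustrative)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        if g.dot(d) >= 0:          # safeguard: restart with steepest descent
            d = -g
        # Armijo backtracking line search for the step length t_k
        t = 1.0
        while f(x + t * d) > f(x) + 1e-4 * t * g.dot(d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = g_new.dot(g_new) / g.dot(g)   # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# small convex quadratic test problem (illustrative)
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x.dot(A).dot(x) - b.dot(x)
grad = lambda x: A.dot(x) - b
xstar = cg_minimize(f, grad, np.zeros(2))
```

On this quadratic the iterates approach the solution of A x = b, the unique minimizer of f.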
Liu and Storey [9] propose a new CG method in which the search direction is of the form
(1)   d_k = -α_k g_k + β_k d_{k-1},   α_k > 0,
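The idea behind a direction of the form (1) can be sketched as follows: choose α_k and β_k so that d_k minimizes the local quadratic model of f over the plane spanned by -g_k and d_{k-1}, with Hessian-vector products approximated by finite differences of gradients. This is only a sketch of the principle; the exact coefficient formulas of Liu and Storey [9] differ in detail:

```python
import numpy as np

def ls_direction(grad, x, g, d_prev, eps=1e-6):
    """Sketch of a direction d_k = -alpha_k g_k + beta_k d_{k-1}
    minimizing the quadratic model g.d + 0.5 d.H d over
    span{-g_k, d_{k-1}}; H-vector products by finite differences."""
    # finite-difference Hessian-vector products H g_k and H d_{k-1}
    Hg = (grad(x + eps * g) - g) / eps
    Hd = (grad(x + eps * d_prev) - g) / eps
    # stationarity conditions of the model in (alpha, beta)
    M = np.array([[g.dot(Hg), -g.dot(Hd)],
                  [-g.dot(Hd), d_prev.dot(Hd)]])
    rhs = np.array([g.dot(g), -g.dot(d_prev)])
    alpha, beta = np.linalg.solve(M, rhs)
    return -alpha * g + beta * d_prev

# demo on a convex quadratic (illustrative)
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x.dot(A).dot(x) - b.dot(x)
grad_q = lambda x: A.dot(x) - b
x = np.array([1.0, 1.0])
g = grad_q(x)
d = ls_direction(grad_q, x, g, np.array([1.0, 0.0]))
```

For a positive definite quadratic the resulting d is a descent direction, and x + d is the minimizer of f over the affine plane x + span{-g, d_prev}.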
1991 Mathematics Subject Classification: 65K10, 49M07.
Key words and phrases: unconstrained high-dimensional optimization, conjugate gradient methods, Newton and quasi-Newton methods.