Mathematics Source Library
C & ASM


Fletcher-Powell-Davidon

Given a function f : ℝⁿ → ℝ, the Fletcher-Powell-Davidon method belongs to a group of methods, variously called quasi-Newton methods, matrix updating methods, or variable metric methods, which attempt to locate a local minimum of f. The Fletcher-Powell-Davidon method is also referred to as the Davidon-Fletcher-Powell method, or sometimes more simply as the Fletcher-Powell method.

Given a point x_0 ∈ ℝⁿ, the Newton-Raphson method generates the search direction H(f, x_0)^{-1} grad f(x_0), where H(f, x_0) is the Hessian of f at x_0. The quasi-Newton methods avoid the need to calculate the Hessian directly and then invert it by successively estimating the inverse of the Hessian using only the gradient, grad f(x_0).
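As a brief aside, the two steps can be written side by side; the secant condition in the last line is the standard requirement that matrix updating schemes are designed to satisfy, and is included here as general background rather than quoted from this page.

    % Newton-Raphson step: requires forming and inverting the Hessian
    x_{k+1} = x_k - H(f, x_k)^{-1} \nabla f(x_k)

    % Quasi-Newton step: H_k estimates H(f, x_k)^{-1} using gradients only
    x_{k+1} = x_k - \lambda_k \, H_k \, \nabla f(x_k)

    % Secant condition: the updated estimate must map the change in
    % gradient to the step actually taken
    H_{k+1} \, \Delta g_k = \Delta x_k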

In general, the quasi-Newton methods proceed as follows:
  1. Initialization: Choose a point x_0 which approximates the location of a local minimum. Set H_0 = I, the identity matrix, and calculate d_0 = H_0 grad f(x_0).
  2. Determine the Displacement Vector Δx_k: Given x_k and d_k, let λ_k be the value of λ which minimizes f(x_k - λ d_k). Then set Δx_k = -λ_k d_k.
  3. Update the estimate for the location of a local minimum: Set x_{k+1} = x_k + Δx_k.
  4. Check if the stopping criterion is satisfied: The conventional stopping rules are: (1) stop if || grad f(x_{k+1}) || < ε for either the max-norm or the l2-norm, and/or (2) stop if || Δx_k || < ε for either the max-norm or the l2-norm. If the stopping criterion is satisfied, then halt.
  5. Calculate the New Search Direction: Define Δg_k = grad f(x_{k+1}) - grad f(x_k), update H_{k+1} (see below), and finally calculate d_{k+1} = H_{k+1} grad f(x_{k+1}). Go to step 2.
For quasi-Newton methods, the formulas for updating H_k vary. For the Fletcher-Powell-Davidon method the estimate for H_{k+1} is

H_{k+1} = H_k + (Δx_k Δx_k^T) / (Δx_k^T Δg_k) - (H_k Δg_k)(H_k Δg_k)^T / (Δg_k^T H_k Δg_k)
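As a concrete illustration, here is a minimal C sketch of one such update applied to an inverse-Hessian estimate stored row-major in a flat n×n array. The function name dfp_update and the storage convention are assumptions made for this example; they are not part of the library's interface.

    #include <stdlib.h>

    /* One Fletcher-Powell-Davidon update of the inverse-Hessian estimate H:
       H += (dx dx^T) / (dx^T dg) - (H dg)(H dg)^T / (dg^T H dg),
       where dx = Δx_k and dg = Δg_k.  Returns 0 on success and -1 if
       workspace could not be allocated.  A hypothetical sketch only. */
    static int dfp_update(double *H, const double *dx, const double *dg, int n)
    {
       double *Hdg = malloc(n * sizeof(double));   /* holds H dg  */
       double dxdg = 0.0;                          /* dx^T dg     */
       double dgHdg = 0.0;                         /* dg^T H dg   */
       int i, j;

       if (Hdg == NULL) return -1;
       for (i = 0; i < n; i++) {
          Hdg[i] = 0.0;
          for (j = 0; j < n; j++) Hdg[i] += H[i * n + j] * dg[j];
          dxdg  += dx[i] * dg[i];
          dgHdg += dg[i] * Hdg[i];
       }
       /* A production version would guard against dxdg or dgHdg being
          numerically zero before dividing. */
       for (i = 0; i < n; i++)
          for (j = 0; j < n; j++)
             H[i * n + j] += dx[i] * dx[j] / dxdg
                             - Hdg[i] * Hdg[j] / dgHdg;
       free(Hdg);
       return 0;
    }

Since H_k remains symmetric under this update, an implementation could store and update only the upper triangle; the full-matrix form above is kept simply to mirror the formula.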


Function List

  • int Fletcher_Powell_Davidon( double (*f)(double *), void (*df)(double*, double*), int (*Stopping_Rule)(double*, double*, int, int), double *a, double *dfa, double line_search_cutoff, double line_search_cutoff_scale_factor, double line_search_tolerance, int n )

    This routine uses the Fletcher-Powell-Davidon method to approximately locate a local minimum of the user-supplied function f(x). The procedure starts at x = a. The gradient is calculated using the user-supplied function df(x, dfa), where dfa is the n-dimensional array whose ith component dfa[i] is ∂f(x)/∂x_i. The parameters line_search_cutoff, line_search_cutoff_scale_factor, and line_search_tolerance are used when minimizing the function f(x) down the line through the current estimate in the direction opposite to the current search direction; see Minimize_Down_the_Line for a description of these parameters. The arguments to the user-supplied Stopping_Rule are: the displacement vector d between two successive iterates, the gradient of f evaluated at the new estimate, the total number of iterations performed, and n, the dimension of x. The user-supplied Stopping_Rule should return a nonzero integer, either positive or negative, to halt the procedure, and zero to continue iterating. The return value -1 is reserved by the Fletcher_Powell_Davidon routine itself, so upon exiting the function returns either the return value of the user-supplied Stopping_Rule or -1, the latter indicating that there was not enough dynamic heap memory. On return, a contains the current estimate of the location of the minimum.
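To make the calling convention concrete, here is a hedged usage sketch that minimizes the two-dimensional Rosenbrock function. The extern declaration stands in for the library's header (whose name is not given on this page), and the three line-search argument values are illustrative guesses, not recommended settings; see Minimize_Down_the_Line for guidance on choosing them.

    #include <stdio.h>
    #include <math.h>

    /* Declaration copied from the function list above; normally this
       would come from the library's header file. */
    int Fletcher_Powell_Davidon( double (*f)(double *),
                                 void (*df)(double*, double*),
                                 int (*Stopping_Rule)(double*, double*, int, int),
                                 double *a, double *dfa,
                                 double line_search_cutoff,
                                 double line_search_cutoff_scale_factor,
                                 double line_search_tolerance, int n );

    /* f(x,y) = (1 - x)^2 + 100 (y - x^2)^2, minimum at (1, 1). */
    static double f(double *x)
    {
       double u = 1.0 - x[0], v = x[1] - x[0] * x[0];
       return u * u + 100.0 * v * v;
    }

    static void df(double *x, double *dfa)
    {
       double v = x[1] - x[0] * x[0];
       dfa[0] = -2.0 * (1.0 - x[0]) - 400.0 * x[0] * v;
       dfa[1] = 200.0 * v;
    }

    /* Halt when the max-norm of the gradient is small or after 500
       iterations.  Avoids returning -1, which the routine reserves. */
    static int stopping_rule(double *d, double *grad, int iteration, int n)
    {
       double norm = 0.0;
       int i;
       (void) d;                         /* displacement unused here */
       for (i = 0; i < n; i++)
          if (fabs(grad[i]) > norm) norm = fabs(grad[i]);
       if (norm < 1.0e-8) return 1;      /* converged          */
       if (iteration >= 500) return 2;   /* iteration limit    */
       return 0;                         /* continue iterating */
    }

    int main(void)
    {
       double a[2] = { -1.2, 1.0 };      /* classic starting point */
       double dfa[2];
       /* The line-search arguments (1.0, 10.0, 1.0e-10) are illustrative
          guesses only. */
       int rc = Fletcher_Powell_Davidon(f, df, stopping_rule, a, dfa,
                                        1.0, 10.0, 1.0e-10, 2);
       if (rc == -1)
          printf("Insufficient dynamic heap memory.\n");
       else
          printf("rc = %d, estimated minimum at (%g, %g)\n", rc, a[0], a[1]);
       return 0;
    }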

C Source Code