Monday, February 1, 2010

Powell's Method

The essence of Powell's method is to add two steps to the process described in the preceding paragraph. The vector P_N − P_0 represents, in some sense, the average direction moved during the N intermediate steps of an iteration. Thus the point X_i is determined to be the point at which the minimum of the function f occurs along the vector P_N − P_0. As before, f is a function of one variable along this vector, and the minimization can be accomplished with an application of the golden ratio or Fibonacci searches. Finally, since the vector P_N − P_0 was such a good direction, it replaces one of the direction vectors for the next iteration. The iteration is then repeated using the new set of direction vectors to generate a sequence of points {X_i}. In one step of the iteration, instead of a zig-zag path the iteration follows a "dog-leg" path. The process is outlined below.
Let X_0 be an initial guess at the location of the minimum of the function
z = f(x_1, x_2, ..., x_N).

Let E_k = (0, ..., 0, 1, 0, ..., 0) for k = 1, 2, ..., N be the set of standard base vectors.
Initialize the vectors U_k = E_k for k = 1, 2, ..., N and use their transpose to form the columns of the matrix U as follows:

U = [U_1' U_2' ... U_N'].

Initialize the counter i = 0.

(i) Set P_0 = X_i.

(ii) For k = 1, 2, ..., N,
find the value of γ_k
that minimizes f(P_(k-1) + γ U_k),
and set
P_k = P_(k-1) + γ_k U_k.

(iii) Set U_j = U_(j+1) for j = 1, 2, ..., N−1 and
set
U_N = P_N − P_0.

(iv) Increment the counter i = i + 1.

(v) Find the value of γ that minimizes f(P_0 + γ U_N),
and set X_i = P_0 + γ U_N.

(vi) Repeat steps (i) through (v) until convergence is achieved.
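Steps (ii) and (v) each reduce to minimizing a function of the single variable γ along a fixed line. One way to carry out that one-dimensional search, as mentioned above, is the golden ratio (golden-section) search. A minimal sketch in Python (the helper name is my own, and it assumes the minimum has already been bracketed in [a, b]):

```python
import math

def golden_section_min(f, a, b, tol=1e-8):
    """Golden-section search for the minimum of a unimodal f on [a, b]."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0   # 1/phi, about 0.6180
    c = b - invphi * (b - a)                # interior points dividing [a, b]
    d = a + invphi * (b - a)                #   in the golden ratio
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:                         # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = f(c)
        else:                               # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = f(d)
    return 0.5 * (a + b)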

A typical sequence of points generated by Powell's method is shown in Figure 2 below.

Figure 2. A sequence of points in 2D generated by Powell's method.

Algorithm (Powell's Method for Finding a Minimum). To numerically approximate a local minimum of z = f(x_1, x_2, ..., x_N), where f is a continuous function of N real variables, by starting with one point X_0 and using the "dog-leg" search along the directions of the direction vectors U_1, U_2, ..., U_N.
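As a concrete sketch, the five steps above can be written in Python. The function names are my own, and SciPy's Brent search stands in for the golden ratio or Fibonacci search in the one-dimensional minimizations:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def powell(f, x0, iterations=20):
    """Basic Powell iteration, following steps (i)-(v) of the outline."""
    n = len(x0)
    U = [np.eye(n)[k] for k in range(n)]      # U_k = E_k, the standard base vectors
    X = np.asarray(x0, dtype=float)
    for _ in range(iterations):
        P = [X.copy()]                        # (i)   P_0 = X_i
        for k in range(n):                    # (ii)  minimize along each U_k
            g = minimize_scalar(lambda t: f(P[-1] + t * U[k])).x
            P.append(P[-1] + g * U[k])
        U = U[1:] + [P[-1] - P[0]]            # (iii) discard U_1, append P_N - P_0
        g = minimize_scalar(lambda t: f(P[0] + t * U[-1])).x
        X = P[0] + g * U[-1]                  # (iv)-(v) dog-leg step: X_i = P_0 + gamma U_N
    return X
```

For example, applied to f(x, y) = (x − 1)^2 + (y + 2)^2 from the starting point (0, 0), the iteration converges to the minimum at (1, −2).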

Enhanced Powell's Method

In step (iii) of Powell's method the first vector U_1 was discarded and the average direction vector U_N = P_N − P_0 was added to the list of direction vectors. In fact, it would be better to discard the vector U_r along which the greatest decrease in f occurred. It seems reasonable that the vector U_r is a large component of the average direction vector P_N − P_0. Thus, as the number of iterations increases, the set of direction vectors will tend to become linearly dependent. When the set becomes linearly dependent one or more of the directions will be lost, and it is likely that the set of points {X_i} will not converge to the point at which the local minimum occurs. Furthermore, in step (iii) it was assumed that the average direction vector P_N − P_0 represented a good direction in which to continue the search. But that may not be the case.

Outline of the Enhanced Powell's Method

(i) Set P_0 = X_i.

(ii) For k = 1, 2, ..., N,
find the value of γ_k that minimizes f(P_(k-1) + γ U_k),
and set P_k = P_(k-1) + γ_k U_k.

(iii) Let Δf_k = f(P_(k-1)) − f(P_k) for k = 1, 2, ..., N. Find the subscript r
so that Δf_r = max{Δf_1, Δf_2, ..., Δf_N} is the magnitude of the maximum decrease in f,
and U_r is the direction of the maximum decrease, over all the direction vectors in step (ii).

(iv) Increment the counter i = i + 1.

(v) Let f_0 = f(P_0) and f_N = f(P_N).
Let f_E = f(2P_N − P_0) be the function value at the point extended in the direction P_N − P_0 from P_N.
If f_E ≥ f_0, set X_i = P_N and go to step (i), or
if 2(f_0 − 2f_N + f_E)(f_0 − f_N − Δf_r)^2 ≥ Δf_r (f_0 − f_E)^2, set X_i = P_N and go to step (i).
Otherwise, continue to step (vi).

(vi) Set U_r = P_N − P_0, where the subscript r is given in step (iii).

(vii) Find the value of γ that minimizes f(P_0 + γ U_r),
and set X_i = P_0 + γ U_r.

(viii) Repeat steps (i) through (vii).

If the conditions in step (v) are satisfied, then the set of direction vectors is left unchanged. The first inequality in step (v) indicates that there is no further decrease in the value of f in the average direction P_N − P_0. The second inequality indicates that the decrease in the function f in the direction of greatest decrease was not a major part of the total decrease in f in step (ii). If the conditions in step (v) are not satisfied, then the direction of greatest decrease U_r is replaced with the average direction from step (ii); that is, U_r = P_N − P_0. In step (vii) the function is then minimized in this direction. Stopping criteria based on the magnitudes ‖X_i − X_(i-1)‖ or |f(X_i) − f(X_(i-1))| are typically found in steps (v) and (vii). We leave it for the reader to investigate these enhancements.
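The enhanced iteration can be sketched in Python in the same style as before. The names are my own, and SciPy's Brent search again stands in for the one-dimensional minimizations:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def enhanced_powell(f, x0, iterations=20):
    """Enhanced Powell iteration, following steps (i)-(viii) of the outline."""
    n = len(x0)
    U = [np.eye(n)[k] for k in range(n)]
    X = np.asarray(x0, dtype=float)
    for _ in range(iterations):
        P = [X.copy()]                                    # (i)   P_0 = X_i
        drops = []
        for k in range(n):                                # (ii)  minimize along each U_k
            g = minimize_scalar(lambda t: f(P[-1] + t * U[k])).x
            P.append(P[-1] + g * U[k])
            drops.append(f(P[-2]) - f(P[-1]))             # decrease Delta f_k along U_k
        r = int(np.argmax(drops))                         # (iii) direction of max decrease
        dfr = drops[r]
        f0, fN = f(P[0]), f(P[-1])
        fE = f(2.0 * P[-1] - P[0])                        # (v)   extended point
        if fE >= f0 or \
           2.0 * (f0 - 2.0 * fN + fE) * (f0 - fN - dfr) ** 2 >= dfr * (f0 - fE) ** 2:
            X = P[-1].copy()                              # keep the direction set unchanged
            continue
        U[r] = P[-1] - P[0]                               # (vi)  replace U_r by P_N - P_0
        g = minimize_scalar(lambda t: f(P[0] + t * U[r])).x
        X = P[0] + g * U[r]                               # (vii)
    return X
```

Note that the direction set is modified only when both inequalities of step (v) fail, so the directions stay closer to linearly independent than in the basic method.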