The function $f$ is unimodal on $I = [a, b]$ if there exists a unique number $p \in I$ such that

$f(x)$ is decreasing on $[a, p]$,

and

$f(x)$ is increasing on $[p, b]$.

**Minimization Using Derivatives**

Suppose that $f$ is unimodal over $[a, b]$ and has a unique minimum at $x = p$. Also, assume that $f'(x)$ is defined at all points in $(a, b)$. Let the starting value $p_0$ lie in $(a, b)$. If $f'(p_0) < 0$, then the minimum point $p$ lies to the right of $p_0$. If $f'(p_0) > 0$, then the minimum point $p$ lies to the left of $p_0$.

Our first task is to obtain three test values,

(1) $p_0$, $p_1 = p_0 + h$, and $p_2 = p_0 + 2h$,

so that

(2) $f(p_0) \ge f(p_1)$ and $f(p_1) \le f(p_2)$.

Suppose that $f'(p_0) < 0$; then $p_0 < p$ and the step size $h$ should be chosen positive. It is an easy task to find a value of $h$ so that the three points in (1) satisfy (2). Start with $h = 1$ in formula (1) (provided that $p_0 + 2h$ lies in $[a, b]$); if not, take $h = 1/2$, and so on.

**Case (i)** If (2) is satisfied, we are done.

**Case (ii)** If $f(p_0) \ge f(p_1) > f(p_2)$, then $p_2 < p$. We need to check points that lie farther to the right. Double the step size and repeat the process.

**Case (iii)** If $f(p_0) < f(p_1)$, we have jumped over $p$ and $h$ is too large. We need to check values closer to $p_0$. Reduce the step size by a factor of $1/2$ and repeat the process.

When $f'(p_0) > 0$, the step size $h$ should be chosen negative and then cases similar to (i), (ii), and (iii) can be used.
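In Python, the step-size search above can be sketched as follows (a minimal sketch; the name `bracket_minimum`, the iteration cap, and the interval-shrinking guard are assumptions, not part of the original text):

```python
def bracket_minimum(f, df, p0, a, b, h=1.0, max_iter=100):
    """Search for p0, p1 = p0 + h, p2 = p0 + 2h satisfying condition (2):
    f(p0) >= f(p1) and f(p1) <= f(p2), following cases (i)-(iii)."""
    # Choose the sign of h from the derivative at p0.
    h = -abs(h) if df(p0) > 0 else abs(h)
    for _ in range(max_iter):
        # Shrink h until p0 + 2h stays inside [a, b].
        while not (a <= p0 + 2 * h <= b):
            h /= 2
        p1, p2 = p0 + h, p0 + 2 * h
        y0, y1, y2 = f(p0), f(p1), f(p2)
        if y0 >= y1 <= y2:      # case (i): condition (2) holds
            return p0, p1, p2
        if y0 < y1:             # case (iii): jumped over p, h too large
            h /= 2
        else:                   # case (ii): minimum lies farther right/left
            h *= 2
    raise RuntimeError("failed to bracket the minimum")
```

For example, with $f(x) = (x-2)^2$ on $[0, 5]$ and $p_0 = 0.5$, the search accepts $h = 1$ immediately and returns the triple $(0.5, 1.5, 2.5)$.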

**Quadratic Approximation to Find p**

Finally, we have three points (1) that satisfy (2). We will use quadratic interpolation to find $p_{\min}$, which is an approximation to $p$. The Lagrange polynomial based on the nodes in (1) is

(3) $Q(x) = \dfrac{y_0 (x - p_1)(x - p_2)}{2h^2} - \dfrac{y_1 (x - p_0)(x - p_2)}{h^2} + \dfrac{y_2 (x - p_0)(x - p_1)}{2h^2}$,

where $y_j = f(p_j)$ for $j = 0, 1, 2$.

The derivative of $Q(x)$ is

(4) $Q'(x) = \dfrac{y_0 (2x - p_1 - p_2)}{2h^2} - \dfrac{y_1 (2x - p_0 - p_2)}{h^2} + \dfrac{y_2 (2x - p_0 - p_1)}{2h^2}$.

Solving $Q'(p_{\min}) = 0$ in the form (4) yields

(5) $0 = \dfrac{y_0 (2p_{\min} - p_1 - p_2)}{2h^2} - \dfrac{y_1 (2p_{\min} - p_0 - p_2)}{h^2} + \dfrac{y_2 (2p_{\min} - p_0 - p_1)}{2h^2}$.

Multiply each term in (5) by $2h^2$ and collect terms involving $p_{\min}$:

$2p_{\min}(y_0 - 2y_1 + y_2) = y_0(p_1 + p_2) - 2y_1(p_0 + p_2) + y_2(p_0 + p_1) = 2p_0(y_0 - 2y_1 + y_2) + h(3y_0 - 4y_1 + y_2)$,

where the second equality follows from substituting $p_1 = p_0 + h$ and $p_2 = p_0 + 2h$. This last quantity is easily solved for $p_{\min}$:

$p_{\min} = p_0 + \dfrac{h(4y_1 - 3y_0 - y_2)}{4y_1 - 2y_0 - 2y_2}$.

The value $p_{\min}$ is a better approximation to $p$ than $p_1$. Hence we can replace $p_0$ with $p_{\min}$ and repeat the two processes outlined above to determine a new $h$ and a new $p_{\min}$. Continue the iteration until the desired accuracy is achieved. In this algorithm the derivative of the objective function was used implicitly in (4) to locate the minimum of the interpolatory quadratic. The reader should note that the subroutine makes no explicit use of the derivative.
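A minimal Python sketch of the resulting iteration follows (the name `quadratic_search` is an assumption, and, for brevity, new nodes are re-centered about $p_{\min}$ with a halved $h$ instead of re-running the full step-size search each pass):

```python
def quadratic_search(f, p0, p1, p2, tol=1e-8, max_iter=50):
    """Iterate the vertex formula
    pmin = p0 + h(4y1 - 3y0 - y2) / (4y1 - 2y0 - 2y2)
    for equally spaced nodes p0, p1 = p0 + h, p2 = p0 + 2h."""
    pmin = p1
    for _ in range(max_iter):
        h = p1 - p0
        y0, y1, y2 = f(p0), f(p1), f(p2)
        denom = 4 * y1 - 2 * y0 - 2 * y2
        if denom == 0:          # degenerate (flat) parabola
            return p1
        pmin = p0 + h * (4 * y1 - 3 * y0 - y2) / denom
        if abs(pmin - p1) < tol:
            return pmin
        # Re-center three equally spaced nodes around the new estimate.
        h /= 2
        p0, p1, p2 = pmin - h, pmin, pmin + h
    return pmin
```

Since the interpolant reproduces quadratics exactly, $f(x) = (x-2)^2$ with starting nodes $(0.5, 1.5, 2.5)$ converges to $p_{\min} = 2$.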

**Cubic Approximation to Find p**

Suppose now that $f'(x)$ can be evaluated directly, and that we have two points $p_0$ and $p_1$ with $f'(p_0) < 0$ and $f'(p_1) > 0$. Thus $p_0 < p < p_1$. The cubic approximating polynomial $P(x)$ is expanded in a Taylor series about $x = p_{\min}$ (which is the abscissa of the minimum). At the minimum we have $P'(p_{\min}) = 0$, and we write $P(x)$ in the form:

(6) $P(x) = P(p_{\min}) + \dfrac{A}{3}(x - p_{\min})^3 + \dfrac{B}{2}(x - p_{\min})^2$,

and

(7) $P'(x) = A(x - p_{\min})^2 + B(x - p_{\min})$.

The introduction of the denominators $3$ and $2$ in (6) will make further calculations less tiresome, since they cancel when forming (7). It is required that $P(p_0) = f(p_0) = y_0$, $P(p_1) = f(p_1) = y_1$, $P'(p_0) = f'(p_0) = d_0$, and $P'(p_1) = f'(p_1) = d_1$. To find $p_{\min}$ we define:

(8) $\alpha = p_0 - p_{\min}$, $\beta = p_1 - p_{\min}$, and $h = p_1 - p_0$, so that $\beta - \alpha = h$,

and we must go through several intermediate calculations before we end up with $p_{\min}$.

Use (6) to obtain

$P(p_1) - P(p_0) = \dfrac{A}{3}\left[(p_1 - p_{\min})^3 - (p_0 - p_{\min})^3\right] + \dfrac{B}{2}\left[(p_1 - p_{\min})^2 - (p_0 - p_{\min})^2\right]$.

Then use (8) to get

$P(p_1) - P(p_0) = \dfrac{A}{3}(\beta^3 - \alpha^3) + \dfrac{B}{2}(\beta^2 - \alpha^2)$.

Then substitute $P(p_0) = y_0$ and $P(p_1) = y_1$ and we have

(9) $y_1 - y_0 = \dfrac{A}{3}(\beta^3 - \alpha^3) + \dfrac{B}{2}(\beta^2 - \alpha^2)$.

Use (7) to obtain

$P'(p_0) = A(p_0 - p_{\min})^2 + B(p_0 - p_{\min})$.

Then use (8) to get

$P'(p_0) = A\alpha^2 + B\alpha$.

Then substitute $P'(p_0) = d_0$ and we have

(10) $d_0 = A\alpha^2 + B\alpha$.

Finally, use (7) and write

$P'(p_1) = A(p_1 - p_{\min})^2 + B(p_1 - p_{\min})$.

Then use (8) to get

(11) $d_1 = A\beta^2 + B\beta$.

Now we will use the three nonlinear equations (9), (10), (11) listed below in (12). The order of determining the variables will be $A$, then $\beta$ (the variables $\alpha$ and $B$ will be eliminated, using $\alpha = \beta - h$).

(12)

$y_1 - y_0 = \dfrac{A}{3}(\beta^3 - \alpha^3) + \dfrac{B}{2}(\beta^2 - \alpha^2)$,

$d_0 = A\alpha^2 + B\alpha$,

$d_1 = A\beta^2 + B\beta$.

First, we will find $A$, which is accomplished by combining the equations in (12) as follows: multiply the first equation by $-6/h$, multiply the second and third equations by $3$, and add all three. Since $\beta^3 - \alpha^3 = h(\alpha^2 + \alpha\beta + \beta^2)$ and $\beta^2 - \alpha^2 = h(\alpha + \beta)$, the terms involving $B$ cancel:

$3d_0 + 3d_1 - \dfrac{6(y_1 - y_0)}{h} = A\left[3(\alpha^2 + \beta^2) - 2(\alpha^2 + \alpha\beta + \beta^2)\right]$.

Straightforward simplification yields $3d_0 + 3d_1 - \dfrac{6(y_1 - y_0)}{h} = A(\beta - \alpha)^2 = Ah^2$, therefore $A$ is given by

(13) $A = \dfrac{3(d_0 + d_1)}{h^2} - \dfrac{6(y_1 - y_0)}{h^3}$.

Second, we will eliminate $B$ by combining the equations in (12) as follows: multiply the second equation by $-1$ and add it to the third equation,

$d_1 - d_0 = A(\beta^2 - \alpha^2) + B(\beta - \alpha) = Ah(\alpha + \beta) + Bh$,

so that $B = \dfrac{d_1 - d_0}{h} - A(2\beta - h)$, where we have used $\alpha = \beta - h$. Substituting this expression for $B$ into the third equation $d_1 = A\beta^2 + B\beta$ gives a relation that can be rearranged in the form

$A\beta^2 - \left(Ah + \dfrac{d_1 - d_0}{h}\right)\beta + d_1 = 0$.

Now the quadratic formula can be used to solve for $\beta$:

$\beta = \dfrac{Ah + \dfrac{d_1 - d_0}{h} - \sqrt{\left(Ah + \dfrac{d_1 - d_0}{h}\right)^2 - 4Ad_1}}{2A}$.

It will take a bit of effort to simplify this equation into its computationally preferred form. Define

$z = \dfrac{3(y_0 - y_1)}{h} + d_0 + d_1 \quad \text{and} \quad w = \left(z^2 - d_0 d_1\right)^{1/2}$,

and note that (13) can be written $Ah^2 = 2z + d_0 + d_1$. Hence,

(14) $\beta = \dfrac{h(d_1 + w - z)}{d_1 - d_0 + 2w}$.

Therefore, the value of $p_{\min}$ is found by substituting the calculated value of $\beta$ in (14) into the formula $p_{\min} = p_1 - \beta$. To continue the iteration process, let $p_2 = p_{\min}$ and replace $p_0$ and $p_1$ with $p_1$ and $p_2$, respectively, in formulas (12), (13), and (14). The algorithm outlined above is not a bracketing method. Thus determining stopping criteria becomes more problematic. One technique would be to require that $|f'(p_{\min})| < \epsilon$, since $f'(p) = 0$.
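One step of the cubic method, using (13) and (14) in the $z$, $w$ form, might look like this in Python (the name `cubic_step` is an assumption; note that the square root is real whenever $f'(p_0) < 0 < f'(p_1)$, since then $z^2 - d_0 d_1 > 0$):

```python
import math

def cubic_step(f, df, p0, p1):
    """One cubic-interpolation step: returns pmin = p1 - beta,
    with beta computed from formula (14)."""
    h = p1 - p0
    y0, y1 = f(p0), f(p1)
    d0, d1 = df(p0), df(p1)
    z = 3 * (y0 - y1) / h + d0 + d1
    w = math.sqrt(z * z - d0 * d1)   # real whenever d0 < 0 < d1
    beta = h * (d1 + w - z) / (d1 - d0 + 2 * w)
    return p1 - beta
```

The step reproduces quadratics exactly: for $f(x) = (x-2)^2$ with $p_0 = 0$ and $p_1 = 3$, one application returns $p_{\min} = 2$.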
