
Gradient and Newton's Methods

Now we turn to the minimization of a function $f(\mathbf{x})$ of $n$ variables, where $\mathbf{x} = (x_1, x_2, \ldots, x_n)$ and the partial derivatives of $f$ are accessible.

Steepest Descent or Gradient Method

Let $z = f(\mathbf{x})$ be a function of $\mathbf{x} = (x_1, x_2, \ldots, x_n)$ such that $\frac{\partial f}{\partial x_k}$ exists for $k = 1, 2, \ldots, n$. The gradient of $f(\mathbf{x})$, denoted by $\nabla f(\mathbf{x})$, is the vector

(1)  $\nabla f(\mathbf{x}) = \left( \frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2}, \ldots, \frac{\partial f}{\partial x_n} \right)$.
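When analytic partial derivatives are awkward to code, the gradient in (1) can be approximated with central differences. A minimal sketch; the example function $f(x, y) = x^2 + 3y^2$ is a hypothetical illustration, not from the text:

```python
import numpy as np

def grad(f, x, h=1e-6):
    """Central-difference approximation to the gradient vector (1) at x."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for k in range(x.size):
        e = np.zeros_like(x)
        e[k] = h
        # partial f / partial x_k  ~  (f(x + h e_k) - f(x - h e_k)) / (2h)
        g[k] = (f(x + e) - f(x - e)) / (2 * h)
    return g

# Example (an assumption for illustration): f(x, y) = x^2 + 3y^2,
# whose exact gradient is (2x, 6y).
f = lambda x: x[0]**2 + 3 * x[1]**2
print(grad(f, [1.0, 2.0]))  # close to [2., 12.]
```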


Recall that the gradient vector in (1) points locally in the direction of the greatest rate of increase of $f(\mathbf{x})$. Hence $-\nabla f(\mathbf{x})$ points locally in the direction of greatest decrease of $f(\mathbf{x})$. Start at the point $P_0$ and search along the line through $P_0$ in the direction $\mathbf{S}_0 = -\nabla f(P_0)/\lVert \nabla f(P_0) \rVert$. You will arrive at a point $P_1$, where a local minimum occurs when the point $P$ is constrained to lie on the line $P = P_0 + t\,\mathbf{S}_0$. Since partial derivatives are accessible, the minimization process can be executed using either the quadratic or cubic approximation method.

Next we compute $\mathbf{S}_1 = -\nabla f(P_1)/\lVert \nabla f(P_1) \rVert$ and move in the search direction $\mathbf{S}_1$. You will come to a point $P_2$, where a local minimum occurs when $P$ is constrained to lie on the line $P = P_1 + t\,\mathbf{S}_1$. Iteration will produce a sequence $\{P_k\}$ of points with the property
$f(P_0) \ge f(P_1) \ge \cdots \ge f(P_k) \ge \cdots$

If $\lim_{k \to \infty} P_k = P$, then $f(P)$ will be a local minimum of $f(\mathbf{x})$.

Outline of the Gradient Method

Suppose that $P_k$ has been obtained.

(i) Evaluate the gradient vector $\nabla f(P_k)$.

(ii) Compute the search direction $\mathbf{S}_k = -\nabla f(P_k)/\lVert \nabla f(P_k) \rVert$.

(iii) Perform a single-parameter minimization of $\Phi(t) = f(P_k + t\,\mathbf{S}_k)$ on the interval $[0, b]$, where $b$ is large.
This will produce a value $t = h_{\min}$ where a local minimum of $\Phi(t)$ occurs. The relation $\Phi(h_{\min}) = f(P_k + h_{\min}\,\mathbf{S}_k)$
shows that this is a minimum for $f(\mathbf{x})$ along the search line $P = P_k + t\,\mathbf{S}_k$.

(iv) Construct the next point $P_{k+1} = P_k + h_{\min}\,\mathbf{S}_k$.

(v) Perform the termination test for minimization, i.e.
are the function values $f(P_k)$ and $f(P_{k+1})$ sufficiently close and the distance $\lVert P_{k+1} - P_k \rVert$ small enough?

Repeat the process.
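Steps (i) through (v) can be sketched in Python. In this sketch a golden-section search stands in for the quadratic or cubic approximation method mentioned above, and the test function with its minimum at $(1, -2)$ is a hypothetical example, not from the text:

```python
import numpy as np

def golden_min(phi, a, b, tol=1e-8):
    """Golden-section search for the minimizer of phi on [a, b]."""
    r = (np.sqrt(5) - 1) / 2
    c, d = b - r * (b - a), a + r * (b - a)
    while b - a > tol:
        if phi(c) < phi(d):
            b, d = d, c
            c = b - r * (b - a)
        else:
            a, c = c, d
            d = a + r * (b - a)
    return (a + b) / 2

def steepest_descent(f, grad_f, p0, b=10.0, tol=1e-8, max_iter=200):
    """Gradient method following the outline: directions S_k, line minima h_min."""
    p = np.asarray(p0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(p)                        # (i) gradient at P_k
        norm = np.linalg.norm(g)
        if norm < tol:                       # flat gradient: stop
            break
        s = -g / norm                        # (ii) search direction S_k
        h = golden_min(lambda t: f(p + t * s), 0.0, b)   # (iii) minimize Phi(t)
        p_next = p + h * s                   # (iv) next point P_{k+1}
        done = (abs(f(p_next) - f(p)) < tol  # (v) termination test
                and np.linalg.norm(p_next - p) < tol)
        p = p_next
        if done:
            break
    return p

# Hypothetical test function: f(x, y) = (x - 1)^2 + 2(y + 2)^2, minimum at (1, -2).
f = lambda p: (p[0] - 1)**2 + 2 * (p[1] + 2)**2
grad_f = lambda p: np.array([2 * (p[0] - 1), 4 * (p[1] + 2)])
print(steepest_descent(f, grad_f, [0.0, 0.0]))  # approximately [1., -2.]
```

Because successive search directions are orthogonal, the iterates zig-zag toward the minimum; the exact line search in step (iii) is what guarantees the monotone decrease $f(P_0) \ge f(P_1) \ge \cdots$.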

