
Power Method

We now describe the power method for computing the dominant eigenpair. Its extension, the inverse power method, is practical for finding any eigenvalue, provided that a good initial approximation is known. Some schemes for finding eigenvalues use other methods that converge quickly but have limited precision; the inverse power method is then invoked to refine the numerical values and gain full precision. To discuss the situation, we will need the following definitions.

If $\lambda_1$ is an eigenvalue of A that is larger in absolute value than any other eigenvalue, it is called the dominant eigenvalue. An eigenvector $V_1$ corresponding to $\lambda_1$ is called a dominant eigenvector.

An eigenvector V is said to be normalized if the coordinate of largest magnitude is equal to unity (i.e., the largest coordinate in the vector V is the number 1).

Remark:

It is easy to normalize an eigenvector $V = (v_1, v_2, \ldots, v_n)^T$ by forming a new vector $U = \frac{1}{c} V$, where $c = v_j$ and $|v_j| = \max_{1 \le i \le n} \{ |v_i| \}$.
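For example, the coordinate of largest magnitude of $V = (2, -4, 1)^T$ is $c = v_2 = -4$, so the normalized eigenvector is $U = -\frac{1}{4}(2, -4, 1)^T = (-\frac{1}{2}, 1, -\frac{1}{4})^T$.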

Theorem (Power Method):

Assume that the n×n matrix A has n distinct eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$ and that they are ordered in decreasing magnitude; that is, $|\lambda_1| > |\lambda_2| \ge |\lambda_3| \ge \cdots \ge |\lambda_n|$. If $X_0$ is chosen appropriately, then the sequences $\{X_k\}$ and $\{c_k\}$ generated recursively by

$$Y_k = A X_k$$
and
$$X_{k+1} = \frac{1}{c_{k+1}} Y_k,$$

where $c_{k+1} = y_j^{(k)}$ and $|y_j^{(k)}| = \max_{1 \le i \le n} \{ |y_i^{(k)}| \}$, will converge to the dominant eigenvector $V_1$ and eigenvalue $\lambda_1$, respectively. That is,

$$\lim_{k \to \infty} X_k = V_1 \quad \text{and} \quad \lim_{k \to \infty} c_k = \lambda_1.$$

Remark:

If $X_0$ is an eigenvector and $X_0 \ne V_1$, then some other starting vector must be chosen.
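The recursion in the theorem is short enough to state as code. Below is a minimal sketch in Python with NumPy; the function name, tolerance, and test matrix are our own choices, not part of the original module.

```python
import numpy as np

def power_method(A, x0, tol=1e-10, max_iter=500):
    """Power method: approximate the dominant eigenvalue c and a
    normalized dominant eigenvector x of A (largest coordinate = 1)."""
    x = np.asarray(x0, dtype=float)
    c_old = 0.0
    for _ in range(max_iter):
        y = A @ x                     # Y_k = A X_k
        c = y[np.argmax(np.abs(y))]   # c_{k+1}: coordinate of largest magnitude
        x = y / c                     # X_{k+1} = (1/c_{k+1}) Y_k
        if abs(c - c_old) < tol:
            break
        c_old = c
    return c, x

# The dominant eigenvalue of [[2, 1], [1, 2]] is 3 with eigenvector (1, 1):
A = np.array([[2.0, 1.0], [1.0, 2.0]])
print(power_method(A, [1.0, 0.0]))   # ~ (3.0, array([1., 1.]))
```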

Speed of Convergence

The iteration in the theorem uses the equation

$$A^k X_0 = b_1 \lambda_1^k V_1 + b_2 \lambda_2^k V_2 + \cdots + b_n \lambda_n^k V_n,$$

and the coefficient of $V_j$ that is used to form $X_k$ goes to zero in proportion to $\left( \lambda_j / \lambda_1 \right)^k$. Hence, the speed of convergence of $\{X_k\}$ to $V_1$ is governed by the terms $\left( \lambda_2 / \lambda_1 \right)^k$. Consequently, the rate of convergence is linear. Similarly, the convergence of the sequence of constants $\{c_k\}$ to $\lambda_1$ is linear. The Aitken $\Delta^2$ method can be used for any linearly convergent sequence $\{p_k\}$ to form a new sequence,

$$\hat{p}_k = p_k - \frac{(p_{k+1} - p_k)^2}{p_{k+2} - 2 p_{k+1} + p_k},$$

that converges faster. The Aitken $\Delta^2$ method can be adapted to speed up the convergence of the power method.
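Since the constants $c_k$ converge linearly, a short helper can apply this extrapolation to the three most recent estimates. The sketch below is our own illustration of the formula above, not part of the original module.

```python
def aitken_delta2(p0, p1, p2):
    """Aitken's Delta^2 extrapolation from three consecutive terms
    p_k, p_{k+1}, p_{k+2} of a linearly convergent sequence."""
    denom = p2 - 2.0 * p1 + p0
    if denom == 0.0:          # terms already (numerically) equal
        return p2
    return p0 - (p1 - p0) ** 2 / denom
```

Feeding successive triples of the power-method constants $c_k$ through this formula typically recovers extra correct digits once the iteration has settled into linear convergence.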

Shifted-Inverse Power Method

We will now discuss the shifted-inverse power method. It requires a good starting approximation for an eigenvalue, and then iteration is used to obtain a precise solution. Other procedures, such as the QR method and Givens' method, are used first to obtain the starting approximations. Cases involving complex eigenvalues, multiple eigenvalues, or two eigenvalues with the same or nearly the same magnitude cause computational difficulties and require more advanced methods. Our illustrations will focus on the case where the eigenvalues are distinct. The shifted-inverse power method is based on the following three results (the proofs are left as exercises).

Theorem (Shifting Eigenvalues):

Suppose that $\lambda, V$ is an eigenpair of A. If $\alpha$ is any constant, then $\lambda - \alpha, V$ is an eigenpair of the matrix $A - \alpha I$.

Theorem (Inverse Eigenvalues):

Suppose that $\lambda, V$ is an eigenpair of A. If $\lambda \ne 0$, then $\frac{1}{\lambda}, V$ is an eigenpair of the matrix $A^{-1}$.

Theorem (Shifted-Inverse Eigenvalues):

Suppose that $\lambda, V$ is an eigenpair of A. If $\alpha \ne \lambda$, then $\frac{1}{\lambda - \alpha}, V$ is an eigenpair of the matrix $(A - \alpha I)^{-1}$.
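To see why the third result is useful, consider a hypothetical illustration: if A has eigenvalues 4.2 and 3 and we choose the shift $\alpha = 4$, then $(A - 4I)^{-1}$ has eigenvalues $\frac{1}{4.2 - 4} = 5$ and $\frac{1}{3 - 4} = -1$. The eigenvalue nearest the shift has become strongly dominant, so the power method applied to $(A - \alpha I)^{-1}$ converges quickly to it.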

Theorem (Shifted-Inverse Power Method):

Assume that the n×n matrix A has distinct eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$ and consider the eigenvalue $\lambda_j$. Then a constant $\alpha$ can be chosen so that $\mu_1 = \frac{1}{\lambda_j - \alpha}$ is the dominant eigenvalue of $(A - \alpha I)^{-1}$. Furthermore, if $X_0$ is chosen appropriately, then the sequences $\{X_k\}$ and $\{c_k\}$ generated recursively by

$$Y_k = (A - \alpha I)^{-1} X_k$$
and
$$X_{k+1} = \frac{1}{c_{k+1}} Y_k,$$

where $c_{k+1}$ is the coordinate of $Y_k$ of largest magnitude, will converge to the dominant eigenpair $\mu_1, V_j$ of the matrix $(A - \alpha I)^{-1}$. Finally, the corresponding eigenvalue for the matrix A is given by the calculation

$$\lambda_j = \frac{1}{\mu_1} + \alpha.$$

Remark. For practical implementations of this theorem, a linear system solver is used to compute $Y_k$ in each step by solving the linear system $(A - \alpha I) Y_k = X_k$.
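Following that remark, here is a minimal Python/NumPy sketch; the function and variable names are our own, and a production version would factor $A - \alpha I$ once (e.g., an LU decomposition) and reuse the factors, since the matrix does not change between steps.

```python
import numpy as np

def shifted_inverse_power(A, alpha, x0, tol=1e-10, max_iter=500):
    """Shifted-inverse power method: approximate the eigenvalue of A
    closest to the shift alpha, plus a normalized eigenvector."""
    M = A - alpha * np.eye(A.shape[0])   # shifted matrix A - alpha*I
    x = np.asarray(x0, dtype=float)
    c_old = 0.0
    for _ in range(max_iter):
        y = np.linalg.solve(M, x)        # solve (A - alpha*I) Y_k = X_k
        c = y[np.argmax(np.abs(y))]      # c_{k+1}: coordinate of largest magnitude
        x = y / c                        # X_{k+1} = (1/c_{k+1}) Y_k
        if abs(c - c_old) < tol:
            break
        c_old = c
    return 1.0 / c + alpha, x            # lambda_j = 1/mu_1 + alpha

# Eigenvalue of [[2, 1], [1, 2]] nearest the shift 0.9 (exact value: 1):
A = np.array([[2.0, 1.0], [1.0, 2.0]])
print(shifted_inverse_power(A, 0.9, [1.0, 0.0]))   # ~ (1.0, array([ 1., -1.]))
```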
