Eigenvalues and Eigenvectors

We will now review some ideas from linear algebra. Proofs of the theorems are either left as exercises or can be found in any standard text on linear algebra. We know how to solve n linear equations in n unknowns. It was assumed that the determinant of the matrix was nonzero and hence that the solution was unique. In the case of a homogeneous system AX = 0, if det(A) ≠ 0, the unique solution is the trivial solution X = 0. If det(A) = 0, there exist nontrivial solutions to AX = 0. Suppose that det(A) = 0, and consider solutions to the homogeneous linear system

AX = 0.

A homogeneous system of equations always has the trivial solution X = 0. Gaussian elimination can be used to obtain the reduced row echelon form, which yields a set of relationships among the variables and, from them, a nontrivial solution.
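As a concrete illustration, a nontrivial solution of AX = 0 for a singular matrix can be extracted numerically. This sketch uses NumPy (an assumption; the text itself works by hand) and reads the null space off the singular value decomposition rather than the row echelon form:

```python
import numpy as np

# A singular matrix: row 3 = row 1 + row 2, so det(A) = 0.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [5.0, 7.0, 9.0]])

# Right singular vectors belonging to (numerically) zero singular
# values span the null space of A.
U, s, Vt = np.linalg.svd(A)
X = Vt[s < 1e-10][0]          # one nontrivial solution of A X = 0

print(np.allclose(A @ X, 0))  # True
```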

Definition (Linearly Independent):

The vectors V₁, V₂, …, Vₘ are said to be linearly independent if the equation

c₁V₁ + c₂V₂ + ⋯ + cₘVₘ = 0

implies that c₁ = c₂ = ⋯ = cₘ = 0. If the vectors are not linearly independent, they are said to be linearly dependent.

Two vectors in ℝ² are linearly independent if and only if they are not parallel. Three vectors in ℝ³ are linearly independent if and only if they do not lie in the same plane.

Definition (Linearly Dependent):

The vectors V₁, V₂, …, Vₘ are said to be linearly dependent if there exists a set of numbers c₁, c₂, …, cₘ, not all zero, such that

c₁V₁ + c₂V₂ + ⋯ + cₘVₘ = 0.

Theorem:

The vectors V₁, V₂, …, Vₘ are linearly dependent if and only if at least one of them is a linear combination of the others.
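This theorem suggests a practical test: stack the vectors as columns of a matrix and compare its rank with the number of vectors. A small sketch using NumPy (our assumption, not the text's tool), with an invented dependent example:

```python
import numpy as np

# Three vectors in R^3 with V3 = V1 + 2*V2, so the set is dependent.
V1 = np.array([1.0, 0.0, 1.0])
V2 = np.array([0.0, 1.0, 1.0])
V3 = V1 + 2 * V2

M = np.column_stack([V1, V2, V3])
# The vectors are independent exactly when rank(M) equals their count.
print(np.linalg.matrix_rank(M))  # 2 (< 3), so the set is dependent
```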

A desirable feature for a vector space is the ability to express each vector as a linear combination of vectors chosen from a small subset of vectors. This motivates the next definition.

Definition (Basis):

Suppose that S = {V₁, V₂, …, Vₘ} is a set of m vectors in ℝⁿ. The set S is called a basis for ℝⁿ if for every vector X in ℝⁿ there exists a unique set of scalars c₁, c₂, …, cₘ so that X can be expressed as the linear combination

X = c₁V₁ + c₂V₂ + ⋯ + cₘVₘ.

Theorem:

In ℝⁿ, any set of n linearly independent vectors forms a basis of ℝⁿ. Each vector X in ℝⁿ is uniquely expressed as a linear combination of the basis vectors.
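Finding the unique scalars amounts to solving a linear system whose coefficient columns are the basis vectors. A minimal NumPy sketch (the basis B and vector X below are invented examples):

```python
import numpy as np

# The columns of B form a basis for R^2 (they are linearly independent).
B = np.column_stack([[1.0, 1.0], [1.0, -1.0]])
X = np.array([3.0, 1.0])

# The unique coordinates c satisfy B c = X.
c = np.linalg.solve(B, X)
print(c)                      # [2. 1.]
print(np.allclose(B @ c, X))  # True
```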

Theorem:

Let V₁, V₂, …, Vₘ be vectors in ℝⁿ.

(i) If m > n, then the vectors are linearly dependent.

(ii) If m = n, then the vectors are linearly dependent if and only if det(A) = 0, where A = [V₁ V₂ ⋯ Vₙ] is the matrix whose columns are the given vectors.
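Part (ii) can be checked directly: place the n vectors as the columns of A and test whether det(A) vanishes. A NumPy sketch with an invented dependent example:

```python
import numpy as np

# Three vectors in R^3 as columns; column 3 = column 1 + column 2.
A = np.column_stack([[1.0, 2.0, 3.0],
                     [0.0, 1.0, 4.0],
                     [1.0, 3.0, 7.0]])

# det(A) = 0 is equivalent to linear dependence of the columns.
print(np.isclose(np.linalg.det(A), 0.0))  # True, so the vectors are dependent
```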

Applications of mathematics sometimes encounter the following questions: What are the singularities of (A − λI)⁻¹, where λ is a parameter? What is the behavior of the sequence of vectors {AʲX₀}? What are the geometric features of a linear transformation? Solutions for problems in many different disciplines, such as economics, engineering, and physics, can involve ideas related to these equations. The theory of eigenvalues and eigenvectors is powerful enough to help solve these otherwise intractable problems.
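The question about the sequence AʲX₀ is the seed of the power method: repeated multiplication lines X₀ up with the eigenvector of the dominant eigenvalue. A brief NumPy sketch (the matrix is an invented example):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # eigenvalues 3 and 1
X = np.array([1.0, 0.0])

# Repeatedly apply A, normalizing so the entries stay bounded.
for _ in range(50):
    X = A @ X
    X = X / np.linalg.norm(X)

# X has converged to the dominant eigenvector, a multiple of (1, 1).
print(np.round(X, 6))  # [0.707107 0.707107]
```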

Let A be a square matrix of dimension n × n and let X be a vector of dimension n. The product Y = AX can be viewed as a linear transformation from n-dimensional space into itself. We want to find scalars λ for which there exists a nonzero vector X such that

(1)
AX = λX;

that is, the linear transformation T(X) = AX maps X onto the multiple λX. When this occurs, we call X an eigenvector that corresponds to the eigenvalue λ, and together they form the eigenpair (λ, X) for A. In general, the scalar λ and vector X can involve complex numbers. For simplicity, most of our illustrations will involve real calculations. However, the techniques are easily extended to the complex case. The n × n identity matrix I can be used to write equation (1) in the form

(2)
(A − λI)X = 0.

The significance of equation (2) is that the product of the matrix A − λI and the nonzero vector X is the zero vector! The theory of homogeneous linear systems says that (2) has nontrivial solutions if and only if the matrix A − λI is singular, that is,

(3)
det(A − λI) = 0.

This determinant can be written in the form

(4)
| a₁₁ − λ     a₁₂       ⋯      a₁ₙ    |
| a₂₁       a₂₂ − λ     ⋯      a₂ₙ    |
|   ⋮           ⋮        ⋱       ⋮     |  =  0
| aₙ₁         aₙ₂       ⋯    aₙₙ − λ  |

Definition (Characteristic Polynomial):

When the determinant in (4) is expanded, it becomes a polynomial of degree n, which is called the characteristic polynomial

(5)
p(λ) = det(A − λI)
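NumPy can expand this determinant for a concrete matrix: np.poly(A) returns the coefficients of det(λI − A), which has the same roots as det(A − λI) (the two differ only by the factor (−1)ⁿ). A sketch with an invented 2 × 2 matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Coefficients of det(lambda*I - A), highest power first:
# here lambda^2 - 4*lambda + 3.
coeffs = np.poly(A)
print(coeffs)                     # [ 1. -4.  3.]

# Its roots are the eigenvalues of A.
print(np.sort(np.roots(coeffs)))  # [1. 3.]
```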

There exist exactly n roots (not necessarily distinct) of a polynomial of degree n. Each root λ can be substituted into equation (2) to obtain an underdetermined system of equations that has a corresponding nontrivial solution vector X. If λ is real, a real eigenvector X can be constructed. For emphasis, we state the following definitions.

Definition (Eigenvalue):

If A is an n × n real matrix, then its n eigenvalues λ₁, λ₂, …, λₙ are the real and complex roots of the characteristic polynomial

p(λ) = det(A − λI).

Definition (Eigenvector):

If λ is an eigenvalue of A and the nonzero vector V has the property that

AV = λV,

then V is called an eigenvector of A corresponding to the eigenvalue λ. Together, the eigenvalue λ and eigenvector V are called an eigenpair (λ, V).
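Numerically, eigenpairs can be obtained with np.linalg.eig, which returns the eigenvalues together with a matrix whose columns are the corresponding eigenvectors; each pair can then be verified against the definition AV = λV (the matrix below is an invented example):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Columns of eigvecs are the eigenvectors, in the same order as eigvals.
eigvals, eigvecs = np.linalg.eig(A)

for lam, V in zip(eigvals, eigvecs.T):
    print(np.allclose(A @ V, lam * V))  # True for each eigenpair
```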

The characteristic polynomial p(λ) can be factored in the form

p(λ) = (λ₁ − λ)^(m₁) (λ₂ − λ)^(m₂) ⋯ (λₖ − λ)^(mₖ),

where mⱼ is called the multiplicity of the eigenvalue λⱼ. The sum of the multiplicities of all eigenvalues is n; that is,

n = m₁ + m₂ + ⋯ + mₖ.
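For example (an invented matrix), λ = 2 is the only eigenvalue of the 2 × 2 matrix below; its multiplicity is m₁ = 2, which equals n:

```python
import numpy as np

# Upper triangular, so the eigenvalues sit on the diagonal:
# lambda = 2 with multiplicity 2 (and 2 = n).
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])

print(np.linalg.eigvals(A))  # [2. 2.]
```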

The next three results concern the existence of eigenvectors.

Theorem (Corresponding Eigenvectors):

Suppose that A is an n × n square matrix.
(a) For each distinct eigenvalue λⱼ there exists at least one eigenvector V corresponding to λⱼ.
(b) If λⱼ has multiplicity r, then there exist at most r linearly independent eigenvectors V₁, V₂, …, Vᵣ that correspond to λⱼ.

Theorem (Linearly Independent Eigenvectors):

Suppose that A is an n × n square matrix. If the eigenvalues λ₁, λ₂, …, λₖ are distinct and (λⱼ, Vⱼ), for j = 1, 2, …, k, are the k eigenpairs, then {V₁, V₂, …, Vₖ} is a set of k linearly independent vectors.

Theorem (Complete Set of Eigenvectors):

Suppose that A is an n × n square matrix. If the eigenvalues of A are all distinct, then there exist n linearly independent eigenvectors V₁, V₂, …, Vₙ.
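This theorem can be checked numerically: for a matrix with distinct eigenvalues, the eigenvector matrix returned by np.linalg.eig has full rank (the triangular matrix below is an invented example with eigenvalues 1, 3, 5):

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])   # distinct eigenvalues 1, 3, 5

eigvals, V = np.linalg.eig(A)
# With n distinct eigenvalues, the n eigenvector columns of V
# are linearly independent, so V has full rank.
print(np.linalg.matrix_rank(V))  # 3
```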

Finding eigenpairs by hand computation is usually done in the following manner. The eigenvalue λⱼ of multiplicity r is substituted into the equation

(A − λⱼI)X = 0.

Then Gaussian elimination can be performed to obtain the row reduced echelon form, which will involve n − k equations in n unknowns, where 1 ≤ k ≤ r. Hence there are k free variables to choose. The free variables can be selected in a judicious manner to produce k linearly independent solution vectors V₁, V₂, …, Vₖ that correspond to λⱼ.
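The hand procedure can be mirrored numerically: substitute λ into A − λI and compute a basis of its null space (here via the SVD rather than row reduction; the symmetric matrix is an invented example whose eigenvalue λ = 2 has multiplicity r = 2):

```python
import numpy as np

A = np.array([[3.0, 1.0, 1.0],
              [1.0, 3.0, 1.0],
              [1.0, 1.0, 3.0]])
lam = 2.0

# (A - 2I) is the all-ones matrix of rank 1, so the system has
# k = 2 free variables and a 2-dimensional null space.
M = A - lam * np.eye(3)
U, s, Vt = np.linalg.svd(M)
eigvecs = Vt[s < 1e-10]       # k = 2 independent eigenvectors (as rows)

print(eigvecs.shape[0])                             # 2
print(np.allclose(A @ eigvecs.T, lam * eigvecs.T))  # True
```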
