
Frobenius Series Solution

Consider the second order linear differential equation

(1)  $a_2(x)\,y''(x) + a_1(x)\,y'(x) + a_0(x)\,y(x) = 0$.

Rewrite this equation in the form $y'' + P(x)\,y' + Q(x)\,y = 0$, using the substitutions $P(x) = \dfrac{a_1(x)}{a_2(x)}$ and $Q(x) = \dfrac{a_0(x)}{a_2(x)}$, so that the differential equation (1) becomes

(2)  $y''(x) + P(x)\,y'(x) + Q(x)\,y(x) = 0$.
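For a concrete illustration (not part of the original module), the short SymPy sketch below carries out this normalization for Bessel's equation $x^2 y'' + x y' + (x^2 - \nu^2)y = 0$, which also serves as the running example further down.

# A minimal SymPy sketch of rewriting a2*y'' + a1*y' + a0*y = 0 in the
# normalized form y'' + P(x)*y' + Q(x)*y = 0.  Bessel's equation of order
# nu is an illustrative choice, not taken from the text above.
import sympy as sp

x, nu = sp.symbols('x nu')

# coefficients of x^2*y'' + x*y' + (x^2 - nu^2)*y = 0
a2 = x**2
a1 = x
a0 = x**2 - nu**2

P = sp.cancel(a1 / a2)   # P(x) = 1/x
Q = sp.cancel(a0 / a2)   # Q(x) = (x**2 - nu**2)/x**2, i.e. 1 - nu^2/x^2

print(P)
print(Q)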

Definition (Analytic):

The functions $P(x)$ and $Q(x)$ are analytic at $x_0$ if they have Taylor series expansions with radii of convergence $\rho_1 > 0$ and $\rho_2 > 0$, respectively. That is,

$P(x) = \sum_{n=0}^{\infty} p_n (x - x_0)^n$, which converges for $|x - x_0| < \rho_1$,
and
$Q(x) = \sum_{n=0}^{\infty} q_n (x - x_0)^n$, which converges for $|x - x_0| < \rho_2$.

Definition (Ordinary Point):

If the functions $P(x)$ and $Q(x)$ are analytic at $x_0$, then the point $x_0$ is called an ordinary point of the differential equation

$y'' + P(x)\,y' + Q(x)\,y = 0$.

Otherwise, the point $x_0$ is called a singular point.

Definition (Regular Singular Point):

Assume that $x_0 = 0$ is a singular point of (1) and that $x\,P(x)$ and $x^2\,Q(x)$ are analytic at $x_0 = 0$.

They will have Maclaurin series expansions with radii of convergence $\rho_1 > 0$ and $\rho_2 > 0$, respectively. That is,

$x\,P(x) = \sum_{n=0}^{\infty} p_n x^n$, which converges for $|x| < \rho_1$,
and
$x^2\,Q(x) = \sum_{n=0}^{\infty} q_n x^n$, which converges for $|x| < \rho_2$.

Then the point $x_0 = 0$ is called a regular singular point of the differential equation (1).
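As a quick check (continuing the illustrative Bessel example, an assumption rather than part of the original text), one can verify that $x\,P(x)$ and $x^2\,Q(x)$ are analytic at $x_0 = 0$:

# Check that x = 0 is a regular singular point for the illustrative
# Bessel coefficients P(x) = 1/x and Q(x) = 1 - nu^2/x^2: x*P(x) and
# x^2*Q(x) must have ordinary Maclaurin expansions (no negative powers).
import sympy as sp

x, nu = sp.symbols('x nu')
P = 1/x
Q = 1 - nu**2/x**2

xP  = sp.expand(x * P)       # = 1, a polynomial, hence analytic at 0
x2Q = sp.expand(x**2 * Q)    # = x**2 - nu**2, also analytic at 0

print(xP, x2Q)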

Method of Frobenius:

This method is attributed to the German mathematician Ferdinand Georg Frobenius (1849-1917). Assume that $x_0 = 0$ is a regular singular point of the differential equation

$y''(x) + P(x)\,y'(x) + Q(x)\,y(x) = 0$.


A Frobenius series (generalized Laurent series) of the form

$y(x) = x^r \sum_{n=0}^{\infty} c_n x^n = \sum_{n=0}^{\infty} c_n x^{n+r}$

can be used to solve the differential equation. The parameter $r$ must be chosen so that, when the series is substituted into the D.E., the coefficient of the smallest power of $x$ is zero. This is called the indicial equation. Next, a recursive equation for the coefficients is obtained by setting the coefficient of $x^{n+r}$ equal to zero. Caveat: there are some instances when only one Frobenius solution can be constructed.
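As an illustration (again using Bessel's equation $x^2 y'' + x y' + (x^2 - \nu^2)y = 0$, an example not drawn from the text above), substituting $y(x) = \sum_{n=0}^{\infty} c_n x^{n+r}$ gives

$\sum_{n=0}^{\infty} c_n\bigl[(n+r)(n+r-1) + (n+r) - \nu^2\bigr]x^{n+r} + \sum_{n=0}^{\infty} c_n x^{n+r+2} = 0$.

The smallest power present is $x^{r}$ (the $n = 0$ term of the first sum), and its coefficient is $c_0\bigl[r(r-1) + r - \nu^2\bigr] = c_0(r^2 - \nu^2)$. Requiring this to vanish with $c_0 \neq 0$ yields the indicial equation $r^2 - \nu^2 = 0$, so $r = \pm\nu$.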

Definition (Indicial Equation):

The parameter $r$ in the Frobenius series is a root of the indicial equation

$r(r-1) + p_0\,r + q_0 = 0$.

Assuming that the singular point is $x_0 = 0$, we can calculate $p_0$ and $q_0$ as follows:

$p_0 = \lim_{x \to 0} x\,P(x)$
and
$q_0 = \lim_{x \to 0} x^2\,Q(x)$.
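The sketch below (still under the illustrative Bessel assumption used earlier) computes $p_0$ and $q_0$ by these limits and then solves the indicial equation:

# Compute p0 = lim_{x->0} x*P(x) and q0 = lim_{x->0} x^2*Q(x), then solve
# the indicial equation r*(r-1) + p0*r + q0 = 0 (illustrative Bessel case).
import sympy as sp

x, nu, r = sp.symbols('x nu r')
P = 1/x
Q = 1 - nu**2/x**2

p0 = sp.limit(sp.expand(x * P), x, 0)       # p0 = 1
q0 = sp.limit(sp.expand(x**2 * Q), x, 0)    # q0 = -nu^2

roots = sp.solve(sp.Eq(r*(r - 1) + p0*r + q0, 0), r)
print(p0, q0, roots)                        # 1, -nu**2, [-nu, nu]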

The Recursive Formulas:

For each root $r$ of the indicial equation, recursive formulas are used to calculate the unknown coefficients $c_n$. This is custom work for each root, because substituting a numerical value for $r$ makes the recursion easier to use.
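For instance (a sketch under the same Bessel assumption, here with $\nu = 0$ so that the indicial root is $r = 0$), substituting the series into the equation gives $c_1 = 0$ and the recursion $c_n = -c_{n-2}/n^2$ for $n \ge 2$, which a few lines of Python can iterate:

# Recursive computation of the Frobenius coefficients for the illustrative
# Bessel case nu = 0 with indicial root r = 0, where the recursion is
#   c_1 = 0  and  c_n = -c_{n-2} / n^2  for n >= 2.
from fractions import Fraction

c = [Fraction(1), Fraction(0)]       # c0 = 1 (free constant), c1 = 0
for n in range(2, 9):
    c.append(-c[n - 2] / Fraction(n**2))

print(c[:7])  # [1, 0, -1/4, 0, 1/64, 0, -1/2304] -- the series for J0(x)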
