Ordinary Differential Equations/Non Homogenous 1

Higher Order Differential Equations

Non-Homogeneous Equations

A non-homogeneous equation of constant coefficients is an equation of the form

$c_n\frac{d^ny}{dx^n} + c_{n-1}\frac{d^{n-1}y}{dx^{n-1}} + \cdots + c_1\frac{dy}{dx} + c_0y = f(x),$

where the $c_i$ are all constants and $f(x)$ is not 0.

Complementary Function

Every non-homogeneous equation has a complementary function (CF), which can be found by replacing f(x) with 0 and solving the resulting homogeneous equation. For example, the CF of

is the solution to the differential equation

Superposition Principle

The superposition principle makes solving a non-homogeneous equation fairly simple. The final solution is the sum of the complementary function and of a particular solution due to f(x), called the particular integral (PI). In other words,

General Solution = CF + PI
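
As a quick illustration of this principle, here is a minimal SymPy sketch. The equation $y'' + 5y' + 6y = 3$ is a hypothetical example chosen only for illustration, not one taken from the text: solving the full equation gives the CF, with its two arbitrary constants, plus a constant PI, while solving the homogeneous equation gives the CF alone.

    import sympy as sp

    x = sp.symbols('x')
    y = sp.Function('y')

    # hypothetical non-homogeneous equation: y'' + 5y' + 6y = 3
    ode = sp.Eq(y(x).diff(x, 2) + 5*y(x).diff(x) + 6*y(x), 3)
    print(sp.dsolve(ode, y(x)))        # CF (two arbitrary constants) + PI (a constant)

    # the corresponding homogeneous equation gives the CF alone
    homogeneous = sp.Eq(y(x).diff(x, 2) + 5*y(x).diff(x) + 6*y(x), 0)
    print(sp.dsolve(homogeneous, y(x)))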

Method of Undetermined Coefficients

The method of undetermined coefficients is a shortcut for finding the particular integral for certain f(x). It works only if repeated differentiation of f(x) eventually reaches 0, or if the derivatives eventually settle into a repeating pattern after a finite number of differentiations. If this is true, we know the form of the PI: the sum of f(x) and all of its distinct derivatives (those before we hit 0, or one full cycle of the pattern), each multiplied by an arbitrary constant. This is the trial PI. We can then plug the trial PI into the original equation and solve for those constants.

As we will see, we may need to alter this trial PI depending on the CF. If the trial PI contains a term that is also present in the CF, then the PI will be absorbed by the arbitrary constant in the CF, and therefore we will not have a full solution to the problem.

f(x) = Constant

The simplest case is when f(x) is constant, for example

.

The CF, obtained by solving the corresponding homogeneous equation, is

We now need to find a trial PI. When we differentiate the constant 3 we get zero, so the trial PI is simply 3 multiplied by an arbitrary constant, which is just another arbitrary constant, K.

We now set y equal to the PI and find the derivatives up to the order of the DE (here, the second).

We can now substitute these into the original DE:

By summing the CF and the PI, we can get the general solution to the DE:
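
The same steps can be checked symbolically. The sketch below assumes the hypothetical equation $y'' + 5y' + 6y = 3$ (the coefficients are illustrative, not taken from the text): it substitutes the constant trial PI y = K and solves for K.

    import sympy as sp

    x, K = sp.symbols('x K')

    # trial PI for a constant right-hand side: y = K
    y_trial = K
    # residual of the hypothetical equation y'' + 5y' + 6y - 3 after substituting the trial PI
    residual = sp.diff(y_trial, x, 2) + 5*sp.diff(y_trial, x) + 6*y_trial - 3
    print(sp.solve(residual, K))       # [1/2], so the PI is y = 1/2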

f(x) = Polynomial

This is the general case, which includes the example above. A polynomial of degree n reduces to 0 after exactly n+1 derivatives (one for a constant as above, three for a quadratic, and so on). So we know that our trial PI is

$y = A_nx^n + A_{n-1}x^{n-1} + \cdots + A_1x + A_0.$
As an example, let's take

First off, we know that our PI is

In order to plug in, we need to calculate the first two derivatives of this:

Plugging in we get:

Solving gives us

So, our PI is

However, we need to get the complementary function as well. To get that, set f(x) to 0 and solve just like we did in the last section. For this equation, the roots of the auxiliary polynomial are -3 and -2. So that makes our CF

$y = Ae^{-2x} + Be^{-3x}.$

Summing gives us our general solution:
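
Here is a hedged SymPy sketch of the same procedure for a polynomial right-hand side, using the hypothetical equation $y'' + 5y' + 6y = x^2$ (the right-hand side is an illustrative choice, not the one from the worked example). The trial PI is a full quadratic, and equating the coefficient of each power of x to zero determines the constants.

    import sympy as sp

    x, A, B, C = sp.symbols('x A B C')

    y_trial = A*x**2 + B*x + C                 # trial PI: same degree as f(x)
    residual = sp.expand(sp.diff(y_trial, x, 2) + 5*sp.diff(y_trial, x)
                         + 6*y_trial - x**2)

    # each power of x in the residual must vanish separately
    eqs = [residual.coeff(x, n) for n in range(3)]
    print(sp.solve(eqs, [A, B, C]))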

f(x) = Power of e

Powers of e never reduce to 0, but their derivatives do fall into a pattern, and they do so after only one differentiation, since the derivative of $e^{px}$ is just $pe^{px}$. So we know that our trial PI is

$y = Ke^{px},$

where K is our constant and p is the power of e given in the original DE.

For example, consider

.

We make our trial PI

.

Differentiating this gives

Plugging in, we get

That's the particular integral. We found the CF earlier. So the general solution is
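
A SymPy sketch of the exponential case, assuming the hypothetical equation $y'' + 5y' + 6y = e^{7x}$ (the exponent and the right-hand side are illustrative only): the trial PI $y = Ke^{7x}$ reduces the problem to a single linear equation for K.

    import sympy as sp

    x, K = sp.symbols('x K')

    y_trial = K*sp.exp(7*x)                    # trial PI: K e^{px} with the same p
    residual = sp.diff(y_trial, x, 2) + 5*sp.diff(y_trial, x) + 6*y_trial - sp.exp(7*x)
    print(sp.solve(residual, K))               # a single value of K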


f(x) = Polynomial × Power of e

A polynomial multiplied by a power of e also falls into a repeating pattern, since differentiating $x^ne^{px}$ only ever produces terms of the form $x^ke^{px}$ with $k \le n$ (where n is the highest power of x in the polynomial). So we know that our trial PI is

$y = (C_nx^n + C_{n-1}x^{n-1} + \cdots + C_1x + C_0)e^{px},$

where the $C_i$ are constants and p is the power of e in the equation.

For example, let's try

We can now set our PI to

.

Plugging in, we get

That's the particular solution. We found the homogeneous solution earlier. So the total solution is
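
A SymPy sketch for a polynomial times an exponential, assuming the hypothetical equation $y'' + 5y' + 6y = xe^{7x}$ (again, the numbers are illustrative). The trial PI is $(Ax + B)e^{7x}$; stripping the common exponential factor leaves a polynomial identity in x that determines A and B.

    import sympy as sp

    x, A, B = sp.symbols('x A B')

    y_trial = (A*x + B)*sp.exp(7*x)            # trial PI: (Ax + B) e^{px}
    residual = sp.diff(y_trial, x, 2) + 5*sp.diff(y_trial, x) + 6*y_trial - x*sp.exp(7*x)

    # every term carries e^{7x}, so divide it out and match powers of x
    poly_part = sp.expand(sp.simplify(residual / sp.exp(7*x)))
    eqs = [poly_part.coeff(x, n) for n in (0, 1)]
    print(sp.solve(eqs, [A, B]))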

f(x) = Trigonometric Functions

Trig functions don't reduce to 0 either, but they do repeat in a loop of 2 derivatives: the derivative of sin x is cos x, and the derivative of cos x is -sin x. So we put our trial PI as

$y = A\sin(px) + B\cos(px),$

where A and B are constants and p is the term inside the trig function in the original DE.

For example, let's try

We set our PI to

Plugging in, we get

That's the particular solution. We found the homogeneous solution earlier. So the total solution is
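
A SymPy sketch of the trigonometric case, assuming the hypothetical equation $y'' + 5y' + 6y = \sin 2x$ (values chosen only for illustration). Both sin and cos must appear in the trial PI, and matching their coefficients gives two linear equations for A and B.

    import sympy as sp

    x, A, B = sp.symbols('x A B')

    y_trial = A*sp.sin(2*x) + B*sp.cos(2*x)    # trial PI: both sin and cos are needed
    residual = sp.expand(sp.diff(y_trial, x, 2) + 5*sp.diff(y_trial, x)
                         + 6*y_trial - sp.sin(2*x))

    # the coefficients of sin(2x) and cos(2x) must each vanish
    eqs = [residual.coeff(sp.sin(2*x)), residual.coeff(sp.cos(2*x))]
    print(sp.solve(eqs, [A, B]))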

f(x) = A Mixture

Not only is each of the above solvable by the method of undetermined coefficients, but so is a sum of one or more of the above. This is because a sum of functions whose derivatives either go to 0 or loop must itself have derivatives that go to 0 or loop. The trial PI is then simply the sum of the trial PIs of the individual functions.

If the Trial PI shares terms with the CF

When dealing with an exponential $e^{px}$ whose exponent p is a root of the auxiliary polynomial, or sometimes with polynomials (if the homogeneous equation has roots of 0) as f(x), you may get the same term in both the trial PI and the CF. If this happens, the PI will be absorbed into the arbitrary constants of the CF, which will not result in a full solution. To overcome this, multiply the affected terms by x as many times as needed until they no longer appear in the CF.

As an example, let's take

First, solve the homogeneous equation to get the CF.

The auxiliary polynomial is

Find the roots of the auxiliary polynomial. In this case, they are

The CF is

Now for the particular integral. Since f(x) is a polynomial of degree 1, we would normally use Ax+B. However, since both a term in x and a constant appear in the CF, we need to multiply by x² and use

$y = Ax^3 + Bx^2.$

We solve this as we normally do for A and B.

So the total solution is
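
A SymPy sketch of this situation, assuming the hypothetical equation $y'' = x$ (chosen because its CF, $C_1 + C_2x$, contains both a constant and a term in x, just as above). The unmodified trial Ax + B would be absorbed by the CF, so the sketch uses the x²-multiplied trial instead.

    import sympy as sp

    x, A, B = sp.symbols('x A B')

    y_trial = A*x**3 + B*x**2                  # (Ax + B) multiplied by x**2
    residual = sp.expand(sp.diff(y_trial, x, 2) - x)
    eqs = [residual.coeff(x, n) for n in (0, 1)]
    print(sp.solve(eqs, [A, B]))               # here A = 1/6 and B = 0, giving the PI x**3/6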

Variation of Parameters

Variation of parameters is a method for finding a particular solution to the equation if the general solution for the corresponding homogeneous equation is known. We will now derive this general method.

We already know the general solution of the homogeneous equation: it is of the form $y = Ay_1(x) + By_2(x)$, where $y_1$ and $y_2$ are independent solutions and A and B are arbitrary constants. We will look for a particular solution of the non-homogeneous equation of the form $y_p = uy_1 + vy_2$, with u and v functions of the independent variable x. Differentiating this we get

$y_p' = u'y_1 + uy_1' + v'y_2 + vy_2'.$

Now notice that there are currently two unknown functions, u and v, but only one condition on them, namely that $y_p = uy_1 + vy_2$ satisfies the differential equation. We now impose another condition, that

$u'y_1 + v'y_2 = 0.$

This means that $y_p''$ will contain no second derivatives of u and v. Thus these new parameters u and v (hence the name "variation of parameters") will be the solutions of first order differential equations, which can be solved. Let us finish the problem:

$y_p' = uy_1' + vy_2'$

and

$y_p'' = u'y_1' + uy_1'' + v'y_2' + vy_2''.$

Substituting these into the differential equation (written with leading coefficient 1 as $y'' + py' + qy = f(x)$) gives

$u'y_1' + v'y_2' + u(y_1'' + py_1' + qy_1) + v(y_2'' + py_2' + qy_2) = f(x),$

and therefore

$u'y_1' + v'y_2' = f(x),$

where the last step follows from the fact that $y_1$ and $y_2$ are solutions of the homogeneous equation.

Therefore, we have $u'y_1 + v'y_2 = 0$ and $u'y_1' + v'y_2' = f(x)$. Multiplying the first equation by $y_2'$ and the second by $-y_2$ and adding gives

$u' = \frac{-f(x)\,y_2}{y_1y_2' - y_2y_1'}.$

A similar procedure gives

$v' = \frac{f(x)\,y_1}{y_1y_2' - y_2y_1'}.$

Now it is only necessary to evaluate these expressions and integrate them with respect to x to get the functions u and v, and then we have our particular solution $y_p = uy_1 + vy_2$. The general solution to the differential equation is therefore $y = Ay_1 + By_2 + y_p$.

Note that the main difficulty with this method is that the integrals involved are often extremely complicated. If the integral does not work out well, it is best to use the method of undetermined coefficients instead.

The quantity $y_1y_2' - y_2y_1'$ that appears in the denominator of the expressions for $u'$ and $v'$ is called the Wronskian of $y_1$ and $y_2$.
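
The formulas for $u'$ and $v'$ translate directly into a short SymPy sketch. The equation $y'' + y = \tan x$ below is a hypothetical example (chosen because tan x is not covered by the method of undetermined coefficients); $y_1 = \cos x$ and $y_2 = \sin x$ are its homogeneous solutions.

    import sympy as sp

    x = sp.symbols('x')

    y1, y2 = sp.cos(x), sp.sin(x)              # solutions of the homogeneous equation y'' + y = 0
    f = sp.tan(x)                              # hypothetical right-hand side

    W = y1*sp.diff(y2, x) - y2*sp.diff(y1, x)  # the Wronskian
    u = sp.integrate(-f*y2/W, x)
    v = sp.integrate(f*y1/W, x)

    y_p = sp.simplify(u*y1 + v*y2)             # particular solution
    print(y_p)
    # substituting back into y'' + y - f; this should simplify to 0
    print(sp.simplify(sp.diff(y_p, x, 2) + y_p - f))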


Laplace Transforms

The Laplace transform is a very useful tool for solving non-homogeneous initial-value problems. It allows us to reduce the problem of solving the differential equation to that of solving an algebraic equation. We begin with some setup.

Definition of the Laplace Transform

The Laplace transform of a function $f(t)$ is defined as

$\mathcal{L}\{f(t)\} = F(s) = \int_0^\infty e^{-st}f(t)\,dt.$

This can also be written as $F(s) = \mathcal{L}\{f(t)\}$. When writing this on paper, you may write a cursive capital "L" and it will be generally understood. There is also an inverse Laplace transform $f(t) = \mathcal{L}^{-1}\{F(s)\}$, but calculating it directly requires an integration with respect to a complex variable. Luckily, it is frequently possible to find $f(t)$ without resorting to this integration, using a variety of tricks which will be described later. However, it is first necessary to prove some facts about the Laplace transform.
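
As a quick check of the definition, the sketch below computes one transform both from the integral and with SymPy's built-in laplace_transform (the function $f(t) = e^{-2t}$ is an arbitrary illustrative choice).

    import sympy as sp

    t, s = sp.symbols('t s', positive=True)

    f = sp.exp(-2*t)                                            # illustrative function
    print(sp.integrate(sp.exp(-s*t)*f, (t, 0, sp.oo)))          # from the definition: 1/(s + 2)
    print(sp.laplace_transform(f, t, s, noconds=True))          # built-in transform: 1/(s + 2)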

Two Properties of the Laplace Transform

Property 1. The Laplace transform is a linear operator; that is, $\mathcal{L}\{af(t) + bg(t)\} = a\mathcal{L}\{f(t)\} + b\mathcal{L}\{g(t)\}$ for any constants a and b.

The proof of this property follows immediately from the definition of the Laplace transform and is left to the reader.

Property 2. If $F(s) = \mathcal{L}\{f(t)\}$, then $\mathcal{L}\{f'(t)\} = sF(s) - f(0)$.

Proof: Integrating by parts, and assuming that $e^{-st}f(t) \to 0$ as $t \to \infty$,

$\mathcal{L}\{f'(t)\} = \int_0^\infty e^{-st}f'(t)\,dt = \left[e^{-st}f(t)\right]_0^\infty + s\int_0^\infty e^{-st}f(t)\,dt = sF(s) - f(0).$

It is property 2 that makes the Laplace transform a useful tool for solving differential equations. As a corollary of property 2, note that $\mathcal{L}\{f''(t)\} = s\mathcal{L}\{f'(t)\} - f'(0) = s^2F(s) - sf(0) - f'(0)$.
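
The sketch below verifies Property 2 on a concrete function, $f(t) = \sin 3t$ (an arbitrary illustrative choice): the transform of f' equals sF(s) - f(0).

    import sympy as sp

    t, s = sp.symbols('t s', positive=True)
    f = sp.sin(3*t)                                                 # illustrative function

    F = sp.laplace_transform(f, t, s, noconds=True)
    lhs = sp.laplace_transform(sp.diff(f, t), t, s, noconds=True)   # L{f'(t)}
    rhs = s*F - f.subs(t, 0)                                        # s F(s) - f(0)
    print(sp.simplify(lhs - rhs))                                   # 0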

Laplace Transforms of Simple Functions

The Laplace transforms of some simple functions, which will be needed later, are:

$\mathcal{L}\{1\} = \frac{1}{s}$, $\mathcal{L}\{e^{at}\} = \frac{1}{s-a}$, $\mathcal{L}\{\sin at\} = \frac{a}{s^2+a^2}$, $\mathcal{L}\{\cos at\} = \frac{s}{s^2+a^2}$.

The last two can be easily calculated using Euler's formula $e^{iat} = \cos at + i\sin at$.

In order to find more Laplace transforms, in particular the transform of $t^n$, we will derive two more properties of the transform.

Two More Properties of the Laplace Transform

Property 3. If $F(s) = \mathcal{L}\{f(t)\}$, then $\mathcal{L}\{tf(t)\} = -F'(s)$.

Property 4. If $F(s) = \mathcal{L}\{f(t)\}$, then $\mathcal{L}\{e^{at}f(t)\} = F(s-a)$.

The proofs are straightforward and are left to the reader.

Now we can easily see that $\mathcal{L}\{t\} = -\frac{d}{ds}\left(\frac{1}{s}\right) = \frac{1}{s^2}$. Applying Property 3 multiple times, we can find that $\mathcal{L}\{t^n\} = \frac{n!}{s^{n+1}}$. At last we are ready to solve a differential equation using Laplace transforms.
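
The $n!/s^{n+1}$ formula is easy to spot-check with SymPy for a few small values of n:

    import sympy as sp

    t, s = sp.symbols('t s', positive=True)

    for n in range(4):
        # built-in transform of t**n versus the claimed formula n!/s**(n+1)
        print(n, sp.laplace_transform(t**n, t, s, noconds=True), sp.factorial(n)/s**(n + 1))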

Using Laplace Transforms to Solve Non-Homogeneous Initial-Value Problems

In general, we solve a second-order linear non-homogeneous initial-value problem as follows: First, we take the Laplace transform of both sides. This immediately reduces the differential equation to an algebraic one. We then solve for $\mathcal{L}\{y\}$. Finally, we take the inverse transform of both sides to find $y(t)$.

Let's begin by using this technique to solve the problem

.

We begin by taking the Laplace transform of both sides and using property 1 (linearity):

(using the initial conditions)

Now we isolate $\mathcal{L}\{y\}$:

Here we have factored in preparation for the next step. We now attempt to take the inverse transform of both sides; in order to do this, we will have to break down the right hand side into partial fractions.

The first two fractions imply that . Setting gives , while setting gives . The other three fractions similarly give and . Therefore:

And finally we can take the inverse transform (by inspection, of course) to get

.
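
Here is a hedged SymPy sketch of the same procedure on a different, hypothetical problem (the equation and initial values below are illustrative, not the ones in the example above): $y'' + 3y' + 2y = e^t$ with $y(0) = y'(0) = 0$. Property 2 and its corollary turn the equation into $(s^2 + 3s + 2)\mathcal{L}\{y\} = \frac{1}{s-1}$, and the rest is partial fractions and an inverse transform.

    import sympy as sp

    t, s = sp.symbols('t s', positive=True)

    # transformed hypothetical equation, already solved for L{y}
    Y = 1/((s - 1)*(s**2 + 3*s + 2))

    print(sp.apart(Y, s))                          # partial fraction decomposition
    print(sp.inverse_laplace_transform(Y, s, t))   # y(t) (SymPy includes a Heaviside(t) factor)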

Convolution

The convolution is a method of combining two functions to yield a third function. The convolution has applications in probability, statistics, and many other fields because it represents the "overlap" between the functions. We are not concerned with this property here; for us the convolution is useful as a quick method for calculating inverse Laplace transforms.

Definition. The convolution of f and g is defined as $(f*g)(t) = \int_0^t f(\tau)\,g(t-\tau)\,d\tau$.

The convolution has several useful properties, which are stated below:

Property 1. $f*(g*h) = (f*g)*h$ (Associativity)

Property 2. $f*g = g*f$ (Commutativity)

Property 3. $f*(g+h) = f*g + f*h$ (Distribution over addition)

We now prove the result that makes the convolution useful for calculating inverse Laplace transforms.

Theorem. $\mathcal{L}\{f*g\} = \mathcal{L}\{f\}\,\mathcal{L}\{g\}$.

Proof:

$\mathcal{L}\{f*g\} = \int_0^\infty e^{-st}\int_0^t f(\tau)g(t-\tau)\,d\tau\,dt = \int_0^\infty\int_\tau^\infty e^{-st}f(\tau)g(t-\tau)\,dt\,d\tau$ (changing order of integration).

Now let $u = t - \tau$; then

$\mathcal{L}\{f*g\} = \int_0^\infty\int_0^\infty e^{-s(u+\tau)}f(\tau)g(u)\,du\,d\tau = \int_0^\infty e^{-s\tau}f(\tau)\,d\tau\int_0^\infty e^{-su}g(u)\,du = \mathcal{L}\{f\}\,\mathcal{L}\{g\}.$

Let's solve another differential equation:

$y'' + y = \sin t, \qquad y(0) = 0,\quad y'(0) = 0.$

Taking Laplace transforms of both sides gives

$s^2\mathcal{L}\{y\} + \mathcal{L}\{y\} = \frac{1}{s^2+1}, \qquad\text{so}\qquad \mathcal{L}\{y\} = \frac{1}{(s^2+1)^2}.$

We now have to find $y$. To do this, we notice that $\frac{1}{(s^2+1)^2} = \frac{1}{s^2+1}\cdot\frac{1}{s^2+1} = \mathcal{L}\{\sin t\}\,\mathcal{L}\{\sin t\}$, so $y = \sin t * \sin t$ by the Theorem above. Thus, the solution to our differential equation is the convolution of sine with itself. We proceed to calculate this:

$(\sin*\sin)(t) = \int_0^t \sin\tau\,\sin(t-\tau)\,d\tau = \frac{1}{2}\int_0^t\left[\cos(2\tau-t) - \cos t\right]d\tau = \frac{1}{2}\sin t - \frac{1}{2}t\cos t.$

Therefore, the solution to the original equation is

$y(t) = \frac{1}{2}\left(\sin t - t\cos t\right).$
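
The convolution integral above is easy to check with SymPy (the comparison value $(\sin t - t\cos t)/2$ is the result just derived):

    import sympy as sp

    t, tau = sp.symbols('t tau', positive=True)

    conv = sp.integrate(sp.sin(tau)*sp.sin(t - tau), (tau, 0, t))   # (sin * sin)(t)
    print(sp.simplify(conv - (sp.sin(t) - t*sp.cos(t))/2))          # 0

    # the convolution satisfies y'' + y = sin(t); this should simplify to 0
    print(sp.simplify(sp.diff(conv, t, 2) + conv - sp.sin(t)))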
