Introduction to Chemical Engineering Processes/Numerical Root Finding Methods
Basics of Rootfinding
Rootfinding is the determination of solutions to single-variable equations or to systems of n equations in n unknowns (provided that such solutions exist). The basics of each method revolve around the determination of roots.
A root of a function in any number of variables is defined as a solution to the equation f(x) = 0. In order to use any of the numerical methods in this section, the equation should be put into this form. It is the form used for all of the methods below except the iterative method.
However, it is easy to put an equation into this form. If you start with an equation of the form
g(x) = h(x)
then subtracting h(x) from both sides will yield the required form, f(x) = g(x) - h(x) = 0. Do not forget to do this, even if there is only a constant on one side!
Example:
If you want to use the bisection method later in this section to find the solution of the equation ln(x) = -x, you should rewrite the equation as x + ln(x) = 0 so as to put it in the correct form.
Since any equation can be put into this form, the methods can potentially be applied to any function, though they work better for some functions than others.
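To make this concrete, here is a minimal Python sketch of what the f(x) = 0 form looks like in practice, using the example equation from later in this section; the function name f and the sample points are illustrative choices only:

```python
import math

# The example equation ln(x) = -x, moved into the standard form f(x) = 0
# by bringing everything to one side: f(x) = x + ln(x).
def f(x):
    return x + math.log(x)

# Every rootfinding method below only ever needs to evaluate f at a point.
print(f(0.5))   # negative
print(f(1.0))   # positive, so a root lies somewhere between 0.5 and 1
```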
Analytical vs. Numerical Solutions
An analytical solution to an equation or system is a solution which can be arrived at exactly using some mathematical tools. For example, consider the function f(x) = 2x - 2, graphed below.
The root of this function is, by convention, the value of x at which f(x) = 0, which is where the function crosses the x-axis. Hence, the root occurs when 2x - 2 = 0, which gives x = 1.
The answer x=1 is an analytical solution because through the use of algebra, we were able to come up with an exact answer.
On the other hand, attempting to solve an equation like x + ln(x) = 0
analytically is sure to lead to frustration because it is not possible with elementary methods. In such a case it is necessary to seek a numerical solution, in which guesses are made until the answer is "close enough", but you'll never know what the exact answer is.
All that the numerical methods discussed below do is give you a systematic way of guessing solutions so that you will be likely (and in some cases guaranteed) to get closer and closer to the true answer. The problem with numerical methods is that most are not guaranteed to work without a good enough initial guess. Therefore, it is valuable to try a few points by hand until you get somewhere close, and then start the numerical algorithm to get a more accurate answer. The methods below are presented roughly in order from the easiest to use to the more difficult but faster-converging algorithms.
Rootfinding Algorithms
Iterative Solution
Iterative solutions in their purest form work by rearranging the desired equation so that it is in the form
x = f(x)
Then, a value for x is guessed, and f(x) is calculated. The new value of x is then re-inserted into f(x), and the process is repeated until the value of x changes very little.
The following example illustrates this procedure.
Example:
Use an iterative solution to calculate the root of x + ln(x) = 0.
Solution: Solve the equation for x. Writing it as ln(x) = -x and exponentiating both sides gives x = e^(-x).
First we need to guess an x to get it started. Let's try x1 = 0.5.
Then we have:
x2 = e^(-0.5) = 0.6065
x3 = e^(-0.6065) = 0.5452
x4 = e^(-0.5452) = 0.5797
x5 = e^(-0.5797) = 0.5601
x6 = e^(-0.5601) = 0.5712
x7 = e^(-0.5712) = 0.5649
x8 = e^(-0.5649) = 0.5684
Thus, to two decimal places the root is x = 0.57. More iterations could be performed to get a more accurate answer if desired.
This method has some rather severe limitations as we'll see in this example:
Example:
Repeat the above but this time solve for x a different way. What do you find?
Solution: To illustrate the point, let's start with a guess of x1 = 0.55, which is already very close to the true root.
The other way to solve for x is the more obvious one: move ln(x) to the other side to get x = -ln(x). Iterating from x1 = 0.55 gives:
x2 = -ln(0.55) = 0.5978
x3 = -ln(0.5978) = 0.5144
x4 = -ln(0.5144) = 0.6648
x5 = -ln(0.6648) = 0.4083
x6 = -ln(0.4083) = 0.8958
x7 = -ln(0.8958) = 0.1100
x8 = -ln(0.1100) = 2.2073
Clearly, even though we started with a very good guess, the solution is diverging!
This example shows that the success of the iteration method strongly depends on the properties of the function on the right-hand side. In particular, it has to do with the size of the slope of that function at the root: if the magnitude of the slope is greater than 1, the method will not converge, and if it is less than 1 but close to it, the method converges only slowly. Therefore, it is generally undesirable to use this method on its own, though some more useful algorithms are based on it (which is why it is presented here).
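As a rough illustration of this behavior, the following minimal Python sketch applies the plain iterative procedure to both rearrangements of the example equation x + ln(x) = 0; the helper name fixed_point, the iteration count, and the starting guesses are illustrative choices, not part of the method itself:

```python
import math

def fixed_point(g, x0, iterations=7):
    """Plain iterative solution: repeatedly apply x_new = g(x_old)."""
    history = [x0]
    for _ in range(iterations):
        history.append(g(history[-1]))
    return history

# Two ways of solving x + ln(x) = 0 for x:
g_converging = lambda x: math.exp(-x)   # x = e^(-x), slope magnitude < 1 near the root
g_diverging = lambda x: -math.log(x)    # x = -ln(x), slope magnitude > 1 near the root

print(fixed_point(g_converging, 0.5))   # drifts toward the root near 0.567
print(fixed_point(g_diverging, 0.55))   # wanders farther and farther from it
```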
Iterative Solution with Weights
Although the iterative solution method has its downfalls, it can be drastically improved through the use of averaging. In this method, the equation is still solved for x in the form
x = f(x)
From the initial guess x1, the function f(x) is used to generate the second guess x2 = f(x1). However, rather than simply putting x2 into f(x), a weighted average of x1 and x2 is made:
x2* = a*x2 + (1 - a)*x1
The term a is called the weight. The most common value of the weight is one-half, in which case the next value to plug into f(x) is simply the average of x1 and x2:
x2* = (x1 + x2)/2
This new value is then plugged into f(x), averaged with the result, and this is repeated until convergence.
The following examples show that this method converges faster and with more reliability than normal iterative solution.
Example:
Find the root of x + ln(x) = 0 using the iterative method with a weight of a = 0.5.
Solution: Let's start with a guess of x1 = 0.5 like last time, and compare what happens with what happened under normal iteration. Using x = e^(-x) as before:
f(0.5) = 0.6065, so the next guess is (0.5 + 0.6065)/2 = 0.5533
f(0.5533) = 0.5750, so the next guess is (0.5533 + 0.5750)/2 = 0.5642
f(0.5642) = 0.5688, so the next guess is (0.5642 + 0.5688)/2 = 0.5665
Here, after only three evaluations of the function (which is usually the most time-consuming step), we have the root to the same accuracy as seven evaluations with the other method!
The method is not only faster-converging but also more stable, so that it can actually be used to solve the equation the other way as well.
Example:
Starting with an initial guess of x1 = 0.5 and using the rearrangement x = -ln(x) and the weighted iteration method with a = 0.5, find the root of the equation.
Solution: Starting with x1 = 0.5 we have:
f(0.5) = 0.6931, so the next guess is (0.5 + 0.6931)/2 = 0.5966
f(0.5966) = 0.5166, so the next guess is (0.5966 + 0.5166)/2 = 0.5566
f(0.5566) = 0.5860, so the next guess is (0.5566 + 0.5860)/2 = 0.5713
f(0.5713) = 0.5599, so the next guess is (0.5713 + 0.5599)/2 = 0.5656
f(0.5656) = 0.5699, so the next guess is (0.5656 + 0.5699)/2 = 0.5677
f(0.5677) = 0.5661, so the next guess is (0.5677 + 0.5661)/2 = 0.5669
Therefore, using the weighted iteration method, we can (slowly) converge to the solution in this case as well.
Notice that with regular iteration the result converged only when the equation was solved for x in a certain way. Using weighted iteration, it is possible to solve it either way and obtain a solution, but one way is clearly faster than the other. However, weighting will accelerate the algorithm in most cases and is relatively easy to implement, so it is a worthwhile method to use.
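The weighted update described above can be sketched in a few lines of Python; the helper name weighted_iteration, the tolerance, and the iteration cap below are illustrative choices:

```python
import math

def weighted_iteration(g, x0, a=0.5, tol=1e-4, max_iter=50):
    """Iterative solution with weighting: each new guess is the weighted
    average a*g(x) + (1 - a)*x of the current guess and g(current guess)."""
    x = x0
    for _ in range(max_iter):
        x_new = a * g(x) + (1 - a) * x
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# With weighting, both rearrangements of x + ln(x) = 0 converge from x = 0.5:
print(weighted_iteration(lambda x: math.exp(-x), 0.5))   # converges quickly
print(weighted_iteration(lambda x: -math.log(x), 0.5))   # converges, though more slowly
```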
Bisection Method
Let us consider an alternative approach to rootfinding. Consider a function f(x) whose roots we desire to find. If we let a second variable y = f(x), then y will (almost always) change sign between the left-hand side of the root and the right-hand side. This can be seen in the above picture of f(x) = 2x - 2, which changes from negative to the left of the root to positive to its right.
The bisection method works by taking the observation that a function changes sign between two points, and narrowing the interval in which the sign change occurs until the root contained within is tightly enclosed. This only works for a continuous function, in which there are no jumps or holes in the graph, but a large number of commonly-used functions are like this including logarithms (for positive numbers), sine and cosine, and polynomials.
As a more formalized explanation, consider a function f(x) that changes sign between x = a and x = b. We can narrow the interval by:
- Evaluating the function at the midpoint x = (a + b)/2
- Determining whether the function changes sign or not in each sub-interval
- If the continuous function changes sign in a sub-interval, that means it contains a root, so we keep the interval.
- If the function does not change sign in a sub-interval, we discard it. This can potentially cause problems if there are two roots in the interval, so the bisection method is not guaranteed to find ALL of the roots.
Though the bisection method is not guaranteed to find all roots, it is guaranteed to find at least one if the function values at the original endpoints had opposite signs.
The process above is repeated until you're as close as you like to the root.
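The steps above translate directly into a short loop. Here is a minimal Python sketch, assuming a function f that changes sign on [a, b]; the helper name bisect and the stopping tolerance are illustrative choices:

```python
import math

def bisect(f, a, b, tol=1e-4):
    """Bisection: repeatedly halve an interval [a, b] over which f changes sign."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while (b - a) / 2 > tol:
        mid = (a + b) / 2
        fmid = f(mid)
        if fa * fmid <= 0:      # sign change in the left half, so keep [a, mid]
            b, fb = mid, fmid
        else:                   # otherwise the sign change is in [mid, b]
            a, fa = mid, fmid
    return (a + b) / 2

# The example below: f(x) = x + ln(x) changes sign between 0.5 and 1.
print(bisect(lambda x: x + math.log(x), 0.5, 1.0))   # approaches the root near 0.567
```

The product test fa * fmid <= 0 is simply the "does the function change sign in this sub-interval" check from the list above.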
Example:
Find the root of x + ln(x) = 0 using the bisection method.
By plugging in some numbers, we can find that the function changes sign between x = 0.5 and x = 1. Therefore, since the function is continuous, there must be at least one root in this interval.
- First Interval: 0.5 ≤ x ≤ 1
- Midpoint: x = 0.75
- y at midpoint: y = 0.75 + ln(0.75) = 0.4623. Therefore, the sign changes between 0.5 and 0.75 and does not between 0.75 and 1.
- New Interval: 0.5 ≤ x ≤ 0.75
- Midpoint: x = 0.625
- y at midpoint: y = 0.625 + ln(0.625) = 0.1550, so the sign changes between 0.5 and 0.625.
- New Interval: 0.5 ≤ x ≤ 0.625
- Midpoint: x = 0.5625
- y at midpoint: y = 0.5625 + ln(0.5625) = -0.0129
We could keep doing this, but since this result is very close to the root, let's save ourselves some time and look for a number between 0.5625 and 0.625 that gives a positive function value.
- x Value: x = 0.57
- y value: y = 0.57 + ln(0.57) = 0.0079
Hence x lies between 0.5625 and 0.57 (since the function changes sign on this interval).
Note that convergence is slow but steady with this method. It is useful for refining crude approximations to something close enough to use a faster but non-guaranteed method such as weighted iteration.
Regula Falsi
The regula falsi method is similar to the bisection method. You must again start with two x values between which the function f(x) changes sign. However, this method attempts to find a better place than the midpoint of the interval at which to split it. It is based on the idea that, instead of arbitrarily using the midpoint of the interval as a guide, we should do one extra calculation to take the shape of the curve into account. This is done by finding the secant line between the two endpoints and using the root of that line as the splitting point.
More formally:
- Draw or calculate the equation for the line between the two endpoints (a,f(a)) and (b,f(b)).
- Find where this line intersects the x-axis (or when y = 0), giving you x = c
- Use this x value to evaluate the function, giving you f(c)
- The sub-intervals are then treated as in the bisection method. If the sign changes between f(a) and f(c), keep the interval; otherwise, throw it away. Do the same between f(c) and f(b).
- Repeat until you're at a desired accuracy.
Use these two formulas to solve for the secant line y = mx + B:
m = (f(b) - f(a)) / (b - a)
B = f(a) - m·a or B = f(b) - m·b (you can use either)
The regula falsi method is guaranteed to converge to a root, but it may or may not be faster than the bisection method, depending on how long it takes to calculate the slope of the line and the shape of the function.
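Combining the steps with the secant-line formulas, a minimal Python sketch of regula falsi might look like the following; the helper name regula_falsi and the fixed iteration count are illustrative choices:

```python
import math

def regula_falsi(f, a, b, iterations=5):
    """Regula falsi: split [a, b] at the root of the secant line through
    (a, f(a)) and (b, f(b)), keeping the sub-interval where f changes sign."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    c = a
    for _ in range(iterations):
        m = (fb - fa) / (b - a)    # slope of the secant line
        c = b - fb / m             # root of the secant line (where y = 0)
        fc = f(c)
        if fa * fc <= 0:           # sign change in [a, c]: keep that sub-interval
            b, fb = c, fc
        else:                      # sign change in [c, b]: keep that sub-interval
            a, fa = c, fc
    return c

# Two iterations on the example f(x) = x + ln(x) already land very near the root:
print(regula_falsi(lambda x: x + math.log(x), 0.5, 1.0, iterations=2))
```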
Example:
Find the root of x + ln(x) = 0 again, but this time use the regula falsi method.
Solution: Be careful with your bookkeeping with this one! It's more important to keep track of y values than it was with bisection, where all we cared about was the sign of the function, not its actual value.
For comparison with bisection, let's choose the same initial guesses: x = 0.5 and x = 1, for which y = -0.1931 and y = 1.
- First interval: 0.5 ≤ x ≤ 1
- Secant line: y = 2.3863x - 1.3863
- Root of secant line: x = 0.5809
- Function value at root: y = 0.0378
Notice that in this case, we can discard a MUCH larger interval than with the bisection method (which would use x = 0.75 as the splitting point).
- Second interval: 0.5 ≤ x ≤ 0.5809
- Secant line: y = 2.8537x - 1.6200
- Root of secant line: x = 0.5677
- Function value at root: y = 0.0015
We come up with practically the exact root after only two iterations!
In some cases, the regula falsi method will take longer than the bisection method, depending on the shape of the curve. However, it is generally worth trying for a couple of iterations due to the drastic speed increases possible.
Tangent Method (Newton's Method)
In this method, we attempt to find the root of a function y = f(x) using tangent lines to the function. It is similar to the regula falsi (secant line) method, except that it "cuts loose" from the old point and concentrates only on the new one, thus hoping to avoid hang-ups such as the one experienced in the regula falsi example above, where one endpoint stayed fixed throughout.
Since this class assumes students have not taken calculus, the tangent will be approximated by finding the equation of a line between two very close points, which are denoted x and x + Δx. The method works as follows:
- Choose one initial guess, x0.
- Evaluate the function f(x) at x = x0 and at x = x0 + Δx, where Δx is a small number. These yield two points on your (approximate) tangent line.
- Find the equation for the tangent line using the formulas given above.
- Find the root of this line. This is your new guess, x1.
- Repeat steps 2-4 until you're as close as you like to the root.
This method is not guaranteed to converge unless you start off with a good enough first guess, which is why the guaranteed methods are useful for generating one. However, since this method, when it converges, is much faster than any of the others, it is preferable to use if a suitable guess is available.
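Here is a minimal Python sketch of the tangent method as described, with the slope approximated from two nearby points rather than from a derivative; the helper name tangent_method, the choice of Δx, and the stopping tolerance are illustrative:

```python
import math

def tangent_method(f, x0, dx=0.01, tol=1e-5, max_iter=20):
    """Tangent (Newton-type) method with the slope estimated from the two
    nearby points (x, f(x)) and (x + dx, f(x + dx))."""
    x = x0
    for _ in range(max_iter):
        slope = (f(x + dx) - f(x)) / dx   # approximate slope of the tangent line
        x_new = x - f(x) / slope          # root of that approximate tangent line
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Starting from a reasonable guess, the example converges in only a few steps:
print(tangent_method(lambda x: x + math.log(x), 0.5))   # ~0.567
```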
Example:
Find the root of x + ln(x) = 0 using the tangent method.
Solution: Let's guess x0 = 0.5 for comparison with iteration, and choose Δx = 0.01.
- Tangent line: y = 2.9803x - 1.6833
- Root of tangent line: x = 0.5648
Already we're as accurate as any other method we've used so far after only one calculation!