Consider the system.
|
Show that if the parameter is chosen sufficiently small, then this system has a unique solution within some rectangular region.
|
The system of equations may be expressed in matrix notation as a fixed point problem $x = G(x)$, where the map $G$ depends on the small parameter $\varepsilon$. The Jacobian of $G$, $J_G$, is computed using partial derivatives.

If $\varepsilon$ is sufficiently small and $x$, $y$ are restricted to a bounded rectangular region $R$, then every entry of $J_G$ is bounded and $G$ maps $R$ into itself.

From the mean value theorem, for $u, v \in R$, there exists $\xi$ on the segment joining $u$ and $v$ such that

$$\|G(u) - G(v)\| \leq \|J_G(\xi)\| \, \|u - v\|.$$

Since $J_G$ is bounded in the region $R$, a sufficiently small $\varepsilon$ gives

$$\|J_G(\xi)\| \leq k < 1.$$

Therefore, $G$ is a contraction, and from the contraction mapping theorem there exists a unique fixed point (solution) in the rectangular region $R$.
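As a quick numerical illustration of the argument, here is a minimal Python sketch of fixed point iteration on a stand-in contraction map. The exam's actual system is not reproduced above, so the map `G` below, with parameter `eps`, is an assumption chosen only so that $\|J_G\| = O(\varepsilon) < 1$ on any bounded rectangle.

```python
import numpy as np

# Fixed point iteration for a stand-in contraction x = G(x), with
#   G(x, y) = (eps*cos(y), eps*sin(x)).
# This is NOT the exam's system (not reproduced above); it is a model
# chosen so that ||J_G|| = O(eps) < 1 on any bounded rectangle.

def G(v, eps):
    x, y = v
    return np.array([eps * np.cos(y), eps * np.sin(x)])

def fixed_point(G, v0, eps, tol=1e-12, max_iter=100):
    v = np.asarray(v0, dtype=float)
    for k in range(max_iter):
        v_new = G(v, eps)
        if np.linalg.norm(v_new - v) < tol:
            return v_new, k + 1
        v = v_new
    raise RuntimeError("fixed point iteration did not converge")

sol, iters = fixed_point(G, [0.0, 0.0], eps=0.1)
print(sol, iters)  # converges in a few iterations since the map contracts
```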
Derive a fixed point iteration scheme for solving the system and show that it converges.
|
To solve this problem, we can use Newton's method. In fact, we want to find the zeros of

$$F(x) = x - G(x).$$

The Jacobian of $F$, $J_F = I - J_G$, is computed using partial derivatives. Then Newton's method for solving this nonlinear system of equations is given by

$$x^{(k+1)} = x^{(k)} - J_F(x^{(k)})^{-1} F(x^{(k)}),$$

where each step requires solving a linear system with coefficient matrix $J_F(x^{(k)})$.

We want to show that $J_F$ is a Lipschitz function, i.e.

$$\|J_F(u) - J_F(v)\| \leq \gamma \|u - v\|$$

for some constant $\gamma$. In fact, $J_F(u) - J_F(v) = J_G(v) - J_G(u)$, and now, using that the entries of $J_G$ are continuously differentiable (hence Lipschitz) on the bounded region $R$, we obtain the bound above.

Since $G$ is a contraction, the spectral radius of the Jacobian of $G$ is less than 1, i.e. $\rho(J_G) < 1$. On the other hand, we know that the eigenvalues of $J_F = I - J_G$ are $1 - \lambda$, where $\lambda$ ranges over the eigenvalues of $J_G$. Then it follows that $|1 - \lambda| \geq 1 - \rho(J_G) > 0$, or equivalently, $J_F$ is invertible.

Since $J_F^{-1}$ exists,

$$J_F^{-1} = \frac{1}{\det J_F} \operatorname{adj}(J_F).$$

Given a bounded region (bounded $x$, $y$), each entry of the above matrix is bounded. Therefore the norm $\|J_F^{-1}\|$ is bounded, since $\det J_F$ is bounded away from zero and $\operatorname{adj}(J_F)$ is bounded.

Then, given a good enough initial approximation $x^{(0)}$, Newton's method is at least quadratically convergent, i.e.,

$$\|x^{(k+1)} - x^*\| \leq C \|x^{(k)} - x^*\|^2,$$

where $x^*$ denotes the exact solution.
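Continuing the illustration, here is a hedged Python sketch of Newton's method for $F(v) = v - G(v)$, using the same stand-in map `G` as in the previous sketch (again an assumption, not the exam's system):

```python
import numpy as np

# Newton's method for F(v) = v - G(v) = 0, with the same stand-in map
# G(x, y) = (eps*cos(y), eps*sin(x)) as in the fixed point sketch above.

def F(v, eps):
    x, y = v
    return np.array([x - eps * np.cos(y), y - eps * np.sin(x)])

def J_F(v, eps):
    x, y = v
    return np.array([[1.0, eps * np.sin(y)],      # J_F = I - J_G
                     [-eps * np.cos(x), 1.0]])

def newton(F, J, v0, eps, tol=1e-14, max_iter=20):
    v = np.asarray(v0, dtype=float)
    for k in range(max_iter):
        step = np.linalg.solve(J(v, eps), F(v, eps))  # one linear solve per step
        v = v - step
        if np.linalg.norm(step) < tol:
            return v, k + 1
    raise RuntimeError("Newton's method did not converge")

sol, iters = newton(F, J_F, [0.5, 0.5], eps=0.1)
print(sol, iters)  # quadratic convergence: very few iterations
```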
Outline the derivation of the Adams-Bashforth methods for the numerical solution of the initial value problem $y' = f(x, y)$, $y(x_0) = y_0$.
|
We want to solve the following initial value problem:

$$y'(x) = f(x, y(x)), \qquad y(x_0) = y_0.$$

First, we integrate this expression over $[x_i, x_{i+1}]$ to obtain

$$y(x_{i+1}) = y(x_i) + \int_{x_i}^{x_{i+1}} f(x, y(x)) \, dx.$$

To approximate the integral on the right hand side, we approximate its integrand $f$ using an interpolation polynomial of degree $s - 1$ at the previously computed points $x_{i-s+1}, \ldots, x_{i-1}, x_i$. This idea generates the Adams-Bashforth methods:

$$y_{i+1} = y_i + \int_{x_i}^{x_{i+1}} \sum_{k=0}^{s-1} f_{i-k} \, \ell_{i-k}(x) \, dx,$$

where $y_j$ denotes the approximated solution at $x_j$, $f_j = f(x_j, y_j)$, and $\ell_j$ denotes the associated Lagrange polynomial.
Derive the Adams-Bashforth formula

$$y_{i+1} = y_i + h\left[-\frac{1}{2} f(x_{i-1}, y_{i-1}) + \frac{3}{2} f(x_i, y_i)\right] \qquad (1)$$
|
From the previous part we have

$$y_{i+1} = y_i + \int_{x_i}^{x_{i+1}} f \, dx,$$

where

$$\int_{x_i}^{x_{i+1}} f \, dx \approx \int_{x_i}^{x_{i+1}} \left[\frac{x - x_i}{x_{i-1} - x_i} f_{i-1} + \frac{x - x_{i-1}}{x_i - x_{i-1}} f_i\right] dx.$$

Then if we let $x = x_i + sh$, where $h$ is the fixed step size, we get

$$h \int_0^1 \left[\frac{sh}{-h} f_{i-1} + \frac{(1+s)h}{h} f_i\right] ds = h\left[-\frac{1}{2} f_{i-1} + \frac{3}{2} f_i\right].$$

So we have the Adams-Bashforth method as desired:

$$y_{i+1} = y_i + h\left[-\frac{1}{2} f(x_{i-1}, y_{i-1}) + \frac{3}{2} f(x_i, y_i)\right].$$
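A minimal Python sketch of method (1). A two-step method needs one extra starting value, so the explicit Euler starting step and the test problem $y' = -y$ below are assumptions for illustration only:

```python
import numpy as np

# Two-step Adams-Bashforth, method (1):
#   y_{i+1} = y_i + h*(3/2*f(x_i, y_i) - 1/2*f(x_{i-1}, y_{i-1})).
# The starting value y_1 comes from one explicit Euler step; its O(h^2)
# local error does not spoil the overall second order accuracy.

def adams_bashforth2(f, x0, y0, h, n_steps):
    x = x0 + h * np.arange(n_steps + 1)
    y = np.empty(n_steps + 1)
    y[0] = y0
    y[1] = y[0] + h * f(x[0], y[0])  # Euler starter
    for i in range(1, n_steps):
        y[i + 1] = y[i] + h * (1.5 * f(x[i], y[i]) - 0.5 * f(x[i - 1], y[i - 1]))
    return x, y

# Model problem y' = -y, y(0) = 1, with exact solution exp(-x).
x, y = adams_bashforth2(lambda x, y: -y, 0.0, 1.0, h=0.01, n_steps=100)
print(y[-1], np.exp(-1.0))  # agree up to an O(h^2) error
```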
Analyze the method (1). To be specific, find the local truncation error, prove convergence and find the order of convergence.
|
Note that $y' = f(x, y)$. Also, denote the uniform step size $x_{i+1} - x_i$ by $h$. Hence, the given method (1) may be written as

$$y_{i+1} = y_i + h\left[\frac{3}{2} y_i' - \frac{1}{2} y_{i-1}'\right].$$

Expanding $y(x_{i+1})$ about $x_i$, we get

$$y(x_{i+1}) = y(x_i) + h y'(x_i) + \frac{h^2}{2} y''(x_i) + \frac{h^3}{6} y'''(x_i) + O(h^4).$$

Also, expanding $y'(x_{i-1})$ about $x_i$ gives

$$y'(x_{i-1}) = y'(x_i) - h y''(x_i) + \frac{h^2}{2} y'''(x_i) + O(h^3).$$

Substituting these expansions, the local truncation error is

$$y(x_{i+1}) - y(x_i) - h\left[\frac{3}{2} y'(x_i) - \frac{1}{2} y'(x_{i-1})\right] = \frac{5}{12} h^3 y'''(x_i) + O(h^4).$$

A method is convergent if and only if it is both stable and consistent. It is easy to show that the method is zero stable: its first characteristic polynomial is $\rho(z) = z^2 - z$, whose roots $z = 0$ and $z = 1$ satisfy the root condition. So the method is stable. The truncation error is of order 2, and it tends to zero as $h$ tends to zero, so the method is consistent. By the Dahlquist equivalence theorem, consistency + stability = convergence. The order of convergence is 2.
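An empirical order check, reusing the `adams_bashforth2` sketch above on the same model problem: halving $h$ should reduce the error at $x = 1$ by roughly a factor of 4.

```python
# Empirical order check for method (1), reusing adams_bashforth2 above.
for n in (50, 100, 200, 400):
    x, y = adams_bashforth2(lambda x, y: -y, 0.0, 1.0, h=1.0 / n, n_steps=n)
    print(n, abs(y[-1] - np.exp(-1.0)))
# The error at x = 1 drops by ~4x each time h is halved: second order.
```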
Consider the problem (2).
|
Give a variational formulation of (2), i.e. express (2) as:

Find $u \in H$ such that $B(u, v) = F(v)$ for all $v \in H$. $\qquad (3)$

Define the space $H$, the bilinear form $B$ and the linear functional $F$, and state the relation between (2) and (3).
|
Multiplying (2) by a test function $v$ and using integration by parts, we obtain the weak form or variational form associated with the problem (2), which reads as follows: Find $u \in H$ such that

$$B(u, v) = F(v)$$

for all $v \in H$, where $H$ is the space of admissible functions (square integrable functions whose first derivatives are square integrable and which satisfy the essential boundary conditions of (2)), $B$ is the bilinear form generated by the integration by parts, and $F$ is the linear functional coming from the data of (2). Any solution of (2) solves (3), and any solution of (3) that is sufficiently smooth satisfies (2) pointwise.
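For concreteness, here is how the pieces look if (2) were the model two point boundary value problem $-u'' = f$ on $(0,1)$ with $u(0) = u(1) = 0$. This is an assumption for illustration only, since the original statement of (2) is not reproduced above. Multiplying by $v \in H_0^1(0,1)$ and integrating by parts, the boundary terms vanish and

$$\int_0^1 u' v' \, dx = \int_0^1 f v \, dx,$$

so that $H = H_0^1(0,1)$, $B(u, v) = \int_0^1 u' v' \, dx$, and $F(v) = \int_0^1 f v \, dx$. The Python sketches below use this same model problem.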
Let $x_0 < x_1 < \cdots < x_n$ be a mesh on the domain with mesh size $h = \max_i (x_i - x_{i-1})$, and let

$$V_h = \{ v \in H : v \text{ is linear on each subinterval } [x_{i-1}, x_i] \}.$$

Define the finite element approximation, based on the approximation space $V_h$. What can be said about the error in the Sobolev norm $\| \cdot \|_{H^1}$?
|
For our basis of $V_h$, we use the set of hat functions $\{\varphi_j\}$, i.e., for each interior node $x_j$, $\varphi_j$ is the piecewise linear function with

$$\varphi_j(x_i) = \delta_{ij}.$$

Since $\{\varphi_j\}$ is a basis for $V_h$, and each $v_h \in V_h$ is determined by its nodal values, we have

$$v_h = \sum_j v_h(x_j) \, \varphi_j.$$
Now, we can write the discrete problem: Find $u_h \in V_h$ such that

$$B(u_h, v) = F(v)$$

for all $v \in V_h$. If we consider that $\{\varphi_i\}$ is a basis of $V_h$ and use the linearity of the bilinear form $B$ and the functional $F$, we obtain the equivalent problem:

Find $u_h = \sum_j \xi_j \varphi_j \in V_h$ such that

$$B(u_h, \varphi_i) = F(\varphi_i) \quad \text{for each } i.$$

This last problem can be formulated as a matrix problem as follows:

Find $\xi$ such that

$$K \xi = b,$$

where $K_{ij} = B(\varphi_j, \varphi_i)$ and $b_i = F(\varphi_i)$.
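A minimal Python sketch of assembling and solving $K\xi = b$ for the assumed model problem $-u'' = f$, $u(0) = u(1) = 0$, on a uniform mesh. The load integrals are approximated by $b_i \approx h f(x_i)$, a further simplification for brevity:

```python
import numpy as np

# Assemble and solve K xi = b for the assumed model problem
#   -u'' = f on (0,1), u(0) = u(1) = 0,
# with hat functions on a uniform mesh of n elements.

def fem_1d_poisson(f, n):
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    # K_ij = B(phi_j, phi_i) = int phi_j' phi_i' dx: tridiagonal,
    # 2/h on the diagonal and -1/h on the off-diagonals.
    K = (2.0 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)) / h
    # b_i = F(phi_i) = int f phi_i dx, approximated by h * f(x_i).
    b = h * f(x[1:-1])
    xi = np.linalg.solve(K, b)
    u_h = np.zeros(n + 1)   # nodal values; boundary values are zero
    u_h[1:-1] = xi
    return x, u_h

# Example with f = pi^2 sin(pi x), whose exact solution is u = sin(pi x).
x, u_h = fem_1d_poisson(lambda x: np.pi**2 * np.sin(np.pi * x), n=16)
print(np.max(np.abs(u_h - np.sin(np.pi * x))))  # small nodal error
```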
In general terms, we can use Cea's Lemma to obtain

$$\|u - u_h\|_{H^1} \leq C \inf_{v_h \in V_h} \|u - v_h\|_{H^1}.$$

In particular, we can consider $v_h$ to be the Lagrange interpolant of $u$, which we denote by $u_I$. Then,

$$\|u - u_h\|_{H^1} \leq C \|u - u_I\|_{H^1} \leq C h \|u''\|_{L^2},$$

so the error in the $H^1$ norm is of order $h$. It is easy to prove that for this one dimensional problem the finite element solution is nodally exact. Then it coincides with the Lagrange interpolant at the nodes, and we have the following pointwise estimate:

$$\|u - u_h\|_{L^\infty} \leq C h^2 \|u''\|_{L^\infty}.$$
Derive the estimate for $\|e\|_{L^2}$, the error $e = u - u_h$ in $L^2$. Hint: Let $w$ solve the dual problem

(#): $-w'' = e$ in the domain, with homogeneous Dirichlet boundary conditions.

We characterize $w$ variationally as

$$B(v, w) = (v, e) \quad \text{for all } v \in H.$$

Let $v = e$ to get

$$\|e\|_{L^2}^2 = (e, e) = B(e, w). \qquad (4)$$

Use formula (4) to estimate $\|e\|_{L^2}$.
|
$$\|e\|_{L^2}^2 = (e, e) = B(e, w).$$

Hence, by the Galerkin orthogonality of the error, $B(e, v_h) = 0$ for all $v_h \in V_h$, and in particular $B(e, w_I) = 0$, where $w_I$ denotes the Lagrange interpolant of $w$. Thus,

$$\|e\|_{L^2}^2 = B(e, w - w_I) \leq C \|e\|_{H^1} \|w - w_I\|_{H^1}.$$

From the interpolation estimate, we have

$$\|w - w_I\|_{H^1} \leq C h \|w''\|_{L^2}.$$

Then,

$$\|e\|_{L^2}^2 \leq C h \|e\|_{H^1} \|w''\|_{L^2}.$$

Finally, from (#), we have that $\|w''\|_{L^2} = \|e\|_{L^2}$. Then,

$$\|e\|_{L^2}^2 \leq C h \|e\|_{H^1} \|e\|_{L^2},$$

or equivalently,

$$\|e\|_{L^2} \leq C h \|e\|_{H^1} \leq C h^2 \|u''\|_{L^2}.$$
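An empirical check of the $O(h^2)$ rate in $L^2$, reusing the `fem_1d_poisson` sketch above on the assumed model problem:

```python
# L2 convergence check for the model problem -u'' = pi^2 sin(pi x).
u_exact = lambda x: np.sin(np.pi * x)
f = lambda x: np.pi**2 * np.sin(np.pi * x)
xs = np.linspace(0.0, 1.0, 2001)           # fine grid for the L2 norm
dx = xs[1] - xs[0]
for n in (8, 16, 32, 64):
    x, u_h = fem_1d_poisson(f, n)
    u_h_fine = np.interp(xs, x, u_h)       # evaluate the piecewise linear u_h
    err = np.sqrt(dx * np.sum((u_h_fine - u_exact(xs))**2))
    print(n, err)
# The error drops by ~4x per mesh refinement, consistent with O(h^2).
```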
Suppose $\{\varphi_i\}$ is a basis for $V_h$. Show that

$$B(u - u_h, u - u_h) = B(u, u) - \xi^T K \xi, \qquad (5)$$

where $K$ is the stiffness matrix and $\xi$ is the coefficient vector of $u_h$.
|
We know that

$$B(u - u_h, u - u_h) = B(u, u) - 2B(u, u_h) + B(u_h, u_h) = B(u, u) - B(u_h, u_h),$$

where the substitution in the last step comes from the orthogonality of the error: $B(u - u_h, u_h) = 0$, so $B(u, u_h) = B(u_h, u_h)$.

Now, writing $u_h = \sum_j \xi_j \varphi_j$,

$$B(u_h, u_h) = \sum_{i,j} \xi_i \xi_j B(\varphi_j, \varphi_i) = \xi^T K \xi.$$

Then, we have obtained

$$B(u - u_h, u - u_h) = B(u, u) - \xi^T K \xi.$$
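A quick numerical check of (5) for the assumed model problem, reusing `fem_1d_poisson` from above; there $B(v, w) = \int_0^1 v' w' \, dx$, so $B(u_h, u_h)$ can be computed directly from the slopes of $u_h$:

```python
# Numerical check of (5) for the model problem, n = 16 elements.
n = 16
h = 1.0 / n
K = (2.0 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)) / h
x, u_h = fem_1d_poisson(lambda x: np.pi**2 * np.sin(np.pi * x), n)
xi = u_h[1:-1]                      # coefficient vector of u_h
slopes = np.diff(u_h) / h           # u_h' is constant on each element
B_uh_uh = h * np.sum(slopes**2)     # B(u_h, u_h) = int (u_h')^2 dx
print(B_uh_uh, xi @ K @ xi)         # the two values coincide
```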