Given $f \in C[a,b]$, let $p_n^*$ denote the best uniform approximation to $f$ among polynomials of degree at most $n$, i.e.

$$\max_{x\in[a,b]}|f(x)-p_n(x)|$$

is minimized by the choice $p_n = p_n^*$. Let $e(x) = f(x) - p_n^*(x)$. Prove that there are at least two points $x_1, x_2 \in [a,b]$ such that

$$|e(x_1)| = |e(x_2)| = \max_{x\in[a,b]}|f(x)-p_n^*(x)|$$

and $e(x_1) = -e(x_2)$.
Solution:
Since both $f$ and $p_n^*$ are continuous, so is the function

$$e = f - p_n^*,$$

and therefore $e$ attains both a maximum and a minimum on $[a,b]$.

Let

$$M = \max_{x\in[a,b]} e(x) \quad\text{and}\quad m = \min_{x\in[a,b]} e(x).$$

If $M \neq -m$, then there exists a polynomial

$$q_n = p_n^* + \frac{M+m}{2}$$

(a vertical shift of the supposed best approximation $p_n^*$) which is a better approximation to $f$: since $m - \frac{M+m}{2} \leq e(x) - \frac{M+m}{2} \leq M - \frac{M+m}{2}$ for all $x \in [a,b]$,

$$\max_{x\in[a,b]}|f(x) - q_n(x)| \leq \frac{M-m}{2} \leq \max(M,\,-m) = \max_{x\in[a,b]}|e(x)|.$$

Equality holds if and only if $M = -m$; otherwise $q_n$ would beat $p_n^*$, a contradiction. Therefore, $M = -m$. Taking $x_1$ with $e(x_1) = M$ and $x_2$ with $e(x_2) = m$ gives $|e(x_1)| = |e(x_2)| = \max_{x\in[a,b]}|f(x)-p_n^*(x)|$ and $e(x_1) = -e(x_2)$.
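A minimal numeric sketch of the two-point claim in the degree-$n=0$ case, where the best constant is exactly the vertical shift $(M+m)/2$ from the proof (NumPy assumed; the choice $f(x) = e^x$ on $[0,1]$ is illustrative, not from the problem):

```python
import numpy as np

# For degree n = 0 the best uniform approximation to f on [a, b] is the
# constant (max f + min f) / 2, and the error attains +E and -E -- the
# two points x1, x2 of the statement.  f and [a, b] are illustrative.
a, b = 0.0, 1.0
x = np.linspace(a, b, 10001)
f = np.exp(x)

M, m = f.max(), f.min()
p0 = (M + m) / 2.0                      # the vertical-shift constant
e = f - p0                              # error function e = f - p0

E = np.abs(e).max()
x1, x2 = x[np.argmax(e)], x[np.argmin(e)]
print(f"E = {E:.6f} at x1 = {x1:.3f}, x2 = {x2:.3f}")
print(f"e(x1) = {e.max():.6f}, e(x2) = {e.min():.6f}")  # +E and -E
```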
Find the node $x_1$ and the weight $w_1$ so that the integration rule

$$\int_0^1 f(x)\,dx \approx w_1 f(x_1)$$

is exact for linear functions. (No knowledge of orthogonal polynomials is required.)
Solution:
Let $f(x) = 1$. Then exactness requires

$$w_1 \cdot 1 = \int_0^1 1\,dx = 1,$$

which implies after computation $w_1 = 1$.

Let $f(x) = x$. Then exactness requires

$$w_1 x_1 = \int_0^1 x\,dx = \frac{1}{2},$$

which implies after computation $x_1 = \frac{1}{2}$.
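A quick check, assuming the $[0,1]$ normalization used above, that the computed rule reproduces $\int_0^1 (\alpha x + \beta)\,dx$ for a few illustrative linear functions:

```python
# With w1 = 1, x1 = 1/2 the rule w1*f(x1) reproduces the integral of
# f(x) = alpha*x + beta over [0, 1] exactly.
w1, x1 = 1.0, 0.5
for alpha, beta in [(0.0, 1.0), (1.0, 0.0), (2.0, -3.0)]:
    exact = alpha / 2.0 + beta          # integral_0^1 (alpha*x + beta) dx
    rule = w1 * (alpha * x1 + beta)
    print(alpha, beta, exact, rule)     # exact == rule in every row
```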
Show that no one-point rule for approximating $\int_0^1 f(x)\,dx$ can be exact for quadratics.
Solution:
Suppose there is a one-point rule

$$\int_0^1 f(x)\,dx \approx w_1 f(x_1)$$

that is exact for quadratics.

Let $f(x) = (x - x_1)^2$.

Then,

$$w_1 f(x_1) = w_1 (x_1 - x_1)^2 = 0,$$

but

$$\int_0^1 (x - x_1)^2\,dx > 0,$$

a contradiction.
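Numerically, the same contradiction at the node $x_1 = 1/2$ found above:

```python
# The contradiction in numbers: the rule annihilates f(x) = (x - x1)^2,
# while the integral is strictly positive.
w1, x1 = 1.0, 0.5
rule = w1 * (x1 - x1) ** 2              # = 0 for any node x1
exact = 1.0 / 12.0                      # integral_0^1 (x - 1/2)^2 dx
print(rule, exact)                      # 0.0 vs 0.0833...
```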
In fact

$$\int_0^1 f(x)\,dx = f\!\left(\tfrac{1}{2}\right) + c\,f''(\xi), \qquad \xi \in (0,1).$$

Find $c$.
Solution:
Let $f(x) = x^2$. Then $f''(x) = 2$ for all $x$.

Using the values computed in part (b), we have

$$\frac{1}{3} = \int_0^1 x^2\,dx = w_1 x_1^2 + 2c = \frac{1}{4} + 2c,$$

which implies $c = \frac{1}{24}$.
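Since $c = \left(\int_0^1 f - f(1/2)\right)/f''$ for any $f$ with constant second derivative, exact rational arithmetic recovers $c = 1/24$ from two such test functions (a sketch using Python's `fractions`):

```python
from fractions import Fraction as F

# c = (integral_0^1 f - f(1/2)) / f'' for any f with constant f''.
# f(x) = x^2:        integral = 1/3, f(1/2) = 1/4, f'' = 2
print((F(1, 3) - F(1, 4)) / 2)          # 1/24
# f(x) = 3x^2 + 2x:  integral = 2,    f(1/2) = 7/4, f'' = 6
print((F(2, 1) - F(7, 4)) / 6)          # 1/24 again
```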
Let $p$ and $q$ be two polynomials of degree at most $2$. Suppose $p$ and $q$ agree at $x_0$, $x_1$, and $x_2$. Show $p \equiv q$.
Solution:
Regard $q$ as the polynomial of degree at most $2$ interpolating $p$ at the three distinct points $x_0$, $x_1$, $x_2$. By the interpolation error theorem, for each $x$ there exist $\xi_x$ such that, if $p$ is a polynomial of degree at most $2$,

$$p(x) - q(x) = \frac{p'''(\xi_x)}{3!}(x - x_0)(x - x_1)(x - x_2) = 0,$$

since $p''' \equiv 0$. Hence, $p(x) = q(x)$ for all $x$, i.e., $p \equiv q$.
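A small numeric illustration of the uniqueness: fitting a quadratic through three values of a given quadratic recovers the same coefficients (NumPy's `polyfit`/`polyval` used for convenience; the nodes and coefficients are illustrative):

```python
import numpy as np

# Fitting a quadratic through three samples of a quadratic p recovers
# p's coefficients: the Vandermonde system for three distinct nodes is
# nonsingular, so the interpolant -- and hence q -- is unique.
nodes = np.array([-1.0, 0.5, 2.0])      # any three distinct points
p = np.array([1.0, -2.0, 3.0])          # p(x) = x^2 - 2x + 3
vals = np.polyval(p, nodes)             # the shared values p(x_i) = q(x_i)
q = np.polyfit(nodes, vals, 2)          # degree-2 fit through the data
print(np.allclose(p, q))                # True: identical coefficients
```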
Using the iteration

$$x_{k+1} = x_k + \omega(b - Ax_k),$$

we have that, for the exact solution $x_*$ of $Ax = b$,

$$x_{k+1} - x_* = x_k - x_* - \omega A(x_k - x_*) = (I - \omega A)(x_k - x_*).$$

Then, computing norms we obtain

$$\|x_{k+1} - x_*\| \leq \|I - \omega A\|\,\|x_k - x_*\|.$$

In particular, applying this bound repeatedly, we have that

$$\|x_k - x_*\| \leq \|I - \omega A\|^k\,\|x_0 - x_*\|.$$
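A sketch of the geometric error bound on a small symmetric positive definite system; the matrix, right-hand side, and $\omega$ are illustrative choices, not from the problem:

```python
import numpy as np

# Richardson iteration x_{k+1} = x_k + w(b - A x_k) on a small SPD
# system; the error obeys ||x_k - x*|| <= q^k ||x_0 - x*||, q = ||I - wA||_2.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_star = np.linalg.solve(A, b)
w = 0.3
q = np.linalg.norm(np.eye(2) - w * A, 2)    # contraction factor < 1

x = np.zeros(2)
for k in range(1, 6):
    x = x + w * (b - A @ x)
    err = np.linalg.norm(x - x_star)
    bound = q**k * np.linalg.norm(x_star)   # x_0 = 0, so ||x_0 - x*|| = ||x*||
    print(k, err <= bound + 1e-12)          # True at every step
```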
Now, in order to study the convergence of the method, we need to analyze $\rho(I - \omega A)$. We use the following characterization: the iteration converges for every starting vector $x_0$ if and only if $\rho(I - \omega A) < 1$.

Let's denote by $\lambda$ an eigenvalue of the matrix $A$, so that $1 - \omega\lambda$ is an eigenvalue of $I - \omega A$. Then, for real $\lambda$,

$$|1 - \omega\lambda| < 1$$

implies

$$(1 - \omega\lambda)^2 < 1,$$

i.e.

$$\lambda^2\omega^2 - 2\lambda\omega < 0.$$

The above equation is quadratic with respect to $\omega$ and opens upward. Hence using the quadratic formula we can find the roots of the equation to be $\omega = 0$ and $\omega = 2/\lambda$.

Then, if

$$0 < \omega < \frac{2}{\lambda}$$

for all the eigenvalues $\lambda$ of the matrix $A$, we can find $\omega$ such that $\rho(I - \omega A) < 1$, i.e., the method converges.
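Scanning $\omega$ on an illustrative diagonal matrix with eigenvalues $1$ and $4$ confirms that $\rho(I - \omega A) < 1$ exactly on $(0, 2/\lambda_{\max}) = (0, 0.5)$:

```python
import numpy as np

# rho(I - wA) as a function of w for eigenvalues 1 and 4: the spectral
# radius drops below 1 exactly on the interval (0, 2/4) = (0, 0.5).
A = np.diag([1.0, 4.0])
for w in [-0.10, 0.00, 0.25, 0.49, 0.50, 0.60]:
    rho = max(abs(np.linalg.eigvals(np.eye(2) - w * A)))
    print(f"w = {w:5.2f}   rho = {rho:.3f}   converges: {rho < 1}")
```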
On the other hand, if the matrix $A$ is symmetric (and positive definite), we have that $\|I - \omega A\|_2 = \rho(I - \omega A)$. Then using the Schur decomposition, we can find a unitary matrix $Q$ and a diagonal matrix $D$ such that $A = QDQ^T$, and in consequence,

$$\|I - \omega A\|_2 = \|Q(I - \omega D)Q^T\|_2 = \|I - \omega D\|_2 = \max_i |1 - \omega\lambda_i|.$$

This last expression is minimized if $1 - \omega\lambda_{\min} = -(1 - \omega\lambda_{\max})$, i.e., if

$$\omega = \frac{2}{\lambda_{\min} + \lambda_{\max}}.$$

With this optimal value of $\omega$, we obtain that

$$\|I - \omega A\|_2 = \frac{\lambda_{\max} - \lambda_{\min}}{\lambda_{\max} + \lambda_{\min}},$$

which implies that $\|I - \omega A\|_2 < 1$.

Finally, we've obtained that

$$\|x_k - x_*\|_2 \leq \left(\frac{\lambda_{\max} - \lambda_{\min}}{\lambda_{\max} + \lambda_{\min}}\right)^k \|x_0 - x_*\|_2.$$
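A check of the optimal $\omega$ on an illustrative symmetric matrix with eigenvalues $2$ and $4$: the norm at $\omega_{\mathrm{opt}} = 1/3$ equals $(\lambda_{\max}-\lambda_{\min})/(\lambda_{\max}+\lambda_{\min})$, and a scan over $\omega$ finds nothing smaller:

```python
import numpy as np

# w_opt = 2/(lmin + lmax) balances the extreme eigenvalues; a scan over
# w confirms no other value gives a smaller ||I - wA||_2.
A = np.array([[3.0, -1.0], [-1.0, 3.0]])       # eigenvalues 2 and 4
lmin, lmax = np.linalg.eigvalsh(A)
w_opt = 2.0 / (lmin + lmax)                    # = 1/3
norm_opt = np.linalg.norm(np.eye(2) - w_opt * A, 2)
print(w_opt, norm_opt, (lmax - lmin) / (lmax + lmin))   # 1/3 twice

ws = np.linspace(0.01, 0.49, 200)
norms = [np.linalg.norm(np.eye(2) - w * A, 2) for w in ws]
print(min(norms) >= norm_opt - 1e-9)           # True: w_opt is optimal
```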
Show that if some eigenvalues of $A$ have negative real part and some have positive real part, then there is no real $\omega$ for which the iterations converge.
Solution:
Write $\lambda = a + bi$ for an eigenvalue of $A$. Convergence requires $|1 - \omega\lambda| < 1$, i.e. $(1 - \omega a)^2 + \omega^2 b^2 < 1$, which simplifies to $\omega^2(a^2 + b^2) < 2\omega a$.

If $a = \operatorname{Re}(\lambda) > 0$ for some eigenvalue $\lambda$, then convergence occurs only if

$$0 < \omega < \frac{2a}{a^2 + b^2}.$$

Hence $\omega > 0$.

Similarly, if $a = \operatorname{Re}(\lambda) < 0$ for some eigenvalue $\lambda$, then convergence occurs only if

$$\frac{2a}{a^2 + b^2} < \omega < 0.$$

Hence $\omega < 0$.

If some eigenvalues have positive real part and some have negative real part, there does not exist a real $\omega$ for which the iterations converge, since $\omega$ cannot be both positive and negative simultaneously.
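The obstruction is easy to see numerically on the illustrative matrix $A = \operatorname{diag}(1, -1)$: one of $|1-\omega|$, $|1+\omega|$ is at least $1$ for every real $\omega$:

```python
import numpy as np

# Eigenvalues of mixed sign (+1 and -1): for every real w, one of
# |1 - w| and |1 + w| is >= 1, so rho(I - wA) never drops below 1.
A = np.diag([1.0, -1.0])
for w in [-1.0, -0.5, -0.1, 0.0, 0.1, 0.5, 1.0]:
    rho = max(abs(np.linalg.eigvals(np.eye(2) - w * A)))
    print(f"w = {w:5.2f}   rho = {rho:.2f}")   # always >= 1
```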
Let $\|I - \omega A\| = q < 1$ for a matrix norm associated to a vector norm. Show that the error can be expressed in terms of the difference between consecutive iterates, namely

$$\|x_k - x_*\| \leq \frac{q}{1-q}\,\|x_k - x_{k-1}\|.$$

(The proof of this is short but a little tricky.)
Solution:

Since $x_k - x_* = (I - \omega A)(x_{k-1} - x_*)$, writing $x_{k-1} - x_* = (x_{k-1} - x_k) + (x_k - x_*)$ gives

$$\|x_k - x_*\| \leq q\,\|x_{k-1} - x_k\| + q\,\|x_k - x_*\|.$$

Moving the last term to the left-hand side and dividing by $1 - q > 0$ yields

$$\|x_k - x_*\| \leq \frac{q}{1-q}\,\|x_k - x_{k-1}\|.$$
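A numeric check of the bound on the same kind of small SPD example (matrix and $\omega$ are illustrative):

```python
import numpy as np

# Check ||x_k - x*|| <= q/(1-q) * ||x_k - x_{k-1}|| along the iteration.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_star = np.linalg.solve(A, b)
w = 0.3
q = np.linalg.norm(np.eye(2) - w * A, 2)       # q < 1 for this w

x_prev = np.zeros(2)
for k in range(1, 6):
    x = x_prev + w * (b - A @ x_prev)
    lhs = np.linalg.norm(x - x_star)
    rhs = q / (1 - q) * np.linalg.norm(x - x_prev)
    print(k, lhs <= rhs + 1e-12)               # True at every step
    x_prev = x
```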
Let $A$ be the tridiagonal matrix

$$A = \begin{pmatrix} 3 & -1 & & \\ -1 & 3 & \ddots & \\ & \ddots & \ddots & -1 \\ & & -1 & 3 \end{pmatrix}.$$

Find a value of $\omega$ that guarantees convergence.
Solution:
By Gerschgorin's theorem, the magnitudes of all the eigenvalues are between 1 and 5, i.e.

$$1 \leq |\lambda| \leq 5.$$

Therefore, since $A$ is symmetric, its eigenvalues are real and satisfy $\lambda \in [1, 5]$; in particular they are all positive.

We want $0 < \omega < \frac{2}{\lambda}$ for every eigenvalue $\lambda$, i.e.

$$0 < \omega < \frac{2}{5}.$$

Therefore, any such value guarantees convergence; for example, $\omega = \frac{1}{3}$.
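A check at $n = 6$ (the dimension is an illustrative choice, assuming the tridiagonal entries above): the eigenvalues indeed fall in $[1,5]$, and $\omega = 1/3$ gives a contraction:

```python
import numpy as np

# Eigenvalues of the n x n tridiagonal matrix lie in the Gerschgorin
# interval [1, 5]; any w in (0, 2/5), e.g. w = 1/3, gives rho(I - wA) < 1.
n = 6
A = 3.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
eig = np.linalg.eigvalsh(A)
print(eig.min() >= 1.0, eig.max() <= 5.0)      # True True

w = 1.0 / 3.0
rho = max(abs(np.linalg.eigvalsh(np.eye(n) - w * A)))
print(rho < 1)                                 # True: the iteration converges
```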