Consider the definite integral

Let denote the approximation of by the composite midpoint rule with uniform subintervals. For every set
.
Let be defined by
.
Assume that .
Show that the quadrature error satisfies

Hint: Use integration by parts over each subinterval .
Integrating $K(x)f''(x)$ by parts over arbitrary points $p$ and $q$ gives

$$\int_p^q K(x)f''(x)\,dx = \left[(x-x_j)f(x) - \frac{1}{2}(x-x_j)^2 f'(x)\right]_{x=p}^{x=q} - \int_p^q f(x)\,dx$$
Since $K$ is defined piecewise on the half-subintervals, we use
$$[p,q] = \left[x_j, x_{j+\frac{1}{2}}\right] \qquad \text{and} \qquad [p,q] = \left[x_{j-\frac{1}{2}}, x_j\right]$$
Using the first interval we get
$$\frac{1}{2}\left[-\frac{1}{4}(x_{j+1}-x_j)^2 f'\!\left(x_{j+\frac{1}{2}}\right) + (x_{j+1}-x_j)\, f\!\left(x_{j+\frac{1}{2}}\right)\right] \qquad \mathbf{(1)}$$
And for the second we get
$$\frac{1}{2}\left[\frac{1}{4}(x_{j-1}-x_j)^2 f'\!\left(x_{j-\frac{1}{2}}\right) + (x_j-x_{j-1})\, f\!\left(x_{j-\frac{1}{2}}\right)\right] \qquad \mathbf{(2)}$$
Since these apply to arbitrary half-subintervals, we can rewrite equation $\mathbf{(2)}$ with its indices shifted by one unit. The equation for the interval $\left[x_{j+\frac{1}{2}}, x_{j+1}\right]$ is

$$\frac{1}{2}\left[\frac{1}{4}(x_j-x_{j+1})^2 f'\!\left(x_{j+\frac{1}{2}}\right) + (x_{j+1}-x_j)\, f\!\left(x_{j+\frac{1}{2}}\right)\right] \qquad \mathbf{(2')}$$
Combining $\mathbf{(1)}$ and $\mathbf{(2')}$, in which the $f'$ terms cancel, and writing the result in the same form as the integration by parts, we have

$$\int_{x_j}^{x_{j+1}} K(x) f''(x)\,dx = (x_{j+1}-x_j)\, f\!\left(x_{j+\frac{1}{2}}\right) - \int_{x_j}^{x_{j+1}} f(x)\,dx$$
The last step is to sum this over all of the subintervals, noting that

Applying the result of part (a), the triangle inequality, and pulling out the constant
, we have

where
is a finite constant, since
is compact and
is continuous on each of the finite number of intervals on which it is defined.
The above inequality becomes an equality when

where
is any constant.
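As a small numerical illustration of the composite midpoint rule and its observed convergence, here is a sketch (the integrand $f(x) = e^x$ on $[0,1]$ is an arbitrary choice, not part of the problem):

```python
import numpy as np

def composite_midpoint(f, a, b, n):
    """Composite midpoint rule with n uniform subintervals."""
    h = (b - a) / n
    midpoints = a + (np.arange(n) + 0.5) * h
    return h * np.sum(f(midpoints))

# Illustrative integrand: f(x) = exp(x) on [0, 1]; the exact integral is e - 1.
f = np.exp
exact = np.e - 1.0

for n in [4, 8, 16, 32, 64]:
    err = abs(composite_midpoint(f, 0.0, 1.0, n) - exact)
    print(f"n = {n:3d}   error = {err:.3e}")

# The error drops by roughly a factor of 4 each time n doubles,
# i.e. second-order (O(h^2)) behavior for a smooth integrand.
```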
Consider the (unshifted) QR method for finding the eigenvalues of an invertible matrix $A$.
The QR algorithm produces a sequence of similar matrices
$A_i$ that tends towards an upper triangular (or nearly upper triangular) matrix. This is advantageous since the eigenvalues of an upper triangular matrix lie on its diagonal.
i = 1
A_1 = A
while ( error > tolerance )
    A_i = Q_i R_i          (QR decomposition/factorization)
    A_{i+1} = R_i Q_i      (multiply R and Q in reverse order)
    i = i + 1              (increment)
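A minimal NumPy sketch of this loop (the example matrix, tolerance, and the off-diagonal stopping test are illustrative choices, not part of the original algorithm statement):

```python
import numpy as np

def unshifted_qr(A, tol=1e-10, max_iter=500):
    """Unshifted QR iteration: factor A_i = Q_i R_i, then form A_{i+1} = R_i Q_i."""
    A_i = np.array(A, dtype=float)
    for _ in range(max_iter):
        Q, R = np.linalg.qr(A_i)                      # QR decomposition/factorization
        A_i = R @ Q                                   # reverse-order multiplication
        if np.linalg.norm(np.tril(A_i, -1)) < tol:    # "error": strictly lower triangle
            break
    return A_i

# Example: a symmetric matrix, so the limit is (nearly) diagonal and the
# diagonal entries approximate the eigenvalues.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
print(np.diag(unshifted_qr(A)))
print(np.linalg.eigvalsh(A))   # reference eigenvalues
```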
Show that each of the matrices generated by this algorithm is orthogonally similar to $A$.
From the factorization step (QR decomposition) of the algorithm, we have

$$A_i = Q_i R_i,$$

which implies

$$R_i = Q_i^T A_i$$

since $Q_i$ is orthogonal. Substituting into the reverse-multiply step, we have

$$A_{i+1} = R_i Q_i = Q_i^T A_i Q_i,$$

so each $A_{i+1}$ is orthogonally similar to $A_i$ and hence, by induction, to $A_1 = A$.
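A quick numerical illustration of this similarity (the $3\times 3$ matrix below is an arbitrary example):

```python
import numpy as np

# One step of the iteration: A_2 = R_1 Q_1 = Q_1^T A_1 Q_1,
# so A_2 has the same eigenvalues as A_1.
A1 = np.array([[2.0, 1.0, 0.0],
               [1.0, 3.0, 1.0],
               [0.0, 1.0, 4.0]])
Q1, R1 = np.linalg.qr(A1)
A2 = R1 @ Q1

print(np.sort(np.linalg.eigvals(A1)))
print(np.sort(np.linalg.eigvals(A2)))   # same spectrum, up to rounding
```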
Show that if $A$ is upper Hessenberg, then so is each of the matrices generated by this algorithm.
A sequence of Givens rotation matrices pre-multiplying
$A_i$, an upper Hessenberg matrix, yields an upper triangular matrix $R_i$,
i.e.,
Since Givens rotation matrices are each orthogonal, we can write

i.e.

If we let
, we have
,
or more generally for
.
In each case, the sequence of Givens rotation matrices that compose
$Q_i$ has the following structure:
So
$Q_i$ is upper Hessenberg.
From the algorithm, we have

$$A_{i+1} = R_i Q_i.$$

We conclude that $A_{i+1}$ is upper Hessenberg because, for each $k$, the $k$th column of $A_{i+1} = R_i Q_i$ is a linear combination of the first $k+1$ columns of the upper triangular matrix $R_i$, since $Q_i$ is also upper Hessenberg.
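A small numerical check of this invariance (the $4\times 4$ upper Hessenberg matrix below is an arbitrary illustration):

```python
import numpy as np

def is_upper_hessenberg(M, tol=1e-10):
    """True if every entry below the first subdiagonal is (numerically) zero."""
    return bool(np.all(np.abs(np.tril(M, -2)) < tol))

# An arbitrary 4x4 upper Hessenberg matrix for illustration.
A = np.array([[4.0, 1.0, 2.0, 0.5],
              [3.0, 5.0, 1.0, 1.5],
              [0.0, 2.0, 6.0, 1.0],
              [0.0, 0.0, 1.0, 3.0]])

A_i = A.copy()
for _ in range(10):
    Q, R = np.linalg.qr(A_i)
    A_i = R @ Q
    assert is_upper_hessenberg(A_i)   # Hessenberg structure is preserved

print(np.round(A_i, 4))
```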
Let

For this matrix, the sequence has a limit. Find this limit. Give your reasoning.
The eigenvalues of
can be computed. They are
and
. Furthermore, carrying out the matrix multiplications in the algorithm shows that the diagonal difference,
, is constant for all
.
Since the limit of
is an upper triangular matrix with the eigenvalues of
on the diagonal, the limit is

Let be symmetric and positive definite. Let . Consider solving using the conjugate gradient method. The iterate then satisfies
for every ,
where denotes the vector A-norm, is the initial residual, and
.
We know that
for every
, so if we can choose a
such that
,
then part (a) follows.
First, note from the definition of
Therefore, we can rewrite
as follows:
We can then write
explicitly as follows:
We substitute
into the hypothesis inequality and apply a standard matrix-norm inequality to obtain the desired result.
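For reference, a minimal sketch of the conjugate gradient iteration in its standard textbook form (the SPD test matrix, right-hand side, and the use of `np.linalg.solve` for the exact solution are illustrative assumptions, not part of the problem):

```python
import numpy as np

def conjugate_gradient(A, b, x0, n_iter):
    """Plain CG for SPD A; returns the iterates x_0, x_1, ..., x_{n_iter}."""
    x = x0.copy()
    r = b - A @ x          # initial residual
    p = r.copy()
    iterates = [x.copy()]
    for _ in range(n_iter):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
        iterates.append(x.copy())
    return iterates

# Arbitrary SPD test problem.
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 20))
A = M @ M.T + 20 * np.eye(20)
b = rng.standard_normal(20)
x_exact = np.linalg.solve(A, b)

a_norm = lambda v: np.sqrt(v @ A @ v)   # the vector A-norm
for k, xk in enumerate(conjugate_gradient(A, b, np.zeros(20), 10)):
    print(f"k = {k:2d}   ||x - x_k||_A = {a_norm(x_exact - xk):.3e}")
```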
Let denote the Chebyshev polynomial. Let and denote respectively the smallest and largest eigenvalues of . Apply the result of part (a) to

to show that
.
You can use without proof the fact that
,
where denotes the set of eigenvalues of , and the facts that for every the polynomial has degree , is positive for , and satisfies
.
We want to show that
By hypothesis,

Since only the numerator of
depends on
, we only need to maximize the numerator in order to maximize
. That is, find

$$z : \quad \max_{z \in [\lambda_{\min},\, \lambda_{\max}]} \left| T_n\!\left( \frac{\lambda_{\max} + \lambda_{\min} - 2z}{\lambda_{\max} - \lambda_{\min}} \right) \right|$$
Let
. Then

Hence,
so

Denote the argument of
as
since the argument depends on
. Hence,
,
Then,


Thus
.
Now, since
is real for
,

Hence,
$$\max_{x \in [-1,1]} |T_n(x)| = \max_{x \in [-1,1]} |\cos(n \arccos(x))| = 1$$
Let
, then
Using our formula we have,
In other words, if
,
achieves its maximum value of
.
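A small numerical illustration of the two properties used here, namely that $|T_n|$ is bounded by $1$ on $[-1,1]$ and grows rapidly outside that interval (the eigenvalue bounds below are arbitrary illustrative values, not taken from the problem):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def T(n, x):
    """Evaluate the degree-n Chebyshev polynomial T_n at x."""
    return C.chebval(x, [0] * n + [1])

n = 8
xs = np.linspace(-1.0, 1.0, 10001)
print("max |T_n| on [-1, 1]:", np.max(np.abs(T(n, xs))))   # = 1, up to rounding

# Outside [-1, 1] the polynomial grows quickly; this growth is what makes
# the final bound in part (b) decay. Illustrative eigenvalue bounds:
lam_min, lam_max = 1.0, 100.0
x0 = (lam_max + lam_min) / (lam_max - lam_min)   # the argument when z = 0
print("T_n at the point corresponding to z = 0:", T(n, x0))
```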