A set of functions $\{g_1, g_2, \ldots, g_n\}$ defined on an interval $[a,b]$ is a Chebyshev system if
- (i) The set is linearly independent.
- (ii) If $g$ is a linear combination of $g_1, \ldots, g_n$ which is not identically zero, then $g$ has at most $n-1$ distinct zeros in $[a,b]$.

**Problem.** Show that $\{g_1, \ldots, g_n\}$ is a Chebyshev system if and only if for any $n$ distinct points $x_1, \ldots, x_n \in [a,b]$ the matrix $A$ with $a_{ij} = g_j(x_i)$ is non-singular.

**Solution.**
($\Rightarrow$) We first prove: if $\{g_1, \ldots, g_n\}$ is a Chebyshev system, then for any $n$ distinct points $x_1, \ldots, x_n \in [a,b]$ the matrix $A$ with $a_{ij} = g_j(x_i)$ is non-singular. Writing out the matrix $A$ yields:

$$A = \begin{pmatrix} g_1(x_1) & g_2(x_1) & \cdots & g_n(x_1) \\ g_1(x_2) & g_2(x_2) & \cdots & g_n(x_2) \\ \vdots & \vdots & \ddots & \vdots \\ g_1(x_n) & g_2(x_n) & \cdots & g_n(x_n) \end{pmatrix}$$

Suppose for the sake of contradiction that $A$ is singular. Then there exist constants $\alpha_1, \ldots, \alpha_n$, not all zero, such that $A\alpha = 0$; that is, the function $g = \alpha_1 g_1 + \cdots + \alpha_n g_n$ vanishes at the $n$ distinct points $x_1, \ldots, x_n$. Since the set $\{g_1, \ldots, g_n\}$ is linearly independent and the $\alpha_j$ are not all zero, $g$ is not identically zero. But then $g$ is a non-trivial linear combination with $n$ distinct zeros, contradicting (ii). Hence $A$ is non-singular.
($\Leftarrow$) Assume $A$ is non-singular for every choice of $n$ distinct points $x_1, \ldots, x_n \in [a,b]$.

First we show (i). If $\alpha_1 g_1 + \cdots + \alpha_n g_n \equiv 0$, then in particular $A\alpha = 0$ for any fixed choice of distinct points. Since $A$ is non-singular, $\alpha = 0$. Hence $\{g_1, \ldots, g_n\}$ is a linearly independent set, and we have shown (i).

Next we show (ii). Let

$$g(x) = \alpha_1 g_1(x) + \alpha_2 g_2(x) + \cdots + \alpha_n g_n(x)$$

for $\alpha_1, \ldots, \alpha_n$ not all zero. Assume for the sake of contradiction that $g$ has $n$ distinct zeros $x_1, \ldots, x_n \in [a,b]$. This implies the following $n$ equations:

$$g(x_i) = \alpha_1 g_1(x_i) + \alpha_2 g_2(x_i) + \cdots + \alpha_n g_n(x_i) = 0, \qquad i = 1, \ldots, n.$$

Rewriting the equations in matrix form yields $A\alpha = 0$, where

$$A = \begin{pmatrix} g_1(x_1) & g_2(x_1) & \cdots & g_n(x_1) \\ g_1(x_2) & g_2(x_2) & \cdots & g_n(x_2) \\ \vdots & \vdots & \ddots & \vdots \\ g_1(x_n) & g_2(x_n) & \cdots & g_n(x_n) \end{pmatrix}$$

Since $\alpha_1, \ldots, \alpha_n$ are not all zero, the columns of $A$ are linearly dependent, so $A$ is singular, a contradiction. Hence $g$ has at most $n-1$ distinct zeros, and we have shown (ii).
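As a concrete illustration (not part of the original problem): for the Chebyshev system $\{1, x, x^2\}$, the matrix $a_{ij} = g_j(x_i)$ is a Vandermonde matrix, which is non-singular exactly when the nodes are distinct. A minimal pure-Python sketch:

```python
# Collocation matrix A[i][j] = g_j(x_i) for the Chebyshev system {1, x, x^2}.
# For these monomials A is a Vandermonde matrix: singular exactly when
# two nodes coincide.

def det3(m):
    """Determinant of a 3x3 matrix via cofactor expansion."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def collocation_matrix(nodes):
    """Rows are [g_1(x), g_2(x), g_3(x)] = [1, x, x^2] at each node."""
    return [[1.0, x, x * x] for x in nodes]

distinct = det3(collocation_matrix([0.0, 0.5, 1.0]))   # distinct nodes
repeated = det3(collocation_matrix([0.0, 0.5, 0.5]))   # repeated node
```

For the distinct nodes the determinant is $(0.5-0)(1-0)(1-0.5) = 0.25 \neq 0$; with a repeated node two rows coincide and the determinant vanishes.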
**Problem.** Let $f \in C^{m+1}[a,b]$ be such that $f^{(m+1)}(x) \neq 0$ for all $x \in [a,b]$. Let $G = \{1, x, \ldots, x^m, f\}$. Show that $G$ is a Chebyshev system. For this, you may use results from polynomial interpolation without proof.

**Solution.** We have to prove:
(i) $\{1, x, \ldots, x^m, f\}$ is a linearly independent set;
(ii) any non-trivial linear combination of this set has at most $m+1$ distinct zeros.

First consider the powers alone. If we evaluate $1, x, \ldots, x^m$ at $m+1$ distinct points $x_1, \ldots, x_{m+1}$, we obtain the Vandermonde matrix

$$V = \begin{pmatrix} 1 & x_1 & \cdots & x_1^m \\ 1 & x_2 & \cdots & x_2^m \\ \vdots & \vdots & \ddots & \vdots \\ 1 & x_{m+1} & \cdots & x_{m+1}^m \end{pmatrix}$$

whose determinant $\prod_{1 \le i < j \le m+1} (x_j - x_i)$ is non-zero. Hence $1, x, \ldots, x^m$ are linearly independent.
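The Vandermonde determinant formula quoted above can be checked numerically; the following small pure-Python sketch (nodes chosen for illustration) compares a brute-force cofactor determinant with the product $\prod_{i<j}(x_j - x_i)$:

```python
# Check the Vandermonde determinant formula det(V) = prod_{i<j} (x_j - x_i)
# on a small set of distinct nodes.

def det(m):
    """Determinant by cofactor expansion along the first row."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def vandermonde(nodes):
    """Row i is [1, x_i, x_i^2, ..., x_i^(n-1)]."""
    return [[x ** k for k in range(len(nodes))] for x in nodes]

nodes = [0.0, 1.0, 2.0, 4.0]
lhs = det(vandermonde(nodes))
rhs = 1.0
for i in range(len(nodes)):
    for j in range(i + 1, len(nodes)):
        rhs *= nodes[j] - nodes[i]
```

For these nodes both sides equal $1 \cdot 2 \cdot 4 \cdot 1 \cdot 3 \cdot 2 = 48$.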
Now assume for the sake of contradiction that $f$ is a linear combination of $1, x, \ldots, x^m$, that is,

$$f(x) = \alpha_1 + \alpha_2 x + \cdots + \alpha_{m+1} x^m.$$

Then $f$ is a polynomial of degree at most $m$. Taking $m+1$ derivatives of $f$ yields

$$f^{(m+1)}(x) = 0$$

which contradicts the hypothesis $f^{(m+1)}(x) \neq 0$. Therefore $\{1, x, \ldots, x^m, f\}$ is a linearly independent set, which shows (i).

For (ii), let

$$g(x) = \alpha_1 + \alpha_2 x + \cdots + \alpha_{m+1} x^m + \alpha_{m+2} f(x)$$

be a non-trivial linear combination. Suppose $g$ has $m+2$ (or more) distinct zeros in $[a,b]$. By repeated application of Rolle's Theorem, $g^{(m+1)}$ has at least one zero $\xi \in (a,b)$. But $g^{(m+1)}(x) = \alpha_{m+2} f^{(m+1)}(x)$, and $f^{(m+1)}$ never vanishes, so $\alpha_{m+2} = 0$. Then $g$ is a polynomial of degree at most $m$ with $m+2$ distinct zeros, hence $g \equiv 0$, contradicting non-triviality. Therefore any non-trivial linear combination of the set has at most $m+1$ distinct zeros, which shows (ii).

(i) and (ii) show that $\{1, x, \ldots, x^m, f\}$ is a Chebyshev system.
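As a concrete instance (an assumed example, not from the original): taking $m = 1$ and $f(x) = e^x$, every derivative of $f$ is $e^x \neq 0$, so $\{1, x, e^x\}$ should be a Chebyshev system; by the first problem, its collocation matrix at any three distinct nodes must then be non-singular. A quick check:

```python
import math

# For m = 1 and f(x) = exp(x), the set {1, x, exp(x)} is a Chebyshev
# system, so the collocation matrix at any three distinct nodes should
# have a non-zero determinant.

def det3(m):
    """Determinant of a 3x3 matrix via cofactor expansion."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

dets = []
for nodes in [(-1.0, 0.0, 1.0), (0.0, 0.1, 0.2), (-2.0, 1.0, 3.0)]:
    A = [[1.0, x, math.exp(x)] for x in nodes]   # rows [1, x, exp(x)]
    dets.append(det3(A))
```

All three determinants come out non-zero, consistent with the result just proved.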
**Problem.** Let

$$I_n(f) = \sum_{k=1}^{n} w_{n,k} f(x_{n,k}), \qquad a \le x_{n,k} \le b \qquad (0)$$

be a sequence of integration rules. Suppose

$$\lim_{n \rightarrow \infty} I_n(x^k) = \int_a^b x^k \, dx, \qquad k = 0, 1, \ldots \qquad (1)$$

and

$$\sum_{k=1}^{n} |w_{n,k}| \le M, \qquad n = 1, 2, \ldots \qquad (2)$$

for some constant $M$. Show that

$$\lim_{n \rightarrow \infty} I_n(f) = \int_a^b f(x) \, dx$$

for all $f \in C[a,b]$.

**Solution.**
By the Weierstrass Approximation Theorem, given $\epsilon > 0$, there exists a polynomial $p$ such that

$$\max_{a \le x \le b} |f(x) - p(x)| \le \min\left\{ \frac{\epsilon}{2} \cdot \frac{1}{M},\; \frac{\epsilon}{2} \cdot \frac{1}{b-a} \right\} \qquad (3)$$

Let

$$I(f) = \int_a^b f(x) \, dx$$

Adding and subtracting $I(p)$ and $I_n(p)$ yields

$$I(f) - I_n(f) = [I(f) - I(p)] + [I(p) - I_n(p)] + [I_n(p) - I_n(f)]$$

By the triangle inequality and equations (2) and (3),

$$|I(f) - I(p)| \le \int_a^b |f(x) - p(x)| \, dx \le (b-a) \cdot \frac{\epsilon}{2} \cdot \frac{1}{b-a} = \frac{\epsilon}{2}$$

$$|I_n(p) - I_n(f)| \le \sum_{k=1}^{n} |w_{n,k}| \, |p(x_{n,k}) - f(x_{n,k})| \le M \cdot \frac{\epsilon}{2} \cdot \frac{1}{M} = \frac{\epsilon}{2}$$

Since $p$ is a finite linear combination of the monomials $x^k$, equation (1) gives

$$\lim_{n \rightarrow \infty} |I(p) - I_n(p)| = 0$$

Hence, for arbitrary small $\epsilon$, as $n \rightarrow \infty$,

$$\limsup_{n \rightarrow \infty} |I(f) - I_n(f)| \le \frac{\epsilon}{2} + 0 + \frac{\epsilon}{2} = \epsilon$$

i.e.

$$I(f) = \lim_{n \rightarrow \infty} I_n(f)$$
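A familiar rule satisfying both hypotheses (an illustration, not part of the original): the composite midpoint rule on $[0,1]$ has positive weights $w_{n,k} = 1/n$, so $\sum_k |w_{n,k}| = 1 = b-a$ for every $n$ (condition (2) with $M = 1$), and it is eventually exact in the limit on each monomial (condition (1)). The theorem then predicts $I_n(f) \to \int_0^1 f$ for continuous $f$:

```python
import math

# Composite midpoint rule on [a, b]: n positive weights h = (b-a)/n,
# nodes at interval midpoints. Errors for f = exp should shrink as n grows.

def midpoint_rule(f, n, a=0.0, b=1.0):
    h = (b - a) / n
    return sum(h * f(a + (k + 0.5) * h) for k in range(n))

exact = math.e - 1.0   # integral of exp over [0, 1]
errors = [abs(midpoint_rule(math.exp, n) - exact) for n in (4, 16, 64)]
```

The error decreases monotonically as $n$ increases, consistent with $I_n(f) \to I(f)$.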
**Problem.** Show that if all $w_{n,k} \ge 0$, then (1) implies (2).

**Solution.** For $k = 0$, equation (1) yields

$$\lim_{n \rightarrow \infty} I_n(1) = \int_a^b 1 \, dx = b - a$$

Letting $f \equiv 1$ in equation (0) yields

$$I_n(1) = \sum_{k=1}^{n} w_{n,k} \cdot 1 = \sum_{k=1}^{n} w_{n,k}$$

Combining the two results above yields

$$\lim_{n \rightarrow \infty} \sum_{k=1}^{n} w_{n,k} = b - a$$

Since a convergent sequence of real numbers is bounded, there exists some number $M$ such that $\sum_{k=1}^{n} w_{n,k} \le M$ for all $n$. Since $w_{n,k} \ge 0$, we have $|w_{n,k}| = w_{n,k}$, and therefore

$$\sum_{k=1}^{n} |w_{n,k}| \le M, \qquad n = 1, 2, \ldots$$

i.e. equation (2).
**Problem.** Consider the real system of linear equations

$$Ax = b \qquad (1)$$

where $A$ is non-singular and satisfies

$$(v, Av) > 0$$

for all real $v \neq 0$, where the Euclidean inner product is used here. Let $M = \tfrac{1}{2}(A + A^T)$ denote the symmetric part of $A$. Show that $(v, Av) = (v, Mv)$ for all real $v$.

**Solution.** Substituting $\tfrac{1}{2}(A + A^T)$ for $M$ and expanding the inner product, we have

$$(v, Mv) = \tfrac{1}{2}(v, Av) + \tfrac{1}{2}(v, A^T v)$$

From properties of inner products we have

$$(v, A^T v) = (Av, v) = (v, Av)$$

Hence,

$$(v, Mv) = \tfrac{1}{2}(v, Av) + \tfrac{1}{2}(v, Av) = (v, Av)$$
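The identity is easy to check numerically. The matrix and vector below are illustrative choices, not from the original:

```python
# Check (v, Av) = (v, Mv) where M = (A + A^T)/2 is the symmetric part of A.

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def matvec(m, v):
    return [dot(row, v) for row in m]

A = [[2.0, 1.0],
     [-1.0, 3.0]]    # non-symmetric
M = [[2.0, 0.0],
     [0.0, 3.0]]     # (A + A^T)/2 for this A

v = [0.7, -1.9]
lhs = dot(v, matvec(A, v))   # (v, Av)
rhs = dot(v, matvec(M, v))   # (v, Mv)
```

The skew-symmetric part of $A$ contributes nothing to the quadratic form, which is exactly what the algebra above shows.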
**Problem.** Prove that

$$\frac{(v, Av)}{(v, v)} \ge \lambda_{\min}(M) > 0$$

where $\lambda_{\min}(M)$ is the minimum eigenvalue of $M$.

**Solution.** From part (a),

$$\frac{(v, Av)}{(v, v)} = \frac{(v, Mv)}{(v, v)} = \frac{v^T M v}{v^T v}$$

Since $M$ is symmetric, it has an eigenvalue decomposition

$$M = Q^T \Lambda Q$$

where $Q$ is orthogonal and $\Lambda$ is a diagonal matrix containing all the eigenvalues $\lambda_1, \ldots, \lambda_n$ of $M$. Substitution yields

$$\frac{(v, Av)}{(v, v)} = \frac{v^T Q^T \Lambda Q v}{v^T v}$$

Let $Qv = x$. This implies the following three relationships:

$$\begin{aligned} v &= Q^T x \\ v^T Q^T &= x^T \\ v^T &= x^T Q \end{aligned}$$

Substituting,

$$\begin{aligned} \frac{v^T Q^T \Lambda Q v}{v^T v} &= \frac{x^T \Lambda x}{x^T Q Q^T x} \\ &= \frac{x^T \Lambda x}{x^T x} \end{aligned}$$

Expanding the numerator we have

$$x^T \Lambda x = \sum_{i=1}^{n} \lambda_i x_i^2$$

Expanding the denominator yields

$$x^T x = \sum_{i=1}^{n} x_i^2$$

Substituting,

$$\begin{aligned} \frac{x^T \Lambda x}{x^T x} &= \frac{\sum_{i=1}^{n} \lambda_i x_i^2}{\sum_{i=1}^{n} x_i^2} \\ &\ge \lambda_{\min}(M) \frac{\sum_{i=1}^{n} x_i^2}{\sum_{i=1}^{n} x_i^2} \\ &= \lambda_{\min}(M) \end{aligned}$$

From part (a),

$$(v, Av) = (v, Mv) > 0$$

for all real $v \neq 0$. Therefore $M$ is positive definite, which implies all its eigenvalues are positive. In particular,

$$\lambda_{\min}(M) > 0$$
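Continuing the illustrative example $A = \begin{pmatrix} 2 & 1 \\ -1 & 3 \end{pmatrix}$ (an assumed matrix, not from the original): its symmetric part is $M = \operatorname{diag}(2,3)$, so $\lambda_{\min}(M) = 2$ and the Rayleigh quotient $(v,Av)/(v,v)$ should lie in $[2, 3]$ for every $v \neq 0$:

```python
# Rayleigh-quotient bound: (v, Av)/(v, v) >= lambda_min(M), where
# M = (A + A^T)/2. For this A, M = diag(2, 3), so the quotient lies in [2, 3].

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def rayleigh(a, v):
    av = [dot(row, v) for row in a]
    return dot(v, av) / dot(v, v)

A = [[2.0, 1.0],
     [-1.0, 3.0]]
lam_min = 2.0   # smallest eigenvalue of M = diag(2, 3)

test_vectors = ([1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [3.0, -2.0])
quotients = [rayleigh(A, v) for v in test_vectors]
```

The lower bound is attained at $v = (1,0)^T$, an eigenvector of $M$ for $\lambda_{\min}$.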
**Problem.** Now consider the iteration for computing a series of approximate solutions to (1),

$$x_{k+1} = x_k + \alpha r_k$$

where $r_k = b - Ax_k$ and $\alpha$ is chosen to minimize $\|r_{k+1}\|_2$ as a function of $\alpha$. Prove that

$$\frac{\|r_{k+1}\|_2}{\|r_k\|_2} \le \left( 1 - \frac{\lambda_{\min}(M)^2}{\lambda_{\max}(A^T A)} \right)^{\frac{1}{2}}$$

**Solution.** First, we want to write $\|r_{k+1}\|^2$ as a function of $\alpha$, i.e.

$$f(\alpha) = \|r_{k+1}\|^2$$

Replacing $k$ by $k+1$ in the definition of the residual and substituting $x_{k+1} = x_k + \alpha r_k$ yields

$$r_{k+1} = b - Ax_{k+1} = b - Ax_k - \alpha A r_k = r_k - \alpha A r_k$$

Applying the definition of the norm, and using the symmetry of the real inner product, $(r_k, Ar_k) = (Ar_k, r_k)$, we obtain

$$f(\alpha) = \|r_{k+1}\|^2 = (r_k, r_k) - 2\alpha (Ar_k, r_k) + \alpha^2 (Ar_k, Ar_k)$$

Next we want to minimize $f$ as a function of $\alpha$. Taking its derivative with respect to $\alpha$ yields

$$f'(\alpha) = -2(Ar_k, r_k) + 2\alpha (Ar_k, Ar_k)$$

Setting $f'(\alpha) = 0$ and solving for $\alpha$ gives

$$\alpha = \frac{(Ar_k, r_k)}{(Ar_k, Ar_k)}$$

Substituting this $\alpha$ into $f(\alpha)$ gives

$$\|r_{k+1}\|^2 = (r_k, r_k) - \frac{(Ar_k, r_k)^2}{(Ar_k, Ar_k)}$$

By definition of the norm,

$$\|r_k\|^2 = (r_k, r_k)$$

Hence

$$\frac{\|r_{k+1}\|^2}{\|r_k\|^2} = 1 - \frac{(Ar_k, r_k)^2}{(r_k, r_k)(Ar_k, Ar_k)}$$

Multiplying and dividing the second term on the right-hand side by $(r_k, r_k)$ and applying the inner-product identity $(Ar_k, Ar_k) = (r_k, A^T A r_k)$ yields

$$\frac{\|r_{k+1}\|^2}{\|r_k\|^2} = 1 - \frac{(Ar_k, r_k)^2}{(r_k, r_k)^2} \cdot \frac{(r_k, r_k)}{(r_k, A^T A r_k)}$$

From the result of part (b), $(Ar_k, r_k)/(r_k, r_k) \ge \lambda_{\min}(M) > 0$, and applying the Rayleigh-quotient argument of part (b) to the symmetric matrix $A^T A$ gives $(r_k, A^T A r_k)/(r_k, r_k) \le \lambda_{\max}(A^T A)$. Hence

$$\frac{\|r_{k+1}\|^2}{\|r_k\|^2} \le 1 - \frac{\lambda_{\min}(M)^2}{\lambda_{\max}(A^T A)}$$

and taking square roots gives the desired result:

$$\frac{\|r_{k+1}\|_2}{\|r_k\|_2} \le \left( 1 - \frac{\lambda_{\min}(M)^2}{\lambda_{\max}(A^T A)} \right)^{\frac{1}{2}}$$
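The iteration itself is straightforward to run. The sketch below uses the same assumed example system as earlier ($A$, $b$ are illustrative choices, not from the original); since the symmetric part of $A$ is positive definite, the bound guarantees the residual norm shrinks by a fixed factor each step:

```python
import math

# Residual-minimizing iteration x_{k+1} = x_k + alpha_k r_k with
# alpha_k = (A r_k, r_k) / (A r_k, A r_k). The residual norms should be
# monotonically non-increasing and converge to zero.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(m, v):
    return [dot(row, v) for row in m]

A = [[2.0, 1.0],
     [-1.0, 3.0]]
b = [1.0, 1.0]      # exact solution is x = (2/7, 3/7)

x = [0.0, 0.0]
norms = []
for _ in range(80):
    r = [bi - axi for bi, axi in zip(b, matvec(A, x))]
    norms.append(math.sqrt(dot(r, r)))
    ar = matvec(A, r)
    denom = dot(ar, ar)
    if denom == 0.0:
        break                      # residual is zero: exact solution reached
    alpha = dot(ar, r) / denom     # minimizes ||r_{k+1}||_2
    x = [xi + alpha * ri for xi, ri in zip(x, r)]
```

For this $A$, $\lambda_{\min}(M) = 2$ and $\lambda_{\max}(A^T A) = (15 + \sqrt{29})/2 \approx 10.19$, so the proven per-step contraction factor is about $0.78$; the observed convergence is at least that fast.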