- Problem 1
Perform the Gram-Schmidt process on each of these bases
for $\mathbb{R}^2$.
-
-
-
Then turn those orthogonal bases into orthonormal bases.
- Answer
-
-
-
The corresponding orthonormal bases for the three parts of this
question are these.
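As a sketch of the computation for one part, with an assumed basis $\langle (1,1)^{\mathsf T}, (2,1)^{\mathsf T} \rangle$ (an illustration only, not necessarily one of the bases above):
\[\vec{\kappa}_1 = \begin{pmatrix}1\\1\end{pmatrix} \qquad \vec{\kappa}_2 = \begin{pmatrix}2\\1\end{pmatrix} - \frac{3}{2}\begin{pmatrix}1\\1\end{pmatrix} = \begin{pmatrix}1/2\\-1/2\end{pmatrix}\]
and normalizing gives the orthonormal basis $\langle \frac{1}{\sqrt{2}}(1,1)^{\mathsf T}, \frac{1}{\sqrt{2}}(1,-1)^{\mathsf T} \rangle$.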
- This exercise is recommended for all readers.
- Problem 2
Perform the Gram-Schmidt process on each of these bases
for $\mathbb{R}^3$.
-
-
Then turn those orthogonal bases into orthonormal bases.
- Answer
-
-
The corresponding orthonormal bases for the two parts of this
question are these.
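Again as a sketch with an assumed basis, here $\langle (1,1,1)^{\mathsf T}, (0,1,1)^{\mathsf T}, (0,0,1)^{\mathsf T} \rangle$ (chosen for illustration, not necessarily one of the bases above), the process gives
\[\vec{\kappa}_1 = \begin{pmatrix}1\\1\\1\end{pmatrix} \qquad \vec{\kappa}_2 = \begin{pmatrix}0\\1\\1\end{pmatrix} - \frac{2}{3}\begin{pmatrix}1\\1\\1\end{pmatrix} = \begin{pmatrix}-2/3\\1/3\\1/3\end{pmatrix} \qquad \vec{\kappa}_3 = \begin{pmatrix}0\\0\\1\end{pmatrix} - \frac{1}{3}\begin{pmatrix}1\\1\\1\end{pmatrix} - \frac{1/3}{2/3}\begin{pmatrix}-2/3\\1/3\\1/3\end{pmatrix} = \begin{pmatrix}0\\-1/2\\1/2\end{pmatrix}\]
and normalizing yields $\langle \frac{1}{\sqrt{3}}(1,1,1)^{\mathsf T}, \frac{1}{\sqrt{6}}(-2,1,1)^{\mathsf T}, \frac{1}{\sqrt{2}}(0,-1,1)^{\mathsf T} \rangle$.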
- This exercise is recommended for all readers.
- Problem 3
Find an orthonormal basis for this subspace of $\mathbb{R}^3$: the
plane .
- Answer
The given space can be parametrized in this way.
So we take the basis
apply the Gram-Schmidt process
and then normalize.
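To make the steps concrete, assume for illustration that the plane is $x - y + z = 0$ (if the problem's equation differs, the method is identical). Solving for $y$ gives $y = x + z$, so the plane is
\[\Bigl\{ x\begin{pmatrix}1\\1\\0\end{pmatrix} + z\begin{pmatrix}0\\1\\1\end{pmatrix} \Bigm| x, z \in \mathbb{R} \Bigr\}\]
Gram-Schmidt on the basis $\langle (1,1,0)^{\mathsf T}, (0,1,1)^{\mathsf T} \rangle$ leaves the first vector alone and replaces the second with
\[\vec{\kappa}_2 = \begin{pmatrix}0\\1\\1\end{pmatrix} - \frac{1}{2}\begin{pmatrix}1\\1\\0\end{pmatrix} = \begin{pmatrix}-1/2\\1/2\\1\end{pmatrix}\]
and normalizing gives the orthonormal basis $\langle \frac{1}{\sqrt{2}}(1,1,0)^{\mathsf T}, \frac{1}{\sqrt{6}}(-1,1,2)^{\mathsf T} \rangle$.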
- Problem 4
Find an orthonormal basis for this subspace of $\mathbb{R}^4$.
- Answer
Reducing the linear system
and parametrizing gives this description of the subspace.
So we take the basis,
go through the Gram-Schmidt process
and finish by normalizing.
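As a sketch under an assumed system, say $x - y - z + w = 0$ and $x + z = 0$ (substitute the problem's actual system), reduction gives $x = -z$ and $w = y + 2z$, so the subspace is
\[\Bigl\{ y\begin{pmatrix}0\\1\\0\\1\end{pmatrix} + z\begin{pmatrix}-1\\0\\1\\2\end{pmatrix} \Bigm| y, z \in \mathbb{R} \Bigr\}\]
Gram-Schmidt then yields $\vec{\kappa}_1 = (0,1,0,1)^{\mathsf T}$ and
\[\vec{\kappa}_2 = \begin{pmatrix}-1\\0\\1\\2\end{pmatrix} - \frac{2}{2}\begin{pmatrix}0\\1\\0\\1\end{pmatrix} = \begin{pmatrix}-1\\-1\\1\\1\end{pmatrix}\]
and normalizing gives $\langle \frac{1}{\sqrt{2}}(0,1,0,1)^{\mathsf T}, \frac{1}{2}(-1,-1,1,1)^{\mathsf T} \rangle$.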
- Problem 5
Show that any linearly independent subset of $\mathbb{R}^n$ can be
orthogonalized without changing its span.
- Answer
A linearly independent subset of $\mathbb{R}^n$ is a basis for its
own span.
Apply Theorem 2.7.
Remark.
Here's why the phrase "linearly independent" is in the question.
Dropping the phrase would require us to worry about two things.
The first thing to worry about is that
when we do the Gram-Schmidt process on a linearly dependent
set we get some zero vectors.
For instance, with the dependent set $\langle (1,1)^{\mathsf T}, (2,2)^{\mathsf T} \rangle$ (an assumed instance; any dependent set behaves the same way) we would get this.
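\[\vec{\kappa}_1 = \begin{pmatrix}1\\1\end{pmatrix} \qquad \vec{\kappa}_2 = \begin{pmatrix}2\\2\end{pmatrix} - \frac{4}{2}\begin{pmatrix}1\\1\end{pmatrix} = \begin{pmatrix}0\\0\end{pmatrix}\]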
This first thing is not so bad because the zero vector is by definition orthogonal to every other vector, so we could accept this situation as yielding an orthogonal set (although it of course can't be normalized), or we could just modify the Gram-Schmidt procedure to throw out any zero vectors.

The second thing to worry about if we drop the phrase "linearly independent" from the question is that the set might be infinite. Of course, any subspace of the finite-dimensional $\mathbb{R}^n$ must also be finite-dimensional, so only finitely many of its members are linearly independent, but nonetheless, a "process" that examines the vectors in an infinite set one at a time would at least require some more elaboration in this question. A linearly independent subset of $\mathbb{R}^n$ is automatically finite (in fact, of size $n$ or less), so the "linearly independent" phrase obviates these concerns.
- This exercise is recommended for all readers.
- Problem 6
What happens if we apply the Gram-Schmidt process to
a basis that is already orthogonal?
- Answer
The process leaves the basis unchanged.
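This is because the subtracted projections all vanish: since the $\vec{\beta}$'s are already mutually orthogonal, at each step $\vec{\kappa}_j = \vec{\beta}_j$ inductively, and
\[\vec{\kappa}_i = \vec{\beta}_i - \frac{\vec{\beta}_i \cdot \vec{\kappa}_1}{\vec{\kappa}_1 \cdot \vec{\kappa}_1}\,\vec{\kappa}_1 - \cdots - \frac{\vec{\beta}_i \cdot \vec{\kappa}_{i-1}}{\vec{\kappa}_{i-1} \cdot \vec{\kappa}_{i-1}}\,\vec{\kappa}_{i-1} = \vec{\beta}_i - \vec{0} - \cdots - \vec{0} = \vec{\beta}_i\]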
- Problem 7
Let
be a set of mutually orthogonal vectors in $\mathbb{R}^n$.
- Prove that for any $\vec{v}$ in the space, the vector
$\vec{v} - \bigl(\operatorname{proj}_{[\vec{\kappa}_1]}(\vec{v}) + \cdots + \operatorname{proj}_{[\vec{\kappa}_k]}(\vec{v})\bigr)$
is orthogonal to each of $\vec{\kappa}_1$, ..., $\vec{\kappa}_k$.
- Illustrate the prior item in by using as
, using as , and
taking to have components , , and .
- Show that $\operatorname{proj}_{[\vec{\kappa}_1]}(\vec{v}) + \cdots + \operatorname{proj}_{[\vec{\kappa}_k]}(\vec{v})$ is the vector in the
span of the set of $\vec{\kappa}$'s that is closest to $\vec{v}$.
Hint. To the illustration done for the prior part,
add a vector
and apply the Pythagorean Theorem to the resulting triangle.
- Answer
- The argument is as in the proof
of Theorem 2.7.
The dot product
\[\vec{\kappa}_i \cdot \bigl(\vec{v} - \operatorname{proj}_{[\vec{\kappa}_1]}(\vec{v}) - \cdots - \operatorname{proj}_{[\vec{\kappa}_k]}(\vec{v})\bigr)\]
can be written as the sum of terms of the form
$-\vec{\kappa}_i \cdot \operatorname{proj}_{[\vec{\kappa}_j]}(\vec{v})$ with $j \neq i$,
and the term $\vec{\kappa}_i \cdot (\vec{v} - \operatorname{proj}_{[\vec{\kappa}_i]}(\vec{v}))$.
The first kind of term equals zero because the $\vec{\kappa}$'s
are mutually orthogonal and each $\operatorname{proj}_{[\vec{\kappa}_j]}(\vec{v})$ is a scalar multiple of $\vec{\kappa}_j$.
The other term is zero because this projection is orthogonal
(that is, the projection definition makes it zero:
\[\vec{\kappa}_i \cdot \vec{v} - \frac{\vec{v} \cdot \vec{\kappa}_i}{\vec{\kappa}_i \cdot \vec{\kappa}_i}\,(\vec{\kappa}_i \cdot \vec{\kappa}_i)\]
equals, after all of the cancellation is done, zero).
- The vector $\vec{v}$ is shown in black and the
vector $\operatorname{proj}_{[\vec{\kappa}_1]}(\vec{v}) + \operatorname{proj}_{[\vec{\kappa}_2]}(\vec{v})$ is in gray.
The vector $\vec{v} - \bigl(\operatorname{proj}_{[\vec{\kappa}_1]}(\vec{v}) + \operatorname{proj}_{[\vec{\kappa}_2]}(\vec{v})\bigr)$
lies on the dotted line connecting the black vector to the
gray one; that is, it is orthogonal to the plane spanned by $\vec{\kappa}_1$ and $\vec{\kappa}_2$.
- This diagram is obtained by following the hint.
The dashed triangle has a right angle where
the gray vector $\operatorname{proj}_{[\vec{\kappa}_1]}(\vec{v}) + \operatorname{proj}_{[\vec{\kappa}_2]}(\vec{v})$
meets the vertical dashed line; this is what was
proved in the first item of this question.
The Pythagorean theorem then gives that the hypotenuse, the
segment from $\vec{v}$ to any other vector in the plane, is longer than
the vertical dashed line.
More formally, write the projection as
$\vec{p} = \operatorname{proj}_{[\vec{\kappa}_1]}(\vec{v}) + \cdots + \operatorname{proj}_{[\vec{\kappa}_k]}(\vec{v})$
and consider any other vector $\vec{u}$ in the span of the $\vec{\kappa}$'s.
Note that
\[\vec{v} - \vec{u} = (\vec{v} - \vec{p}) + (\vec{p} - \vec{u})\]
and that $(\vec{v} - \vec{p}) \cdot (\vec{p} - \vec{u}) = 0$
(because the first item shows that $\vec{v} - \vec{p}$
is orthogonal to each $\vec{\kappa}_j$ and so it is orthogonal
to this linear combination of the $\vec{\kappa}$'s).
Now apply the Pythagorean Theorem:
\[\|\vec{v} - \vec{u}\|^2 = \|\vec{v} - \vec{p}\|^2 + \|\vec{p} - \vec{u}\|^2 \geq \|\vec{v} - \vec{p}\|^2\]
so no vector in the span is closer to $\vec{v}$ than the projection is.
- This exercise is recommended for all readers.
- Problem 9
One advantage of orthogonal bases is that they simplify finding the
representation of a vector with respect to that basis.
- For this vector and this non-orthogonal basis for
first represent the vector with respect to the basis.
Then project the vector onto the span of each basis vector
and .
- With this orthogonal basis for
represent the same vector with respect to the basis.
Then project the vector onto the span of each basis vector.
Note that the coefficients in the representation and the projection
are the same.
- Let $B = \langle\vec{\kappa}_1, \dots, \vec{\kappa}_k\rangle$
be an orthogonal basis for some subspace of $\mathbb{R}^n$.
Prove that for any $\vec{v}$ in the subspace,
the $i$-th component of the representation $\operatorname{Rep}_B(\vec{v})$
is the scalar coefficient $(\vec{v} \cdot \vec{\kappa}_i)/(\vec{\kappa}_i \cdot \vec{\kappa}_i)$
from $\operatorname{proj}_{[\vec{\kappa}_i]}(\vec{v})$.
- Prove that
$\vec{v} = \operatorname{proj}_{[\vec{\kappa}_1]}(\vec{v}) + \cdots + \operatorname{proj}_{[\vec{\kappa}_k]}(\vec{v})$.
- Answer
- The representation can be done by eye.
The two projections are also easy.
- As above, the representation can be done by eye
and the two projections are easy.
Note the recurrence of the same two coefficients.
- Represent $\vec{v}$ with respect to the basis,
\[\vec{v} = r_1\vec{\kappa}_1 + \cdots + r_k\vec{\kappa}_k\]
so that $\operatorname{Rep}_B(\vec{v})$ has $r_i$ as its $i$-th component.
To determine $r_i$,
take the dot product of both sides with $\vec{\kappa}_i$:
\[\vec{v} \cdot \vec{\kappa}_i = r_1(\vec{\kappa}_1 \cdot \vec{\kappa}_i) + \cdots + r_k(\vec{\kappa}_k \cdot \vec{\kappa}_i) = r_i(\vec{\kappa}_i \cdot \vec{\kappa}_i)\]
because the basis is orthogonal.
Solving for $r_i$ yields the desired coefficient, $r_i = (\vec{v} \cdot \vec{\kappa}_i)/(\vec{\kappa}_i \cdot \vec{\kappa}_i)$.
- This is a restatement of the prior item.
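Spelling that restatement out: by the prior item each coefficient is $r_i = (\vec{v} \cdot \vec{\kappa}_i)/(\vec{\kappa}_i \cdot \vec{\kappa}_i)$, so each term of the representation is
\[r_i\vec{\kappa}_i = \frac{\vec{v} \cdot \vec{\kappa}_i}{\vec{\kappa}_i \cdot \vec{\kappa}_i}\,\vec{\kappa}_i = \operatorname{proj}_{[\vec{\kappa}_i]}(\vec{v})\]
and summing over $i$ gives $\vec{v} = \operatorname{proj}_{[\vec{\kappa}_1]}(\vec{v}) + \cdots + \operatorname{proj}_{[\vec{\kappa}_k]}(\vec{v})$.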
- Problem 10
Bessel's Inequality.
Consider these orthonormal sets
along with the vector $\vec{v}$ whose components are
, , , and .
- Find the coefficient $c_1$ for the projection of $\vec{v}$
onto the span of the vector in the first set.
Check that $c_1^2 \leq \|\vec{v}\|^2$.
- Find the coefficients $c_1$ and $c_2$ for the projection of $\vec{v}$
onto the spans of the two vectors in the second set.
Check that $c_1^2 + c_2^2 \leq \|\vec{v}\|^2$.
- Find $c_1$, $c_2$, and $c_3$ associated with the vectors in
the third set, and $c_1$, $c_2$, $c_3$, and $c_4$ for the vectors in the fourth set.
Check that $c_1^2 + c_2^2 + c_3^2 \leq \|\vec{v}\|^2$
and that $c_1^2 + c_2^2 + c_3^2 + c_4^2 \leq \|\vec{v}\|^2$.
Show that this holds in general: where
$\langle\vec{\beta}_1, \dots, \vec{\beta}_k\rangle$
is an orthonormal set and $c_i$ is the coefficient of
the projection of a vector $\vec{v}$ from the space
onto the span of $\vec{\beta}_i$, then
\[c_1^2 + \cdots + c_k^2 \leq \|\vec{v}\|^2\]
Hint. One way is to look at the inequality
$0 \leq \|\vec{v} - (c_1\vec{\beta}_1 + \cdots + c_k\vec{\beta}_k)\|^2$
and expand the $c_i$'s.
- Answer
First, .
-
- ,
- , , ,
For the proof, we will do only the $k = 2$ case
because the completely general case is messier but no more enlightening.
We follow the hint
(recall that for any vector $\vec{w}$ we have
$\|\vec{w}\|^2 = \vec{w} \cdot \vec{w}$).
\[0 \leq \bigl\|\vec{v} - (c_1\vec{\beta}_1 + c_2\vec{\beta}_2)\bigr\|^2 = \vec{v} \cdot \vec{v} - 2\,\vec{v} \cdot (c_1\vec{\beta}_1 + c_2\vec{\beta}_2) + (c_1\vec{\beta}_1 + c_2\vec{\beta}_2) \cdot (c_1\vec{\beta}_1 + c_2\vec{\beta}_2)\]
\[= \vec{v} \cdot \vec{v} - 2\bigl(c_1(\vec{v} \cdot \vec{\beta}_1) + c_2(\vec{v} \cdot \vec{\beta}_2)\bigr) + \bigl(c_1^2(\vec{\beta}_1 \cdot \vec{\beta}_1) + c_1c_2(\vec{\beta}_1 \cdot \vec{\beta}_2) + c_2c_1(\vec{\beta}_2 \cdot \vec{\beta}_1) + c_2^2(\vec{\beta}_2 \cdot \vec{\beta}_2)\bigr)\]
(The two mixed terms in the expansion of the final dot product are zero because $\vec{\beta}_1$ and $\vec{\beta}_2$ are orthogonal.) The result now follows on gathering like terms and on recognizing that $\vec{\beta}_1 \cdot \vec{\beta}_1 = 1$ and $\vec{\beta}_2 \cdot \vec{\beta}_2 = 1$ because these vectors are given as members of an orthonormal set.
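Explicitly: since the set is orthonormal, the projection coefficients are $c_i = \vec{v} \cdot \vec{\beta}_i$, so the display reduces to
\[0 \leq \|\vec{v}\|^2 - 2(c_1^2 + c_2^2) + (c_1^2 + c_2^2) = \|\vec{v}\|^2 - (c_1^2 + c_2^2)\]
which rearranges to Bessel's Inequality $c_1^2 + c_2^2 \leq \|\vec{v}\|^2$ in the $k = 2$ case.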
- Problem 11
Prove or disprove: every vector in $\mathbb{R}^n$ is in some orthogonal
basis.
- Answer
It is true, except for the zero vector (a basis cannot contain the zero vector). Every nonzero vector in $\mathbb{R}^n$ extends to a basis with that vector listed first, and applying the Gram-Schmidt process leaves the first vector unchanged, so the resulting orthogonal basis contains it.
- Problem 12
Show that the columns of an $n \times n$ matrix form an orthonormal
set if and only if the inverse of the matrix is its transpose.
Produce such a matrix.
- Answer
The $3 \times 3$ case gives the idea.
The set
is orthonormal if and only if these nine conditions all hold
(the three conditions in the lower left are redundant but nonetheless
correct).
Those, in turn, hold if and only if
as required.
Here is an example.
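One easy such matrix (our illustrative choice; any rotation works) is the rotation of the plane through $45^\circ$:
\[Q = \frac{1}{\sqrt{2}}\begin{pmatrix}1 & -1\\ 1 & 1\end{pmatrix} \qquad Q^{\mathsf T}Q = \frac{1}{2}\begin{pmatrix}1 & 1\\ -1 & 1\end{pmatrix}\begin{pmatrix}1 & -1\\ 1 & 1\end{pmatrix} = \begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix}\]
so $Q^{-1} = Q^{\mathsf T}$, and the columns are visibly an orthonormal pair.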
- Problem 13
Does the proof of Theorem 2.2 fail to consider the
possibility that the set of vectors is empty (i.e., that $k = 0$)?
- Answer
If the set is empty then the summation on the left side is the linear combination of the empty set of vectors, which by definition adds to the zero vector. In the second sentence there is no such index at all, so the "if ... then ..." implication is vacuously true.
- Problem 14
Theorem 2.7 describes a change of basis
from any basis
to one that is orthogonal
.
Consider the change of basis matrix .
- Prove that the matrix
changing bases in the direction opposite to that of the theorem
has an upper triangular shape— all
of its entries below the main diagonal are zeros.
- Prove that the inverse of an upper triangular matrix is
also upper triangular (if the matrix is invertible, that is).
This shows that the matrix changing bases
in the direction described in the theorem is upper triangular.
- Answer
- Part of the induction argument proving
Theorem 2.7 checks that each $\vec{\kappa}_i$
is in the span of $\langle\vec{\beta}_1, \dots, \vec{\beta}_i\rangle$.
(The early cases in the proof illustrate this.)
Thus, in the change of basis matrix,
the $i$-th column
has components $i+1$ through $n$ equal to zero.
- One way to see this is to recall the computational
procedure that we use to find the inverse.
We write the matrix, write the identity matrix next to
it, and then we do Gauss-Jordan reduction.
If the matrix starts out upper triangular then the Gauss-Jordan
reduction involves only the Jordan half and these steps,
when performed on the identity, will result in an upper triangular
inverse matrix.
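To illustrate with numbers of our own choosing: Gauss-Jordan reduction of an upper triangular matrix never needs a row swap or a pivot below the diagonal, so only upper rows are combined, and the half that starts as the identity stays upper triangular.
\[\left(\begin{array}{cc|cc} 2 & 3 & 1 & 0\\ 0 & 4 & 0 & 1 \end{array}\right) \longrightarrow \left(\begin{array}{cc|cc} 1 & 0 & 1/2 & -3/8\\ 0 & 1 & 0 & 1/4 \end{array}\right)\]
So the inverse of $\bigl(\begin{smallmatrix}2&3\\0&4\end{smallmatrix}\bigr)$ is $\bigl(\begin{smallmatrix}1/2&-3/8\\0&1/4\end{smallmatrix}\bigr)$, again upper triangular.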
- Problem 15
Complete the induction argument in the proof of
Theorem 2.7.
- Answer
For the inductive step, we assume that for all $i$ with $1 \leq i \leq k$,
these three conditions are true of each $\vec{\kappa}_i$:
(i) each $\vec{\kappa}_i$ is nonzero,
(ii) each $\vec{\kappa}_i$ is a linear combination of
the vectors $\vec{\beta}_1, \dots, \vec{\beta}_i$,
and (iii) each $\vec{\kappa}_i$ is orthogonal to all of the
$\vec{\kappa}$'s prior to it (that is, $\vec{\kappa}_i \cdot \vec{\kappa}_j = 0$ for all $j < i$).
With those inductive hypotheses, consider $\vec{\kappa}_{k+1}$.
\[\vec{\kappa}_{k+1} = \vec{\beta}_{k+1} - \frac{\vec{\beta}_{k+1} \cdot \vec{\kappa}_1}{\vec{\kappa}_1 \cdot \vec{\kappa}_1}\,\vec{\kappa}_1 - \cdots - \frac{\vec{\beta}_{k+1} \cdot \vec{\kappa}_k}{\vec{\kappa}_k \cdot \vec{\kappa}_k}\,\vec{\kappa}_k\]
By the inductive assumption (ii) we can expand each $\vec{\kappa}_j$ appearing here into a linear combination of $\vec{\beta}_1, \dots, \vec{\beta}_j$. The fractions are scalars, so this is a linear combination of linear combinations of $\vec{\beta}_1, \dots, \vec{\beta}_{k+1}$; it is therefore just a linear combination of $\vec{\beta}_1, \dots, \vec{\beta}_{k+1}$.

Now, (i) it cannot sum to the zero vector, because the equation would then describe a nontrivial linear relationship among the $\vec{\beta}$'s, which are given as members of a basis (the relationship is nontrivial because the coefficient of $\vec{\beta}_{k+1}$ is $1$).

Also, (ii) the equation gives $\vec{\kappa}_{k+1}$ as a combination of $\vec{\beta}_1, \dots, \vec{\beta}_{k+1}$.

Finally, for (iii), consider $\vec{\kappa}_{k+1} \cdot \vec{\kappa}_i$ for $i \leq k$. As in the earlier cases, this dot product can be rewritten to give two kinds of terms: the term
\[\vec{\beta}_{k+1} \cdot \vec{\kappa}_i - \frac{\vec{\beta}_{k+1} \cdot \vec{\kappa}_i}{\vec{\kappa}_i \cdot \vec{\kappa}_i}\,(\vec{\kappa}_i \cdot \vec{\kappa}_i)\]
(which is zero because the projection is orthogonal) and terms of the form
\[-\frac{\vec{\beta}_{k+1} \cdot \vec{\kappa}_j}{\vec{\kappa}_j \cdot \vec{\kappa}_j}\,(\vec{\kappa}_j \cdot \vec{\kappa}_i)\]
with $j \neq i$ and $j \leq k$ (which are zero because by the hypothesis (iii) the vectors $\vec{\kappa}_j$ and $\vec{\kappa}_i$ are orthogonal).