then the answer comes from retaining the $M$ part and dropping the $N$ part.
When the bases are concatenated and the vector is represented with respect to the concatenation, retaining only the $M$ part gives this answer.
With these bases, the representation with respect to the concatenation is this, and so the projection is this.
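In outline, the method of these parts is this (a generic sketch; the basis names $B_M$, $B_N$, the $\vec{\beta}_i$, and the coefficients $c_i$ are illustrative, since the specific data is not reproduced here). Where $\mathbb{R}^n = M \oplus N$ with $B_M = \langle \vec{\beta}_1, \ldots, \vec{\beta}_k \rangle$ a basis for $M$,
\[
\operatorname{Rep}_{B_M \mathbin{{}^\frown} B_N}(\vec{v}\,) =
\begin{pmatrix} c_1 \\ \vdots \\ c_k \\ c_{k+1} \\ \vdots \\ c_n \end{pmatrix}
\qquad\Longrightarrow\qquad
\operatorname{proj}_{M,N}(\vec{v}\,) = c_1\vec{\beta}_1 + \cdots + c_k\vec{\beta}_k.
\]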
This exercise is recommended for all readers.
Problem 2
Find $M^{\perp}$.
Answer
As in Example 3.5, we can simplify the calculation by just finding the space of vectors perpendicular to all the vectors in $M$'s basis.
Parametrizing to get a description of $M$ gives that $M^{\perp}$ is this.
Parametrizing the one-equation linear system gives this
description.
As in the answer to the prior part, $M$ can be described as a span, and then $M^{\perp}$ is the set of vectors perpendicular to the one vector in this basis.
Parametrizing the linear requirement in the description of $M$ gives this basis.
Now, $M^{\perp}$ is the set of vectors perpendicular to (the one vector in) that basis.
(By the way, this answer checks with the first item in this
question.)
Every vector in the space is perpendicular to the zero vector, so $M^{\perp}$ is the entire space.
The appropriate description and basis for $M$ are routine. Then the calculation goes as above, and so $M^{\perp}$ is this.
The description of $M$ is easy to find by parametrizing.
Finding $M^{\perp}$ here just requires solving a linear system with two equations and parametrizing.
Here, $M$ is one-dimensional, and as a result $M^{\perp}$ is two-dimensional.
Problem 3
This subsection shows how to project orthogonally in two ways: the method of Examples 3.2 and 3.3, and the method of Theorem 3.8. To compare them, consider the plane $P$ specified by a single equation in $\mathbb{R}^3$.
Find a basis for $P$.
Find $P^{\perp}$ and a basis for $P^{\perp}$.
Represent the given vector with respect to the concatenation of the two bases from the prior item.
Find the orthogonal projection of the given vector onto $P$ by keeping only the $P$ part from the prior item.
Check that against the result from applying
Theorem 3.8.
Answer
Parametrizing the equation leads to this basis for $P$.
Because the whole space is three-dimensional and the plane is two-dimensional, the complement $P^{\perp}$ must be a line.
Anyway, the calculation as in Example 3.5 gives this basis for $P^{\perp}$.
The matrix of the projection, when applied to the vector, yields the expected result.
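For the last item, here is a sketch of the Theorem 3.8 computation, assuming (as in the text) that the theorem builds the projection from a matrix $A$ whose columns form a basis for $P$:
\[
\operatorname{proj}_{P}(\vec{v}\,) = A\,(A^{\mathsf{T}}A)^{-1}A^{\mathsf{T}}\,\vec{v}
\]
so the check amounts to forming $A$ from the basis in the first item, computing this matrix, and applying it to the given vector.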
This exercise is recommended for all readers.
Problem 4
We have three ways to find the orthogonal projection of a vector onto a line: the Definition 1.1 way from the first subsection of this section, the Examples 3.2 and 3.3 way of representing the vector with respect to a basis for the space and then keeping the $M$ part, and the way of Theorem 3.8.
For these cases, do all three ways.
Answer
Parametrizing gives this.
For the first way, we take the vector spanning the line to be this, and the formula from Definition 1.1 gives the answer.
For the second way, we find $M^{\perp}$ (as in Examples 3.5 and 3.6, we can just find the vectors perpendicular to all of the members of the basis), and representing the vector with respect to the concatenation gives this. Keeping the $M$ part yields the answer.
The third part is also a simple calculation (the matrix in the middle is $1 \times 1$, and so its inverse is also $1 \times 1$), which of course gives the same answer.
Parametrization gives this.
With that, the formula for the first way gives this.
To proceed by the second method we find $M^{\perp}$, find the representation of the given vector with respect to the concatenation of the bases of $M$ and $M^{\perp}$, and retain only the $M$ part.
Finally, for the third method, the matrix calculation
followed by matrix-vector multiplication
gives the answer.
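Schematically, with $\vec{s}$ standing for whichever vector the parametrization produced to span the line $\ell$ (generic names), the three ways are
\[
\frac{\vec{v}\cdot\vec{s}}{\vec{s}\cdot\vec{s}}\cdot\vec{s},
\qquad
\text{the $\ell$ part of } \operatorname{Rep}_{B_{\ell} \mathbin{{}^\frown} B_{\ell^{\perp}}}(\vec{v}\,),
\qquad
A(A^{\mathsf{T}}A)^{-1}A^{\mathsf{T}}\,\vec{v} \quad\text{with } A \text{ the single column } \vec{s},
\]
and the first and third agree because here the middle factor $A^{\mathsf{T}}A = \vec{s}\cdot\vec{s}$ is a $1 \times 1$ matrix.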
Problem 5
Check that the operation of Definition 3.1 is well-defined. That is, in Examples 3.2 and 3.3, doesn't the answer depend on the choice of bases?
Answer
No; a decomposition of a vector $\vec{v} = \vec{m} + \vec{n}$ into the sum of a member of $M$ and a member of $N$ does not depend on the bases chosen for the subspaces; this was shown in the Direct Sum subsection.
Problem 6
What is the orthogonal projection onto the trivial subspace?
Answer
The orthogonal projection of a vector onto a subspace is a member of that subspace. Since a trivial subspace has only one member, $\vec{0}$, the projection of any vector must equal $\vec{0}$.
Problem 7
What is the projection of $\vec{v}$ onto $M$ along $N$ if $\vec{v} \in M$?
Answer
The projection onto $M$ along $N$ of a $\vec{v} \in M$ is just $\vec{v}$. Decomposing gives $\vec{v} = \vec{v} + \vec{0}$, where $\vec{v} \in M$ and $\vec{0} \in N$, and dropping the $N$ part but retaining the $M$ part results in a projection of $\vec{v}$.
Problem 8
Show that if $M$ is a subspace with orthonormal basis $\langle \vec{\kappa}_1, \ldots, \vec{\kappa}_k \rangle$ then the orthogonal projection of $\vec{v}$ onto $M$ is this: $(\vec{v}\cdot\vec{\kappa}_1)\cdot\vec{\kappa}_1 + \cdots + (\vec{v}\cdot\vec{\kappa}_k)\cdot\vec{\kappa}_k$.
Answer
The proof of Lemma 3.7 shows that each vector $\vec{v}$ is the sum of its orthogonal projections onto the lines spanned by the basis vectors. Since the basis is orthonormal, the bottom of each fraction has $\vec{\kappa}_i \cdot \vec{\kappa}_i = 1$.
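In detail, that computation is this (using the $\vec{\kappa}_i$ notation as in the statement above):
\[
\operatorname{proj}_{M}(\vec{v}\,)
= \frac{\vec{v}\cdot\vec{\kappa}_1}{\vec{\kappa}_1\cdot\vec{\kappa}_1}\cdot\vec{\kappa}_1
+ \cdots
+ \frac{\vec{v}\cdot\vec{\kappa}_k}{\vec{\kappa}_k\cdot\vec{\kappa}_k}\cdot\vec{\kappa}_k
= (\vec{v}\cdot\vec{\kappa}_1)\cdot\vec{\kappa}_1 + \cdots + (\vec{v}\cdot\vec{\kappa}_k)\cdot\vec{\kappa}_k
\]
since each denominator $\vec{\kappa}_i\cdot\vec{\kappa}_i$ equals $1$.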
This exercise is recommended for all readers.
Problem 9
Prove that the map $p\colon V \to V$ is the projection onto $M$ along $N$ if and only if the map $\operatorname{id} - p$ is the projection onto $N$ along $M$. (Recall the definition of the difference of two maps: $(\operatorname{id} - p)(\vec{v}\,) = \vec{v} - p(\vec{v}\,)$.)
Answer
If $V = M \oplus N$ then every vector can be decomposed uniquely as $\vec{v} = \vec{m} + \vec{n}$ with $\vec{m} \in M$ and $\vec{n} \in N$. For all $\vec{v}$ the map $p$ gives $p(\vec{v}\,) = \vec{m}$ if and only if $\vec{v} - p(\vec{v}\,) = \vec{n}$, as required.
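Spelled out:
\[
p(\vec{v}\,) = \vec{m}
\quad\Longleftrightarrow\quad
(\operatorname{id} - p)(\vec{v}\,) = (\vec{m} + \vec{n}) - \vec{m} = \vec{n},
\]
and the right-hand condition says exactly that $\operatorname{id} - p$ is the projection onto $N$ along $M$.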
This exercise is recommended for all readers.
Problem 10
Show that if a vector is perpendicular to every vector in a set then
it is perpendicular to every vector in the span of that set.
Answer
Let $\vec{w}$ be perpendicular to every $\vec{s} \in S$. Then for any member $c_1\vec{s}_1 + \cdots + c_n\vec{s}_n$ of the span $[S]$ we have $\vec{w}\cdot(c_1\vec{s}_1 + \cdots + c_n\vec{s}_n) = c_1(\vec{w}\cdot\vec{s}_1) + \cdots + c_n(\vec{w}\cdot\vec{s}_n) = c_1\cdot 0 + \cdots + c_n\cdot 0 = 0$.
Problem 11
True or false: the intersection of a subspace and its orthogonal
complement is trivial.
Answer
True; a vector in the intersection is orthogonal to itself, and the only vector orthogonal to itself is the zero vector.
Problem 12
Show that the dimensions of orthogonal complements add to the dimension of the entire space.
Answer
This is immediate from the statement in Lemma 3.7 that the space is the direct sum of the two.
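Explicitly, Lemma 3.7 gives $\mathbb{R}^n = M \oplus M^{\perp}$, and the dimension of a direct sum is the sum of the dimensions:
\[
\dim(M) + \dim(M^{\perp}) = \dim(M \oplus M^{\perp}) = \dim(\mathbb{R}^n) = n.
\]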
This exercise is recommended for all readers.
Problem 13
Suppose that $\vec{v}_1, \vec{v}_2 \in \mathbb{R}^n$ are such that for all complements $M, N \subseteq \mathbb{R}^n$, the projections of $\vec{v}_1$ and $\vec{v}_2$ onto $M$ along $N$ are equal.
Must $\vec{v}_1$ equal $\vec{v}_2$?
(If so, what if we relax the condition to: all orthogonal projections of the two are equal?)
Answer
The two must be equal, even under only the seemingly weaker condition that they yield the same result on all orthogonal projections. Consider the subspace $M$ spanned by the set $\{\vec{v}_1, \vec{v}_2\}$. Since each $\vec{v}_i$ is in $M$, the orthogonal projection of $\vec{v}_1$ onto $M$ is $\vec{v}_1$ and the orthogonal projection of $\vec{v}_2$ onto $M$ is $\vec{v}_2$. For their projections onto $M$ to be equal, they must be equal.
This exercise is recommended for all readers.
Problem 14
Let $M, N$ be subspaces of $\mathbb{R}^n$.
The perp operator acts on subspaces; we can ask how it interacts
with other such operations.
Show that two perps cancel: $(M^{\perp})^{\perp} = M$.
Prove that $M \subseteq N$ implies that $N^{\perp} \subseteq M^{\perp}$.
Show that $(M + N)^{\perp} = M^{\perp} \cap N^{\perp}$.
Answer
We will show that the sets are mutually inclusive, $M \subseteq (M^{\perp})^{\perp}$ and $(M^{\perp})^{\perp} \subseteq M$.
For the first, if $\vec{m} \in M$ then by the definition of the perp operation, $\vec{m}$ is perpendicular to every $\vec{v} \in M^{\perp}$, and therefore (again by the definition of the perp operation) $\vec{m} \in (M^{\perp})^{\perp}$.
For the other direction, consider $\vec{v} \in (M^{\perp})^{\perp}$.
Lemma 3.7's proof shows that $\mathbb{R}^n = M \oplus M^{\perp}$ and that we can give an orthogonal basis $\langle \vec{\kappa}_1, \ldots, \vec{\kappa}_n \rangle$ for the space such that the first half $\langle \vec{\kappa}_1, \ldots, \vec{\kappa}_k \rangle$ is a basis for $M$ and the second half is a basis for $M^{\perp}$.
The proof also checks that each vector in the space is the sum of its orthogonal projections onto the lines spanned by these basis vectors.
Because $\vec{v} \in (M^{\perp})^{\perp}$, it is perpendicular to every vector in $M^{\perp}$, and so the projections in the second half are all zero.
Thus $\vec{v} = \operatorname{proj}_{[\vec{\kappa}_1]}(\vec{v}\,) + \cdots + \operatorname{proj}_{[\vec{\kappa}_k]}(\vec{v}\,)$, which is a linear combination of vectors from $M$, and so $\vec{v} \in M$.
(Remark.
Here is a slicker way to do the second half: write the space both as $M \oplus M^{\perp}$ and as $M^{\perp} \oplus (M^{\perp})^{\perp}$.
Because the first half showed that $M \subseteq (M^{\perp})^{\perp}$, and the prior sentence shows that the dimensions of the two subspaces $M$ and $(M^{\perp})^{\perp}$ are equal, we can conclude that $M$ equals $(M^{\perp})^{\perp}$.)
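Written out, the dimension count behind that Remark is
\[
\dim\bigl((M^{\perp})^{\perp}\bigr) = n - \dim(M^{\perp}) = n - \bigl(n - \dim(M)\bigr) = \dim(M),
\]
which together with $M \subseteq (M^{\perp})^{\perp}$ forces equality.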
Because $M \subseteq N$, any $\vec{v}$ that is perpendicular to every vector in $N$ is also perpendicular to every vector in $M$. But that sentence simply says that $N^{\perp} \subseteq M^{\perp}$.
We will again show that the sets are equal by mutual
inclusion.
The first direction is easy; any $\vec{v}$ perpendicular to every vector in $M + N$ is perpendicular to every vector of the form $\vec{m} + \vec{0}$ (that is, every vector in $M$) and every vector of the form $\vec{0} + \vec{n}$ (every vector in $N$), and so $(M + N)^{\perp} \subseteq M^{\perp} \cap N^{\perp}$.
The second direction is also routine; any vector $\vec{v} \in M^{\perp} \cap N^{\perp}$ is perpendicular to any vector of the form $\vec{m} + \vec{n}$ because $\vec{v}\cdot(\vec{m} + \vec{n}) = \vec{v}\cdot\vec{m} + \vec{v}\cdot\vec{n} = 0 + 0 = 0$.
This exercise is recommended for all readers.
Problem 15
The material in this subsection allows us to express a geometric
relationship that we have not yet seen between the rangespace and the
nullspace of a linear map.
Represent the given map with respect to the standard bases and show that the given vector is a member of the perp of the nullspace. Prove that the perp of the nullspace is equal to the span of this vector.
Generalize that to apply to any map from $\mathbb{R}^n$ to $\mathbb{R}$.
Represent the given map with respect to the standard bases and show that the two given vectors are both members of the perp of the nullspace. Prove that the perp of the nullspace is the span of these two.
(Hint.
See the third item of Problem 14.)
Generalize that to apply to any .
This, and related results, is called the Fundamental Theorem of Linear Algebra
in (Strang 1993).
Answer
The representation of the map with respect to the standard bases is this. By the definition of the map the nullspace is this set, and this second description exactly says this.
The generalization is that for any map from $\mathbb{R}^n$ to $\mathbb{R}$ there is a vector $\vec{h}$ so that the map's action is $\vec{v} \mapsto \vec{v}\cdot\vec{h}$ and the perp of the nullspace is the span of $\vec{h}$. We can prove this by, as in the prior item, representing the map with respect to the standard bases and taking $\vec{h}$ to be the column vector gotten by transposing the one row of that matrix representation.
Of course, the representation is this, and so the nullspace is this set.
That description makes clear that the two vectors are members of the perp of the nullspace, and since the perp of the nullspace is a subspace of $\mathbb{R}^n$, the span of the two vectors is a subspace of the perp of the nullspace.
To see that this containment is an equality, use the third item of Problem 14, as suggested in the hint.
As above, generalizing from the specific case is easy: for any map from $\mathbb{R}^n$ to $\mathbb{R}^m$, the matrix $H$ representing the map with respect to the standard bases describes the action as multiplication by $H$, and the description of the nullspace gives that on transposing the rows of $H$ we have that the perp of the nullspace is the span of those transposes. (In (Strang 1993), this space is described as the transpose of the row space of $H$.)
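Compactly, writing $\mathscr{N}$ for the nullspace and $\vec{h}_1, \ldots, \vec{h}_m$ for the transposed rows of $H$ (names assumed for this sketch):
\[
\mathscr{N} = \{\vec{v} \mid H\vec{v} = \vec{0}\,\}
\qquad\text{and}\qquad
\mathscr{N}^{\perp} = [\{\vec{h}_1, \ldots, \vec{h}_m\}].
\]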
Problem 16
Define a projection to be a linear transformation $t\colon V \to V$ with the property that repeating the projection does nothing more than does the projection alone: $(t \circ t)(\vec{v}\,) = t(\vec{v}\,)$ for all $\vec{v} \in V$.
Show that orthogonal projection onto a line has that property.
Show that projection along a subspace has that property.
Show that for any such $t$ there is a basis $B = \langle \vec{\beta}_1, \ldots, \vec{\beta}_n \rangle$ for $V$ such that $t(\vec{\beta}_i) = \vec{\beta}_i$ for $1 \le i \le r$ and $t(\vec{\beta}_i) = \vec{0}$ for $r < i \le n$, where $r$ is the rank of $t$.
Conclude that every projection is a projection along a subspace.
Also conclude that every projection has a representation
in block partial-identity form.
Answer
First note that if a vector $\vec{v}$ is already in the line then the orthogonal projection gives $\vec{v}$ itself.
One way to verify this is to apply the formula for projection onto the line spanned by a vector $\vec{s}$, namely $(\vec{v}\cdot\vec{s}/\vec{s}\cdot\vec{s})\cdot\vec{s}$.
Taking the line as $\{c\cdot\vec{s} \mid c \in \mathbb{R}\}$ (the $\vec{s} = \vec{0}$ case is separate but easy) gives $((c\vec{s}\cdot\vec{s})/(\vec{s}\cdot\vec{s}))\cdot\vec{s}$, which simplifies to $c\cdot\vec{s}$, as required.
Now, that answers the question because after once projecting onto
the line, the result is in that line.
The prior paragraph says that projecting onto the same line again
will have no effect.
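As a complementary check in matrix terms, here is a sketch assuming the Theorem 3.8 form of the orthogonal projection matrix, $P = A(A^{\mathsf{T}}A)^{-1}A^{\mathsf{T}}$:
\[
P^2 = A(A^{\mathsf{T}}A)^{-1}\bigl(A^{\mathsf{T}}A\bigr)(A^{\mathsf{T}}A)^{-1}A^{\mathsf{T}} = A(A^{\mathsf{T}}A)^{-1}A^{\mathsf{T}} = P
\]
so this matrix is idempotent, matching the defining property of a projection.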
The argument here is similar to the one in the prior item.
With $V = M \oplus N$, the projection of $\vec{v} = \vec{m} + \vec{n}$ is $\operatorname{proj}_{M,N}(\vec{v}\,) = \vec{m}$.
Now repeating the projection will give $\operatorname{proj}_{M,N}(\vec{m}) = \vec{m}$, as required, because the decomposition of a member of $M$ into the sum of a member of $M$ and a member of $N$ is $\vec{m} = \vec{m} + \vec{0}$.
Thus, projecting twice onto $M$ along $N$ has the same effect as projecting once.
As suggested by the prior items, the condition $t \circ t = t$ gives that $t$ leaves vectors in the rangespace unchanged, and hints that we should take $\vec{\beta}_1$, ..., $\vec{\beta}_r$ to be basis vectors for the range, that is, that we should take the rangespace of $t$ for $M$ (so that $M = \mathscr{R}(t)$).
As for the complement, we write $N$ for the nullspace of $t$ and we will show that $V = M \oplus N$.
To show this, we can show that their intersection is trivial $M \cap N = \{\vec{0}\,\}$ and that they sum to the entire space $M + N = V$.
For the first, if a vector $\vec{w}$ is in the rangespace then there is a $\vec{v}$ with $t(\vec{v}\,) = \vec{w}$, and the condition on $t$ gives that $t(\vec{w}) = (t \circ t)(\vec{v}\,) = t(\vec{v}\,) = \vec{w}$, while if that same vector is also in the nullspace then $t(\vec{w}) = \vec{0}$; together these give $\vec{w} = \vec{0}$, and so the intersection of the rangespace and nullspace is trivial.
For the second, to write an arbitrary $\vec{v}$ as the sum of a vector from the rangespace and a vector from the nullspace, the fact that the condition $t(\vec{v}\,) = t(t(\vec{v}\,))$ can be rewritten as $t(\vec{v} - t(\vec{v}\,)) = \vec{0}$ suggests taking $\vec{v} = t(\vec{v}\,) + (\vec{v} - t(\vec{v}\,))$.
So we are finished on taking a basis $B = \langle \vec{\beta}_1, \ldots, \vec{\beta}_n \rangle$ for $V$ where $\langle \vec{\beta}_1, \ldots, \vec{\beta}_r \rangle$ is a basis for the rangespace $M$ and $\langle \vec{\beta}_{r+1}, \ldots, \vec{\beta}_n \rangle$ is a basis for the nullspace $N$.
Every projection (as defined in this exercise) is
a projection onto its rangespace and along its nullspace.
This also follows immediately from the third item.
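With respect to such a basis $B$, the representation has the block partial-identity form that the exercise names (a sketch, writing $Z$ for blocks of zeroes):
\[
\operatorname{Rep}_{B,B}(t) =
\left(\begin{array}{c|c}
I & Z \\ \hline
Z & Z
\end{array}\right)
\]
where the identity block $I$ is $r \times r$.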
Problem 17
A square matrix is symmetric if each $i,j$ entry equals the $j,i$ entry (i.e., if the matrix equals its transpose).
Show that the projection matrix $A(A^{\mathsf{T}}A)^{-1}A^{\mathsf{T}}$ is symmetric. (Strang 1980)
Hint. Find properties of transposes by looking in the index
under "transpose".
Answer
For any matrix $R$ we have that $\bigl(R^{-1}\bigr)^{\mathsf{T}} = \bigl(R^{\mathsf{T}}\bigr)^{-1}$, and for any two matrices $R$, $S$ we have that $(RS)^{\mathsf{T}} = S^{\mathsf{T}}R^{\mathsf{T}}$ (provided, of course, that the inverse and product are defined).
Applying these two gives that the matrix equals its transpose.
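Carrying the computation out, and using also that $\bigl(R^{\mathsf{T}}\bigr)^{\mathsf{T}} = R$ and that $A^{\mathsf{T}}A$ is its own transpose:
\[
\bigl(A(A^{\mathsf{T}}A)^{-1}A^{\mathsf{T}}\bigr)^{\mathsf{T}}
= \bigl(A^{\mathsf{T}}\bigr)^{\mathsf{T}}\bigl((A^{\mathsf{T}}A)^{-1}\bigr)^{\mathsf{T}}A^{\mathsf{T}}
= A\bigl((A^{\mathsf{T}}A)^{\mathsf{T}}\bigr)^{-1}A^{\mathsf{T}}
= A(A^{\mathsf{T}}A)^{-1}A^{\mathsf{T}}.
\]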
Strang, Gilbert (1993), "The Fundamental Theorem of Linear Algebra", American Mathematical Monthly: 848–855.
Strang, Gilbert (1980), Linear Algebra and its Applications (2nd ed.), Harcourt Brace Jovanovich.