- This exercise is recommended for all readers.
- Problem 5
Find the system of equations resulting from starting with
and making this change of variable (i.e., substitution).
- Answer
We have
which, after expanding and regrouping about the new variables, yields this.
The starting system, and the system used for the substitutions,
can be expressed in matrix language.
With this, the substitution is .
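In general terms (with the names A, C, and d chosen here for illustration, since the displayed systems are specific to the exercise): if the starting system is $A\vec{x}=\vec{d}$ and the substitution is $\vec{x}=C\vec{y}$, then substituting gives
$$A(C\vec{y})=(AC)\vec{y}=\vec{d}$$
so the resulting system is represented by the matrix product $AC$.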
- Problem 6
As Definition 2.3 points out, the matrix product
operation generalizes the dot product.
Is the dot product of a row vector
and a column vector the same as their
matrix-multiplicative product?
- Answer
Technically, no. The dot product operation yields a scalar while the matrix product yields a 1×1 matrix. However, we usually will ignore the distinction.
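For instance (the vectors here are chosen only for illustration):
$$\begin{pmatrix}1&2\end{pmatrix}\begin{pmatrix}3\\4\end{pmatrix}=\begin{pmatrix}11\end{pmatrix} \qquad\text{while}\qquad (1,2)\cdot(3,4)=11$$
The left-hand result is a $1\times 1$ matrix; the right-hand result is the scalar $11$.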
- This exercise is recommended for all readers.
- Problem 7
Represent the derivative map on
with respect to where is the natural basis
.
Show that the product of this matrix with itself is defined;
what map does it represent?
- Answer
The action of on is
, , , ... and so
this is its matrix representation.
The product of this matrix with itself is defined because the matrix is
square.
The map so represented is the composition
which is the second derivative operation.
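As an illustration, if the space is taken to be the cubic polynomials with the natural basis $B=\langle 1,x,x^{2},x^{3}\rangle$ (an assumption made here, since the exercise's display is not reproduced above), then
$$\operatorname{Rep}_{B,B}(d/dx)=\begin{pmatrix}0&1&0&0\\0&0&2&0\\0&0&0&3\\0&0&0&0\end{pmatrix} \qquad\text{and}\qquad \begin{pmatrix}0&1&0&0\\0&0&2&0\\0&0&0&3\\0&0&0&0\end{pmatrix}^{2}=\begin{pmatrix}0&0&2&0\\0&0&0&6\\0&0&0&0\\0&0&0&0\end{pmatrix}$$
and the square is indeed the representation of $d^{2}/dx^{2}$ with respect to $B,B$.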
- Problem 8
Show that composition of linear transformations on is
commutative.
Is this true for any one-dimensional space?
- Answer
It is true for all one-dimensional spaces.
Let and be transformations of a one-dimensional space.
We must show that
for all vectors.
Fix a basis for the space; with respect to it the transformations are
represented by 1×1 matrices.
Therefore, the compositions can be represented as the products of those 1×1 matrices, taken in the two orders.
Because multiplication of 1×1 matrices is just multiplication of real numbers, which commutes, these two matrices are equal, and so the compositions have the same effect on each vector in the space.
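Written out (calling the fixed basis $B$ and the transformations $f$ and $g$, with 1×1 entries $a$ and $b$; these letters are chosen here): if $\operatorname{Rep}_{B,B}(f)=(a)$ and $\operatorname{Rep}_{B,B}(g)=(b)$ then
$$\operatorname{Rep}_{B,B}(g\circ f)=(b)(a)=(ba)=(ab)=(a)(b)=\operatorname{Rep}_{B,B}(f\circ g)$$
because multiplication of real numbers commutes.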
- Problem 9
Why is matrix multiplication not defined as entry-wise multiplication?
That would be easier, and commutative too.
- Answer
It would not represent linear map composition; Theorem 2.6 would fail.
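One way to see the failure (the matrix here is chosen only for illustration): with respect to the standard basis, the map that swaps the two components of a vector of $\mathbb{R}^{2}$ is represented by
$$S=\begin{pmatrix}0&1\\1&0\end{pmatrix}$$
Its composition with itself is the identity map, represented by the identity matrix, which is the matrix product $SS$. But the entry-wise product of $S$ with itself is $S$ again, not the identity, so entry-wise multiplication does not represent the composition.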
- This exercise is recommended for all readers.
- Problem 11
- How does matrix multiplication interact with
scalar multiplication: is ?
Is ?
- How does matrix multiplication interact with
linear combinations: is ?
Is ?
- Answer
Although these can be done by going through the indices, they
are best understood in terms of the represented maps.
That is, fix spaces and bases so that the matrices
represent linear maps .
- Yes; we have both
and
(the second equality holds because of the linearity of ).
- Both answers are yes.
First, and
both send
to ; the
calculation is as in the prior item
(using the linearity of for the first one).
For the other,
and
both send
to .
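For instance, the scalar-multiplication calculation can be written out at the level of maps (the letters $g$, $h$, $r$, and $\vec{v}$ are chosen here):
$$\bigl(r\cdot(g\circ h)\bigr)(\vec{v})=r\cdot g\bigl(h(\vec{v})\bigr)=\bigl((r\cdot g)\circ h\bigr)(\vec{v}) \qquad\text{and}\qquad r\cdot g\bigl(h(\vec{v})\bigr)=g\bigl(r\cdot h(\vec{v})\bigr)=\bigl(g\circ(r\cdot h)\bigr)(\vec{v})$$
where the second chain uses the linearity of $g$.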
- Problem 12
We can ask how the matrix product
operation interacts with the transpose operation.
- Show that .
- A square matrix is symmetric if each i,j entry equals the j,i entry, that is, if the matrix equals its own transpose.
Show that
the matrices and are symmetric.
- Answer
We have not seen a map interpretation of the transpose operation, so
we will verify these by considering the entries.
- The entry of is the entry
of , which is the dot product of the
-th row of and the -th column of .
The entry of is the dot product of
the -th row of and the -th column of
, which is
the dot product of the -th column of and the
-th row of .
Dot product is commutative and so these two are equal.
- By the prior item each equals its transpose, e.g.,
.
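Written out with generic entries (the matrix names $G$ and $H$ are chosen here), the first item is the computation
$$\bigl((GH)^{\mathsf{T}}\bigr)_{i,j}=(GH)_{j,i}=\sum_{k}g_{j,k}h_{k,i}=\sum_{k}(H^{\mathsf{T}})_{i,k}(G^{\mathsf{T}})_{k,j}=(H^{\mathsf{T}}G^{\mathsf{T}})_{i,j}$$
and then the second item follows as $(HH^{\mathsf{T}})^{\mathsf{T}}=(H^{\mathsf{T}})^{\mathsf{T}}H^{\mathsf{T}}=HH^{\mathsf{T}}$, and similarly for $H^{\mathsf{T}}H$.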
- This exercise is recommended for all readers.
- Problem 14
In the proof of Theorem 2.12 some maps are used.
What are the domains and codomains?
- Answer
It doesn't matter (as long as the spaces have the appropriate
dimensions).
For associativity,
suppose that is , that is , and
that is .
We can take any dimensional space,
any dimensional space, any dimensional space, and any
dimensional space; for instance,
, , , and will do.
We can take any bases , , , and , for those spaces.
Then,
with respect to the matrix represents a linear map ,
with respect to the matrix represents a ,
and with respect to the matrix represents an .
We can use those maps in the proof.
The second half is done similarly, except that and are added
and so we must take them to represent maps with the same domain
and codomain.
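As a sketch of the dimension bookkeeping (the dimension letters here are chosen for illustration): if the three matrices are $m\times r$, $r\times n$, and $n\times k$, then they can be taken to represent maps
$$W\;\xrightarrow{\;h\;}\;X\;\xrightarrow{\;g\;}\;Y\;\xrightarrow{\;f\;}\;Z$$
where $\dim W=k$, $\dim X=n$, $\dim Y=r$, and $\dim Z=m$; for instance $W=\mathbb{R}^{k}$, $X=\mathbb{R}^{n}$, $Y=\mathbb{R}^{r}$, and $Z=\mathbb{R}^{m}$ will do.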
- Problem 15
How does matrix rank interact with matrix multiplication?
- Can the product of rank matrices have rank less
than ?
Greater?
- Show that the rank of the product of two matrices is less
than or equal to the minimum of the rank of each factor.
- Answer
- The product of rank matrices can have rank less
than or equal to but not greater than .
To see that the rank can fall,
consider the maps projecting onto
the axes.
Each is rank one but their composition
, which is the zero map, is rank zero.
That can be translated over to matrices representing those maps; see the illustration following this answer.
To prove that the product of rank matrices cannot have rank
greater than , we can apply the map result that the image of a
linearly dependent set is linearly dependent.
That is, if and both have rank
then a set in the range
of size
larger than is the image under of a set in of
size larger than and so is linearly dependent
(since the rank of is ).
Now, the image of a linearly dependent set is dependent, so any set of
size larger than in the range is dependent.
(By the way, observe that the rank of was not mentioned.
See the next part.)
- Fix spaces and bases and consider the associated linear maps
and .
Recall that the dimension of the image of a map (the map's rank) is
less than or equal to the dimension of the domain, and consider
the arrow diagram.
First, the image of must have dimension
less than or equal to the dimension of ,
by the prior sentence.
On the other hand, is a subset of
the domain of , and thus its image has dimension less than
or equal to the dimension of the domain of .
Combining those two,
the rank of a composition is less than or equal to the minimum
of the two ranks.
The matrix fact follows immediately.
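Returning to the first item: with respect to the standard bases, the projections onto the two axes are represented by the matrices below (a standard choice, written out here because the exercise's display is not reproduced above). Each factor has rank one, and the product is the zero matrix, of rank zero.
$$\begin{pmatrix}1&0\\0&0\end{pmatrix}\begin{pmatrix}0&0\\0&1\end{pmatrix}=\begin{pmatrix}0&0\\0&0\end{pmatrix}$$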
- This exercise is recommended for all readers.
- Problem 17
(This will be used in the Matrix Inverses exercises.)
Here is another property of matrix multiplication that might be
puzzling at first sight.
- Prove that the composition of the projections
onto the and axes
is the zero map even though neither one is itself the zero map.
- Prove that the composition of the derivatives
is the zero map even though neither is the zero map.
- Give a matrix equation representing the first fact.
- Give a matrix equation representing the second.
When two things multiply to give zero even though neither is zero, each is
said to be a zero divisor.
- Answer
- Either of these.
- The composition is the fifth derivative map on the space of polynomials of degree at most four, and the fifth derivative of any such polynomial is zero.
- With respect to the natural bases,
and their product (in either order) is the zero matrix.
- Where ,
and their product (in either order) is the zero matrix.
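For the last item, assuming the two derivative maps are $d^{2}/dx^{2}$ and $d^{3}/dx^{3}$ acting on the polynomials of degree at most four (consistent with the "fifth derivative" remark above, though the exercise's own display is not reproduced here), the representations with respect to the natural basis $\langle 1,x,x^{2},x^{3},x^{4}\rangle$ are
$$D_{2}=\begin{pmatrix}0&0&2&0&0\\0&0&0&6&0\\0&0&0&0&12\\0&0&0&0&0\\0&0&0&0&0\end{pmatrix} \qquad D_{3}=\begin{pmatrix}0&0&0&6&0\\0&0&0&0&24\\0&0&0&0&0\\0&0&0&0&0\\0&0&0&0&0\end{pmatrix}$$
and a direct check shows that $D_{2}D_{3}=D_{3}D_{2}$ is the $5\times 5$ zero matrix, even though neither factor is the zero matrix.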
- This exercise is recommended for all readers.
- Problem 21
- Prove that for any matrix
there are scalars
that are not all zero such that the combination
is the zero matrix
(where is the identity matrix, with 1's in its 1,1 and 2,2 entries and zeroes elsewhere; see
Problem 19).
- Let be a polynomial
.
If is a square matrix we define to be the matrix
(where is the appropriately-sized identity matrix).
Prove that for any square matrix there is a polynomial such that
is the zero matrix.
- The minimal polynomial
of a square matrix is the
polynomial of least degree, and with leading coefficient 1,
such that is the zero matrix.
Find the minimal polynomial of this matrix.
(This is the representation with respect to ,
the standard basis, of a rotation through radians
counterclockwise.)
- Answer
- The vector space of 2×2 matrices has dimension four.
The set has five
elements and thus is linearly dependent.
- Where the matrix is n×n, generalizing the argument from the prior item shows that there is such a polynomial of degree n² or less, since the set consisting of the powers of the matrix from the zeroth through the n²-th is an (n²+1)-member subset of the n²-dimensional space of n×n matrices.
- First compute the powers
(observe that rotating by three times results in a
rotation by , which is indeed what represents).
Then set
equal to the zero matrix
to get this linear system.
Apply Gaussian reduction.
Setting , , and to zero
makes and also come out to be zero, so
no degree one or degree zero polynomial will do.
Setting and to zero (and to one)
gives a linear system
that can be solved with and .
Conclusion: the polynomial
is minimal for the matrix .
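As a simpler illustration of the same method (a rotation through a quarter turn, chosen here because the computation is short; it is not the matrix of the exercise): for
$$T=\begin{pmatrix}0&-1\\1&0\end{pmatrix}$$
we have $T^{2}=-I$, so the monic polynomial $x^{2}+1$ sends $T$ to the zero matrix, while no monic polynomial of degree one or zero can, since $T+c\,I$ and $I$ are never the zero matrix. Hence $x^{2}+1$ is the minimal polynomial of this $T$.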
- Problem 23
Recall the Σ notation for the sum of the sequence of numbers
.
In this notation, the entry of the product of and
is this.
Using this notation,
- reprove that matrix multiplication is associative;
- reprove Theorem 2.6.
- Answer
- Tracing through the remark at the end of the subsection
gives that the entry of is this
(the first equality comes from using
the distributive law to multiply through
the 's, the second equality is the associative law for real
numbers, the third is the commutative law for reals,
and the fourth equality follows on using the distributive law to
factor the 's out),
which is the entry of .
- The -th component of is
and so the -th component of
is this
(the first equality holds by using the distributive law to multiply
the 's through, the second equality represents the use of
associativity of reals, the third follows by commutativity of
reals, and the fourth comes from using
the distributive law to factor the 's out).
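The chain of equalities described in the first item can be written out with generic entries (the letters $F$, $G$, $H$ are chosen here):
$$\bigl(F(GH)\bigr)_{i,j}=\sum_{k}f_{i,k}\Bigl(\sum_{t}g_{k,t}h_{t,j}\Bigr)=\sum_{k}\sum_{t}f_{i,k}g_{k,t}h_{t,j}=\sum_{t}\Bigl(\sum_{k}f_{i,k}g_{k,t}\Bigr)h_{t,j}=\bigl((FG)H\bigr)_{i,j}$$
where the middle steps use the distributive, associative, and commutative laws for real numbers.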