Introduction
"Matrix" may be more popularly known as a giant computer simulation, but in mathematics it is a totally different thing. To be more precise, a matrix (plural matrices) is a rectangular array of numbers. For example, below is a typical way to write a matrix, with numbers arranged in rows and columns and with round brackets around the numbers:
The above matrix has 4 rows and 4 columns, so we call it a 4 × 4 (4 by 4) matrix. Also, we can have matrices of many different shapes. The shape of a matrix is the name for the dimensions of the matrix (m by n, where m is the number of rows and n the number of columns). Here are some more examples of matrices:
This is an example of a 3 × 3 matrix:
This is an example of a 5 × 4 matrix:
This is an example of a 1 × 6 matrix:
The theory of matrices is intimately connected with that of (linear) simultaneous equations. The ancient Chinese had established a systematic way to solve simultaneous equations. The theory of simultaneous equations was furthered in the East by the Japanese mathematician Seki, and a little later by Leibniz, Newton's greatest rival. Later, Gauss (1777 - 1855), one of the three giants of modern mathematics, popularised the use of Gaussian elimination, which is a simple step-by-step algorithm for solving any number of linear simultaneous equations. By then the use of matrices to represent simultaneous equations neatly on paper (as discussed below) had become quite common.
Consider the simultaneous equations:
it has the solution x = 7 and y = 3, and the usual way to solve it is to add the two equations together to eliminate the y. Matrix theory offers us another way to solve the above simultaneous equations, via matrix multiplication (covered below). We will study the widely accepted way to multiply two matrices together. In theory, with matrix multiplication we can solve any number of simultaneous equations, but we shall mainly restrict our attention to 2 × 2 matrices. Even with that restriction, we open up doors to topics that simultaneous equations could never offer us. Two such examples are
- using matrices to solve linear recurrence relations which can be used to model population growth, and
- encrypting messages with matrices.
We shall commence our study by learning some of the more fundamental concepts of matrices. Once we have a firm grasp of the basics, we shall move on to study the real meat of this chapter, matrix multiplication.
Elements
An element of a matrix is a particular number inside the matrix, and it is uniquely located by a pair of indices, its row number and its column number. E.g. let the following matrix be denoted by A, or symbolically:
the (2,2)th entry of A is 5; the (1,1)th entry of A is 1; the (3,3)th entry of A is 9; and the (2,3)th entry of A is 8. The (i,j)th entry of A is usually denoted by aᵢ,ⱼ and the (i,j)th entry of a matrix B is usually denoted by bᵢ,ⱼ, and so on.
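To make the indexing concrete, here is a small illustrative Python sketch (the matrix below is a stand-in, not necessarily the A displayed above); note that Python lists count from 0, so the (i, j)th entry in this chapter's 1-based convention is A[i - 1][j - 1].

```python
# Illustrative 3 x 3 matrix stored as a list of rows (a stand-in example).
A = [[1, 4, 7],
     [2, 5, 8],
     [3, 6, 9]]

def entry(M, i, j):
    """Return the (i, j)th entry of M, using the 1-based convention of this chapter."""
    return M[i - 1][j - 1]

print(entry(A, 1, 1))  # 1
print(entry(A, 2, 2))  # 5
print(entry(A, 2, 3))  # 8
```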
Summary
- A matrix is a rectangular array of numbers
- An m × n matrix has m rows and n columns
- The shape of a matrix is determined by its number of rows and columns
- The (i,j)th element of a matrix is located in the ith row and the jth column
Matrix addition & Multiplication by a scalar
Matrices can be added together, but only matrices of the same shape can be added: we simply add the corresponding entries. This is very natural. E.g.
then
Similarly, matrices can be multiplied by a number. We call the number a scalar to distinguish it from a matrix. The reader need not worry about the definition here; just remember that a scalar is simply a number.
in this case the scalar value is 5. In general, when we do s × A, where s is a scalar and A a matrix, we multiply each entry of A by s.
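As a minimal illustrative sketch (matrices stored as lists of rows; the numbers are made up, not the ones displayed above), addition and scalar multiplication in Python might look like this:

```python
def mat_add(A, B):
    """Add two matrices of the same shape, entry by entry."""
    assert len(A) == len(B) and len(A[0]) == len(B[0]), "shapes must match"
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

def scalar_mult(s, A):
    """Multiply every entry of A by the scalar s."""
    return [[s * entry for entry in row] for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_add(A, B))      # [[6, 8], [10, 12]]
print(scalar_mult(5, A))  # [[5, 10], [15, 20]]
```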
Matrix Multiplication
The widely accepted way to multiply two matrices together is definitely non-intuitive. As mentioned above, multiplication can help with solving simultaneous equations. We will now give a brief outline of how this can be done. Firstly, any system of linear simultaneous equations can be written as a matrix of coefficients multiplied by a matrix of unknowns equalling a matrix of results. This description may sound a little complicated, but in symbolic form it is quite clear. The previous statement simply says that if A, x and b are matrices, then Ax = b can be used to represent some system of simultaneous equations. The beautiful thing about matrix multiplication is that some matrices can have multiplicative inverses; that is, we can multiply both sides of the equation by A⁻¹ to get x = A⁻¹b, which effectively solves the simultaneous equations.
The reader will surely come to understand matrix multiplication better as this chapter progresses. For now we should consider the simplest case of matrix multiplication, multiplying vectors. We will see a few examples and then we will explain the process of multiplication.
then
Similarly if:
then
A matrix with just one row is called a row vector; similarly, a matrix with just one column is called a column vector. When we multiply a row vector A by a column vector B, we multiply the element in the first column of A by the element in the first row of B, then add to that the product of the second column of A and the second row of B, and so on. More generally, we multiply a₁,ᵢ by bᵢ,₁ (where i ranges from 1 to n, the number of columns of A and rows of B) and sum up all of the products. Symbolically:
- (for information on the summation sign ∑, see Summation_Sign)
- where n is the number of rows/columns.
- In words: the product of a row vector and a column vector is the sum of the products of the (1,i)th entry of the row vector and the (i,1)th entry of the column vector, where i runs from 1 to the length of these vectors.
Note: The product of matrices is also a matrix. The product of a row vector and column vector is a 1 by 1 matrix, not a scalar.
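Here is a small Python sketch of this rule, with the row and column vectors stored as plain lists (the numbers are illustrative):

```python
def row_times_column(row, col):
    """Multiply a 1 x n row vector by an n x 1 column vector; the result is a 1 x 1 matrix."""
    assert len(row) == len(col), "row length must equal column height"
    total = sum(r * c for r, c in zip(row, col))
    return [[total]]  # a 1 x 1 matrix, not a bare scalar

print(row_times_column([1, 2, 3], [4, 5, 6]))  # [[32]], since 1*4 + 2*5 + 3*6 = 32
```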
Exercises
Multiply:
Multiplication of non-vector matrices
Suppose AB = C, where A, B and C are matrices.
We multiply the ith row of A with the jth column of B as if they were a row vector and a column vector; the resulting number is the (i,j)th element of C. Symbolically:
- cᵢ,ⱼ = aᵢ,₁b₁,ⱼ + aᵢ,₂b₂,ⱼ + ... + aᵢ,ₙbₙ,ⱼ
where n is the number of columns of A (which must equal the number of rows of B).
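The following is a minimal Python sketch of this rule, assuming matrices are stored as lists of rows (the example matrices are illustrative, not those from the examples below):

```python
def mat_mult(A, B):
    """Return C = AB: the (i, j)th entry of C is the ith row of A times the jth column of B."""
    rows_A, cols_A = len(A), len(A[0])
    rows_B, cols_B = len(B), len(B[0])
    assert cols_A == rows_B, "columns of A must equal rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(cols_A))
             for j in range(cols_B)]
            for i in range(rows_A)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(mat_mult(A, B))  # [[2, 1], [4, 3]]
```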
Example 1
Evaluate AB = C and BA = D, where
and
Solution
i.e.
i.e.
Example 2
Evaluate AB and BA where
Solution
Example 3
Evaluate AB and BA where
Solution
Example 4
Evaluate the following multiplication:
Solution
Note that:
is a 2 by 1 matrix and
is a 1 by 2 matrix. So the multiplication makes sense and the product should be a 2 by 2 matrix.
Example 5
Evaluate the following multiplication:
Solution
Example 6
Evaluate the following multiplication:
Solution
Example 7
Evaluate the following multiplication:
Solution
Note: Multiplication of matrices is generally not commutative, i.e. in general AB ≠ BA.
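A quick numerical check of this, using illustrative matrices and a self-contained copy of the multiplication sketch from earlier:

```python
def mat_mult(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(mat_mult(A, B))  # [[2, 1], [4, 3]]
print(mat_mult(B, A))  # [[3, 4], [1, 2]]  -> different, so AB ≠ BA for these matrices
```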
Diagonal matrices
A diagonal matrix is a matrix with zero entries everywhere except possibly down the main diagonal. Multiplying diagonal matrices is really convenient, as you only need to multiply the corresponding diagonal entries together.
Examples
The following are all diagonal matrices
Example 1
Example 2
The above examples show that if D is a diagonal matrix then Dᵏ is very easy to compute: all we need to do is take each diagonal entry to the kth power. This will be an extremely useful fact later on, when we learn how to compute the nth Fibonacci number using matrices.
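A minimal Python sketch of this fact, describing a diagonal matrix by the list of its diagonal entries (the numbers are illustrative):

```python
def diag_power(diag_entries, k):
    """Return D^k as a full matrix, where D has the given diagonal entries."""
    n = len(diag_entries)
    return [[diag_entries[i] ** k if i == j else 0 for j in range(n)]
            for i in range(n)]

print(diag_power([2, 3], 5))  # [[32, 0], [0, 243]]: each diagonal entry raised to the 5th power
```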
Exercises
1. State the dimensions of C
- a) C = AB, where A is an n × p matrix and B is a p × m matrix
- b)
2. Evaluate. Please note that in matrix multiplication (AB)C = A(BC), i.e. it does not matter which multiplication you perform first (this is proved later).
- a)
- b)
3. Perform the following multiplications:
What do you notice?
The Identity & multiplication laws
The exercise above showed us that the matrix:
is very special. It is called the 2 by 2 identity matrix. An identity matrix is a square matrix whose diagonal entries are 1's and all other entries are zero. The identity matrix, I, has the following very special properties:
- AI = A = IA
for all matrices A (of a compatible shape). We don't usually specify the shape of the identity because it's obvious from the context, and in this chapter we will only deal with the 2 by 2 identity matrix. In the real number system, the number 1 satisfies r × 1 = r = 1 × r, so it's clear that the identity matrix is analogous to "1".
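A small Python check of these properties on an illustrative 2 by 2 matrix, with a self-contained copy of the multiplication sketch:

```python
def mat_mult(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

I = [[1, 0], [0, 1]]   # the 2 by 2 identity matrix
A = [[2, 5], [7, 1]]   # any 2 by 2 matrix will do
print(mat_mult(A, I) == A)  # True: AI = A
print(mat_mult(I, A) == A)  # True: IA = A
```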
Associativity, distributivity and (non)-commutativity
Matrix multiplication is a great deal different from the multiplication we know for real numbers. So it is comforting to know that many of the laws the real numbers satisfy also carry over to the matrix world, but with one big exception: in general, AB ≠ BA.
Let A, B, and C be matrices. Associativity means
- (AB)C = A(BC)
i.e. it does not matter which multiplication you perform first; the final result is the same regardless of how the product is grouped.
On the other hand, distributivity means
- A(B + C) = AB + AC
and
- (A + B)C = AC + BC
Note: The commutative property of the real numbers (i.e. ab = ba), does not carry over to the matrix world.
Convince yourself
Let A, B and C be any 2 by 2 matrices, and let I be the identity matrix.
1. Convince yourself that in the 2 by 2 case:
- A(B + C) = AB + AC
and
- (A + B)C = AC + BC
2. Convince yourself that in the 2 by 2 case:
- A(BC) = (AB)C
3. Convince yourself that:
- AB ≠ BA
in general. When does AB = BA? Name at least one case.
Note that all of the above are true for matrices of any shape, provided the relevant sums and products are defined.
Determinant and Inverses
We shall consider the simultaneous equations:
- ax + by = α (1)
- cx + dy = β (2)
where a, b, c, d, α and β are constants. We want to determine the necessary conditions for (1) and (2) to have a unique solution for x and y. We proceed:
- Let (1') = (1) × c
- Let (2') = (2) × a
i.e.
- acx + bcy = cα (1')
- acx + ady = aβ (2')
Now
- let (3) = (2') - (1')
- (ad - bc)y = aβ - cα (3)
Now y can be uniquely determined if and only if (ad - bc) ≠ 0. So the necessary condition for (1) and (2) to have a unique solution depends on all four of the coefficients of x and y. We call this number (ad - bc) the determinant, because it tells us whether there is a unique solution to two simultaneous equations of 2 variables.
In summary
- if (ad - bc) = 0 then there is no unique solution
- if (ad - bc) ≠ 0 then there is a unique solution.
Note: unique; we cannot emphasise this word enough. If the determinant is zero, it doesn't necessarily mean that there is no solution to the simultaneous equations! Consider:
- x + y = 2
- 7x + 7y = 14
the above set of equations has determinant zero, but there is obviously a solution, namely x = y = 1. In fact there are infinitely many solutions! On the other hand, consider also:
- x + y = 1
- x + y = 2
this set of equations has determinant zero, and there is no solution at all. So if the determinant is zero, then there is either no solution or infinitely many solutions.
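A small Python sketch that computes the determinant ad - bc for the systems above and classifies them; the third system is an illustrative extra with a non-zero determinant:

```python
def det2(a, b, c, d):
    """Determinant of the coefficient matrix of  ax + by = α,  cx + dy = β."""
    return a * d - b * c

print(det2(1, 1, 7, 7))   # 0  -> no unique solution (here: infinitely many)
print(det2(1, 1, 1, 1))   # 0  -> no unique solution (here: none at all)
print(det2(1, 1, 1, -1))  # -2 -> non-zero, so exactly one solution exists
```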
Determinant of a matrix
We define the determinant of a 2 × 2 matrix A, with first row (a, b) and second row (c, d), to be
- det(A) = ad - bc
Inverses
It is perhaps not very clear at this stage what the use of det(A) is, but it is intimately connected with the idea of an inverse. Consider a number b in the real number system: it has (multiplicative) inverse 1/b, i.e. b(1/b) = (1/b)b = 1. We know that 1/b does not exist when b = 0.
In the world of matrices, a matrix A may or may not have an inverse depending on the value of the determinant det(A)! How is this so? Let's suppose A (known) does have an inverse B (i.e. AB = I = BA). So we aim to find B. Let's suppose further that
and
we need to solve four simultaneous equations to get the values of w, x, y and z in terms of a, b, c, d and det(A).
- aw + by = 1
- cw + dy = 0
- ax + bz = 0
- cx + dz = 1
the reader can try to solve the above by him/herself. The required answer is
- w = d/det(A),  x = -b/det(A),  y = -c/det(A),  z = a/det(A)
Here we assumed that A has an inverse, but this doesn't make sense if det(A) = 0, as we cannot divide by zero. So A⁻¹ (the inverse of A) exists if and only if det(A) ≠ 0.
Summary
If AB = BA = I, then we say B is the inverse of A. We denote the inverse of A by A⁻¹. The inverse of a 2 × 2 matrix A, with first row (a, b) and second row (c, d), is
- A⁻¹ = 1/(ad - bc) times the matrix with first row (d, -b) and second row (-c, a),
provided the determinant of A (that is, ad - bc) is not zero.
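A minimal Python sketch of this formula (the sample matrix is illustrative); it refuses to invert a matrix whose determinant is zero:

```python
def inverse2x2(A):
    """Inverse of a 2 x 2 matrix A = [[a, b], [c, d]], if det(A) = ad - bc is non-zero."""
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("det(A) = 0, so A has no inverse")
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

A = [[1, 2], [3, 4]]
print(inverse2x2(A))  # [[-2.0, 1.0], [1.5, -0.5]]
```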
Solving simultaneous equations
Suppose we are to solve:
- ax + by = α
- cx + dy = β
We let A be the matrix of coefficients, with first row (a, b) and second row (c, d); let w be the column vector with entries x and y, and v the column vector with entries α and β. Then we can translate the system into matrix form,
i.e.
- Aw = v
If A's determinant is not zero, then we can pre-multiply both sides by A⁻¹ (the inverse of A),
i.e.
- w = A⁻¹v
which implies that x and y are uniquely determined.
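A short Python sketch of this procedure, pre-multiplying by the 2 × 2 inverse; the system solved at the end (x + y = 10, x - y = 4) is an illustrative example whose solution is x = 7, y = 3:

```python
def solve_2x2(a, b, c, d, alpha, beta):
    """Solve ax + by = alpha, cx + dy = beta by applying the 2 x 2 inverse formula."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("determinant is zero: no unique solution")
    x = ( d * alpha - b * beta) / det   # first entry of A⁻¹ applied to (alpha, beta)
    y = (-c * alpha + a * beta) / det   # second entry
    return x, y

print(solve_2x2(1, 1, 1, -1, 10, 4))  # (7.0, 3.0), i.e. x = 7 and y = 3
```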
Examples
Find the inverse of A, if it exists
- a)
- b)
- c)
- d)
Solutions
- a)
- b)
- c) The inverse does not exist, as det(A) = 3ab - 3ab = 0
- d)
Exercises
1. Find the determinant of
- Using the determinant of A, decide whether there's a unique solution to the following simultaneous equations:
2. Suppose
- C = AB
show that
- det(C) = det(A)det(B)
for the 2 × 2 case. Note: it's true for all cases.
3. Show that if you swap the rows of A to get A′, then det(A) = -det(A′)
4. Using the result of 2
a) Prove that if:
then det(A) = det(B)
b) Prove that if:
- Aᵏ = 0
for some positive integer k, then det(A) = 0.
5. a) Compute A⁵, i.e. multiply A by itself 5 times, where
b) Find the inverse of P where
c) Verify that
d) Compute A⁵ by using parts (b) and (c).
e) Compute A¹⁰⁰
Linear recurrence relations revisited
We have already discussed linear recurrence relations in the Counting and Generating functions chapter. We shall study them again using matrices. Consider the Fibonacci numbers
- 1, 1, 2, 3, 5, 8, 13, 21...
where each number is the sum of the two preceding numbers. Let xₙ be the (n + 1)th Fibonacci number; we can write:
In fact, many linear recurrence relations can be expressed in matrix form, e.g.
can be expressed as
and therefore
So if we know how to compute the powers of matrices quickly, then we can work out the (n + 1)th Fibonacci number rather quickly!
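A minimal Python sketch of this idea, computing the matrix power naively by repeated multiplication (the fast method is the subject of the next section); the matrix [[1, 1], [1, 0]] is the standard matrix form of the Fibonacci recurrence above:

```python
def mat_mult(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def fib(n):
    """Return the nth Fibonacci number, counting 1, 1, 2, 3, 5, ... from n = 1."""
    F = [[1, 1], [1, 0]]
    power = [[1, 0], [0, 1]]        # start from the identity matrix
    for _ in range(n):
        power = mat_mult(power, F)  # after the loop, power is F raised to the nth power
    return power[0][1]              # the off-diagonal entry of this power is the nth Fibonacci number

print([fib(n) for n in range(1, 9)])  # [1, 1, 2, 3, 5, 8, 13, 21]
```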
Computing Powers Quickly
Note that from now on we emphasise that a matrix is a vector by writing an arrow on top of it.
Consider
Something interesting happens when you multiply A by either x or y (try it). In fact
and
- .
Generally, for a matrix B, if there is a vector w ≠ 0 (where 0 is the matrix with all entries zero) such that
- Bw = λw
for some scalar λ, then w is called an eigenvector of B and λ the eigenvalue of B (corresponding to w).
This is a feature of matrices that can be exploited to compute powers easily. Here's how: using A, x and y from above, we write the two pieces of information together in matrix form:
or written completely in numeral form
you are encouraged to check the above is correct. What we did was merge x and y into a single matrix, using each vector as a column; next we multiplied it by the diagonal matrix whose entries are the eigenvalues corresponding to each eigenvector.
How do we now exploit this matrix form to calculate powers of A quickly? We require a simple but ingenious step: post-multiply (i.e. multiply from the right) both sides by the inverse of
we have
Now to calculate Aⁿ, we need only do
but inverses multiply to give I, so we are left with
which is very easy to compute, since powers of a diagonal matrix are easy to compute (just raise each diagonal entry to that power).
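Here is a short Python sketch of the whole trick on an illustrative matrix (not the A used above): A = [[2, 1], [0, 3]] has eigenvectors (1, 0) and (1, 1) with eigenvalues 2 and 3, so Aⁿ = PDⁿP⁻¹ with P built from the eigenvectors:

```python
def mat_mult(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

n = 5
P     = [[1, 1], [0, 1]]             # eigenvectors (1, 0) and (1, 1) as columns
P_inv = [[1, -1], [0, 1]]            # inverse of P
D_n   = [[2 ** n, 0], [0, 3 ** n]]   # Dⁿ: just raise each eigenvalue to the nth power

A_n = mat_mult(mat_mult(P, D_n), P_inv)
print(A_n)  # [[32, 211], [0, 243]], which is A⁵ for A = [[2, 1], [0, 3]]
```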
Example 1
Compute A⁵ where A is given above.
Solution
We do
Example 2
Let
and its eigenvectors are
- and
Calculate B⁵ directly (optional), and again using the method above.
Solution
We need to first determine its eigenvalues. We do
so the eigenvalue corresponding to
is 1.
Similarly,
so the other eigenvalue is 3.
Now we write them in the form:
now make B the subject
Now
- so multiplying the right hand side out, we get
Summary: compute powers quickly
Given the eigenvectors of a matrix A:
- Compute the eigenvalues (if not given)
- Write A in the form A = PDP⁻¹, where D is a diagonal matrix of the eigenvalues and P has the corresponding eigenvectors as its columns
- Compute Aⁿ via the right-hand side: Aⁿ = PDⁿP⁻¹
Exercises
1. The eigenvectors of
are
- and
calculate B⁵
2. The eigenvectors of
are
- and
calculate B⁵
3. The eigenvectors of
are
- and
calculate B⁵
Eigenvector and eigenvalue
We know from the above section that if we are given a matrix's eigenvectors, we can find the corresponding eigenvalues and then compute its powers quickly. So the last hurdle is finding the eigenvectors.
An eigenvector x of a matrix A and its corresponding eigenvalue λ are related by the following expression:
- Ax = λx
where x ≠ 0 and 0 is the zero matrix (all entries zero). We can safely assume that A is given, so there are two unknowns: x and λ. We have enough information now to be able to work out the eigenvalues (and from them the eigenvectors):
Rearranging, Ax = λx becomes (A - λI)x = 0. The matrix (A - λI) must NOT have an inverse, because if it did then x = (A - λI)⁻¹0 = 0. Therefore det(A - λI) = 0.
Suppose
then
Now we see that det(A - λI) is a polynomial in λ and det(A - λI) = 0. We are already well-trained in solving quadratics, so it's easy to work out the values of λ. Once we've worked out the values of λ, we can work out the corresponding eigenvectors x (see the examples).
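A small Python sketch of this step for a 2 × 2 matrix, solving the quadratic det(A - λI) = 0 directly (real eigenvalues are assumed); the sample matrix is an illustrative one with the same trace and determinant as in Example 1 below, so it gives λ = 1 and λ = 2:

```python
import math

def eigenvalues_2x2(A):
    """Solve det(A - λI) = 0, i.e. λ² - (a + d)λ + (ad - bc) = 0, assuming real roots."""
    (a, b), (c, d) = A
    trace, det = a + d, a * d - b * c
    root = math.sqrt(trace * trace - 4 * det)
    return (trace - root) / 2, (trace + root) / 2

print(eigenvalues_2x2([[-4, -6], [5, 7]]))  # (1.0, 2.0)
```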
Example 1
Find the eigenvalues and eigenvectors of
and then find D and P such that A = P⁻¹DP.
Solution
We aim to find x and λ such that
- Ax = λx
we proceed
- (A - λI)x = 0 (**)
- det(A - λI) = 0
- 0 = (-4 - λ)(7 - λ) + 30
- 0 = -28 - 3λ + λ² + 30
- 0 = λ² - 3λ + 2
- 0 = (λ - 1)(λ - 2)
- λ = 1, 2
Now for each eigenvalue we will get a different corresponding eigenvector. So we consider the case λ = 1 and λ = 2 separately.
Consider first λ = 1, from (**) we get
i.e.
where
since det(A - λI) = 0, we know that there is no unique solution to the above equation. But we note that:
for any real number t is a solution, and we choose t = 1 as our solution because it's the simplest. Therefore
is the eigenvector corresponding to λ = 1. (***)
Similarly, if λ = 2, from (**) we get
i.e.
where
we note that:
for any real number t is a solution, as before we choose t = 1 as our solution. Therefore
- is the eigenvector corresponding to λ = 2. (****)
We summarise the results of (***) and (****); we have
we combine the results into one
and so
Example 2
- a) Diagonalize A, i.e. find P (invertible) and B (diagonal) such that AP = PB
- b) Compute A5
Solution a) We are solving Ax = λx, where λ is a constant and x a column vector.
Firstly
since x ≠ 0 we have
i.e.
For λ = 3,
Clearly
is a solution. Note that we do not accept x = 0 as a solution, because we assume x ≠ 0. Note also that
for some constant t is also a solution. Indeed we could use x = y = 2, 3 or 4 as a solution, but for convenience we choose the simplest, i.e. x = y = 1.
For λ = 2,
Clearly
is a solution.
Therefore
is a solution and
is also a solution.
- b)
Example 3
Solve the linear recurrence relation
Solution
We need to diagonalize
we proceed:
we get
- λ = 2, 3
For λ = 2
For λ = 3
Therefore
Now
Therefore
i.e.
Exercises
1. Compute A⁵ where
2. Compute A⁵ where
3. Solve the following recurrence relations
Solutions
1.
2.
More Applications
...more to come
Feedback
What do you think? Too easy or too hard? Too much information or not enough? How can we improve? Please let us know by leaving a comment in the discussion tab. Better still, edit it yourself and make it better.