
High School Mathematics Extensions/Matrices


Introduction


"Matrix" may be more popularly known as a giant computer simulation, but in mathematics it is a totally different thing. To be more precise, a matrix (plural matrices) is a rectangular array of numbers. For example, below is a typical way to write a matrix, with numbers arranged in rows and columns and with round brackets around the numbers:

The above matrix has 4 rows and 4 columns, so we call it a 4 × 4 (4 by 4) matrix. Also, we can have matrices of many different shapes. The shape of a matrix is the name for the dimensions of the matrix (m by n, where m is the number of rows and n the number of columns). Here are some more examples of matrices:

This is an example of a 3 × 3 matrix:

This is an example of a 5 × 4 matrix:

This is an example of a 1 × 6 matrix:
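As an aside, a matrix is easy to model on a computer as a list of rows. Below is a minimal Python sketch (the entries are made up for illustration, not taken from the examples above) that stores a matrix and reports its shape:

# A matrix stored as a list of rows; every row must have the same length.
A = [[1, 2, 3],
     [4, 5, 6]]

m = len(A)     # number of rows
n = len(A[0])  # number of columns
print(f"This is a {m} x {n} matrix")  # This is a 2 x 3 matrix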

The theory of matrices is intimately connected with that of (linear) simultaneous equations. The ancient Chinese had established a systematic way to solve simultaneous equations. The theory of simultaneous equations was furthered in the East by the Japanese mathematician Seki, and a little later by Leibniz, Newton's greatest rival. Later, Gauss (1777 - 1855), one of the three giants of modern mathematics, popularised the use of Gaussian elimination, a simple step-by-step algorithm for solving any number of linear simultaneous equations. By then the use of matrices to represent simultaneous equations neatly on paper (as discussed above) had become quite common.

Consider the simultaneous equations:

It has the solution x = 7 and y = 3, and the usual way to solve it is to add the two equations together to eliminate y. Matrix theory offers us another way to solve the above simultaneous equations, via matrix multiplication (covered below). We will study the widely accepted way to multiply two matrices together. In theory, with matrix multiplication we can solve any number of simultaneous equations, but we shall mainly restrict our attention to 2 × 2 matrices. Even with that restriction, we open up doors to topics simultaneous equations could never offer us. Two such examples are

  1. using matrices to solve linear recurrence relations which can be used to model population growth, and
  2. encrypting messages with matrices.

We shall commence our study by learning some of the more fundamental concepts of matrices. Once we have a firm grasp of the basics, we shall move on to study the real meat of this chapter, matrix multiplication.

Elements


An element of a matrix is a particular number inside the matrix, and it is uniquely located by a pair of numbers: its row and its column. E.g. let the following matrix be denoted by A, or symbolically:

The (2,2)th entry of A is 5; the (1,1)th entry of A is 1; the (3,3)th entry of A is 9; and the (2,3)th entry of A is 8. The (i, j)th entry of A is usually denoted a_{i,j}, and the (i, j)th entry of a matrix B is usually denoted b_{i,j}, and so on.
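The same indexing idea appears in code, except that most programming languages count rows and columns from 0 rather than 1. Here is a small Python sketch; only the four entries quoted above are taken from the text, and the remaining entries are assumptions made for illustration.

# A 3 x 3 matrix stored as a list of rows. Only the entries quoted in the
# text above are known; the rest are filled in as an assumption.
A = [[1, 4, 7],
     [2, 5, 8],
     [3, 6, 9]]

def entry(M, i, j):
    """Return the (i, j)th entry of M, counting rows and columns from 1."""
    return M[i - 1][j - 1]

print(entry(A, 2, 2))  # 5
print(entry(A, 1, 1))  # 1
print(entry(A, 3, 3))  # 9
print(entry(A, 2, 3))  # 8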

Summary

  • A matrix is a rectangular array of numbers
  • An m × n matrix has m rows and n columns
  • The shape of a matrix is determined by its number of rows and columns
  • The (i,j)th element of a matrix is located in the ith row and jth column

Matrix addition & Multiplication by a scalar


Matrices can be added together, but only matrices of the same shape can be added. This is very natural. E.g.

then

Similarly, matrices can be multiplied by a number. We call the number a scalar to distinguish it from a matrix; the reader need not worry about a formal definition here, just remember that a scalar is simply a number.

In this case the scalar value is 5. In general, when we form s × A, where s is a scalar and A a matrix, we multiply each entry of A by s.
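Both rules translate directly into code. The following Python sketch adds two made-up matrices of the same shape entry by entry, and multiplies a matrix by a scalar:

def add(A, B):
    """Add two matrices of the same shape, entry by entry."""
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))]
            for i in range(len(A))]

def scale(s, A):
    """Multiply every entry of the matrix A by the scalar s."""
    return [[s * x for x in row] for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(add(A, B))    # [[6, 8], [10, 12]]
print(scale(5, A))  # [[5, 10], [15, 20]]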

Matrix Multiplication


The widely accepted way to multiply two matrices together is definitely non-intuitive. As mentioned above, multiplication can help with solving simultaneous equations. We will now give a brief outline of how this can be done. Firstly, any system of linear simultaneous equations can be written as a matrix of coefficients multiplied by a matrix of unknowns equaling a matrix of results. This description may sound a little complicated, but in symbolic form it is quite clear. The previous statement simply says that if A, x and b are matrices, then Ax = b can be used to represent some system of simultaneous equations. The beautiful thing about matrix multiplication is that some matrices can have multiplicative inverses, that is, we can multiply both sides of the equation by A^-1 to get x = A^-1 b, which effectively solves the simultaneous equations.

The reader will surely come to understand matrix multiplication better as this chapter progresses. For now we should consider the simplest case of matrix multiplication: multiplying vectors. We will see a few examples and then we will explain the process of multiplication.

then

Similarly if:

then

A matrix with just one row is called a row vector; similarly, a matrix with just one column is called a column vector. When we multiply a row vector A with a column vector B, we multiply the element in the first column of A by the element in the first row of B, and add to that the product of the second column of A and the second row of B, and so on. More generally, we multiply a_{1,i} by b_{i,1} (where i ranges from 1 to n, the number of rows/columns) and sum up all of the products. Symbolically:

(for information on the ∑ sign, see Summation_Sign)
where n is the number of rows/columns.
In words: the product of a row vector and a column vector is the sum of the products of entry (1, i) from the row vector and entry (i, 1) from the column vector, where i runs from 1 to the width/height of these vectors.

Note: The product of matrices is also a matrix. The product of a row vector and column vector is a 1 by 1 matrix, not a scalar.
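As a minimal Python sketch (with a made-up row vector and column vector), the rule is just a sum of products:

row = [[1, 2, 3]]      # a 1 x 3 row vector
col = [[4], [5], [6]]  # a 3 x 1 column vector

# Multiply a_{1,i} by b_{i,1} for i = 1, ..., n and sum the products.
n = len(row[0])
value = sum(row[0][i] * col[i][0] for i in range(n))

print([[value]])       # [[32]], a 1 x 1 matrix, since 1*4 + 2*5 + 3*6 = 32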

Exercises


Multiply:

Multiplication of non-vector matrices


Suppose AB = C, where A, B and C are matrices. We multiply the ith row of A with the jth column of B as if they are vectors, exactly as above. The resulting number is the (i,j)th element of C. Symbolically:

Example 1

Evaluate AB = C and BA = D, where

and

Solution

i.e.


i.e.

Example 2 Evaluate AB and BA where

Solution

Example 3 Evaluate AB and BA where

Solution

Example 4 Evaluate the following multiplication:

Solution

Note that:

is a 2 by 1 matrix and

is a 1 by 2 matrix. So the multiplication makes sense and the product should be a 2 by 2 matrix.

Example 5 Evaluate the following multiplication:

Solution

Example 6 Evaluate the following multiplication:

Solution

Example 7 Evaluate the following multiplication:

Solution

Note: Multiplication of matrices is generally not commutative, i.e. generally AB ≠ BA.
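To summarise the rule in code, here is a Python sketch of general matrix multiplication (the (i, j)th entry of the product is the ith row of A multiplied, vector style, with the jth column of B), applied to a made-up pair of 2 × 2 matrices for which AB and BA differ:

def multiply(A, B):
    """Multiply A (m x p) by B (p x n): entry (i, j) of the result is the
    ith row of A multiplied with the jth column of B and summed."""
    m, p, n = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(p)) for j in range(n)]
            for i in range(m)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(multiply(A, B))  # [[2, 1], [4, 3]]
print(multiply(B, A))  # [[3, 4], [1, 2]]  -- different, so AB != BA here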

Diagonal matrices


A diagonal matrix is a matrix with zero entries everywhere except possibly down the diagonal. Multiplying diagonal matrices is really convenient, as you need only to multiply the diagonal entries together.

Examples

The following are all diagonal matrices

Example 1

Example 2

The above examples show that if D is a diagonal matrix then D^k is very easy to compute: all we need to do is take the diagonal entries to the kth power. This will be an extremely useful fact later on, when we learn how to compute the nth Fibonacci number using matrices.
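A short Python sketch of this shortcut (the diagonal entries below are made up): raising a diagonal matrix to the kth power only requires raising each diagonal entry to the kth power.

def diagonal_power(D, k):
    """Raise a diagonal matrix D to the kth power by raising each diagonal
    entry to the kth power; every off-diagonal entry stays zero."""
    n = len(D)
    return [[D[i][i] ** k if i == j else 0 for j in range(n)]
            for i in range(n)]

D = [[2, 0],
     [0, 3]]
print(diagonal_power(D, 5))  # [[32, 0], [0, 243]]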

Exercises


1. State the dimensions of C

a) C = A_{n×p}B_{p×m}
b)

2. Evaluate. Please note that in matrix multiplication (AB)C = A(BC), i.e. the order in which you do the multiplications does not matter (proved later).

a)
b)

3. Perform the following multiplications:

What do you notice?

The Identity & multiplication laws


The exercise above showed us that the matrix:

is very special. It is called the 2 by 2 identity matrix. An identity matrix is a square matrix whose diagonal entries are all 1 and all other entries are zero. The identity matrix, I, has the following very special properties:

AI = A = IA for all matrices A. We don't usually specify the shape of the identity because it is obvious from the context, and in this chapter we will only deal with the 2 by 2 identity matrix. In the real number system, the number 1 satisfies r × 1 = r = 1 × r, so it is clear that the identity matrix is analogous to "1".
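As a quick numerical check (a sketch with a made-up 2 by 2 matrix A), multiplying by the identity on either side gives back A:

def multiply(X, Y):
    """2 x 2 matrix product: row of X multiplied with column of Y."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I = [[1, 0], [0, 1]]  # the 2 by 2 identity matrix
A = [[2, 3], [4, 5]]  # any 2 by 2 matrix will do; these entries are made up

print(multiply(A, I) == A)  # True: AI = A
print(multiply(I, A) == A)  # True: IA = A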

Associativity, distributivity and (non)-commutativity

Matrix multiplication is a great deal different from the multiplication we know from multiplying real numbers. So it is comforting to know that many of the laws the real numbers satisfy also carry over to the matrix world, but with one big exception: in general AB ≠ BA.

Let A, B, and C be matrices. Associativity means

(AB)C = A(BC)

i.e. the order in which you carry out the multiplications is unimportant, because the final result is the same regardless of which product you compute first.

On the other hand, distributivity means

A(B + C) = AB + AC

and

(A + B)C = AC + BC

Note: The commutative property of the real numbers (i.e. ab = ba), does not carry over to the matrix world.

Convince yourself

For all 2 by 2 matrices A, B and C, and I the identity matrix:

1. Convince yourself that in the 2 by 2 case:

A(B + C) = AB + AC

and

(A + B)C = AC + BC

2. Convince yourself that in the 2 by 2 case:

A(BC) = (AB)C

3. Convince yourself that:

AB ≠ BA

in general. When does AB = BA? Name at least one case.

Note that all of the above hold for matrices of any dimension/shape, provided the sums and products involved are defined.

Determinant and Inverses


We shall consider the simultaneous equations:

ax + by = α (1)
cx + dy = β (2)

where a, b, c, d, α and β are constants. We want to determine the necessary conditions for (1) and (2) to have a unique solution for x and y. We proceed:

Let (1') = (1) × c
Let (2') = (2) × a

i.e.

acx + bcy = cα (1')
acx + ady = aβ (2')

Now

let (3) = (2') - (1')
(ad - bc)y = aβ - cα (3)

Now y can be uniquely determined if and only if (ad - bc) ≠ 0. So the necessary condition for (1) and (2) to have a unique solution depends on all four of the coefficients of x and y. We call this number (ad - bc) the determinant, because it tells us whether there is a unique solution to two simultaneous equations in two variables. In summary:

if (ad - bc) = 0 then there is no unique solution
if (ad - bc) ≠ 0 then there is a unique solution.

Note: Unique. We cannot emphasise this word enough. If the determinant is zero, it doesn't necessarily mean that there is no solution to the simultaneous equations! Consider:

x + y = 2
7x + 7y = 14

the above set of equations has determinant zero, but there is obviously a solution, namely x = y = 1. In fact there are infinitely many solutions! On the other hand, consider also:

x + y = 1
x + y = 2

this set of equations has determinant zero, and there is no solution at all. So if the determinant is zero, then there is either no solution or infinitely many solutions.

Determinant of a matrix

We define the determinant of a 2 × 2 matrix

A =
( a  b )
( c  d )

to be

det(A) = ad - bc.
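In code the determinant of a 2 × 2 matrix is a one-line computation. The sketch below also checks the two example systems above (both have determinant zero) against a made-up system whose determinant is not zero:

def det2(A):
    """Determinant of the 2 x 2 matrix [[a, b], [c, d]], namely ad - bc."""
    (a, b), (c, d) = A
    return a * d - b * c

print(det2([[1, 1], [7, 7]]))   # 0: x + y = 2, 7x + 7y = 14 (infinitely many solutions)
print(det2([[1, 1], [1, 1]]))   # 0: x + y = 1, x + y = 2 (no solution)
print(det2([[1, 1], [1, -1]]))  # -2: non-zero, so a unique solution exists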

Inverses


It is perhaps not very clear, at this stage, what the use of det(A) is, but it is intimately connected with the idea of an inverse. Consider, in the real number system, a number b: it has (multiplicative) inverse 1/b, i.e. b(1/b) = (1/b)b = 1. We know that 1/b does not exist when b = 0.

In the world of matrices, a matrix A may or may not have an inverse depending on the value of the determinant det(A)! How is this so? Let's suppose A (known) does have an inverse B (i.e. AB = I = BA). So we aim to find B. Let's suppose further that

and

We need to solve four simultaneous equations to get the values of w, x, y and z in terms of a, b, c, d and det(A):

aw + by = 1
cw + dy = 0
ax + bz = 0
cx + dz = 1

The reader can try to solve the above by him/herself. The required answer is

w = d/(ad - bc),  x = -b/(ad - bc),  y = -c/(ad - bc),  z = a/(ad - bc)

Here we assumed that A has an inverse, but this doesn't make sense if det(A) = 0, as we cannot divide by zero. So A^-1 (the inverse of A) exists if and only if det(A) ≠ 0.

Summary

If AB = BA = I, then we say B is the inverse of A. We denote the inverse of A by A^-1. The inverse of the 2 × 2 matrix

A =
( a  b )
( c  d )

is

A^-1 = (1/(ad - bc)) ×
( d  -b )
( -c  a )

provided the determinant of A is not zero.
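Here is a Python sketch of this 2 × 2 inverse formula, refusing to proceed when the determinant is zero (the example matrix is made up):

def inverse2(A):
    """Inverse of the 2 x 2 matrix [[a, b], [c, d]], provided ad - bc != 0."""
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("determinant is zero, so no inverse exists")
    return [[d / det, -b / det],
            [-c / det, a / det]]

A = [[1, 2], [3, 4]]
print(inverse2(A))  # [[-2.0, 1.0], [1.5, -0.5]]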

Solving simultaneous equations


Suppose we are to solve:

ax + by = α
cx + dy = β

We let

We can translate it into matrix form:

i.e.

If A's determinant is not zero, then we can pre-multiply both sides by A^-1 (the inverse of A):

i.e.

which implies that x and y are unique.
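Putting the pieces together, the following Python sketch solves ax + by = α, cx + dy = β by applying the inverse of the coefficient matrix; the sample system at the end is made up, chosen to have the solution x = 7, y = 3 mentioned in the introduction.

def solve2(a, b, c, d, alpha, beta):
    """Solve ax + by = alpha, cx + dy = beta using the 2 x 2 inverse formula."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("determinant is zero: no unique solution")
    # (x, y) is A^-1 applied to the column vector (alpha, beta).
    x = (d * alpha - b * beta) / det
    y = (-c * alpha + a * beta) / det
    return x, y

# Made-up example: x + y = 10, x - y = 4 has the unique solution x = 7, y = 3.
print(solve2(1, 1, 1, -1, 10, 4))  # (7.0, 3.0)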

Examples

Find the inverse of A, if it exists

a)
b)
c)
d)

Solutions

a)
b)
c) No inverse exists, as det(A) = 3ab - 3ab = 0
d)

Exercises

1. Find the determinant of

Using the determinant of A, decide whether there is a unique solution to the following simultaneous equations:

2. Suppose

C = AB

show that

det(C) = det(A)det(B)

for the 2 × 2 case. Note: it's true for all cases.

3. Show that if you swap the rows of A to get A', then det(A) = -det(A').

4. Using the result of 2

a) Prove that if:

then det(A) = det(B)

b) Prove that if:

A^k = 0

for some positive integer k, then det(A) = 0.

5. a) Compute A^5, i.e. multiply A by itself 5 times, where

b) Find the inverse of P where

c) Verify that

d) Compute A^5 by using parts (b) and (c).

e) Compute A^100


Other Sections


Next Section > High_School_Mathematics_Extensions/Matrices/Linear_Recurrence_Relations_Revisited

Problem Set > High_School_Mathematics_Extensions/Matrices/Problem Set

Project > High_School_Mathematics_Extensions/Matrices/Project/Elementary_Matrices