
Control Systems/Matrix Operations

For more about this subject, see: Linear Algebra and Engineering Analysis

Laws of Matrix Algebra


Matrices must be of compatible sizes in order for an operation to be valid:

Addition
Matrices must have the same dimensions (same number of rows, same number of columns). Matrix addition is commutative:

$A + B = B + A$
Multiplication
Matrices must have the same inner dimensions (the number of columns of the first matrix must equal the number of rows of the second matrix). For instance, if matrix A is n × m, and matrix B is m × k, then we can multiply:

$AB = C$

Where C is an n × k matrix (a worked example of the dimension rule follows this list). Matrix multiplication is not commutative:

$AB \neq BA$

Because it is not commutative, a distinction must be made between "multiplication on the left" and "multiplication on the right".
Division
There is no such thing as division in matrix algebra, although multiplication by the matrix inverse performs the same basic function. To have an inverse, a matrix must be nonsingular: it must be square and have a non-zero determinant.
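
For example (the numbers here are arbitrary), a 2 × 3 matrix can multiply a 3 × 2 matrix because the inner dimensions agree, and the product is 2 × 2:

$\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 1 \end{bmatrix} = \begin{bmatrix} 4 & 5 \\ 10 & 11 \end{bmatrix}$

Each entry of the product is the dot product of a row of the first matrix with a column of the second, which is why the inner dimensions must match.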

Transpose Matrix


The transpose of a matrix X, denoted by

$X^T$

is the matrix where the rows and columns of X are interchanged. In some instances, the transpose of a matrix is denoted by

$X'$

This shorthand notation is used when the superscript T is applied to a large number of matrices in a single equation, and the notation would become too crowded otherwise. When this notation is used in the book, derivatives will be denoted explicitly with:

$\frac{d}{dt}x(t)$
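
For example (an arbitrary 2 × 3 matrix):

$\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}^T = \begin{bmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{bmatrix}$

Note that an n × m matrix transposes into an m × n matrix.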

Determinant


The determinant of a matrix is a scalar value. It is denoted similarly to absolute value in scalars:

$|X|$

A matrix has an inverse if the matrix is square, and if the determinant of the matrix is non-zero.
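
For a 2 × 2 matrix, the determinant can be computed directly:

$\begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc$

For example, $\begin{vmatrix} 1 & 2 \\ 0 & 3 \end{vmatrix} = (1)(3) - (2)(0) = 3$, which is non-zero, so this matrix has an inverse.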

Inverse


The inverse of a matrix A, which we will denote here by "B", is any matrix that satisfies the following equation:

$AB = BA = I$

Where I is the identity matrix.

Matrices that have such a companion are known as "invertible" matrices, or "non-singular" matrices. Matrices which do not have an inverse that satisfies this equation are called "singular" or "non-invertible".

An inverse can be computed in a number of different ways:

  1. Append the matrix A with the identity matrix of the same size. Use row-reductions to make the left side of the augmented matrix an identity; the right side of the augmented matrix will then be the inverse:
     $[A \mid I] \to [I \mid A^{-1}]$
  2. The inverse matrix is given by the adjoint matrix divided by the determinant (a worked 2 × 2 example follows this list). The adjoint matrix is the transpose of the cofactor matrix.
  3. The inverse can be calculated from the Cayley-Hamilton Theorem.
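
As a worked instance of the second method, for a 2 × 2 matrix the adjoint-over-determinant formula reduces to:

$\begin{bmatrix} a & b \\ c & d \end{bmatrix}^{-1} = \frac{1}{ad - bc}\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$

So, for the (arbitrary) matrix used in the determinant example above:

$\begin{bmatrix} 1 & 2 \\ 0 & 3 \end{bmatrix}^{-1} = \frac{1}{3}\begin{bmatrix} 3 & -2 \\ 0 & 1 \end{bmatrix}$

Multiplying the matrix by this result on either side produces the identity matrix, confirming the inverse.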

Eigenvalues


The eigenvalues of a matrix, denoted by the Greek letter lambda λ, are the solutions to the characteristic equation of the matrix:

$|X - \lambda I| = 0$

Eigenvalues only exist for square matrices; non-square matrices do not have eigenvalues. If X is a real matrix, its eigenvalues are either all real, or they occur in complex conjugate pairs.
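
For example, for the (arbitrary) matrix

$X = \begin{bmatrix} 1 & 2 \\ 0 & 3 \end{bmatrix}$

the characteristic equation is $|X - \lambda I| = (1 - \lambda)(3 - \lambda) = 0$, giving the eigenvalues $\lambda_1 = 1$ and $\lambda_2 = 3$.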

Eigenvectors


The eigenvectors of a matrix are the nullspace solutions of the characteristic equation:

$(X - \lambda_i I)v_i = 0$

Where $\lambda_i$ is the i-th eigenvalue and $v_i$ is the corresponding eigenvector.

There is at least one distinct eigenvector for every distinct eigenvalue. Multiples of an eigenvector are also themselves eigenvectors. However, eigenvectors that are not linearly independent are called "non-distinct" eigenvectors, and can be ignored.
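
Continuing the example above, for $\lambda_1 = 1$, solving $(X - I)v_1 = 0$ gives $v_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$; for $\lambda_2 = 3$, solving $(X - 3I)v_2 = 0$ gives $v_2 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$. Any non-zero multiple of these vectors is also an eigenvector.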

Left-Eigenvectors


Left-eigenvectors are the left-hand nullspace solutions to the characteristic equation:

$w(X - \lambda I) = 0$

Where w is a row vector.

These are also the rows of the inverse transition matrix.
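
Continuing the same example, solving $w(X - \lambda I) = 0$ for each eigenvalue gives the left-eigenvectors $w_1 = \begin{bmatrix} 1 & -1 \end{bmatrix}$ for $\lambda_1 = 1$ and $w_2 = \begin{bmatrix} 0 & 1 \end{bmatrix}$ for $\lambda_2 = 3$. These are exactly the rows of the inverse transition matrix computed in the Transformation Matrix section below.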

Generalized Eigenvectors


In the case of repeated eigenvalues, there may not be a complete set of n distinct eigenvectors (right or left eigenvectors) associated with those eigenvalues. Generalized eigenvectors can be generated as follows:

$(X - \lambda I)v_{n+1} = v_n$

Where $v_n$ is an ordinary eigenvector or a previously found generalized eigenvector. Because generalized eigenvectors are formed in relation to another eigenvector or generalized eigenvector, they constitute an ordered set, and should not be used outside of this order.
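
For example, the (arbitrary) defective matrix

$X = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}$

has the repeated eigenvalue $\lambda = 1$ but only the single eigenvector $v_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$. Solving $(X - I)v_2 = v_1$ gives the generalized eigenvector $v_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$, and the ordered set $(v_1, v_2)$ completes the basis.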

Transformation Matrix


The transformation matrix is the matrix of all the eigenvectors, or the ordered sets of generalized eigenvectors:

$T = \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix}$

The inverse transition matrix is the matrix of the left-eigenvectors:

$T^{-1} = \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n \end{bmatrix}$

A matrix can be diagonalized by multiplying by the transition matrix:

$D = T^{-1}XT$

Or:

$X = TDT^{-1}$

Where D is a diagonal matrix with the eigenvalues on the diagonal. If the matrix has an incomplete set of eigenvectors, and therefore a set of generalized eigenvectors, the matrix cannot be diagonalized, but can be converted into Jordan canonical form:

$J = T^{-1}XT$
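
Continuing the running example from the eigenvector sections, the transition matrix and its inverse are

$T = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}, \quad T^{-1} = \begin{bmatrix} 1 & -1 \\ 0 & 1 \end{bmatrix}$

and the diagonalization works out as

$T^{-1}XT = \begin{bmatrix} 1 & -1 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & 2 \\ 0 & 3 \end{bmatrix}\begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 3 \end{bmatrix}$

with the eigenvalues appearing on the diagonal in the same order as the eigenvectors in T.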

MATLAB


The MATLAB programming environment was specially designed for matrix algebra and manipulation. The following is a brief refresher about how to manipulate matrices in MATLAB:

Addition
To add two matrices together, use a plus sign ("+"):
C = A + B;
Multiplication
To multiply two matrices together use an asterisk ("*"):
C = A * B;
If your matrices are not the correct dimensions, MATLAB will issue an error.
Transpose
To find the transpose of a matrix, use the apostrophe (" ' "):
C = A';
Note that for complex-valued matrices, the apostrophe produces the conjugate (Hermitian) transpose; the non-conjugate transpose is written A.' instead.
Determinant
To find the determinant, use the det function:
d = det(A);
Inverse
To find the inverse of a matrix, use the function inv:
C = inv(A);
Eigenvalues and Eigenvectors
To find the eigenvalues and eigenvectors of a matrix, use the eig command:
[V, D] = eig(A);
Where D is a square matrix with the eigenvalues of A in the diagonal entries, and V is the matrix whose columns are the corresponding eigenvectors. If the eigenvalues are not distinct, the eigenvectors will be repeated. The eig command will not calculate the generalized eigenvectors.
Left Eigenvectors
To find the left eigenvectors, assuming there is a complete set of distinct right-eigenvectors, we can take the inverse of the eigenvector matrix:
[V, D] = eig(A);
C = inv(V);

The rows of C will be the left-eigenvectors of the matrix A.
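
Putting these commands together, a short script along the following lines (the matrix A here is an arbitrary example) checks the diagonalization relationship from the previous sections numerically:

A = [1 2; 0 3];    % example matrix with eigenvalues 1 and 3
d = det(A);        % determinant is 3, so A is invertible
B = inv(A);        % inverse of A
[V, D] = eig(A);   % columns of V are eigenvectors, diagonal of D the eigenvalues
W = inv(V);        % rows of W are the left-eigenvectors
J = W * A * V;     % should reproduce the diagonal matrix D, up to round-off

Comparing J against D (for example with norm(J - D)) confirms the diagonalization.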

For more information about MATLAB, see the wikibook MATLAB Programming.