Section 2.6 Matrix Inverses
Subsection 2.6.1 The Assignment
- Read section 2.5 of Strang.
- Watch a video from the YouTube series Essence of Linear Algebra by user 3Blue1Brown.
- Read the following and complete the exercises below.
Subsection 2.6.2 Learning Goals
Before class, a student should be able to:
- State the definition of invertible matrix.
- Solve an equation \(Ax = b\) using the inverse of \(A\) if it exists.
- State how inverses and multiplication interact.
- Use Gauss-Jordan elimination to compute the inverse of a matrix.
- State a test for invertibility of square matrices using pivots.
Some time after class, a student should be able to:
- Describe the connection between Gauss-Jordan elimination and solving \(n\) different systems of equations.
- Describe the connection between Gauss-Jordan elimination, computing matrix inverses, and the process of elimination by matrix multiplication.
- State the definition of the determinant of a square matrix.
- State the connection between the determinant of a square matrix and invertibility.
- State the distinction between a matrix being invertible and a matrix being singular.
Subsection 2.6.3 Discussion
Subsubsection 2.6.3.1 Matrix Inverses
The main point of this section is to start focusing on the first big problem in linear algebra. How can you tell, in advance, that a system of \(n\) equations in \(n\) unknowns will have a solution?
Of course, like all things we have been studying, this will have several different faces, all of which are equivalent. The one front and center right now is this: When does an \(n \times n\) square matrix have an inverse?
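To fix ideas, recall that \(B\) is an inverse for \(A\) when \(AB = BA = I\text{.}\) For a small example (chosen just for illustration),
\[
A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}
\qquad\text{has inverse}\qquad
A^{-1} = \begin{pmatrix} -2 & 1 \\ 3/2 & -1/2 \end{pmatrix},
\]
as multiplying them in either order gives the identity. Once an inverse is in hand, the system \(Ax = b\) is solved immediately by \(x = A^{-1}b\text{.}\)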
Subsubsection 2.6.3.2 Finding an Inverse: Gauss-Jordan Elimination
There is an effective method for finding the inverse: Gauss-Jordan elimination. (The forward pass alone is often called Gaussian elimination; Gauss-Jordan carries the process all the way to reduced row-echelon form.) Essentially, you wish to solve \(n\) different systems \(Ax= b\) of size \(n\times n\) all at the same time, with specially chosen right hand sides: the columns of the identity matrix.
The process is an algorithm, so it is very specific. If you do it some other way, you aren't doing Gauss-Jordan elimination; the name refers to this exact sequence of steps.
Gauss-Jordan Elimination
- Augment: Tack on a copy of the identity matrix of the same size to the right hand side of your matrix. It should now look like \((A \mid I)\text{.}\)
-
Forward Pass:
- preparation: If necessary, use a row swap to place a non-zero entry in the upper-left position.
- make zeros: The upper left entry is our first pivot. Use the operation of adding a multiple of the first row to the other rows to kill the entries below this first pivot.
- step down: Step down to the second row and repeat the above, but ignoring rows and columns above and to the left. Repeat as necessary till you run out of rows.
- Backward Pass: This is also nested, like the forward pass, except that instead of working down and to the right, you begin at the lower right with the last pivot and work up and to the left. When complete, the matrix should have at most one non-zero entry in each row. This entry will be a pivot.
- Rescale: rescale rows to make the pivots into \(1\)'s.
At the end of the whole process, you should have something that looks like this: \((I \mid B)\text{.}\) The wonderful part: \(B\) is the inverse of \(A\text{.}\) Well, almost. The process can fail! If along the way you find that the left-hand block of your big augmented matrix doesn't have \(n\) pivots in it, then your matrix was not invertible.
What you have computed in the left hand block with the Gauss-Jordan elimination is the reduced row-echelon form of your original matrix.
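The algorithm above can be sketched in ordinary Python. This is a minimal illustration using exact fractions, not the SageMath commands discussed later, and the function name is my own; for brevity it clears entries above and below each pivot and rescales as it goes, rather than doing the forward pass, backward pass, and rescaling as three separate sweeps, but the result is the same reduced row-echelon form.

```python
from fractions import Fraction

def gauss_jordan_inverse(A):
    """Invert a square matrix by Gauss-Jordan elimination on (A | I).

    Returns the inverse as a list of rows, or None if A is singular
    (fewer than n pivots appear in the left-hand block).
    """
    n = len(A)
    # Augment: form (A | I), working with exact fractions.
    M = [[Fraction(A[i][j]) for j in range(n)]
         + [Fraction(int(i == j)) for j in range(n)]
         for i in range(n)]
    for col in range(n):
        # Preparation: swap up a row with a non-zero entry in this column.
        pivot_row = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot_row is None:
            return None  # fewer than n pivots: A is not invertible
        M[col], M[pivot_row] = M[pivot_row], M[col]
        # Rescale the pivot row so the pivot is 1.
        pivot = M[col][col]
        M[col] = [x / pivot for x in M[col]]
        # Make zeros in the rest of the pivot's column.
        for r in range(n):
            if r != col and M[r][col] != 0:
                factor = M[r][col]
                M[r] = [a - factor * b for a, b in zip(M[r], M[col])]
    # The left block is now I; the right-hand block is the inverse.
    return [row[n:] for row in M]
```

Running this on the \(2\times 2\) example from earlier reproduces its inverse, and feeding it a matrix with a repeated row returns None.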
Subsubsection 2.6.3.3 The Big Theorem: invertibility, singularity, and the determinant
What is the key?
Theorem 2.6.1.
An \(n\times n\) matrix \(A\) is invertible exactly when it has \(n\) pivots. Equivalently, its reduced row-echelon form has \(n\) non-zero entries down the diagonal. In that case, the inverse can be computed by Gauss-Jordan elimination.
This is huge. The algorithm is not difficult, and it answers an important question exactly.
Recall that we said a square matrix is singular when it does not have enough pivots. So the theorem above says that a matrix is invertible if and only if it is non-singular.
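For a quick example of the singular case (matrix chosen just for illustration), consider
\[
B = \begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix},
\qquad\text{with reduced row-echelon form}\qquad
\begin{pmatrix} 1 & 2 \\ 0 & 0 \end{pmatrix}.
\]
Only one pivot appears, not two, so \(B\) is singular and has no inverse.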
Subsubsection 2.6.3.4 A simple test
We can use the above to make a simple numerical test for when a matrix is invertible. First do the forward pass of elimination to obtain an upper triangular matrix, then take the product of the diagonal entries. This product is zero if and only if one of the diagonal entries is zero, which happens exactly when there are fewer than \(n\) pivots. (Any row swaps made along the way can flip the sign of the product, but that does not affect whether it is zero.) This product is helpful enough as a test for invertibility that it deserves its own name: the determinant. We shall learn more about this quantity later.
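The test can be sketched in plain Python along the same lines as before. The function name is my own invention; it performs only the forward pass, tracks the sign changes from row swaps, and returns the product of the diagonal entries.

```python
from fractions import Fraction

def forward_pass_determinant(A):
    """Forward elimination only, then the product of the diagonal entries.

    Each row swap flips the sign of the determinant, so we track a sign;
    the invertibility test (zero vs. non-zero) is unaffected either way.
    """
    n = len(A)
    M = [[Fraction(x) for x in row] for row in A]
    sign = 1
    for col in range(n):
        # Find a row at or below the diagonal with a non-zero entry here.
        pivot_row = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot_row is None:
            return Fraction(0)  # no pivot in this column: determinant is zero
        if pivot_row != col:
            M[col], M[pivot_row] = M[pivot_row], M[col]
            sign = -sign
        # Kill the entries below the pivot.
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            M[r] = [a - factor * b for a, b in zip(M[r], M[col])]
    det = Fraction(sign)
    for i in range(n):
        det *= M[i][i]
    return det
```

A non-zero answer means the matrix is invertible; zero means it is singular.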
Subsection 2.6.4 SageMath and Gauss-Jordan Elimination
We have already seen that SageMath has commands for constructing matrices and performing row operations. Those are the operations used to perform Gauss-Jordan Elimination. But there are several interesting and useful commands in this neighborhood we have not yet discussed.
Let us construct my favorite matrix so we have something to play with.
We can use the .is_invertible() method to check whether A is invertible. In general, this method returns True or False.
And we can get SageMath to just compute the inverse for us.
Just so we can see what happens if the matrix is not invertible, we try another matrix.
We can also ask SageMath to compute determinants with the .determinant() method.
SageMath is also capable of computing the reduced row echelon form (the “rref”) of a matrix with the appropriately named .rref() method.
The method .rref() does not change the matrix A. There is another command which will work the same way for our purposes, .echelon_form().
There is a related command which will find the rref and then update the matrix. It is called .echelonize(). Because I don't really want to mess with A, we will make a copy first.
Now we ask SageMath to print those out for us.
Now, we can be just a bit more hands-on with Gauss-Jordan elimination if we do it this way. We will combine commands we have used before to do this.
Now we do the algorithm.
That was good. But we only need the right-hand submatrix. We can get SageMath to report just that!
It is often convenient to chain methods together like this. Then you can read what happens from left to right.
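Gathering those commands in one place, such a session might look like the sketch below. It must be run inside SageMath (these methods are not plain Python), and the matrix here is a stand-in of my own choosing, not necessarily the favorite matrix above.

```python
A = matrix(QQ, [[1, 2], [3, 4]])      # a stand-in matrix over the rationals
A.is_invertible()                     # True
A.inverse()                           # the inverse of A
A.determinant()                       # -2, non-zero, so A is invertible
A.rref()                              # reduced row echelon form; A itself is unchanged
B = copy(A)
B.echelonize()                        # updates B in place
M = A.augment(identity_matrix(2))     # the augmented matrix (A | I)
M.rref().matrix_from_columns([2, 3])  # right-hand block of the rref: the inverse again
```

The last line is the method chaining mentioned above: augment, reduce, then extract, read left to right.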
Subsection 2.6.5 Exercises
Keep this in mind. The computations are simple, but tedious. Perhaps you want to use an appropriate tool.
(a)
Use Gauss-Jordan elimination to find the inverse of the matrix \(A\) below.
(b)
Use Gauss-Jordan elimination to find the inverse of the matrix \(X\) below.
(c)
Use Gauss-Jordan elimination to find the inverse of the matrix \(B\) below.
(d)
Use Gauss-Jordan elimination to find the inverse of the matrix \(B\) below.
(e)
Use Gauss-Jordan elimination to find the inverse of the matrix \(D\) below.
(f)
Suppose that for the matrix \(D\) in the last exercise we imagine solving the matrix equation \(Dx = b\) for some vector \(b\) of the appropriate size. What might one mean by the row picture in this case? What might the column picture mean?
(g)
Design a \(6 \times 6\) matrix which has the following properties:
- no entry equal to zero
- the reduced row echelon form should have exactly 5 pivots
- the 5 pivots should be different numbers
- no pair of rows should be scalar multiples of one another