Section 4.1 Orthogonality and the Four Subspaces
Subsection 4.1.1 The Assignment
- Read section 4.1 of Strang.
- Read the following and complete the exercises below.
Subsection 4.1.2 Learning Goals
Before class, a student should be able to:
- Find the four subspaces associated to a matrix and verify that they are orthogonal subspaces.
- Draw the “Big Picture” associated to a matrix as a schematic drawing, with the four subspaces properly located and their dimensions identified.
Some time after class, a student should be able to:
- Find the orthogonal complement to a subspace.
- Reason about the structure of a matrix as a transformation using information about its four subspaces.
Subsection 4.1.3 Discussion: Orthogonality for subspaces
Previously, we had the notion of orthogonality for two vectors in Euclidean space. In this section, the concept gets extended to subspaces.
Definition 4.1.1.
Let \(V\) and \(W\) be subspaces of \(\mathbb{R}^n\text{.}\) We say that \(V\) and \(W\) are orthogonal when for each vector \(v \in V\) and each vector \(w \in W\) we have \(v \cdot w = 0\text{.}\)

Two orthogonal subspaces always have as their intersection the trivial subspace \(\{ 0 \}\text{.}\) The reason is that if some vector \(x\) lay in both \(V\) and \(W\text{,}\) then we must have \(x \cdot x = 0\text{.}\) (Think of the first \(x\) as lying in \(V\text{,}\) and the second in \(W\text{.}\)) But the properties of the dot product then force \(x\) to be the zero vector.
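Because the dot product is linear, orthogonality of two subspaces only needs to be checked on bases. Here is a minimal plain-Python sketch (this course's computations are done in Sage; the subspaces below are just an illustrative choice):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Bases for two subspaces of R^3: V is the xy-plane, W is the z-axis.
V_basis = [(1, 0, 0), (0, 1, 0)]
W_basis = [(0, 0, 1)]

# V and W are orthogonal iff every basis vector of V is orthogonal
# to every basis vector of W (dot products extend linearly).
orthogonal = all(dot(v, w) == 0 for v in V_basis for w in W_basis)
print(orthogonal)  # True
```

Note that checking bases suffices precisely because any \(v \in V\) is a linear combination of the basis of \(V\text{,}\) and likewise for \(W\text{.}\)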
There is a further concept:
Definition 4.1.2.
Let \(V\) be a vector subspace of \(\mathbb{R}^n\text{.}\) The orthogonal complement of \(V\) is the set
\begin{equation*}
V^{\perp} = \{ w \in \mathbb{R}^n \mid v \cdot w = 0 \text{ for all } v \in V \}\text{.}
\end{equation*}
The basic idea is that two subspaces are orthogonal complements if they are orthogonal, and together they contain enough vectors to span the entire space. The definition looks one-directional: for a subspace, you find its orthogonal complement. But really it is a complementary relationship. If \(W\) is the orthogonal complement of \(V\text{,}\) then \(V\) is the orthogonal complement of \(W\text{.}\)
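For a concrete instance, take the line spanned by \((1,1,1)\) in \(\mathbb{R}^3\text{;}\) its orthogonal complement is the plane \(x + y + z = 0\text{.}\) A small hand-worked check in plain Python (the basis for the plane is worked out by hand, not computed):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# V = span{(1, 1, 1)} in R^3.  Its orthogonal complement is the
# plane x + y + z = 0, spanned by (1, -1, 0) and (0, 1, -1).
v = (1, 1, 1)
comp_basis = [(1, -1, 0), (0, 1, -1)]

# Every basis vector of the complement is orthogonal to v ...
assert all(dot(v, w) == 0 for w in comp_basis)
# ... and the dimensions add up to dim R^3 = 3.
assert 1 + len(comp_basis) == 3
```

The dimension count at the end is the "together they span everything" half of the complementary relationship.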
Recall the four fundamental subspaces associated to an \(m\times n\) matrix \(A\text{:}\)
- The column space, \(\mathrm{col}(A)\text{,}\) spanned by all of the columns of \(A\text{.}\) This is a subspace of \(\mathbb{R}^m\text{.}\)
- The row space, \(\mathrm{row}(A)\text{,}\) spanned by all of the rows of \(A\text{.}\) This is a subspace of \(\mathbb{R}^n\text{.}\) This also happens to be the column space of \(A^T\text{.}\)
- The nullspace (or kernel), \(\mathrm{null}(A)\text{,}\) consisting of all those vectors \(x\) for which \(Ax = 0\text{.}\) This is a subspace of \(\mathbb{R}^n\text{.}\)
- The left nullspace, which is just the nullspace of \(A^T\text{.}\) This is a subspace of \(\mathbb{R}^m\text{.}\)
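To make the four subspaces concrete, here is a small rank-\(1\) example with all four bases worked out by hand (the matrix is an illustrative choice, not from the text); the plain-Python sketch just sanity-checks the arithmetic:

```python
def matvec(A, x):
    # Multiply a matrix (list of rows) by a vector.
    return tuple(sum(a * b for a, b in zip(row, x)) for row in A)

# A concrete 3x2 example of rank 1:
A = [(1, 2),
     (2, 4),
     (3, 6)]

# Bases found by row reduction, done by hand:
row_space  = [(1, 2)]                   # subspace of R^2
null_space = [(-2, 1)]                  # subspace of R^2: x + 2y = 0
col_space  = [(1, 2, 3)]                # subspace of R^3
left_null  = [(2, -1, 0), (3, 0, -1)]   # subspace of R^3: null(A^T)

# Sanity check: the nullspace vector really satisfies A x = 0.
assert matvec(A, null_space[0]) == (0, 0, 0)

# Dimension counts: dim row + dim null = n, dim col + dim left null = m.
assert len(row_space) + len(null_space) == 2
assert len(col_space) + len(left_null) == 3
```

Notice that the row space and nullspace live in \(\mathbb{R}^n\text{,}\) while the column space and left nullspace live in \(\mathbb{R}^m\text{.}\)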
And we have another big result, which is a sharpening of the Fundamental Theorem of Linear Algebra from the end of Chapter Three.
Theorem 4.1.3.
If \(A\) is an \(m\times n\) matrix, then
- The nullspace of \(A\) and the row space of \(A\) are orthogonal complements of one another.
- The column space of \(A\) and the left nullspace of \(A\) are orthogonal complements of one another.
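The theorem can be spot-checked on a small example. Below is a hedged plain-Python sketch for a \(2\times 3\) matrix of rank \(2\) (a made-up example; the bases are worked out by hand), verifying both complementary pairs:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# A 2x3 matrix of rank 2: A = [[1, 0, 1], [0, 1, 1]].
row_space  = [(1, 0, 1), (0, 1, 1)]  # in R^3
null_space = [(1, 1, -1)]            # in R^3
col_space  = [(1, 0), (0, 1)]        # all of R^2
left_null  = []                      # only the zero vector

# Each pair is orthogonal ...
assert all(dot(r, x) == 0 for r in row_space for x in null_space)
assert all(dot(c, y) == 0 for c in col_space for y in left_null)
# ... and in each pair the dimensions add up to the full space:
assert len(row_space) + len(null_space) == 3   # n = 3
assert len(col_space) + len(left_null) == 2    # m = 2
```

Since the rank is \(2\) and \(m = 2\text{,}\) the left nullspace is forced to be the zero subspace, which is exactly the situation in exercise (a) below with the roles of \(m\) and \(n\) swapped.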
Subsection 4.1.4 Sage and the Orthogonal Complement
It is not hard to find the four subspaces associated to a matrix with Sage's built-in commands. But Sage also has a general-purpose .complement() method for vector subspaces, which we can put to use here.
Those are the easy ones to find. The left nullspace is just a touch trickier. What is the deal? It is just the nullspace of the matrix \(A^T\text{,}\) of course. But by the Fundamental Theorem of Linear Algebra, the left nullspace is also the orthogonal complement of the column space. Let's see if the two computations agree.
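The original Sage cells are not reproduced here, so the following plain-Python sketch checks the same fact by hand for the rank-\(1\) matrix \(A = \begin{pmatrix} 1 & 2 \\ 2 & 4 \\ 3 & 6 \end{pmatrix}\) (an illustrative choice): the nullspace of \(A^T\) sits inside the orthogonal complement of \(\mathrm{col}(A)\text{,}\) and the dimensions match.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Columns of A = [[1, 2], [2, 4], [3, 6]].
columns = [(1, 2, 3), (2, 4, 6)]

# Basis of null(A^T), found by hand from y1 + 2*y2 + 3*y3 = 0:
left_null = [(2, -1, 0), (3, 0, -1)]

# Each basis vector of the left nullspace is orthogonal to every
# column, so null(A^T) lies inside the complement of col(A) ...
assert all(dot(y, c) == 0 for y in left_null for c in columns)
# ... and dim col(A) + dim null(A^T) = m = 3, so the two spaces
# are in fact equal.
assert 1 + len(left_null) == 3
```

In Sage itself the same comparison is a one-liner with the matrix methods and .complement(), as described above.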
That is good news! It looks like the three ways we have of computing the left nullspace agree. As a check for understanding, you should be able to ask Sage whether the row space of \(A\) is the orthogonal complement of the nullspace of \(A\text{.}\)
Subsection 4.1.5 Exercises
(a)
(Strang ex. 4.1.2) Draw the “Big Picture” for a \(3\times 2\) matrix of rank \(2\text{.}\) Which subspace has to be the zero subspace?
(b)
(Strang ex. 4.1.3) For each of the following, give an example of a matrix with the required properties, or explain why that is impossible.
- The column space contains \(\begin{pmatrix} 1 \\ 2 \\ -3\end{pmatrix}\) and \(\begin{pmatrix} 2 \\ -3 \\ 5 \end{pmatrix}\text{,}\) and the nullspace contains \(\begin{pmatrix} 1\\1\\1 \end{pmatrix}\text{.}\)
- The row space contains \(\begin{pmatrix} 1 \\ 2 \\ -3\end{pmatrix}\) and \(\begin{pmatrix} 2 \\ -3 \\ 5 \end{pmatrix}\text{,}\) and the nullspace contains \(\begin{pmatrix} 1\\1\\1 \end{pmatrix}\text{.}\)
- \(Ax = \begin{pmatrix}1 \\ 1\\ 1\end{pmatrix}\) has a solution, and \(A^T\begin{pmatrix}1 \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}\text{.}\)
- Every row is orthogonal to every column, but \(A\) is not the zero matrix.
- The columns add up to a column of zeros, and the rows add up to a row of \(1\)'s.