
Introduction to Differential Equations

Section A.4 Subspaces, dimension, and the kernel

Note: 1 lecture

Subsection A.4.1 Subspaces, basis, and dimension

We often find ourselves looking at the set of solutions of a linear equation \(L\vec{x} = \vec{0}\) for some matrix \(L\text{,}\) that is, we are interested in the kernel of \(L\text{.}\) The set of all such solutions has a nice structure: It looks and acts a lot like some Euclidean space \({\mathbb R}^k\text{.}\)
We say that a set \(S\) of vectors in \({\mathbb R}^n\) is a subspace if whenever \(\vec{x}\) and \(\vec{y}\) are members of \(S\) and \(\alpha\) is a scalar, then
\begin{equation*} \vec{x} + \vec{y}, \qquad \text{and} \qquad \alpha \vec{x} \end{equation*}
are also members of \(S\text{.}\) That is, we can add and multiply by scalars and we still land in \(S\text{.}\) So every linear combination of vectors of \(S\) is still in \(S\text{.}\) That is really what a subspace is. It is a subset where we can take linear combinations and still end up being in the subset. Consequently the span of a number of vectors is automatically a subspace.

Example A.4.1.

If we let \(S = {\mathbb R}^n\text{,}\) then this \(S\) is a subspace of \({\mathbb R}^n\text{.}\) Adding any two vectors in \({\mathbb R}^n\) gets a vector in \({\mathbb R}^n\text{,}\) and so does multiplying by scalars.
The set \(S' = \{ \vec{0} \}\text{,}\) that is, the set of the zero vector by itself, is also a subspace of \({\mathbb R}^n\text{.}\) There is only one vector in this subspace, so we only need to verify the definition for that one vector, and everything checks out: \(\vec{0}+\vec{0} = \vec{0}\) and \(\alpha \vec{0} = \vec{0}\text{.}\)
The set \(S''\) of all the vectors of the form \((a,a)\) for any real number \(a\text{,}\) such as \((1,1)\text{,}\) \((3,3)\text{,}\) or \((-0.5,-0.5)\) is a subspace of \({\mathbb R}^2\text{.}\) Adding two such vectors, say \((1,1)+(3,3) = (4,4)\) again gets a vector of the same form, and so does multiplying by a scalar, say \(8(1,1) = (8,8)\text{.}\)
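To see the two closure conditions concretely, here is a small symbolic check of the last example in Python, assuming the sympy library (an illustration of ours, not part of the text):

```python
# Sketch: S'' = { (a, a) : a real } is closed under addition and scaling.
from sympy import symbols, Matrix

a, b, alpha = symbols('a b alpha', real=True)

x = Matrix([a, a])    # a generic vector of S''
y = Matrix([b, b])    # another generic vector of S''

s = x + y             # (a + b, a + b), again of the form (c, c)
m = alpha * x         # (alpha a, alpha a), again of the form (c, c)

print(s[0] - s[1] == 0)   # True: the two entries of the sum agree
print(m[0] - m[1] == 0)   # True: the two entries of the multiple agree
```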
If \(S\) is a subspace and we can find \(k\) linearly independent vectors in \(S\)
\begin{equation*} \vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k , \end{equation*}
such that every other vector in \(S\) is a linear combination of \(\vec{v}_1, \vec{v}_2,\ldots, \vec{v}_k\text{,}\) then the set \(\{ \vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k \}\) is called a basis of \(S\text{.}\) In other words, \(S\) is the span of \(\{ \vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k \}\text{.}\) We say that \(S\) has dimension \(k\text{,}\) and we write
\begin{equation*} \dim S = k . \end{equation*}
Just as a vector in \({\mathbb R}^k\) is represented by a \(k\)-tuple of numbers, so is a vector in a \(k\)-dimensional subspace of \({\mathbb R}^n\) represented by a \(k\)-tuple of numbers, at least once we have fixed a basis. A different basis would give a different \(k\)-tuple of numbers for the same vector.
We should reiterate that while \(k\) is unique (a subspace cannot have two different dimensions), the set of basis vectors is not at all unique. There are lots of different bases for any given subspace. Finding just the right basis for a subspace is a large part of what one does in linear algebra. In fact, that is what we spend a lot of time on in linear differential equations, although at first glance it may not seem like that is what we are doing.

Example A.4.2.

The standard basis
\begin{equation*} \vec{e}_1, \vec{e}_2, \ldots, \vec{e}_n , \end{equation*}
is a basis of \({\mathbb R}^n\) (hence the name). So, as expected,
\begin{equation*} \dim {\mathbb R}^n = n . \end{equation*}
On the other hand the subspace \(\{ \vec{0} \}\) is of dimension \(0\text{.}\)
The subspace \(S''\) from Example A.4.1, that is, the set of vectors \((a,a)\text{,}\) is of dimension 1. One possible basis is simply \(\{ (1,1) \}\text{,}\) the single vector \((1,1)\text{:}\) every vector in \(S''\) can be represented by \(a (1,1) = (a,a)\text{.}\) Similarly another possible basis would be \(\{ (-1,-1) \}\text{.}\) Then the vector \((a,a)\) would be represented as \((-a) (1,1)\text{.}\)
Row and column spaces of a matrix are also examples of subspaces, as they are given as the span of vectors. We can use what we know about rank, row spaces, and column spaces from the previous section to find a basis.

Example A.4.3.

In the last section, we considered the matrix
\begin{equation*} A = \begin{bmatrix} 1 & 2 & 3 & 4 \\ 2 & 4 & 5 & 6 \\ 3 & 6 & 7 & 8 \end{bmatrix} . \end{equation*}
Using row reduction to find the pivot columns, we found
\begin{equation*} \text{column space of $A$} = \operatorname{span} \left\{ \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} , \begin{bmatrix} 3 \\ 5 \\ 7 \end{bmatrix} \right\} . \end{equation*}
In other words, we found a basis of the column space. The basis has two elements, so the column space of \(A\) is two-dimensional. Notice that the rank of \(A\) is two.
We would have followed the same procedure if we wanted to find a basis of the subspace \(X\) spanned by
\begin{equation*} \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} , \begin{bmatrix} 2 \\ 4 \\ 6 \end{bmatrix} , \begin{bmatrix} 3 \\ 5 \\ 7 \end{bmatrix} , \begin{bmatrix} 4 \\ 6 \\ 8 \end{bmatrix} . \end{equation*}
We would have simply formed the matrix \(A\) with these vectors as columns and repeated the computation above. The subspace \(X\) is then the column space of \(A\text{.}\)
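This computation is easy to reproduce on a computer. Here is a minimal sketch in Python, assuming the sympy library (the text itself does not use software): the rref() method returns the reduced row echelon form along with the indices of the pivot columns, and columnspace() returns the corresponding columns of the original matrix.

```python
# Sketch: finding a basis of the column space of A via row reduction.
from sympy import Matrix

A = Matrix([[1, 2, 3, 4],
            [2, 4, 5, 6],
            [3, 6, 7, 8]])

R, pivots = A.rref()
print(pivots)           # (0, 2): the first and third columns are the pivot columns
print(A.columnspace())  # the columns [1, 2, 3] and [3, 5, 7], a basis
print(A.rank())         # 2, the dimension of the column space
```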

Example A.4.4.

Consider the matrix
\begin{equation*} L = \begin{bmatrix} \mybxsm{1} & 2 & 0 & 0 & 3 \\ 0 & 0 & \mybxsm{1} & 0 & 4 \\ 0 & 0 & 0 & \mybxsm{1} & 5 \end{bmatrix} . \end{equation*}
Conveniently, the matrix is in reduced row echelon form. The matrix is of rank 3. The column space is the span of the pivot columns. It is the 3-dimensional space
\begin{equation*} \text{column space of $L$} = \operatorname{span} \left\{ \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} , \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} , \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \right\} = {\mathbb{R}}^3 . \end{equation*}
The row space is the 3-dimensional space
\begin{equation*} \text{row space of $L$} = \operatorname{span} \left\{ \begin{bmatrix} 1 & 2 & 0 & 0 & 3 \end{bmatrix} , \begin{bmatrix} 0 & 0 & 1 & 0 & 4 \end{bmatrix} , \begin{bmatrix} 0 & 0 & 0 & 1 & 5 \end{bmatrix} \right\} . \end{equation*}
As these vectors have 5 components, we think of the row space of \(L\) as a subspace of \({\mathbb{R}}^5\text{.}\)
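Continuing the same hedged sympy sketch, both dimensions are easy to confirm; rowspace() and columnspace() return bases as lists of vectors:

```python
# Sketch: the row space and column space of L are both 3-dimensional.
from sympy import Matrix

L = Matrix([[1, 2, 0, 0, 3],
            [0, 0, 1, 0, 4],
            [0, 0, 0, 1, 5]])

print(len(L.columnspace()))  # 3, so the column space is all of R^3
print(len(L.rowspace()))     # 3, a 3-dimensional subspace of R^5
```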
The way the dimensions worked out in the examples is not an accident. Since the number of vectors that we needed to take was always the same as the number of pivots, and the number of pivots is the rank, we get the following result: the dimension of the column space and the dimension of the row space of a matrix are both equal to its rank.

Subsection A.4.2 Kernel

The set of solutions of a linear equation \(L\vec{x} = \vec{0}\text{,}\) the kernel of \(L\text{,}\) is a subspace: If \(\vec{x}\) and \(\vec{y}\) are solutions, then
\begin{equation*} L(\vec{x}+\vec{y}) = L\vec{x}+L\vec{y} = \vec{0}+\vec{0} = \vec{0} , \qquad \text{and} \qquad L(\alpha \vec{x}) = \alpha L \vec{x} = \alpha \vec{0} = \vec{0}. \end{equation*}
So \(\vec{x}+\vec{y}\) and \(\alpha \vec{x}\) are solutions. The dimension of the kernel is called the nullity of the matrix.
The same sort of idea governs the solutions of linear differential equations. We try to describe the kernel of a linear differential operator, and as it is a subspace, we look for a basis of this kernel. Much of this book is dedicated to finding such bases.
The kernel of a matrix is the same as the kernel of its reduced row echelon form, and for a matrix in reduced row echelon form the kernel is rather easy to find. If a vector \(\vec{x}\) is multiplied by a matrix \(L\text{,}\) then each entry of \(\vec{x}\) corresponds to a column of \(L\text{,}\) namely the column that the entry multiplies. To find the kernel, pick a non-pivot column and build a vector that has a \(-1\) in the entry corresponding to this non-pivot column and zeros in the entries corresponding to all the other non-pivot columns. Then in each entry corresponding to a pivot column, put the value from the corresponding row of the chosen non-pivot column, so that the vector solves \(L \vec{x} = \vec{0}\text{.}\) This procedure is best understood by example.

Example A.4.5.

Consider
\begin{equation*} L = \begin{bmatrix} \mybxsm{1} & 2 & 0 & 0 & 3 \\ 0 & 0 & \mybxsm{1} & 0 & 4 \\ 0 & 0 & 0 & \mybxsm{1} & 5 \end{bmatrix} . \end{equation*}
This matrix is in reduced row echelon form, the pivots are marked. There are two non-pivot columns, so the kernel has dimension 2, that is, it is the span of 2 vectors. Let us find the first vector. We look at the first non-pivot column, the \(2^{\text{nd}}\) column, and we put a \(-1\) in the \(2^{\text{nd}}\) entry of our vector. We put a \(0\) in the \(5^{\text{th}}\) entry as the \(5^{\text{th}}\) column is also a non-pivot column:
\begin{equation*} \begin{bmatrix} ? \\ -1 \\ ? \\ ? \\ 0 \end{bmatrix} . \end{equation*}
Let us fill in the rest. When this vector hits the first row, we get a \(-2\) plus \(1\) times whatever the first question mark is. So make the first question mark \(2\text{.}\) For the second and third rows, it is sufficient to make the question marks zero. We are really just filling the non-pivot column into the remaining entries. Let us check, while marking which numbers went where:
\begin{equation*} \begin{bmatrix} 1 & \mybxsm{2} & 0 & 0 & 3 \\ 0 & \mybxsm{0} & 1 & 0 & 4 \\ 0 & \mybxsm{0} & 0 & 1 & 5 \end{bmatrix} \begin{bmatrix} \mybxsm{2} \\ -1 \\ \mybxsm{0} \\ \mybxsm{0} \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} . \end{equation*}
Yay! How about the second vector? We start with
\begin{equation*} \begin{bmatrix} ? \\ 0 \\ ? \\ ? \\ -1 \end{bmatrix} . \end{equation*}
We set the first question mark to 3, the second to 4, and the third to 5. Let us check, marking things as previously,
\begin{equation*} \begin{bmatrix} 1 & 2 & 0 & 0 & \mybxsm{3} \\ 0 & 0 & 1 & 0 & \mybxsm{4} \\ 0 & 0 & 0 & 1 & \mybxsm{5} \end{bmatrix} \begin{bmatrix} \mybxsm{3} \\ 0 \\ \mybxsm{4} \\ \mybxsm{5} \\ -1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} . \end{equation*}
There are two non-pivot columns, so we only need two vectors. We have found the basis of the kernel. So,
\begin{equation*} \text{kernel of $L$} = \operatorname{span} \left\{ \begin{bmatrix} 2 \\ -1 \\ 0 \\ 0 \\ 0 \end{bmatrix} , \begin{bmatrix} 3 \\ 0 \\ 4 \\ 5 \\ -1 \end{bmatrix} \right\} . \end{equation*}
In finding a basis of the kernel, we expressed all solutions of \(L \vec{x} = \vec{0}\) as linear combinations of some given vectors.
The procedure to find the basis of the kernel of a matrix \(L\text{:}\)
  1. Find the reduced row echelon form of \(L\text{.}\)
  2. Write down the basis of the kernel as above, one vector for each non-pivot column.
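For the computationally minded, here is a short Python sketch of this procedure, using sympy only for the row reduction; the helper kernel_basis is our own illustration, not a built-in routine. It follows the text's convention of a \(-1\) in the non-pivot entry. (sympy's own nullspace() method returns the same basis with the opposite sign convention.)

```python
# Sketch of the two-step procedure: row reduce, then read off one
# kernel basis vector per non-pivot column.
from sympy import Matrix, zeros

def kernel_basis(L):
    R, pivots = L.rref()                 # step 1: reduced row echelon form
    nonpivots = [j for j in range(L.cols) if j not in pivots]
    basis = []
    for j in nonpivots:                  # step 2: one vector per non-pivot column
        v = zeros(L.cols, 1)
        v[j] = -1                        # -1 in this non-pivot entry, 0 in the others
        for row, p in enumerate(pivots):
            v[p] = R[row, j]             # copy the non-pivot column into the pivot entries
        basis.append(v)
    return basis

L = Matrix([[1, 2, 0, 0, 3],
            [0, 0, 1, 0, 4],
            [0, 0, 0, 1, 5]])
for v in kernel_basis(L):
    print(v.T)         # [2, -1, 0, 0, 0] and [3, 0, 4, 5, -1]
    print((L * v).T)   # [0, 0, 0] both times, confirming L v = 0
```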
The rank of a matrix is the dimension of the column space, which is the span of the pivot columns, while a basis of the kernel has one vector for each non-pivot column. So the two numbers must add up to the number of columns. This is the Rank–Nullity theorem: if \(k\) is the nullity and \(r\) is the rank of an \(m \times n\) matrix \(A\text{,}\) then
\begin{equation*} k + r = n . \end{equation*}
The theorem is immensely useful in applications. It allows one to compute the rank \(r\) if one knows the nullity \(k\) and vice versa, without doing any extra work.
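As a quick sanity check (again a sympy sketch of ours, not part of the text), we can verify the theorem on the matrix \(L\) of Example A.4.5:

```python
# Sketch: checking rank + nullity = number of columns with sympy.
from sympy import Matrix

L = Matrix([[1, 2, 0, 0, 3],
            [0, 0, 1, 0, 4],
            [0, 0, 0, 1, 5]])

r = L.rank()             # 3, the number of pivots
k = len(L.nullspace())   # 2, the nullity (dimension of the kernel)
print(r + k == L.cols)   # True: 3 + 2 = 5, the number of columns
```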
Let us consider an example application, a simple version of the so-called Fredholm alternative. A similar result is true for differential equations. Consider
\begin{equation*} A \vec{x} = \vec{b} , \end{equation*}
where \(A\) is a square \(n \times n\) matrix. There are then two mutually exclusive possibilities:
  1. A nonzero solution \(\vec{x}\) to \(A \vec{x} = \vec{0}\) exists.
  2. The equation \(A \vec{x} = \vec{b}\) has a unique solution \(\vec{x}\) for every \(\vec{b}\text{.}\)
How does the Rank–Nullity theorem come into the picture? Well, if \(A \vec{x} = \vec{0}\) has a nonzero solution \(\vec{x}\text{,}\) then the nullity \(k\) is positive. But then the rank \(r = n-k\) must be less than \(n\text{.}\) This means that the column space of \(A\) is of dimension less than \(n\text{,}\) so it is a subspace that does not include everything in \({\mathbb{R}}^n\text{.}\) So \({\mathbb{R}}^n\) must contain some vector \(\vec{b}\) not in the column space of \(A\text{.}\) In fact, most vectors in \({\mathbb{R}}^n\) are not in the column space of \(A\text{.}\)
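As a concrete illustration (a sketch with an ad hoc \(2 \times 2\) example of ours, not from the text): when the first possibility holds, we can exhibit a right-hand side \(\vec{b}\) for which \(A \vec{x} = \vec{b}\) has no solution.

```python
# Sketch: a singular matrix realizes the first Fredholm possibility,
# so some right-hand sides b admit no solution at all.
from sympy import Matrix, linsolve, symbols

A = Matrix([[1, 2],
            [2, 4]])       # rank 1, nullity 1

print(A.nullspace())       # [Matrix([-2, 1])]: a nonzero solution of A x = 0

x1, x2 = symbols('x1 x2')
b = Matrix([1, 0])         # not in the column space, the span of (1, 2)
print(linsolve((A, b), x1, x2))   # EmptySet: A x = b has no solution
```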

Subsection A.4.3 Exercises

Exercise A.4.1.

For the following sets of vectors, find a basis for the subspace spanned by the vectors, and find the dimension of the subspace.
  1. \(\displaystyle \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} , \quad \begin{bmatrix} -1 \\ -1 \\ -1 \end{bmatrix}\)
  2. \(\displaystyle \begin{bmatrix} 1 \\ 0 \\ 5 \end{bmatrix} , \quad \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} , \quad \begin{bmatrix} 0 \\ -1 \\ 0 \end{bmatrix}\)
  3. \(\displaystyle \begin{bmatrix} -4 \\ -3 \\ 5 \end{bmatrix} , \quad \begin{bmatrix} 2 \\ 3 \\ 3 \end{bmatrix} , \quad \begin{bmatrix} 2 \\ 0 \\ 2 \end{bmatrix}\)
  4. \(\displaystyle \begin{bmatrix} 1 \\ 3 \\ 0 \end{bmatrix} , \quad \begin{bmatrix} 0 \\ 2 \\ 2 \end{bmatrix} , \quad \begin{bmatrix} -1 \\ -1 \\ 2 \end{bmatrix}\)
  5. \(\displaystyle \begin{bmatrix} 1 \\ 3 \end{bmatrix} , \quad \begin{bmatrix} 0 \\ 2 \end{bmatrix} , \quad \begin{bmatrix} -1 \\ -1 \end{bmatrix}\)
  6. \(\displaystyle \begin{bmatrix} 3 \\ 1 \\ 3 \end{bmatrix} , \quad \begin{bmatrix} 2 \\ 4 \\ -4 \end{bmatrix} , \quad \begin{bmatrix} -5 \\ -5 \\ -2 \end{bmatrix}\)

Exercise A.4.2.

For the following matrices, find a basis for the kernel (nullspace).
  1. \(\displaystyle \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 5 \\ 1 & 1 & -4 \end{bmatrix}\)
  2. \(\displaystyle \begin{bmatrix} 2 & -1 & -3 \\ 4 & 0 & -4 \\ -1 & 1 & 2 \end{bmatrix}\)
  3. \(\displaystyle \begin{bmatrix} -4 & 4 & 4 \\ -1 & 1 & 1 \\ -5 & 5 & 5 \end{bmatrix}\)
  4. \(\displaystyle \begin{bmatrix} -2 & 1 & 1 & 1 \\ -4 & 2 & 2 & 2 \\ 1 & 0 & 4 & 3 \end{bmatrix}\)

Exercise A.4.3.

Suppose a \(5 \times 5\) matrix \(A\) has rank 3. What is the nullity?

Exercise A.4.4.

Suppose that \(X\) is the set of all the vectors of \({\mathbb{R}}^3\) whose third component is zero. Is \(X\) a subspace? And if so, find a basis and the dimension.

Exercise A.4.5.

Consider a square matrix \(A\text{,}\) and suppose that \(\vec{x}\) is a nonzero vector such that \(A \vec{x} = \vec{0}\text{.}\) What does the Fredholm alternative say about the invertibility of \(A\text{?}\)

Exercise A.4.6.

Consider
\begin{equation*} M = \begin{bmatrix} 1 & 2 & 3 \\ 2 & ? & ? \\ -1 & ? & ? \end{bmatrix} . \end{equation*}
If the nullity of this matrix is 2, fill in the question marks. Hint: What is the rank?

Exercise A.4.101.

For the following sets of vectors, find a basis for the subspace spanned by the vectors, and find the dimension of the subspace.
  1. \(\displaystyle \begin{bmatrix} 1 \\ 2 \end{bmatrix} , \quad \begin{bmatrix} 1 \\ 1 \end{bmatrix}\)
  2. \(\displaystyle \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} , \quad \begin{bmatrix} 2 \\ 2 \\ 2 \end{bmatrix} , \quad \begin{bmatrix} 1 \\ 1 \\ 2 \end{bmatrix}\)
  3. \(\displaystyle \begin{bmatrix} 5 \\ 3 \\ 1 \end{bmatrix} , \quad \begin{bmatrix} 5 \\ -1 \\ 5 \end{bmatrix} , \quad \begin{bmatrix} -1 \\ 3 \\ -4 \end{bmatrix}\)
  4. \(\displaystyle \begin{bmatrix} 2 \\ 2 \\ 4 \end{bmatrix} , \quad \begin{bmatrix} 2 \\ 2 \\ 3 \end{bmatrix} , \quad \begin{bmatrix} 4 \\ 4 \\ -3 \end{bmatrix}\)
  5. \(\displaystyle \begin{bmatrix} 1 \\ 0 \end{bmatrix} , \quad \begin{bmatrix} 2 \\ 0 \end{bmatrix} , \quad \begin{bmatrix} 3 \\ 0 \end{bmatrix}\)
  6. \(\displaystyle \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} , \quad \begin{bmatrix} 2 \\ 0 \\ 0 \end{bmatrix} , \quad \begin{bmatrix} 0 \\ 1 \\ 2 \end{bmatrix}\)
Answer.
a) \(\begin{bmatrix} 1 \\ 2 \end{bmatrix} , \begin{bmatrix} 1 \\ 1 \end{bmatrix}\) dimension 2,     b) \(\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} , \begin{bmatrix} 1 \\ 1 \\ 2 \end{bmatrix}\) dimension 2,     c) \(\begin{bmatrix} 5 \\ 3 \\ 1 \end{bmatrix} , \begin{bmatrix} 5 \\ -1 \\ 5 \end{bmatrix} , \begin{bmatrix} -1 \\ 3 \\ -4 \end{bmatrix}\) dimension 3,     d) \(\begin{bmatrix} 2 \\ 2 \\ 4 \end{bmatrix} , \begin{bmatrix} 2 \\ 2 \\ 3 \end{bmatrix}\) dimension 2,     e) \(\begin{bmatrix} 1 \\ 1 \end{bmatrix}\) dimension 1,     f) \(\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} , \begin{bmatrix} 0 \\ 1 \\ 2 \end{bmatrix}\) dimension 2

Exercise A.4.102.

For the following matrices, find a basis for the kernel (nullspace).
  1. \(\displaystyle \begin{bmatrix} 2 & 6 & 1 & 9 \\ 1 & 3 & 2 & 9 \\ 3 & 9 & 0 & 9 \end{bmatrix}\)
  2. \(\displaystyle \begin{bmatrix} 2 & -2 & -5 \\ -1 & 1 & 5 \\ -5 & 5 & -3 \end{bmatrix}\)
  3. \(\displaystyle \begin{bmatrix} 1 & -5 & -4 \\ 2 & 3 & 5 \\ -3 & 5 & 2 \end{bmatrix}\)
  4. \(\displaystyle \begin{bmatrix} 0 & 4 & 4 \\ 0 & 1 & 1 \\ 0 & 5 & 5 \end{bmatrix}\)
Answer.
a) \(\begin{bmatrix} 3 \\ -1 \\ 0 \\ 0 \end{bmatrix}\text{,}\) \(\begin{bmatrix} 3 \\ 0 \\ 3 \\ -1 \end{bmatrix}\)     b) \(\begin{bmatrix} -1 \\ -1 \\ 0 \end{bmatrix}\)     c) \(\begin{bmatrix} 1 \\ 1 \\ -1 \end{bmatrix}\)     d) \(\begin{bmatrix} -1 \\ 0 \\ 0 \end{bmatrix}\text{,}\) \(\begin{bmatrix} 0 \\ 1 \\ -1 \end{bmatrix}\)

Exercise A.4.103.

Suppose the column space of a \(9 \times 5\) matrix \(A\) is of dimension 3. Find
  1. Rank of \(A\text{.}\)
  2. Nullity of \(A\text{.}\)
  3. Dimension of the row space of \(A\text{.}\)
  4. Dimension of the nullspace of \(A\text{.}\)
  5. Size of the maximum subset of linearly independent rows of \(A\text{.}\)
Answer.
a) 3     b) 2     c) 3     d) 2     e) 3