Section 4.4 Matrix inverses
In Theorem 4.1.19 we saw that the system of linear equations \([A|\vec{b}]\) can be equivalently expressed as a matrix equation \(A\vec{x} = \vec{b}\text{.}\) If this equation were about numbers instead of matrices and vectors then we would easily solve the equation \(ax=b\) by dividing to get \(x = \frac{b}{a}\) (as long as \(a \neq 0\)). Dividing by \(a\) is really the same as multiplying by \(a^{-1}\text{,}\) so we could write our solution as \(x=a^{-1}b\text{.}\) Since we know how to multiply matrices, this brings up a natural question: Is there a matrix \(A^{-1}\) so that the equations \(A\vec{x}=\vec{b}\) and \(\vec{x} = A^{-1}\vec{b}\) are equivalent? Solving this problem is the goal of this section.
Subsection 4.4.1 The inverse of a square matrix
Definition 4.4.1.
Let \(A\) be an \(n \times n\) matrix. An inverse of \(A\) is an \(n \times n\) matrix \(B\) such that \(AB=I_n\) and \(BA = I_n\text{.}\)
If there is an inverse of \(A\) then we say that \(A\) is invertible.
Notice that in the definition we require that multiplication in both orders gives us the identity matrix. Remember that, in general, \(AB \neq BA\text{!}\) Nevertheless, later (Theorem 4.4.14) we will see that it turns out that if \(A\) and \(B\) are square matrices and \(AB=I_n\) then \(BA=I_n\) happens automatically.
In the definition we speak of an inverse, rather than the inverse, because at this point we do not know if there could be a matrix \(A\) for which many different matrices can behave as the definition requires. After we prove Theorem 4.4.5 we will know that a matrix cannot have multiple inverses, and at that point we will start speaking of the inverse of a matrix (when an inverse exists).
Observe that if \(B\) is an inverse for \(A\) then we can accomplish the goal described in the introduction to this section: Multiplying both sides of \(A\vec{x}=\vec{b}\) by \(B\) on the left will give \(BA\vec{x} = B\vec{b}\text{,}\) and \(BA=I_n\text{,}\) so we get \(I_n\vec{x} = B\vec{b}\text{,}\) which is the same as \(\vec{x} = B\vec{b}\text{.}\)
Example 4.4.2.
Let \(A = \begin{bmatrix}1 \amp 1 \\ 1 \amp 0\end{bmatrix}\) and \(B = \begin{bmatrix}0 \amp 1 \\ 1 \amp -1\end{bmatrix}\text{.}\) By direct calculation you can find that \(AB = BA = I_2\text{,}\) so \(B\) is an inverse of \(A\text{,}\) and the matrix \(A\) is invertible.
In Subsection 4.4.4 we will see the techniques needed to find this \(B\) when given \(A\text{.}\)
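Checking that a proposed matrix really is an inverse requires nothing beyond matrix arithmetic, so it is easy to script. The following sketch (the helper functions and the right-hand side \(\vec{b}\) are our own choices, not from the text) verifies \(AB = BA = I_2\) for the matrices of Example 4.4.2, and then uses \(B\) to solve a system \(A\vec{x}=\vec{b}\) as described in the introduction.

```python
# Sketch: verify that B is an inverse of A (Example 4.4.2), then use it
# to solve A x = b. The helpers and the vector b are our own choices.

A = [[1, 1], [1, 0]]
B = [[0, 1], [1, -1]]

def mat_mul(X, Y):
    """Product of two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def mat_vec(M, v):
    """Matrix-vector product."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

I2 = [[1, 0], [0, 1]]
print(mat_mul(A, B) == I2 and mat_mul(B, A) == I2)  # True

b = [3, 2]           # an arbitrary right-hand side
x = mat_vec(B, b)    # x = B b should solve A x = b
print(x)             # [2, 1]
print(mat_vec(A, x) == b)  # True: A x recovers b
```

Note that both products \(AB\) and \(BA\) are checked, as Definition 4.4.1 requires.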
Example 4.4.3.
You will probably not be surprised to learn that the zero matrix is not invertible. Indeed, for any \(n \times n\) matrix \(B\) we have \(0_{n \times n}B = 0_{n\times n} \neq I_n\text{,}\) so \(B\) cannot be an inverse for \(0_{n \times n}\text{.}\)
Example 4.4.4.
Let \(A = \begin{bmatrix}2 \amp 1 \\ 6 \amp 3\end{bmatrix}\text{.}\) We will show that \(A\) is not invertible. This is more surprising than the previous example, because not only is \(A\) not the zero matrix, it doesn't even have any zero entries! Suppose that we did have an inverse, say \(B = \begin{bmatrix}a \amp b \\ c \amp d\end{bmatrix}\text{.}\) Then we would have \(AB = I_2\text{.}\) When we carry out the multiplication on the left side of this equation it becomes
\begin{equation*} \begin{bmatrix}2a+c \amp 2b+d \\ 6a+3c \amp 6b+3d\end{bmatrix} = \begin{bmatrix}1 \amp 0 \\ 0 \amp 1\end{bmatrix}. \end{equation*}
Setting corresponding entries equal, we obtain a system of four linear equations in four variables:
\begin{align*} 2a+c \amp= 1 \amp 2b+d \amp= 0\\ 6a+3c \amp= 0 \amp 6b+3d \amp= 1 \end{align*}
If we attempt to solve this system we find that there are no solutions: you could see this by setting up an augmented matrix, or by noticing that the third equation is \(3(2a+c)=0\text{,}\) which contradicts the first equation, \(2a+c=1\text{.}\) In any case, there are no \(a, b, c, d\) satisfying the requirements we have found, so there is no matrix \(B\) that is an inverse for \(A\text{.}\) The matrix \(A\) is not invertible.
The last example gives us a method for trying to check if a matrix is invertible: Multiply it by an arbitrary matrix, set the result equal to the identity matrix, and try to solve. Unfortunately, this method is horrendously inefficient. Even for a \(2 \times 2\) matrix we ended up with a system of \(4\) equations in \(4\) variables. For a \(3 \times 3\) matrix the system would have had \(9\) equations and \(9\) variables! Fortunately, after we develop a bit more machinery, in Subsection 4.4.4 we will be able to describe a much more efficient method for testing if a matrix is invertible (and if so, calculating the inverse).
So far we have been careful to speak of an inverse for a matrix \(A\text{,}\) because as far as we know it is possible that a single matrix could have many different inverses. In fact, that isn't true, as we now prove.
Theorem 4.4.5.
Any \(n \times n\) matrix has at most one inverse.
Proof.
Suppose that \(A\) is an \(n \times n\) matrix and that both \(B_1\) and \(B_2\) are inverses for \(A\text{.}\) We will prove that \(B_1 = B_2\text{.}\) By definition of being inverses for \(A\) we have \(AB_1 = B_1A = AB_2 = B_2A = I_n\text{.}\) We calculate as follows:
\begin{equation*} B_1 = B_1I_n = B_1(AB_2) = (B_1A)B_2 = I_nB_2 = B_2. \end{equation*}
Therefore any two inverses of \(A\) are actually equal, so \(A\) has at most one inverse.
Since we have shown that any given square matrix has at most one inverse, when \(A\) is invertible we will speak of the inverse of \(A\text{,}\) and we will name it \(A^{-1}\text{.}\)
Here are some elementary properties of inverses. We will use these frequently, and usually without explicitly referring back to this theorem.
Theorem 4.4.6.
Suppose that \(A\) and \(B\) are invertible \(n \times n\) matrices. Then:
\(A^{-1}\) is invertible, and \((A^{-1})^{-1} = A\text{.}\)
\(AB\) is invertible, and \((AB)^{-1} = B^{-1}A^{-1}\text{.}\)
\(A^t\) is invertible, and \((A^t)^{-1} = (A^{-1})^t\text{.}\)
Proof.
We prove only the second claim, leaving the first and third as exercises. We just need to calculate:
\begin{equation*} (AB)(B^{-1}A^{-1}) = A(BB^{-1})A^{-1} = AI_nA^{-1} = AA^{-1} = I_n \end{equation*}
and
\begin{equation*} (B^{-1}A^{-1})(AB) = B^{-1}(A^{-1}A)B = B^{-1}I_nB = B^{-1}B = I_n. \end{equation*}
Therefore \(B^{-1}A^{-1} = (AB)^{-1}\text{.}\)
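The second claim of the theorem is easy to check numerically. The following sketch (the matrices and helper function are our own choices) confirms that \(B^{-1}A^{-1}\) inverts \(AB\) for one concrete pair of invertible matrices, and that the order of the factors matters.

```python
# Sketch: numerically check (AB)^{-1} = B^{-1} A^{-1} for a pair of 2x2
# matrices with known inverses (the matrices here are our own choices).

def mat_mul(X, Y):
    """Product of two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A,  B  = [[1, 1], [1, 0]], [[1, 2], [0, 1]]
Ai, Bi = [[0, 1], [1, -1]], [[1, -2], [0, 1]]   # A^{-1} and B^{-1}

AB = mat_mul(A, B)
C  = mat_mul(Bi, Ai)   # candidate inverse: B^{-1} A^{-1}

I2 = [[1, 0], [0, 1]]
# both products must give the identity, per Definition 4.4.1
print(mat_mul(AB, C) == I2 and mat_mul(C, AB) == I2)  # True
# the order matters: A^{-1} B^{-1} is not an inverse of AB in general
print(mat_mul(mat_mul(Ai, Bi), AB) == I2)  # False
```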
Just like with numbers, knowing that \(A\) and \(B\) are invertible tells you nothing about whether or not \(A+B\) is invertible.
Subsection 4.4.2 Elementary matrices
Our next major goal is to find an efficient way of determining whether or not a matrix is invertible, and if so, finding the inverse. Both of those goals will be accomplished in Subsection 4.4.4, but in order to do so we need some preliminary material that gives us a way of connecting row operations to matrix multiplication.
Definition 4.4.7.
An \(n\times n\) elementary matrix is any matrix that can be obtained from \(I_n\) by performing exactly one row operation.
Example 4.4.8.
Here are some elementary matrices (see if you can work out which row operation was performed on \(I_2\) to get each of these!):
\(\displaystyle \begin{bmatrix}0 \amp 1 \\ 1 \amp 0\end{bmatrix}\)
\(\displaystyle \begin{bmatrix}-2 \amp 0 \\ 0 \amp 1\end{bmatrix}\)
\(\displaystyle \begin{bmatrix}1 \amp 0 \\ 5 \amp 1\end{bmatrix}\)
By contrast, the matrix \(A = \begin{bmatrix}1 \amp 1 \\ 2 \amp 1\end{bmatrix}\) is not an elementary matrix, because there is no single row operation that takes \(I_2\) to \(A\) (we could do several row operations to get from \(I_2\) to \(A\text{,}\) but the definition of elementary matrices requires that we only use one operation).
Theorem 4.4.9.
Suppose that \(A\) is any \(n \times n\) matrix, and \(E\) is an \(n \times n\) elementary matrix. Then \(EA\) is the same as the matrix obtained from \(A\) by performing the same row operation used to obtain \(E\) from \(I_n\text{.}\)
Example 4.4.10.
Let \(A = \begin{bmatrix}1 \amp 2 \\ 3 \amp 4\end{bmatrix}\text{.}\) If we perform the row operation \(R_1 - 2R_2\) on \(A\) then we get \(B = \begin{bmatrix}-5 \amp -6 \\ 3 \amp 4\end{bmatrix}\text{.}\) If we do the same row operation to \(I_2\) then we get the elementary matrix \(E = \begin{bmatrix}1 \amp -2 \\ 0 \amp 1\end{bmatrix}\text{.}\) If we calculate the product \(EA\text{,}\) we get:
\begin{equation*} EA = \begin{bmatrix}1 \amp -2 \\ 0 \amp 1\end{bmatrix}\begin{bmatrix}1 \amp 2 \\ 3 \amp 4\end{bmatrix} = \begin{bmatrix}-5 \amp -6 \\ 3 \amp 4\end{bmatrix} = B, \end{equation*}
as predicted by the theorem.
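The agreement asserted by Theorem 4.4.9 can be checked mechanically. The sketch below (the helper function is our own choice) applies \(R_1 - 2R_2\) directly to the matrix \(A\) of Example 4.4.10 and compares the result with the product \(EA\text{.}\)

```python
# Sketch: check Theorem 4.4.9 on the data of Example 4.4.10. Applying
# R1 - 2*R2 directly to A gives the same matrix as the product E A.

def mat_mul(X, Y):
    """Product of two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[1, 2], [3, 4]]
E = [[1, -2], [0, 1]]   # I_2 after the row operation R1 - 2*R2

# perform R1 - 2*R2 on A directly
direct = [[A[0][j] - 2 * A[1][j] for j in range(2)], A[1][:]]

print(direct)                   # [[-5, -6], [3, 4]]
print(mat_mul(E, A) == direct)  # True
```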
The purpose of elementary matrices is that they allow us to transform questions about row operations into questions about matrix multiplication, which allows us to use the tools of matrix algebra that we have been developing.
Theorem 4.4.11.
Every elementary matrix is invertible, and the inverse of an elementary matrix is another elementary matrix.
Proof.
Suppose that \(E\) is an elementary matrix, so \(E\) was obtained from \(I_n\) by a single row operation. We know that every row operation can be reversed, that is, there is some row operation that takes \(E\) back to \(I_n\text{.}\) Let \(F\) be the elementary matrix corresponding to this "reversing" row operation. By Theorem 4.4.9 the matrix \(FE\) is the matrix obtained from \(E\) by the row operation that created \(F\text{;}\) by our choice of \(F\) this means that \(FE = I_n\text{.}\)
On the other hand, the row operation that created \(E\) is also the "reverse" of the operation that created \(F\text{,}\) so by a very similar argument we also have that \(EF = I_n\text{.}\) Thus \(F = E^{-1}\text{.}\)
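The proof's pairing of a row operation with its reverse is concrete enough to verify directly. In this sketch (our own illustrative choice of operation), \(E\) comes from \(R_1 - 2R_2\) and \(F\) from the reversing operation \(R_1 + 2R_2\text{;}\) multiplying in both orders gives the identity, as the proof asserts.

```python
# Sketch: the inverse of an elementary matrix is the elementary matrix
# of the reversed row operation (example operation chosen by us).

def mat_mul(X, Y):
    """Product of two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

E = [[1, -2], [0, 1]]   # I_2 after R1 - 2*R2
F = [[1, 2], [0, 1]]    # I_2 after the reversing operation R1 + 2*R2

I2 = [[1, 0], [0, 1]]
print(mat_mul(F, E) == I2)  # True: F E = I_2
print(mat_mul(E, F) == I2)  # True: E F = I_2, so F = E^{-1}
```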
Subsection 4.4.3 The fundamental theorem
Recall Theorem 3.3.14, which gave us several equivalences relating to solving systems of linear equations. We are now prepared to add some very important items to that list of equivalences.
Theorem 4.4.12. Fundamental Theorem - Version 2.
Let \(A\) be an \(n \times n\) matrix. The following are equivalent:
1. \(\RREF(A) = I_n\text{.}\)
2. \(A\) is invertible.
3. The system \([A|\vec{0}]\) has a unique solution.
4. The equation \(A\vec{x} = \vec{0}\) has a unique solution.
5. For every vector \(\vec{b}\) in \(\mathbb{R}^n\text{,}\) the system \([A|\vec{b}]\) has a unique solution.
6. For every vector \(\vec{b}\) in \(\mathbb{R}^n\text{,}\) the equation \(A\vec{x} = \vec{b}\) has a unique solution.
7. The columns of \(A\) are linearly independent.
8. The span of the columns of \(A\) is \(\mathbb{R}^n\text{.}\)
9. \(\rank(A) = n\text{.}\)
10. \(A\) can be written as a product of a finite collection of elementary matrices.
Proof.
In Theorem 3.3.14 we proved the equivalences between (1), (3), (5), (7), (8), and (9). The equivalences of (3) with (4) and (5) with (6) both follow immediately from Theorem 4.1.19. To complete the proof we will prove that (1) implies (10), (10) implies (2), and (2) implies (4).
\(1 \implies 10\text{:}\) Suppose that \(\RREF(A) = I_n\text{.}\) Then there is a sequence of row operations that takes \(A\) to \(I_n\text{.}\) Let \(E_1\) be the elementary matrix corresponding to the first row operation used, \(E_2\) the elementary matrix for the second row operation used, and so on, up to \(E_k\) for the last row operation. Then by Theorem 4.4.9 we have \(E_k\cdots E_2E_1A = I_n\text{.}\) By Theorem 4.4.11 each \(E_j\) is invertible, and each \(E_j^{-1}\) is also an elementary matrix. By Theorem 4.4.6 \((E_k\cdots E_2E_1)^{-1} = E_1^{-1}E_2^{-1} \cdots E_k^{-1}\text{,}\) so multiplying both sides of \(E_k\cdots E_2E_1A = I_n\) on the left by this expression we obtain
\begin{equation*} A = E_1^{-1}E_2^{-1}\cdots E_k^{-1}, \end{equation*}
which is a product of elementary matrices.
\(10 \implies 2\text{:}\) If \(A\) can be written as a product of elementary matrices then since each elementary matrix is invertible (Theorem 4.4.11) and products of invertible matrices are invertible (Theorem 4.4.6) we conclude that \(A\) is invertible.
\(2 \implies 4\text{:}\) Suppose that \(A\) is invertible. Then the equation \(A\vec{x} = \vec{0}\) is equivalent to the equation \(\vec{x} = A^{-1}\vec{0} = \vec{0}\text{,}\) meaning that the unique solution to \(A\vec{x} = \vec{0}\) is \(\vec{x}=\vec{0}\text{.}\)
Subsection 4.4.4 Calculating matrix inverses
It might not be obvious at first glance, but the Fundamental Theorem can be used to give us a method for finding the inverse of a matrix. We will need a preliminary result, which is helpful in its own right.
Theorem 4.4.14.
Let \(A\) be an \(n \times n\) matrix, and suppose that \(B\) is an \(n \times n\) matrix such that \(BA = I_n\text{.}\) Then \(A\) is invertible, and \(B = A^{-1}\text{.}\)
Proof.
Consider the equation \(A\vec{x} = \vec{0}\text{.}\) Multiplying both sides on the left by \(B\) we obtain \(BA\vec{x} = B\vec{0} = \vec{0}\text{,}\) and since \(BA = I_n\) this gives us \(\vec{x} = \vec{0}\text{.}\) That is, the equation \(A\vec{x} = \vec{0}\) has a unique solution. By Theorem 4.4.12 the matrix \(A\) is invertible. Now, to show that \(B = A^{-1}\text{,}\) we calculate:
\begin{equation*} B = BI_n = B(AA^{-1}) = (BA)A^{-1} = I_nA^{-1} = A^{-1}. \end{equation*}
Theorem 4.4.15.
Suppose that \(A\) is an \(n \times n\) matrix. If a sequence of row operations takes \(A\) to \(I_n\) then the same sequence of row operations takes \(I_n\) to \(A^{-1}\text{.}\)
Proof.
Suppose that \(E_1, \ldots, E_k\) are the elementary matrices corresponding to the sequence of row operations taking \(A\) to \(I_n\text{,}\) so we have
\begin{equation*} E_k \cdots E_1A = I_n. \end{equation*}
If we let \(B = E_k \cdots E_1\) then this equation says \(BA=I_n\text{,}\) which by Theorem 4.4.14 is enough to let us conclude that \(B = A^{-1}\text{.}\) That is,
\begin{equation*} A^{-1} = E_k \cdots E_1 = E_k \cdots E_1I_n\text{,} \end{equation*}
which means that performing the sequence of row operations on \(I_n\) gives us \(A^{-1}\text{.}\)
Combining Theorem 4.4.12 with Theorem 4.4.15 yields an algorithm for both checking the invertibility of a matrix and finding its inverse: Set up the large augmented matrix \([A|I_n]\) and row reduce, aiming to get the left side into reduced row echelon form. If \(\RREF(A) = I_n\) then (since we are performing the same row operations on \(I_n\)), we will have \([A|I_n] \to [I_n|A^{-1}]\text{.}\) On the other hand, if \(\RREF(A) \neq I_n\) then \(A\) is not invertible.
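The algorithm just described can be sketched in a few lines of code. This is only an illustration under our own choices (the function name, the pivot search, and the use of exact `Fraction` arithmetic are not from the text), not a numerically robust implementation: it row reduces \([A|I_n]\) and returns the right half if the left half reaches \(I_n\text{,}\) and `None` otherwise.

```python
# Sketch of the [A | I_n] -> [I_n | A^{-1}] algorithm, using exact
# rational arithmetic. Names and structure are our own choices.
from fractions import Fraction

def try_invert(A):
    """Row reduce [A | I]; return A^{-1} if RREF(A) = I, else None."""
    n = len(A)
    # build the augmented matrix [A | I_n]
    M = [[Fraction(A[i][j]) for j in range(n)] +
         [Fraction(1 if j == i else 0) for j in range(n)] for i in range(n)]
    for col in range(n):
        # find a row at or below the diagonal with a nonzero pivot
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return None  # RREF(A) != I_n, so A is not invertible
        M[col], M[pivot] = M[pivot], M[col]
        # scale the pivot row, then clear the rest of the column
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                factor = M[r][col]
                M[r] = [a - factor * b for a, b in zip(M[r], M[col])]
    return [row[n:] for row in M]

A = [[1, 1, 1], [1, 2, 1], [0, 0, 1]]  # the matrix of Example 4.4.17
print(try_invert(A))                   # rows of A^{-1} (as Fractions)
print(try_invert([[2, 1], [6, 3]]))    # None: the matrix of Example 4.4.4
```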
Example 4.4.16.
Let \(A = \begin{bmatrix}3 \amp 3 \amp 1 \\ 0 \amp 0 \amp 1 \\ 2 \amp 2 \amp 1\end{bmatrix}\text{.}\) Determine whether or not \(A\) is invertible, and if it is, find \(A^{-1}\text{.}\)
Solution. We set up the augmented matrix \([A|I_3]\) and row reduce:
\begin{equation*} \left[\begin{array}{ccc|ccc} 3 \amp 3 \amp 1 \amp 1 \amp 0 \amp 0\\ 0 \amp 0 \amp 1 \amp 0 \amp 1 \amp 0\\ 2 \amp 2 \amp 1 \amp 0 \amp 0 \amp 1 \end{array}\right] \to \left[\begin{array}{ccc|ccc} 1 \amp 1 \amp 0 \amp 1/3 \amp -1/3 \amp 0\\ 0 \amp 0 \amp 1 \amp 0 \amp 1 \amp 0\\ 0 \amp 0 \amp 0 \amp -2/3 \amp -1/3 \amp 1 \end{array}\right] \end{equation*}
We see that \(\RREF(A) \neq I_3\text{,}\) so \(A\) is not invertible. The matrix appearing on the right side of the augmentation line has no particular meaning for us in this case.
Example 4.4.17.
Let \(A = \begin{bmatrix}1 \amp 1 \amp 1 \\ 1 \amp 2 \amp 1 \\ 0 \amp 0 \amp 1\end{bmatrix}\text{.}\) Determine whether or not \(A\) is invertible, and if it is, find \(A^{-1}\text{.}\)
Solution. We set up the augmented matrix \([A|I_3]\) and row-reduce:
\begin{equation*} \left[\begin{array}{ccc|ccc} 1 \amp 1 \amp 1 \amp 1 \amp 0 \amp 0\\ 1 \amp 2 \amp 1 \amp 0 \amp 1 \amp 0\\ 0 \amp 0 \amp 1 \amp 0 \amp 0 \amp 1 \end{array}\right] \to \left[\begin{array}{ccc|ccc} 1 \amp 0 \amp 0 \amp 2 \amp -1 \amp -1\\ 0 \amp 1 \amp 0 \amp -1 \amp 1 \amp 0\\ 0 \amp 0 \amp 1 \amp 0 \amp 0 \amp 1 \end{array}\right] \end{equation*}
This calculation shows that \(\RREF(A) = I_3\text{,}\) so \(A\) is invertible. It also shows that \(A^{-1} = \begin{bmatrix}2 \amp -1 \amp -1 \\ -1 \amp 1 \amp 0 \\ 0 \amp 0 \amp 1\end{bmatrix}\text{.}\)
Subsection 4.4.5 The inverse of a linear transformation
Definition 4.4.18.
Let \(T : \mathbb{R}^n \to \mathbb{R}^n\) be a linear transformation. The inverse for \(T\text{,}\) if it exists, is a function \(T^{-1} : \mathbb{R}^n \to \mathbb{R}^n\) such that for every \(\vec{v}\) in \(\mathbb{R}^n\text{,}\)
\begin{equation*} T^{-1}(T(\vec{v})) = \vec{v} \text{ and } T(T^{-1}(\vec{v})) = \vec{v}. \end{equation*}
This is the same definition of "inverse function" that you have likely encountered in other mathematics courses. If you have believed the slogan that linear transformations and matrices are fundamentally the same thing, then the next result is probably not surprising.
Theorem 4.4.19.
Let \(T : \mathbb{R}^n \to \mathbb{R}^n\) be a linear transformation. The transformation \(T\) is invertible if and only if the matrix \([T]\) is invertible. In that case \(T^{-1}\) is a linear transformation, and \([T^{-1}] = [T]^{-1}\text{.}\)
Proof.
Suppose first that \([T]\) is invertible, and let \(S : \mathbb{R}^n \to \mathbb{R}^n\) be defined by \(S(\vec{v}) = [T]^{-1}\vec{v}\text{.}\) By Theorem 4.1.18 \(S\) is a linear transformation, and \([S] = [T]^{-1}\text{.}\) Using Theorem 4.1.13 we have, for any \(\vec{v}\text{,}\)
\begin{equation*} S(T(\vec{v})) = [S][T]\vec{v} = [T]^{-1}[T]\vec{v} = I_n\vec{v} = \vec{v}, \end{equation*}
and likewise
\begin{equation*} T(S(\vec{v})) = [T][S]\vec{v} = [T][T]^{-1}\vec{v} = I_n\vec{v} = \vec{v}. \end{equation*}
Thus \(T\) is invertible, with \(T^{-1} = S\text{.}\)
Now suppose that \(T\) is invertible, with inverse function \(T^{-1}\text{.}\) To show that \(T^{-1}\) is a linear transformation, suppose that \(\vec{v}\) and \(\vec{w}\) are vectors in \(\mathbb{R}^n\text{,}\) and \(c\) is a scalar. Then:
\begin{equation*} T(T^{-1}(\vec{v}) + T^{-1}(\vec{w})) = T(T^{-1}(\vec{v})) + T(T^{-1}(\vec{w})) = \vec{v} + \vec{w}, \end{equation*}
and applying \(T^{-1}\) on both sides then gives us
\begin{equation*} T^{-1}(\vec{v}) + T^{-1}(\vec{w}) = T^{-1}(\vec{v} + \vec{w}). \end{equation*}
Similarly,
\begin{equation*} T(cT^{-1}(\vec{v})) = cT(T^{-1}(\vec{v})) = c\vec{v}, \end{equation*}
and applying \(T^{-1}\) to both sides gives
\begin{equation*} cT^{-1}(\vec{v}) = T^{-1}(c\vec{v}). \end{equation*}
Now since we have shown that \(T^{-1}\) is a linear transformation it has a matrix \([T^{-1}]\text{.}\) By Theorem 4.1.13, for any \(\vec{v}\text{,}\)
\begin{equation*} [T^{-1}][T]\vec{v} = T^{-1}(T(\vec{v})) = \vec{v} = I_n\vec{v}, \end{equation*}
from which it follows that \([T^{-1}][T] = I_n\text{.}\) This is enough to prove that \([T]\) is invertible, and also that \([T^{-1}] = [T]^{-1}\text{,}\) by Theorem 4.4.14.
Example 4.4.20.
Let \(T : \mathbb{R}^3 \to \mathbb{R}^3\) be given by \(T\left(\begin{bmatrix}x\\y\\z\end{bmatrix}\right) = \begin{bmatrix}x+y+z\\x-y\\x\end{bmatrix}\text{.}\) Show that \(T\) is invertible, and find a formula for \(T^{-1}\left(\begin{bmatrix}x\\y\\z\end{bmatrix}\right)\text{.}\)
Solution. The matrix of \(T\) is \([T] = \begin{bmatrix}1 \amp 1 \amp 1 \\ 1 \amp -1 \amp 0 \\ 1 \amp 0 \amp 0\end{bmatrix}\text{.}\) Using our method for finding the inverse,
\begin{equation*} \left[\begin{array}{ccc|ccc} 1 \amp 1 \amp 1 \amp 1 \amp 0 \amp 0\\ 1 \amp -1 \amp 0 \amp 0 \amp 1 \amp 0\\ 1 \amp 0 \amp 0 \amp 0 \amp 0 \amp 1 \end{array}\right] \to \left[\begin{array}{ccc|ccc} 1 \amp 0 \amp 0 \amp 0 \amp 0 \amp 1\\ 0 \amp 1 \amp 0 \amp 0 \amp -1 \amp 1\\ 0 \amp 0 \amp 1 \amp 1 \amp 1 \amp -2 \end{array}\right] \end{equation*}
Thus \([T]^{-1} = \begin{bmatrix}0 \amp 0 \amp 1 \\ 0 \amp -1 \amp 1 \\ 1 \amp 1 \amp -2\end{bmatrix}\text{.}\) By Theorem 4.4.19 we see that \(T\) is invertible, and moreover that \([T^{-1}] = [T]^{-1}\text{.}\) Thus, using Theorem 4.1.13 we have that for any \(\begin{bmatrix}x\\y\\z\end{bmatrix}\) in \(\mathbb{R}^3\text{,}\)
\begin{equation*} T^{-1}\left(\begin{bmatrix}x\\y\\z\end{bmatrix}\right) = \begin{bmatrix}0 \amp 0 \amp 1 \\ 0 \amp -1 \amp 1 \\ 1 \amp 1 \amp -2\end{bmatrix}\begin{bmatrix}x\\y\\z\end{bmatrix} = \begin{bmatrix}z \\ -y+z \\ x+y-2z\end{bmatrix}. \end{equation*}
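A formula like the one in Example 4.4.20 is easy to test by composing the two maps on a sample vector. In this sketch the function names and the test vector are our own choices; `T` implements the transformation from the example and `T_inv` implements the formula computed from \([T]^{-1}\text{.}\)

```python
# Sketch: check that T and T_inv from Example 4.4.20 undo each other.
# Function names and the test vector are our own choices.

def T(v):
    x, y, z = v
    return [x + y + z, x - y, x]

def T_inv(v):
    # from the rows of [T]^{-1}: (0,0,1), (0,-1,1), (1,1,-2)
    x, y, z = v
    return [z, -y + z, x + y - 2*z]

v = [4, -1, 7]          # an arbitrary test vector
print(T_inv(T(v)))      # [4, -1, 7]
print(T(T_inv(v)))      # [4, -1, 7]
```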
Exercises 4.4.6 Exercises
1.
For each of the following matrices, find the inverse if possible. If it doesn't exist, explain why.
- \(\displaystyle \begin{bmatrix} 2 \amp 1 \\ -1 \amp 3 \end{bmatrix} \)
- \(\displaystyle \begin{bmatrix} 0 \amp 1 \\ 5 \amp 3 \end{bmatrix} \)
- \(\displaystyle \begin{bmatrix} 2 \amp 1 \\ 3 \amp 0 \end{bmatrix} \)
- \(\displaystyle \begin{bmatrix} 2 \amp 1 \\ 4 \amp 2 \end{bmatrix} \)
- \(\displaystyle \begin{bmatrix} 0 \amp 1 \amp 2 \\ 1 \amp 2 \amp 5 \end{bmatrix} \)
2.
Let \(A \) be a \(2 \times 2 \) invertible matrix, with \(A = \begin{bmatrix} a \amp b \\ c \amp d \end{bmatrix}. \) Find a formula for \(A^{-1} \) in terms of \(a,b,c,d. \)
3.
Using the inverse of the matrix \(\begin{bmatrix} 2 \amp 4 \\ 1 \amp 1 \end{bmatrix}\text{,}\) find the solution to the systems:
- \(\begin{bmatrix} 2 \amp 4 \\ 1 \amp 1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 1 \\ 2 \end{bmatrix}\text{,}\) with answer \(\begin{bmatrix} x \\ y \end{bmatrix}=\begin{bmatrix} \frac{7}{2} \\ \frac{-3}{2} \end{bmatrix}\)
- \(\begin{bmatrix} 2 \amp 4 \\ 1 \amp 1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 2 \\ 0 \end{bmatrix}\text{,}\) with answer \(\begin{bmatrix} x \\ y \end{bmatrix}=\begin{bmatrix} -1 \\ 1 \end{bmatrix}\)
- \(\begin{bmatrix} 2 \amp 4 \\ 1 \amp 1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} a \\ b \end{bmatrix}\text{,}\) with answer \(\begin{bmatrix} x \\ y \end{bmatrix}=\begin{bmatrix} \frac{-a}{2} + 2 b \\ \frac{a}{2} - b \end{bmatrix}\)
- Note that\begin{equation*} \begin{bmatrix} 2 \amp 4 \\ 1 \amp 1 \end{bmatrix} \begin{bmatrix}x \\ y \end{bmatrix} = \begin{bmatrix} 1 \\ 2 \end{bmatrix} \iff \begin{bmatrix}x \\ y \end{bmatrix} = \begin{bmatrix} 2 \amp 4 \\ 1 \amp 1 \end{bmatrix}^{-1} \begin{bmatrix} 1 \\ 2 \end{bmatrix}. \end{equation*}We compute:\begin{equation*} \begin{bmatrix} 2 \amp 4 \\ 1 \amp 1 \end{bmatrix}^{-1} \begin{bmatrix} 1 \\ 2 \end{bmatrix} = \frac{1}{2} \begin{bmatrix} -1 \amp 4 \\ 1 \amp -2 \end{bmatrix} \begin{bmatrix} 1 \\ 2 \end{bmatrix} = \frac{1}{2} \begin{bmatrix} -1 + 4\cdot 2 \\ 1 - 2 \cdot 2 \end{bmatrix} = \begin{bmatrix} \frac{7}{2} \\ \frac{-3}{2} \end{bmatrix} . \end{equation*}The solution is therefore given by \(\begin{bmatrix} x \\ y \end{bmatrix}=\begin{bmatrix} \frac{7}{2} \\ \frac{-3}{2} \end{bmatrix}.\)
- Note that\begin{equation*} \begin{bmatrix} 2 \amp 4 \\ 1 \amp 1 \end{bmatrix} \begin{bmatrix}x \\ y \end{bmatrix} = \begin{bmatrix} 2 \\ 0 \end{bmatrix} \iff \begin{bmatrix}x \\ y \end{bmatrix} = \begin{bmatrix} 2 \amp 4 \\ 1 \amp 1 \end{bmatrix}^{-1}\begin{bmatrix} 2 \\ 0 \end{bmatrix} . \end{equation*}We compute:\begin{equation*} \begin{bmatrix} 2 \amp 4 \\ 1 \amp 1 \end{bmatrix}^{-1}\begin{bmatrix} 2 \\ 0 \end{bmatrix} = \frac{1}{2} \begin{bmatrix} -1 \amp 4 \\ 1 \amp -2 \end{bmatrix} \begin{bmatrix} 2 \\ 0 \end{bmatrix} = \frac{1}{2} \begin{bmatrix} -2 + 4\cdot 0 \\ 2 -2\cdot 0 \end{bmatrix} = \begin{bmatrix} -1 \\ 1 \end{bmatrix} . \end{equation*}The solution is therefore given by \(\begin{bmatrix} x \\ y \end{bmatrix}=\begin{bmatrix} -1 \\ 1 \end{bmatrix}\text{.}\)
- Just as before, we see that\begin{equation*} \begin{bmatrix} 2 \amp 4 \\ 1 \amp 1 \end{bmatrix} \begin{bmatrix}x \\ y \end{bmatrix} = \begin{bmatrix} a \\ b \end{bmatrix} \iff \begin{bmatrix}x \\ y \end{bmatrix} = \begin{bmatrix} 2 \amp 4 \\ 1 \amp 1 \end{bmatrix}^{-1}\begin{bmatrix} a \\ b \end{bmatrix}. \end{equation*}We compute:\begin{equation*} \begin{bmatrix} 2 \amp 4 \\ 1 \amp 1 \end{bmatrix}^{-1} \begin{bmatrix} a \\ b \end{bmatrix} = \frac{1}{2} \begin{bmatrix} -1 \amp 4 \\ 1 \amp -2 \end{bmatrix} \begin{bmatrix} a \\ b \end{bmatrix} = \frac{1}{2} \begin{bmatrix} -a + 4 b \\ a - 2b \end{bmatrix}. \end{equation*}The solution is therefore given by \(\begin{bmatrix} x \\ y \end{bmatrix}=\begin{bmatrix} \frac{-a}{2} + 2 b \\ \frac{a}{2} - b \end{bmatrix}.\)
4.
For each of the following matrices, determine whether \(B \) is an inverse of \(A. \)
- For this example, we use Answer 4.4.6.2.1 to compute \(A^{-1}\text{:}\)\begin{align*} A^{-1} =\amp \begin{bmatrix} 2 \amp 4 \\ 1 \amp 4 \end{bmatrix}^{-1} = \frac{1}{2\cdot 4 - 1\cdot 4} \begin{bmatrix} 4 \amp -4 \\ -1 \amp 2 \end{bmatrix}\\ =\amp \frac{1}{4} \begin{bmatrix} 4 \amp -4 \\ -1 \amp 2 \end{bmatrix} \neq \frac{1}{2}\begin{bmatrix} 3 \amp -4 \\ -1 \amp 2 \end{bmatrix} = B. \end{align*}We conclude that, no, \(B\) is not the inverse of \(A\text{.}\)
- For this example, we will actually check whether \(B\) behaves like the inverse of \(A\text{,}\) so we compute:\begin{align*} AB \amp= \begin{bmatrix} 1 \amp -2 \\ 4 \amp -7 \end{bmatrix} \begin{bmatrix} -1 \amp 2 \\ -4 \amp 7 \end{bmatrix}\\ \amp= \begin{bmatrix} 1\cdot (-1) - 2 \cdot (-4) \amp 1 \cdot 2 - 2 \cdot 7 \\ 4\cdot (-1) - 7 \cdot (-4) \amp 4 \cdot 2 - 7 \cdot 7 \end{bmatrix} = \begin{bmatrix} 7 \amp -12 \\ 24 \amp -41 \end{bmatrix}. \end{align*}Since \(AB \neq I\text{,}\) we conclude that, no, \(B\) is not the inverse of \(A\text{.}\) (We note that it was not necessary to compute all entries of \(AB\text{;}\) it suffices to see that its \((1,1)\)-entry is not equal to \(1\text{.}\))
- We compute:\begin{align*} AB \amp= \begin{bmatrix} 4 \amp 1 \amp 3 \\ 2 \amp 1 \amp 2 \\ 1 \amp 0 \amp 1 \end{bmatrix} \begin{bmatrix} 1 \amp -1 \amp -1 \\ 0 \amp 1 \amp -2 \\ -1 \amp 1 \amp 2 \end{bmatrix}\\ \amp= \begin{bmatrix} 4 +0 -3 \amp -4+1+3 \amp -4-2+6 \\ 2+0-2 \amp -2+1+2 \amp -2-2+4 \\ 1+0-1 \amp -1+0+1 \amp -1+0+2 \end{bmatrix} = \begin{bmatrix} 1 \amp 0 \amp 0 \\ 0 \amp 1 \amp 0 \\ 0 \amp 0 \amp 1 \end{bmatrix}. \end{align*}At this point, there is hope that \(B\) is the inverse of \(A\text{,}\) but we still need to check the other equation, so we compute:\begin{align*} BA \amp= \begin{bmatrix} 1 \amp -1 \amp -1 \\ 0 \amp 1 \amp -2 \\ -1 \amp 1 \amp 2 \end{bmatrix} \begin{bmatrix} 4 \amp 1 \amp 3 \\ 2 \amp 1 \amp 2 \\ 1 \amp 0 \amp 1 \end{bmatrix}\\ \amp= \begin{bmatrix} 4-2-1 \amp 1-1+0 \amp 3-2-1 \\ 0+2-2 \amp 0+1+0 \amp 0+2-2 \\ -4+2+2 \amp -1+1+0 \amp -3+2+2 \end{bmatrix} = \begin{bmatrix} 1 \amp 0 \amp 0 \\ 0 \amp 1 \amp 0 \\ 0 \amp 0 \amp 1 \end{bmatrix}. \end{align*}We conclude that, indeed, \(B\) is the inverse of \(A\text{.}\)
5.
Suppose \(A, B \) are two invertible matrices of the same size. Show that \((AB)^{-1} = B^{-1}A^{-1} \) by verifying that
\begin{equation*} (AB)(B^{-1}A^{-1}) = I \text{ and } (B^{-1}A^{-1})(AB) = I. \end{equation*}
6.
Let \(A \) and \(B \) denote \(n \times n \) invertible matrices.
- Show that \(A^{-1} + B^{-1} = A^{-1}(A + B) B^{-1}\text{.}\)
- If \(A + B \) is also invertible, show that \(A^{-1} + B^{-1} \) is invertible and find a formula for \((A^{-1} + B^{-1})^{-1}\text{.}\)
7.
In each case, find the elementary matrix \(E \) such that \(B = EA. \)
8.
Let \(A = \begin{bmatrix} 1 \amp 2 \amp 1 \\ 0 \amp 5 \amp 1 \\ 2 \amp -1 \amp 4 \end{bmatrix} \) and \(B = \begin{bmatrix} 1 \amp 2 \amp 1 \\ 0 \amp 5 \amp 1 \\ 1 \amp -\frac{1}{2} \amp 2 \end{bmatrix}\text{.}\) Find the elementary matrix \(E \) such that \(EA = B\text{,}\) and find its inverse \(E^{-1}\text{,}\) such that \(E^{-1}B = A\text{.}\)
- \(\displaystyle E=\begin{bmatrix} 1 \amp 0 \amp 0 \\ 0 \amp 1 \amp 0 \\ 0 \amp 0 \amp \frac{1}{2} \end{bmatrix} \)
- \(\displaystyle E^{-1}=\begin{bmatrix} 1 \amp 0 \amp 0 \\ 0 \amp 1 \amp 0 \\ 0 \amp 0 \amp 2 \end{bmatrix}\)
9.
Suppose \(AB = AC \) and \(A \) is an invertible \(n \times n \) matrix. Does it follow that \(B = C? \) Explain why or why not.
10.
Suppose \(AB = AC \) and \(A \) is a non-invertible \(n \times n \) matrix. Does it follow that \(B = C? \) Explain why or why not.
11.
Construct an example to demonstrate that \((A + B)^{-1} = A^{-1} + B^{-1} \) is not true for all invertible square matrices \(A \) and \(B \) of the same size.
12.
Let
13.
If \(c \ne 0, \) find the inverse of \(A=\begin{bmatrix} 1 \amp -1 \amp 1 \\ 2 \amp -1 \amp 2 \\ 0 \amp 2 \amp c \end{bmatrix}\) in terms of \(c. \)