To do basic geometry, we need length, and we need angles. We have already seen the euclidean length, so let us figure out how to compute angles. Mostly, we are worried about the right angle.
Given two (column) vectors in \({\mathbb{R}}^n\text{,}\) we define the (standard) inner product as the dot product:
\begin{equation*}
\langle \vec{x} , \vec{y} \rangle \overset{\text{def}}{=} \vec{x} \cdot \vec{y} = x_1 y_1 + x_2 y_2 + \cdots + x_n y_n .
\end{equation*}
Why do we seemingly give a new notation for the dot product? Because there are other possible inner products, which are not the dot product, although we will not worry about others here. An inner product can even be defined on spaces of functions as we do in Chapter 5:
\begin{equation*}
\langle f , g \rangle \overset{\text{def}}{=} \int_a^b f(t) \, g(t) \, dt .
\end{equation*}
In general, an inner product should satisfy the following properties: it is linear in the first argument, \(\langle a \vec{x} + b \vec{y} , \vec{z} \rangle = a \langle \vec{x} , \vec{z} \rangle + b \langle \vec{y} , \vec{z} \rangle\text{;}\) it is symmetric, \(\langle \vec{x} , \vec{y} \rangle = \langle \vec{y} , \vec{x} \rangle\text{;}\) and it is positive definite, \(\langle \vec{x} , \vec{x} \rangle \geq 0\text{,}\) with equality if and only if \(\vec{x} = \vec{0}\text{.}\)
Anything that satisfies the properties above can be called an inner product, although in this section we are concerned with the standard inner product in \({\mathbb{R}}^n\text{.}\)
The standard inner product gives the euclidean length:
\begin{equation*}
\lVert \vec{x} \rVert = \sqrt{\langle \vec{x} , \vec{x} \rangle} = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2} .
\end{equation*}
You may recall from multivariable calculus that in two or three dimensions, the standard inner product (the dot product) gives you the angle between the vectors:
\begin{equation*}
\cos \theta = \frac{\langle \vec{x} , \vec{y} \rangle}{\lVert \vec{x} \rVert \, \lVert \vec{y} \rVert} .
\end{equation*}
That is, \(\theta\) is the angle that \(\vec{x}\) and \(\vec{y}\) make when they are based at the same point.
In \({\mathbb{R}}^n\) (any dimension), we are simply going to say that \(\theta\) from the formula is what the angle is. This makes sense as any two vectors based at the origin lie in a 2-dimensional plane (subspace), and the formula works in 2 dimensions. In fact, one could even talk about angles between functions this way, and we do in Chapter 5, where we talk about orthogonal functions (functions at right angle to each other).
Our angles are always in radians. We are computing the cosine of the angle, which is really the best we can do. Given two vectors at an angle \(\theta\text{,}\) we can give the angle as \(-\theta\text{,}\) \(2\pi-\theta\text{,}\) etc.; see Figure A.5. Fortunately, \(\cos \theta = \cos (-\theta) = \cos(2\pi - \theta)\text{.}\) If we solve for \(\theta\) using the inverse cosine \(\cos^{-1}\text{,}\) we can just decree that \(0 \leq \theta \leq \pi\text{.}\)
Example A.5.1.
Let us compute the angle between the vectors \((3,0)\) and \((1,1)\) in the plane. Compute
\begin{equation*}
\cos \theta = \frac{\bigl\langle (3,0) , (1,1) \bigr\rangle}{\lVert (3,0) \rVert \, \lVert (1,1) \rVert} = \frac{3}{3 \sqrt{2}} = \frac{1}{\sqrt{2}} .
\end{equation*}
Therefore, \(\theta = \nicefrac{\pi}{4}\text{.}\)
As we said, the most important angle is the right angle. A right angle is \(\nicefrac{\pi}{2}\) radians, and \(\cos (\nicefrac{\pi}{2}) = 0\text{,}\) so the formula is particularly easy in this case. We say vectors \(\vec{x}\) and \(\vec{y}\) are orthogonal if they are at right angles, that is if
\begin{equation*}
\langle \vec{x} , \vec{y} \rangle = 0 .
\end{equation*}
The vectors \((1,0,0,1)\) and \((1,2,3,-1)\) are orthogonal. So are \((1,1)\) and \((1,-1)\text{.}\) However, \((1,1)\) and \((1,2)\) are not orthogonal as their inner product is \(3\) and not 0.
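These computations are easy to verify on a computer. Here is a minimal Python sketch, using only the standard library; the helper names `inner`, `norm`, and `angle` are ours, purely for illustration:

```python
import math

def inner(x, y):
    # Standard inner product (dot product) on R^n.
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    # Euclidean length: the square root of <x, x>.
    return math.sqrt(inner(x, x))

def angle(x, y):
    # Angle in [0, pi], found via the inverse cosine.
    return math.acos(inner(x, y) / (norm(x) * norm(y)))

# Example A.5.1: the angle between (3,0) and (1,1) is pi/4.
print(angle((3, 0), (1, 1)))               # 0.7853981... = pi/4

# Orthogonality means the inner product is zero:
print(inner((1, 0, 0, 1), (1, 2, 3, -1)))  # 0
print(inner((1, 1), (1, 2)))               # 3, so not orthogonal
```

Note that `math.acos` returns a value in \([0,\pi]\text{,}\) which matches our convention of decreeing \(0 \leq \theta \leq \pi\text{.}\)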
Subsection A.5.2 Orthogonal projection
A typical application of linear algebra is to take a difficult problem, write everything in the right basis, and in this new basis the problem becomes simple. A particularly useful basis is an orthogonal basis, that is a basis where all the basis vectors are orthogonal. When we draw a coordinate system in two or three dimensions, we almost always draw our axes as orthogonal to each other.
Generalizing this concept to functions, it is particularly useful in Chapter 5 to express a function using a particular orthogonal basis, the Fourier series.
To express one vector in terms of an orthogonal basis, we need to first project one vector onto another. Given a nonzero vector \(\vec{v}\text{,}\) we define the orthogonal projection of \(\vec{w}\) onto \(\vec{v}\) as
\begin{equation*}
\operatorname{proj}_{\vec{v}}(\vec{w}) \overset{\text{def}}{=} \frac{\langle \vec{w} , \vec{v} \rangle}{\langle \vec{v} , \vec{v} \rangle} \, \vec{v} .
\end{equation*}
For the geometric idea, see Figure A.6. That is, we find the “shadow of \(\vec{w}\)” on the line spanned by \(\vec{v}\) if the direction of the sun’s rays were exactly perpendicular to the line. Another way of thinking about it is that the tip of the arrow of \(\operatorname{proj}_{\vec{v}}(\vec{w})\) is the closest point on the line spanned by \(\vec{v}\) to the tip of the arrow of \(\vec{w}\text{.}\) In terms of euclidean distance, \(\vec{u} = \operatorname{proj}_{\vec{v}}(\vec{w})\) minimizes the distance \(\lVert \vec{w} - \vec{u} \rVert\) among all vectors \(\vec{u}\) that are multiples of \(\vec{v}\text{.}\) Because of this, the projection comes up often in applied mathematics in all sorts of contexts where we cannot solve a problem exactly: We can’t always solve “Find \(\vec{w}\) as a multiple of \(\vec{v}\text{,}\)” but \(\operatorname{proj}_{\vec{v}}(\vec{w})\) is the best “solution.”
The formula follows from basic trigonometry. The length of \(\operatorname{proj}_{\vec{v}}(\vec{w})\) should be \(\cos \theta\) times the length of \(\vec{w}\text{,}\) that is \((\cos \theta)\lVert\vec{w}\rVert\text{.}\) We take the unit vector in the direction of \(\vec{v}\text{,}\) that is, \(\frac{\vec{v}}{\lVert \vec{v} \rVert}\) and we multiply it by the length of the projection. In other words,
\begin{equation*}
\operatorname{proj}_{\vec{v}}(\vec{w})
= (\cos \theta) \lVert \vec{w} \rVert \frac{\vec{v}}{\lVert \vec{v} \rVert}
= \frac{\langle \vec{w} , \vec{v} \rangle}{\lVert \vec{w} \rVert \, \lVert \vec{v} \rVert} \, \lVert \vec{w} \rVert \frac{\vec{v}}{\lVert \vec{v} \rVert}
= \frac{\langle \vec{w} , \vec{v} \rangle}{\langle \vec{v} , \vec{v} \rangle} \, \vec{v} .
\end{equation*}
Let us double check that the projection is orthogonal. That is, \(\vec{w}-\operatorname{proj}_{\vec{v}}(\vec{w})\) ought to be orthogonal to \(\vec{v}\text{;}\) see the right angle in Figure A.6. Indeed,
\begin{equation*}
\bigl\langle \vec{w} - \operatorname{proj}_{\vec{v}}(\vec{w}) , \vec{v} \bigr\rangle
= \langle \vec{w} , \vec{v} \rangle - \frac{\langle \vec{w} , \vec{v} \rangle}{\langle \vec{v} , \vec{v} \rangle} \langle \vec{v} , \vec{v} \rangle
= \langle \vec{w} , \vec{v} \rangle - \langle \vec{w} , \vec{v} \rangle = 0 .
\end{equation*}
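The projection and its orthogonality property are easy to check numerically. The sketch below is illustrative (the names `inner` and `proj` are ours, not from any library), using \(\vec{v} = (1,1)\) and \(\vec{w} = (3,4)\) as a sample pair:

```python
def inner(x, y):
    # Standard inner product (dot product) on R^n.
    return sum(a * b for a, b in zip(x, y))

def proj(v, w):
    # Orthogonal projection of w onto the nonzero vector v:
    # proj_v(w) = (<w, v> / <v, v>) v
    c = inner(w, v) / inner(v, v)
    return tuple(c * a for a in v)

v, w = (1, 1), (3, 4)
p = proj(v, w)
print(p)                # (3.5, 3.5)

# The residual w - proj_v(w) is orthogonal to v:
residual = tuple(wi - pi for wi, pi in zip(w, p))
print(inner(residual, v))   # 0.0
```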
As we said, a basis \(\vec{v}_1,\vec{v}_2,\ldots,\vec{v}_n\) is an orthogonal basis if all vectors in the basis are orthogonal to each other, that is, if
\begin{equation*}
\langle \vec{v}_j , \vec{v}_k \rangle = 0
\end{equation*}
for all choices of \(j\) and \(k\) where \(j \not= k\) (a nonzero vector cannot be orthogonal to itself). A basis is furthermore called an orthonormal basis if all the vectors in the basis are also unit vectors, that is, if they all have magnitude 1. For example, the standard basis \(\{ (1,0,0), (0,1,0), (0,0,1) \}\) is an orthonormal basis of \({\mathbb{R}}^3\text{:}\) Any pair is orthogonal, and each vector is of unit magnitude.
The reason why we are interested in orthogonal (or orthonormal) bases is that they make it really simple to represent a vector (or a projection onto a subspace) in the basis. The simple formula for the orthogonal projection onto a vector gives us the coefficients. In Chapter 5, we use the same idea by finding the correct orthogonal basis for the set of solutions of a differential equation. We are then able to find any particular solution by simply applying the orthogonal projection formula, which is just a couple of inner products.
Let us come back to linear algebra. Suppose that we have a subspace and an orthogonal basis \(\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n\text{.}\) We wish to express \(\vec{x}\) in terms of the basis. If \(\vec{x}\) is not in the span of the basis (that is, not in the given subspace), then of course this is not possible, but the following formula still gives us the orthogonal projection onto the subspace, or in other words, the best approximation in the subspace.
First suppose that \(\vec{x}\) is in the span. Then it is the sum of the orthogonal projections:
\begin{equation*}
\vec{x} = \operatorname{proj}_{\vec{v}_1}(\vec{x}) + \operatorname{proj}_{\vec{v}_2}(\vec{x}) + \cdots + \operatorname{proj}_{\vec{v}_n}(\vec{x})
= \frac{\langle \vec{x} , \vec{v}_1 \rangle}{\langle \vec{v}_1 , \vec{v}_1 \rangle} \vec{v}_1
+ \frac{\langle \vec{x} , \vec{v}_2 \rangle}{\langle \vec{v}_2 , \vec{v}_2 \rangle} \vec{v}_2
+ \cdots
+ \frac{\langle \vec{x} , \vec{v}_n \rangle}{\langle \vec{v}_n , \vec{v}_n \rangle} \vec{v}_n .
\end{equation*}
Another way to derive this formula is to work in reverse. Suppose that \(\vec{x} = a_1 \vec{v}_1 + a_2 \vec{v}_2 + \cdots + a_n \vec{v}_n\text{.}\) Take an inner product with \(\vec{v}_j\text{,}\) and use the properties of the inner product:
\begin{equation*}
\langle \vec{x} , \vec{v}_j \rangle
= \langle a_1 \vec{v}_1 + a_2 \vec{v}_2 + \cdots + a_n \vec{v}_n , \vec{v}_j \rangle
= a_1 \langle \vec{v}_1 , \vec{v}_j \rangle + a_2 \langle \vec{v}_2 , \vec{v}_j \rangle + \cdots + a_n \langle \vec{v}_n , \vec{v}_j \rangle .
\end{equation*}
As the basis is orthogonal, then \(\langle \vec{v}_k , \vec{v}_j \rangle = 0\) whenever \(k \not= j\text{.}\) That means that only one of the terms, the \(j^{\text{th}}\) one, on the right-hand side is nonzero and we get
\begin{equation*}
\langle \vec{x} , \vec{v}_j \rangle = a_j \langle \vec{v}_j , \vec{v}_j \rangle .
\end{equation*}
Solving for \(a_j\) we find \(a_j =
\frac{\langle \vec{x}, \vec{v}_j \rangle}{
\langle \vec{v}_j, \vec{v}_j \rangle
}\) as before.
Example A.5.3.
The vectors \((1,1)\) and \((1,-1)\) form an orthogonal basis of \({\mathbb{R}}^2\text{.}\) Suppose we wish to represent \((3,4)\) in terms of this basis, that is, we wish to find \(a_1\) and \(a_2\) such that
\begin{equation*}
(3,4) = a_1 (1,1) + a_2 (1,-1) .
\end{equation*}
We compute
\begin{equation*}
a_1 = \frac{\bigl\langle (3,4) , (1,1) \bigr\rangle}{\bigl\langle (1,1) , (1,1) \bigr\rangle} = \frac{7}{2} , \qquad
a_2 = \frac{\bigl\langle (3,4) , (1,-1) \bigr\rangle}{\bigl\langle (1,-1) , (1,-1) \bigr\rangle} = \frac{-1}{2} .
\end{equation*}
That is, \((3,4) = \frac{7}{2} (1,1) - \frac{1}{2} (1,-1)\text{.}\)
If the basis is orthonormal rather than just orthogonal, then all the denominators are one. It is easy to make a basis orthonormal: divide all the vectors by their magnitude. If you want to decompose many vectors, it may be better to find an orthonormal basis. In the example above, the orthonormal basis we would thus create is
\begin{equation*}
\frac{1}{\sqrt{2}} (1,1), \qquad \frac{1}{\sqrt{2}} (1,-1) .
\end{equation*}
Maybe the example is not so awe-inspiring, but given vectors in \({\mathbb{R}}^{20}\) rather than \({\mathbb{R}}^2\text{,}\) one would much rather do 20 inner products (or 40 if the basis were not orthonormal) than solve a system of twenty equations in twenty unknowns using row reduction of a \(20 \times 21\) matrix.
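The coefficient formula from Example A.5.3 is a one-liner on a computer. The following is an illustrative Python sketch (the names `inner` and `coefficients` are ours):

```python
def inner(x, y):
    # Standard inner product (dot product) on R^n.
    return sum(a * b for a, b in zip(x, y))

def coefficients(x, basis):
    # a_j = <x, v_j> / <v_j, v_j> for an orthogonal basis v_1, ..., v_n.
    return [inner(x, v) / inner(v, v) for v in basis]

basis = [(1, 1), (1, -1)]
a = coefficients((3, 4), basis)
print(a)  # [3.5, -0.5], i.e. a_1 = 7/2, a_2 = -1/2

# Reassemble to check that the sum of a_j v_j recovers (3, 4):
x = tuple(sum(aj * v[i] for aj, v in zip(a, basis)) for i in range(2))
print(x)  # (3.0, 4.0)
```

With an orthonormal basis, the denominators `inner(v, v)` are all 1 and could be dropped.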
As we said above, the formula still works even if \(\vec{x}\) is not in the subspace, although then it does not get us the vector \(\vec{x}\) but its projection. More concretely, suppose that \(S\) is a subspace that is the span of \(\vec{v}_1,\vec{v}_2,\ldots,\vec{v}_n\) and \(\vec{x}\) is any vector. Let \(\operatorname{proj}_{S}(\vec{x})\) be the vector in \(S\) that is the closest to \(\vec{x}\text{.}\) Then
\begin{equation*}
\operatorname{proj}_{S}(\vec{x}) =
\frac{\langle \vec{x} , \vec{v}_1 \rangle}{\langle \vec{v}_1 , \vec{v}_1 \rangle} \vec{v}_1
+ \frac{\langle \vec{x} , \vec{v}_2 \rangle}{\langle \vec{v}_2 , \vec{v}_2 \rangle} \vec{v}_2
+ \cdots
+ \frac{\langle \vec{x} , \vec{v}_n \rangle}{\langle \vec{v}_n , \vec{v}_n \rangle} \vec{v}_n .
\end{equation*}
Of course, if \(\vec{x}\) is in \(S\text{,}\) then \(\operatorname{proj}_{S}(\vec{x}) = \vec{x}\text{,}\) as the closest vector in \(S\) to \(\vec{x}\) is \(\vec{x}\) itself. But true utility is obtained when \(\vec{x}\) is not in \(S\text{.}\) In much of applied mathematics, we cannot find an exact solution to a problem, but we try to find the best solution out of a small subset (subspace). The partial sums of Fourier series from Chapter 5 are one example. Another example is least squares approximation to fit a curve to data. Yet another example is given by the most commonly used numerical methods to solve partial differential equations, the finite element methods.
Example A.5.4.
The vectors \((1,2,3)\) and \((3,0,-1)\) are orthogonal, and so they are an orthogonal basis of a subspace \(S\text{:}\)
\begin{equation*}
S =
\operatorname{span} \bigl\{ (1,2,3), (3,0,-1) \bigr\} .
\end{equation*}
Let us find the vector in \(S\) that is closest to \((2,1,0)\text{.}\) That is, let us find \(\operatorname{proj}_{S}\bigl((2,1,0)\bigr)\text{:}\)
\begin{equation*}
\operatorname{proj}_{S}\bigl((2,1,0)\bigr)
= \frac{\bigl\langle (2,1,0) , (1,2,3) \bigr\rangle}{\bigl\langle (1,2,3) , (1,2,3) \bigr\rangle} \, (1,2,3)
+ \frac{\bigl\langle (2,1,0) , (3,0,-1) \bigr\rangle}{\bigl\langle (3,0,-1) , (3,0,-1) \bigr\rangle} \, (3,0,-1)
= \frac{4}{14} \, (1,2,3) + \frac{6}{10} \, (3,0,-1)
= \left( \frac{73}{35} , \frac{4}{7} , \frac{9}{35} \right) .
\end{equation*}
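A computation like the one in Example A.5.4 can be checked in exact arithmetic using Python's standard `fractions` module; the helper names `inner` and `proj_subspace` below are ours, purely for illustration:

```python
from fractions import Fraction

def inner(x, y):
    # Standard inner product (dot product) on R^n.
    return sum(a * b for a, b in zip(x, y))

def proj_subspace(x, basis):
    # Projection onto a subspace: the sum of the projections onto
    # each vector of an *orthogonal* basis of the subspace.
    result = [Fraction(0)] * len(x)
    for v in basis:
        c = Fraction(inner(x, v), inner(v, v))
        result = [r + c * vi for r, vi in zip(result, v)]
    return tuple(result)

p = proj_subspace((2, 1, 0), [(1, 2, 3), (3, 0, -1)])
print(p)  # (Fraction(73, 35), Fraction(4, 7), Fraction(9, 35))

# The residual is orthogonal to both basis vectors, as it should be:
r = tuple(xi - pi for xi, pi in zip((2, 1, 0), p))
print(inner(r, (1, 2, 3)), inner(r, (3, 0, -1)))  # 0 0
```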
Before leaving orthogonal bases, let us note a procedure for manufacturing them out of any old basis. It may not be difficult to come up with an orthogonal basis for a 2-dimensional subspace, but for a 20-dimensional subspace, it seems a daunting task. Fortunately, the orthogonal projection can be used to “project away” the bits of the vectors that are making them not orthogonal. It is called the Gram–Schmidt process.
We start with a basis of vectors \(\vec{v}_1,\vec{v}_2, \ldots, \vec{v}_n\text{.}\) We construct an orthogonal basis \(\vec{w}_1, \vec{w}_2, \ldots, \vec{w}_n\) as follows:
\begin{equation*}
\begin{aligned}
\vec{w}_1 &= \vec{v}_1 , \\
\vec{w}_2 &= \vec{v}_2 - \operatorname{proj}_{\vec{w}_1}(\vec{v}_2) , \\
\vec{w}_3 &= \vec{v}_3 - \operatorname{proj}_{\vec{w}_1}(\vec{v}_3) - \operatorname{proj}_{\vec{w}_2}(\vec{v}_3) , \\
&\;\;\vdots \\
\vec{w}_n &= \vec{v}_n - \operatorname{proj}_{\vec{w}_1}(\vec{v}_n) - \operatorname{proj}_{\vec{w}_2}(\vec{v}_n) - \cdots - \operatorname{proj}_{\vec{w}_{n-1}}(\vec{v}_n) .
\end{aligned}
\end{equation*}
What we do is at the \(k^{\text{th}}\) step, we take \(\vec{v}_k\) and we subtract the projection of \(\vec{v}_k\) to the subspace spanned by \(\vec{w}_1,\vec{w}_2,\ldots,\vec{w}_{k-1}\text{.}\)
Example A.5.5.
Consider the vectors \((1,2,-1)\) and \((0,5,-2)\text{,}\) and call \(S\) the span of the two vectors. Let us find an orthogonal basis of \(S\text{:}\)
\begin{equation*}
\begin{aligned}
\vec{w}_1 &= (1,2,-1) , \\
\vec{w}_2 &= (0,5,-2) - \operatorname{proj}_{(1,2,-1)}\bigl((0,5,-2)\bigr)
= (0,5,-2) - \frac{12}{6} \, (1,2,-1) = (-2,1,0) .
\end{aligned}
\end{equation*}
So \((1,2,-1)\) and \((-2,1,0)\) span \(S\) and are orthogonal. Let us check: \(\langle (1,2,-1) , (-2,1,0) \rangle = 0\text{.}\)
Suppose we wish to find an orthonormal basis, not just an orthogonal one. Well, we simply make the vectors into unit vectors by dividing them by their magnitude. The two vectors making up the orthonormal basis of \(S\) are:
\begin{equation*}
\frac{1}{\sqrt{6}} \, (1,2,-1) \qquad \text{and} \qquad \frac{1}{\sqrt{5}} \, (-2,1,0) .
\end{equation*}
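The Gram–Schmidt process translates directly into a short loop: at each step, subtract from the incoming vector its projections onto the vectors already built. Here is an illustrative Python sketch (the function name `gram_schmidt` is ours), checked against Example A.5.5:

```python
import math

def inner(x, y):
    # Standard inner product (dot product) on R^n.
    return sum(a * b for a, b in zip(x, y))

def gram_schmidt(vectors):
    # At the k-th step, subtract from v_k its projections onto the
    # already-built w_1, ..., w_{k-1}.
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:
            c = inner(w, b) / inner(b, b)
            w = [wi - c * bi for wi, bi in zip(w, b)]
        basis.append(tuple(w))
    return basis

w1, w2 = gram_schmidt([(1, 2, -1), (0, 5, -2)])
print(w1, w2)          # (1, 2, -1) (-2.0, 1.0, 0.0)
print(inner(w1, w2))   # 0.0, so the output is orthogonal

# Divide by the magnitudes to get the orthonormal basis.
unit = [tuple(wi / math.sqrt(inner(w, w)) for wi in w) for w in (w1, w2)]
```

For a merely illustrative sketch like this, floating point suffices; in exact arithmetic one would use `fractions.Fraction` instead.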
Consider the vectors \((1,2,3)\text{,}\) \((-3,0,1)\text{,}\) \((1,-5,3)\text{.}\)
Check that the vectors are linearly independent and so form a basis.
Check that the vectors are mutually orthogonal, and are therefore an orthogonal basis.
Represent \((1,1,1)\) as a linear combination of this basis.
Make the basis orthonormal.
Exercise A.5.6.
Let \(S\) be the subspace spanned by \((1,3,-1)\) and \((1,1,1)\text{.}\) Find an orthogonal basis of \(S\) by the Gram–Schmidt process.
Exercise A.5.7.
Starting with \((1,2,3)\text{,}\) \((1,1,1)\text{,}\) \((2,2,0)\text{,}\) follow the Gram–Schmidt process to find an orthogonal basis of \({\mathbb{R}}^3\text{.}\)
Exercise A.5.8.
Find an orthogonal basis of \({\mathbb{R}}^3\) such that \((3,1,-2)\) is one of the vectors. Hint: First find two extra vectors to make a linearly independent set.
Exercise A.5.9.
Using cosines and sines of \(\theta\text{,}\) find a unit vector \(\vec{u}\) in \({\mathbb{R}}^2\) that makes angle \(\theta\) with \(\vec{\imath} = (1,0)\text{.}\) What is \(\langle \vec{\imath} , \vec{u} \rangle\text{?}\)
Exercise A.5.101.
Find the \(s\) that makes the following vectors orthogonal: \((1,1,1)\text{,}\) \((1,s,1)\text{.}\)
Starting with \((1,1,-1)\text{,}\) \((2,3,-1)\text{,}\) \((1,-1,1)\text{,}\) follow the Gram–Schmidt process to find an orthogonal basis of \({\mathbb{R}}^3\text{.}\)