Section 2.2 Geometry: The dot product
Subsection 2.2.1 The dot product
Perhaps surprisingly, it turns out that length and angle are both closely related to a single algebraic construction, the dot product. The definition below may at first seem unmotivated; the remainder of this section explains why it is so useful.
Definition 2.2.1.
Suppose that \(\vec{v} = \begin{bmatrix}v_1\\ \vdots \\ v_n\end{bmatrix}\) and \(\vec{w} = \begin{bmatrix}w_1 \\ \vdots \\ w_n \end{bmatrix}\) are vectors in \(\mathbb{R}^n\text{.}\) We define their dot product (or inner product) to be:
\(\displaystyle \vec{v} \cdot \vec{w} = v_1w_1 + v_2w_2 + \cdots + v_nw_n \)
Note 2.2.2.
There are several important things to notice in the definition above:
The vectors \(\vec{v}\) and \(\vec{w}\) must both be from \(\mathbb{R}^n\) for the same value of \(n\text{.}\) It makes no sense to write \(\vec{v} \cdot \vec{w}\) if \(\vec{v}\) is in \(\mathbb{R}^2\) and \(\vec{w}\) is in \(\mathbb{R}^3\text{,}\) for example.
The inputs to the dot product are two vectors, but the output is a scalar. The dot product is a completely different operation from scalar multiplication!
As a consequence of the previous point, an expression like \(\vec{v} \cdot (\vec{w} \cdot \vec{z})\) is meaningless, because we cannot take the dot product of a vector (like \(\vec{v}\)) with a scalar (like \(\vec{w} \cdot \vec{z}\)).
Example 2.2.3.
\(\displaystyle \begin{bmatrix}1\\1\\5\end{bmatrix} \cdot \begin{bmatrix}\sqrt{2} \\ 0 \\ \pi\end{bmatrix} = 1(\sqrt{2}) + 1(0) + 5(\pi) = \sqrt{2}+5\pi \)
\(\displaystyle \begin{bmatrix}1\\-2\end{bmatrix} \cdot \begin{bmatrix} 1\\ 1/2\end{bmatrix} = 1(1) + (-2)(1/2) = 0 \)
\(\displaystyle \begin{bmatrix}2\\1\end{bmatrix} \cdot \begin{bmatrix} 2\\1\end{bmatrix} = 2(2) + 1(1) = 2^2 + 1^2 = 5 \)
Take a moment to see whether you can find the geometric significance of the second and third examples above. Each of them illustrates something that will be explored in more detail below.
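If you would like to experiment with such computations, here is a minimal sketch that reproduces the three dot products above. It assumes the NumPy library is available; the variable names are ours.

```python
import numpy as np

# The three pairs of vectors from Example 2.2.3.
v1, w1 = np.array([1.0, 1.0, 5.0]), np.array([np.sqrt(2), 0.0, np.pi])
v2, w2 = np.array([1.0, -2.0]), np.array([1.0, 0.5])
v3, w3 = np.array([2.0, 1.0]), np.array([2.0, 1.0])

print(np.dot(v1, w1))  # sqrt(2) + 5*pi, approximately 17.1222
print(np.dot(v2, w2))  # 0.0
print(np.dot(v3, w3))  # 5.0
```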
Theorem 2.2.4.
Suppose that \(\vec{v}\text{,}\) \(\vec{w}\text{,}\) and \(\vec{z}\) are vectors in \(\mathbb{R}^n\text{,}\) and \(c\) is a scalar. Then:
\(\displaystyle \vec{v} \cdot \vec{w} = \vec{w} \cdot \vec{v} \)
\(\displaystyle \vec{v} \cdot (\vec{w} + \vec{z}) = \vec{v} \cdot \vec{w} + \vec{v} \cdot \vec{z} \)
\(\displaystyle (c\vec{v}) \cdot \vec{w} = c(\vec{v} \cdot \vec{w}) \)
Proof.
We will prove (2), leaving the others as exercises. Suppose that \(\vec{v} = \begin{bmatrix}v_1 \\ \vdots \\ v_n\end{bmatrix}\text{,}\) \(\vec{w} = \begin{bmatrix}w_1 \\ \vdots \\ w_n \end{bmatrix}\text{,}\) and \(\vec{z} = \begin{bmatrix}z_1\\ \vdots \\ z_n\end{bmatrix}\text{.}\) Then:
\(\displaystyle \vec{v} \cdot (\vec{w} + \vec{z}) = v_1(w_1+z_1) + \cdots + v_n(w_n+z_n) = (v_1w_1 + \cdots + v_nw_n) + (v_1z_1 + \cdots + v_nz_n) = \vec{v}\cdot\vec{w} + \vec{v}\cdot\vec{z} \)
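All three properties are also easy to test numerically. The following sketch (again assuming NumPy) checks them on randomly chosen vectors in \(\mathbb{R}^4\text{;}\) the seed and the scalar are arbitrary choices of ours.

```python
import numpy as np

rng = np.random.default_rng(0)
v, w, z = rng.standard_normal(4), rng.standard_normal(4), rng.standard_normal(4)
c = 2.5

# (1) Commutativity: v . w == w . v
print(np.isclose(np.dot(v, w), np.dot(w, v)))                    # True
# (2) Distributivity: v . (w + z) == v . w + v . z
print(np.isclose(np.dot(v, w + z), np.dot(v, w) + np.dot(v, z))) # True
# (3) Scalars pull out: (c v) . w == c (v . w)
print(np.isclose(np.dot(c * v, w), c * np.dot(v, w)))            # True
```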
Subsection 2.2.2 Length
Now that we have the basic properties of the dot product, we are ready to start exploring what it means. We will start with the very special case of taking the dot product of a vector with itself. We begin in \(\mathbb{R}^2\text{.}\)
Example 2.2.5.
Consider the vector \(\vec{v} = \begin{bmatrix}2\\1\end{bmatrix}\text{.}\) In Example 2.2.3 we calculated that \(\vec{v} \cdot \vec{v} = 2^2+1^2 = 5\text{.}\)
Using the Pythagorean Theorem we calculate that the length of the line segment representing \(\vec{v}\) is \(\sqrt{2^2+1^2} = \sqrt{\vec{v} \cdot \vec{v}}\text{.}\)
The example generalizes to any vector in \(\mathbb{R}^2\text{:}\) If \(\vec{v} = \begin{bmatrix}x\\y\end{bmatrix}\text{,}\) then \(\vec{v} \cdot \vec{v} = x^2+y^2\text{,}\) and if we draw \(\vec{v}\) in the plane we will get a line segment of length \(\sqrt{x^2+y^2} = \sqrt{\vec{v}\cdot\vec{v}}\text{.}\)
Our plan is to take \(\sqrt{\vec{v}\cdot\vec{v}}\) as the definition of the length of \(\vec{v}\) when \(\vec{v}\) is a vector in any \(\mathbb{R}^n\text{.}\) Before we can sensibly do that we need to know that the expression \(\vec{v}\cdot\vec{v}\) always produces a non-negative answer, so that the square root makes sense.
Theorem 2.2.7.
Suppose that \(\vec{v}\) is a vector in \(\mathbb{R}^n\text{.}\) Then:
\(\vec{v}\cdot\vec{v} \geq 0 \text{.}\)
\(\vec{v} \cdot \vec{v} = 0 \) if and only if \(\vec{v} = \vec{0}\text{.}\)
Proof.
For the first claim, if \(\vec{v} = \begin{bmatrix}v_1 \\ \vdots \\ v_n\end{bmatrix}\) then \(\vec{v} \cdot \vec{v} = v_1^2 + \cdots + v_n^2\text{.}\) For each index \(j\) we have \(v_j^2 \geq 0\text{,}\) and the sum of non-negative numbers is non-negative, so \(\vec{v} \cdot \vec{v} \geq 0\text{.}\)
The second claim is an "if and only if" statement, which means that it asserts two things. Specifically, we need to prove that if \(\vec{v} = \vec{0}\) then \(\vec{v}\cdot\vec{v} = 0\text{,}\) and we also need to prove that if \(\vec{v} \cdot \vec{v} = 0\) then \(\vec{v} = \vec{0}\text{.}\)
The first direction is easy: If \(\vec{v} = \vec{0}\) then \(\vec{v} \cdot \vec{v} = \vec{0}\cdot\vec{0} = 0^2+\cdots + 0^2 = 0\text{.}\)
For the other direction, if \(\vec{v}\cdot\vec{v} = 0\) then \(v_1^2+\cdots+v_n^2 = 0\text{.}\) For each \(j\) we know \(v_j^2 \geq 0\text{,}\) and the only way that a sum of non-negative numbers can be \(0\) is if all of the numbers are \(0\text{,}\) so in fact each \(v_j^2 = 0\text{.}\) Thus each \(v_j = 0\text{,}\) so \(\vec{v} = \vec{0}\text{.}\)
Definition 2.2.8.
Suppose that \(\vec{v}\) is a vector in \(\mathbb{R}^n\text{.}\) We define the length (or norm, or magnitude) of \(\vec{v}\) to be:
\(\displaystyle \norm{\vec{v}} = \sqrt{\vec{v}\cdot\vec{v}} \)
Note 2.2.9.
Here are some things about the above definition that help make it a good notion of length:
The definition is sensible for any vector. We proved in Theorem 2.2.7 that \(\vec{v} \cdot \vec{v} \geq 0\text{,}\) so \(\norm{\vec{v}} = \sqrt{\vec{v}\cdot\vec{v}}\) makes sense.
Every vector has non-negative length, because the square root function always returns a non-negative number.
The only vector in \(\mathbb{R}^n\) with length \(0\) is the vector \(\vec{0}\text{.}\) This is because we proved in Theorem 2.2.7 that \(\vec{v}\cdot\vec{v} = 0\) if and only if \(\vec{v} = \vec{0}\text{,}\) which means that \(\norm{\vec{v}} = 0\) if and only if \(\vec{v} = \vec{0}\text{.}\)
This definition of length agrees with our geometric understanding of length for vectors in the plane. If you have seen geometry in three-dimensional space you will also recognize that in \(\mathbb{R}^3\) the length of the line segment from the origin \(O = (0,0,0)\) to a point \(P = (x, y, z)\) is \(\sqrt{x^2+y^2+z^2}\text{,}\) which agrees with our definition of \(\norm{\vec{OP}}\text{.}\)
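In computational terms, the quantity defined above is the standard Euclidean norm. As a quick illustration (assuming NumPy), the calculation from Example 2.2.5 can be done either directly from the dot product or with the built-in norm function:

```python
import numpy as np

v = np.array([2.0, 1.0])

print(np.sqrt(np.dot(v, v)))  # 2.2360679... = sqrt(5), as in Example 2.2.5
print(np.linalg.norm(v))      # NumPy's built-in Euclidean norm gives the same value
```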
When we want to prove things about lengths it is often easier to work with the square of the length, to remove the square root. The next theorem illustrates this technique, and also tells us that length interacts with scalar multiplication in a reasonable way.
Theorem 2.2.10.
Suppose that \(\vec{v}\) is a vector in \(\mathbb{R}^n\) and \(c\) is a scalar. Then \(\norm{c\vec{v}} = \abs{c}\norm{\vec{v}}\text{.}\)
Proof.
We calculate \(\norm{c\vec{v}}^2\text{,}\) using the properties of dot products from Theorem 2.2.4:
\(\displaystyle \norm{c\vec{v}}^2 = (c\vec{v})\cdot(c\vec{v}) = c^2(\vec{v}\cdot\vec{v}) = c^2\norm{\vec{v}}^2 \)
Now we take square roots on both sides. Remember that \(\sqrt{x^2} = \abs{x}\) for every real number \(x\text{,}\) and that lengths are always non-negative, to see
\(\displaystyle \norm{c\vec{v}} = \sqrt{c^2\norm{\vec{v}}^2} = \abs{c}\norm{\vec{v}}\text{.} \)
Unfortunately, there is no way to calculate \(\norm{\vec{v}+\vec{w}}\) just from \(\norm{\vec{v}}\) and \(\norm{\vec{w}}\text{.}\) For instance, if \(\vec{v} = \begin{bmatrix}1\\0\end{bmatrix}\text{,}\) \(\vec{w} = \begin{bmatrix}0\\1\end{bmatrix}\text{,}\) and \(\vec{z} = \begin{bmatrix}-1\\0\end{bmatrix}\) then \(\norm{\vec{v}} = \norm{\vec{w}} = \norm{\vec{z}} = 1\text{,}\) but \(\norm{\vec{v}+\vec{w}} = \sqrt{2}\) while \(\norm{\vec{v}+\vec{z}} = 0\text{.}\)
Although we don't have an exact calculation of \(\norm{\vec{v}+\vec{w}}\text{,}\) we do have two very useful inequalities, which we present here without proof.
Theorem 2.2.11. Cauchy-Schwarz Inequality.
If \(\vec{v}\) and \(\vec{w}\) are vectors in \(\mathbb{R}^n\) then \(\abs{\vec{v} \cdot \vec{w}} \leq \norm{\vec{v}}\norm{\vec{w}}\text{.}\)
Theorem 2.2.12. Triangle Inequality.
If \(\vec{v}\) and \(\vec{w}\) are vectors in \(\mathbb{R}^n\) then \(\norm{\vec{v}+\vec{w}} \leq \norm{\vec{v}} + \norm{\vec{w}}\text{.}\)
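Both inequalities are easy to check numerically on examples. Here is a minimal sketch (assuming NumPy) that tests them on a randomly chosen pair of vectors in \(\mathbb{R}^5\text{:}\)

```python
import numpy as np

rng = np.random.default_rng(1)
v, w = rng.standard_normal(5), rng.standard_normal(5)

# Cauchy-Schwarz: |v . w| <= ||v|| ||w||
print(abs(np.dot(v, w)) <= np.linalg.norm(v) * np.linalg.norm(w))     # True
# Triangle inequality: ||v + w|| <= ||v|| + ||w||
print(np.linalg.norm(v + w) <= np.linalg.norm(v) + np.linalg.norm(w)) # True
```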
Definition 2.2.13.
A unit vector is a vector \(\vec{v}\) such that \(\norm{\vec{v}} = 1\text{.}\)
Theorem 2.2.14.
If \(\vec{v}\) is any vector in \(\mathbb{R}^n\) other than \(\vec{0}\) then \(\frac{1}{\norm{\vec{v}}}\vec{v}\) is a unit vector.
Proof.
We just need to do a calculation, using Theorem 2.2.10:
\(\displaystyle \norm{\frac{1}{\norm{\vec{v}}}\vec{v}} = \abs{\frac{1}{\norm{\vec{v}}}}\norm{\vec{v}} = \frac{1}{\norm{\vec{v}}}\norm{\vec{v}} = 1 \)
(Here \(\abs{\frac{1}{\norm{\vec{v}}}} = \frac{1}{\norm{\vec{v}}}\) because \(\norm{\vec{v}} > 0\text{.}\))
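In code, this theorem is the familiar normalization step: divide a non-zero vector by its length to get a unit vector pointing the same way. A minimal sketch, assuming NumPy:

```python
import numpy as np

v = np.array([3.0, -1.0, 5.0])  # any non-zero vector
u = v / np.linalg.norm(v)       # scale v by 1 / ||v||

print(np.linalg.norm(u))        # 1.0 (up to floating-point rounding)
```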
Subsection 2.2.3 Angle
In the previous section we examined \(\vec{v}\cdot\vec{v}\) and saw that we could extract the length of \(\vec{v}\) from this dot product. We now turn to the more general case of the dot product of two different vectors. We again begin by exploring the situation in \(\mathbb{R}^2\text{.}\) We will be encountering angles and using some trigonometry in this section, so now is a good time to set the following convention:
Unless explicitly stated otherwise, all angles are measured in radians.
Consider two vectors, \(\vec{v}\) and \(\vec{w}\text{,}\) in \(\mathbb{R}^2\text{.}\) Draw both vectors in standard position, and let \(\theta\) be the shorter of the two angles between the vectors (so \(0 \leq \theta \leq \pi\)).
Using the cosine law, we get
\(\displaystyle \norm{\vec{v}-\vec{w}}^2 = \norm{\vec{v}}^2 + \norm{\vec{w}}^2 - 2\norm{\vec{v}}\norm{\vec{w}}\cos\theta\text{.} \)
Next, we expand the left side and use properties of the dot product from Theorem 2.2.4:
\(\displaystyle \norm{\vec{v}-\vec{w}}^2 = (\vec{v}-\vec{w})\cdot(\vec{v}-\vec{w}) = \vec{v}\cdot\vec{v} - 2(\vec{v}\cdot\vec{w}) + \vec{w}\cdot\vec{w} = \norm{\vec{v}}^2 - 2(\vec{v}\cdot\vec{w}) + \norm{\vec{w}}^2 \)
Now we plug this back into the left side of the equation:
\(\displaystyle \norm{\vec{v}}^2 - 2(\vec{v}\cdot\vec{w}) + \norm{\vec{w}}^2 = \norm{\vec{v}}^2 + \norm{\vec{w}}^2 - 2\norm{\vec{v}}\norm{\vec{w}}\cos\theta \)
Simplifying this leads to
\(\displaystyle \vec{v}\cdot\vec{w} = \norm{\vec{v}}\norm{\vec{w}}\cos\theta\text{.} \)
As long as neither \(\vec{v}\) nor \(\vec{w}\) is \(\vec{0}\) we can write this as
\(\displaystyle \cos\theta = \frac{\vec{v}\cdot\vec{w}}{\norm{\vec{v}}\norm{\vec{w}}}\text{.} \)
As we did with length, we now take the formula we found for \(\mathbb{R}^2\) as the definition of angle in \(\mathbb{R}^n\text{.}\)
Definition 2.2.16.
Suppose that \(\vec{v}\) and \(\vec{w}\) are non-zero vectors in \(\mathbb{R}^n\text{.}\) The angle between \(\vec{v}\) and \(\vec{w}\) is the angle \(\theta\) with \(0 \leq \theta \leq \pi\) that satisfies
\(\displaystyle \cos\theta = \frac{\vec{v}\cdot\vec{w}}{\norm{\vec{v}}\norm{\vec{w}}}\text{.} \)
Example 2.2.17.
What is the angle between \(\vec{v} = \begin{bmatrix}1\\0\\2\\-1\end{bmatrix}\) and \(\vec{w} = \begin{bmatrix}0\\1\\-1\\0\end{bmatrix}\) in \(\mathbb{R}^4\text{?}\)
Solution. Let \(\theta\) be the angle we're looking for. Then:
\(\displaystyle \cos\theta = \frac{\vec{v}\cdot\vec{w}}{\norm{\vec{v}}\norm{\vec{w}}} = \frac{1(0)+0(1)+2(-1)+(-1)(0)}{\sqrt{6}\sqrt{2}} = \frac{-2}{2\sqrt{3}} = -\frac{1}{\sqrt{3}} \)
The angle is therefore the angle \(\theta\) with \(0 \leq \theta \leq \pi \) such that \(\cos\theta = -\frac{1}{\sqrt{3}}\text{.}\) This angle is approximately \(\theta = 2.186\) (radians, as always).
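The same computation is easy to do numerically. In the sketch below (assuming NumPy), note that `np.arccos` returns an angle in \([0, \pi]\text{,}\) exactly the range required by Definition 2.2.16.

```python
import numpy as np

v = np.array([1.0, 0.0, 2.0, -1.0])
w = np.array([0.0, 1.0, -1.0, 0.0])

cos_theta = np.dot(v, w) / (np.linalg.norm(v) * np.linalg.norm(w))
print(cos_theta)             # -0.5773... = -1/sqrt(3)
print(np.arccos(cos_theta))  # 2.186 (radians, as always)
```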
Definition 2.2.18.
We say that two vectors \(\vec{v}\) and \(\vec{w}\) are orthogonal (or perpendicular) if \(\vec{v}\cdot\vec{w} = 0\text{.}\) We write \(\vec{v} \perp \vec{w}\) as an abbreviation for the statement "\(\vec{v}\) and \(\vec{w}\) are orthogonal".
Note 2.2.19.
Suppose that \(\vec{v} \perp \vec{w}\text{.}\) Then one of the following three things must be true:
\(\displaystyle \vec{v} = \vec{0} \)
\(\displaystyle \vec{w} = \vec{0} \)
\(\cos\theta = 0 \text{,}\) in which case \(\theta = \pi/2\text{,}\) i.e., the angle between the two vectors is \(90^\circ\text{.}\)
Example 2.2.20.
Let \(\vec{v} = \begin{bmatrix}3\\2\\1\\-1\end{bmatrix}\) and \(\vec{w} = \begin{bmatrix}0\\1\\-1\\1\end{bmatrix}\text{.}\) Then \(\vec{v} \perp \vec{w}\) (that is, these two vectors are orthogonal), because \(\vec{v}\cdot\vec{w} = 0\text{.}\)
Theorem 2.2.21. The Pythagorean Theorem.
Let \(\vec{v}\) and \(\vec{w}\) be vectors in \(\mathbb{R}^n\text{.}\) Then \(\vec{v} \perp \vec{w}\) if and only if \(\norm{\vec{v} + \vec{w}}^2 = \norm{\vec{v}}^2 + \norm{\vec{w}}^2\text{.}\)
Proof.
We begin with a calculation:
\(\displaystyle \norm{\vec{v}+\vec{w}}^2 = (\vec{v}+\vec{w})\cdot(\vec{v}+\vec{w}) = \norm{\vec{v}}^2 + 2(\vec{v}\cdot\vec{w}) + \norm{\vec{w}}^2 \)
From here we see that \(\norm{\vec{v}+\vec{w}}^2 = \norm{\vec{v}}^2 + \norm{\vec{w}}^2\) if and only if \(2(\vec{v}\cdot\vec{w}) = 0\text{,}\) which is if and only if \(\vec{v} \perp \vec{w}\text{.}\)
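As a concrete check, the orthogonal pair from Example 2.2.20 satisfies the Pythagorean identity. A brief sketch, assuming NumPy:

```python
import numpy as np

v = np.array([3.0, 2.0, 1.0, -1.0])
w = np.array([0.0, 1.0, -1.0, 1.0])  # orthogonal to v, from Example 2.2.20

print(np.dot(v, w))                                     # 0.0
print(np.linalg.norm(v + w) ** 2)                       # 18.0 (up to rounding)
print(np.linalg.norm(v) ** 2 + np.linalg.norm(w) ** 2)  # 15.0 + 3.0 = 18.0
```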
Subsection 2.2.4 Orthogonal projections
It happens fairly often that we have vectors \(\vec{v}\) and \(\vec{w}\text{,}\) and we would like to express \(\vec{v}\) as the sum of a vector in the same direction as \(\vec{w}\) and a vector that is orthogonal to \(\vec{w}\text{.}\)
If \(\vec{w} = \begin{bmatrix}1\\0\end{bmatrix}\) this is not difficult to do: Given \(\vec{v} = \begin{bmatrix}x\\y\end{bmatrix}\) we can write
\(\displaystyle \vec{v} = \begin{bmatrix}x\\0\end{bmatrix} + \begin{bmatrix}0\\y\end{bmatrix}\text{,} \)
where \(\begin{bmatrix}x\\0\end{bmatrix} = x\vec{w}\) is in the same direction as \(\vec{w}\text{,}\)
and \(\vec{w}\perp\begin{bmatrix}0\\y\end{bmatrix}\text{.}\) Our next goal is to generalize this idea to cases when \(\vec{w}\) is not necessarily along a coordinate axis. As in previous sections, we begin in \(\mathbb{R}^2\) and then generalize to higher dimensions.
From high school geometry you know that the shortest distance from the tip of \(\vec{v}\) to the line through \(\vec{w}\) occurs at the point on that line where the angle made to the tip of \(\vec{v}\) is a right angle, as shown in the figure above. The vector from the origin to this point, labelled as \(\vec{p}\) on the figure, is called the orthogonal projection of \(\vec{v}\) on \(\vec{w}\), and is often written as \(\proj_{\vec{w}}(\vec{v})\text{.}\)
Let \(\vec{u} = \frac{1}{\norm{\vec{w}}}\vec{w}\text{,}\) so that \(\vec{u}\) is the unit vector in the same direction as \(\vec{w}\text{.}\) Then we have
\(\displaystyle \vec{p} = \norm{\vec{p}}\vec{u}\text{,} \)
so to find a formula for \(\vec{p}\) we only need to calculate \(\norm{\vec{p}}\text{,}\) which is the length of the line segment from the origin to the tip of \(\vec{p}\text{.}\)
From the trigonometry of right-angled triangles we have \(\cos\theta = \frac{\norm{\vec{p}}}{\norm{\vec{v}}}\text{,}\) so \(\norm{\vec{p}} = \norm{\vec{v}}\cos\theta\text{.}\) We know a formula for \(\cos\theta\) (Definition 2.2.16), and plugging that in gives us
\(\displaystyle \norm{\vec{p}} = \norm{\vec{v}}\frac{\vec{v}\cdot\vec{w}}{\norm{\vec{v}}\norm{\vec{w}}} = \frac{\vec{v}\cdot\vec{w}}{\norm{\vec{w}}}\text{.} \)
Putting together these calculations, we obtain:
\(\displaystyle \vec{p} = \norm{\vec{p}}\vec{u} = \frac{\vec{v}\cdot\vec{w}}{\norm{\vec{w}}}\cdot\frac{1}{\norm{\vec{w}}}\vec{w} = \frac{\vec{v}\cdot\vec{w}}{\norm{\vec{w}}^2}\vec{w} \)
The result of this calculation now becomes our definition in general.
Definition 2.2.23.
Let \(\vec{v}\text{,}\) \(\vec{w}\) be vectors in \(\mathbb{R}^n\) with \(\vec{w} \neq \vec{0}\text{.}\) The orthogonal projection of \(\vec{v}\) on \(\vec{w}\) is the vector
\(\displaystyle \proj_{\vec{w}}(\vec{v}) = \frac{\vec{v}\cdot\vec{w}}{\norm{\vec{w}}^2}\vec{w}\text{.} \)
Definition 2.2.24.
Let \(\vec{v}\text{,}\) \(\vec{w}\) be vectors in \(\mathbb{R}^n\text{,}\) with \(\vec{w} \neq \vec{0}\text{.}\) The component of \(\vec{v}\) orthogonal to \(\vec{w}\) is the vector
\(\displaystyle \operatorname{perp}_{\vec{w}}(\vec{v}) = \vec{v} - \proj_{\vec{w}}(\vec{v})\text{.} \)
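Both formulas translate directly into code. Here is a minimal sketch, assuming NumPy; the helper names `proj` and `perp` are ours, chosen to mirror the notation above.

```python
import numpy as np

def proj(v, w):
    """Orthogonal projection of v on w; w must be non-zero."""
    return (np.dot(v, w) / np.dot(w, w)) * w

def perp(v, w):
    """Component of v orthogonal to w."""
    return v - proj(v, w)

# Quick check: perp(v, w) is always orthogonal to w.
v, w = np.array([1.0, 3.0, -1.0]), np.array([1.0, 1.0, 0.0])
print(np.dot(perp(v, w), w))  # 0.0
```

Note that \(\norm{\vec{w}}^2 = \vec{w}\cdot\vec{w}\text{,}\) which is why the sketch divides by `np.dot(w, w)` rather than squaring a square root.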
Lemma 2.2.25.
Let \(\vec{w}\) be a non-zero vector in \(\mathbb{R}^n\text{.}\) The only vector that is simultaneously parallel to \(\vec{w}\) and orthogonal to \(\vec{w}\) is \(\vec{0}\text{.}\)
Proof.
First, notice that \(\vec{0} = 0\vec{w}\text{,}\) so \(\vec{0}\) is parallel to \(\vec{w}\text{,}\) but also \(\vec{0}\cdot\vec{w} = 0\text{,}\) so \(\vec{0}\) is orthogonal to \(\vec{w}\text{.}\)
Now suppose that \(\vec{v}\) is a vector that is both parallel to \(\vec{w}\) and orthogonal to \(\vec{w}\text{.}\) Since \(\vec{v}\) is parallel to \(\vec{w}\) there is a scalar \(c\) such that \(\vec{v} = c\vec{w}\text{.}\) Since \(\vec{v} \perp \vec{w}\) we have \(\vec{v}\cdot\vec{w} = 0\text{.}\) Therefore
\(\displaystyle 0 = \vec{v}\cdot\vec{w} = (c\vec{w})\cdot\vec{w} = c(\vec{w}\cdot\vec{w}) = c\norm{\vec{w}}^2\text{.} \)
From here we see that either \(c=0\) or \(\norm{\vec{w}}^2 = 0\text{.}\) The latter option is impossible, because one of our assumptions is that \(\vec{w} \neq \vec{0}\text{.}\) Therefore \(c=0\text{,}\) and hence
\(\displaystyle \vec{v} = c\vec{w} = 0\vec{w} = \vec{0}\text{.} \)
Theorem 2.2.26.
Let \(\vec{v}\) and \(\vec{w}\) be vectors in \(\mathbb{R}^n\text{,}\) with \(\vec{w} \neq \vec{0}\text{.}\) The only way to write \(\vec{v} = \vec{v_1} + \vec{v_2}\) with \(\vec{v_1}\) parallel to \(\vec{w}\) and \(\vec{v_2}\) orthogonal to \(\vec{w}\) is by using \(\vec{v_1} = \proj_{\vec{w}}(\vec{v})\) and \(\vec{v_2} = \operatorname{perp}_{\vec{w}}(\vec{v})\text{.}\)
Proof.
There are two things that need to be proved here. First, we must show that the choice \(\vec{v_1} = \proj_{\vec{w}}(\vec{v})\) and \(\vec{v_2} = \operatorname{perp}_{\vec{w}}(\vec{v})\) has the properties stated in the theorem, and then we need to show that this is the only possible choice of \(\vec{v_1}\) and \(\vec{v_2}\text{.}\)
First, let's show that the proposed choice of \(\vec{v_1}\) and \(\vec{v_2}\) does work. That is, let \(\vec{v_1} = \proj_{\vec{w}}(\vec{v})\) and \(\vec{v_2} = \operatorname{perp}_{\vec{w}}(\vec{v})\text{.}\) From the formula for \(\proj_{\vec{w}}(\vec{v})\) we see that this vector is a scalar multiple of \(\vec{w}\text{,}\) and hence \(\vec{v_1}\) is parallel to \(\vec{w}\text{.}\) Next, we calculate:
\(\displaystyle \vec{v_2}\cdot\vec{w} = \left(\vec{v} - \frac{\vec{v}\cdot\vec{w}}{\norm{\vec{w}}^2}\vec{w}\right)\cdot\vec{w} = \vec{v}\cdot\vec{w} - \frac{\vec{v}\cdot\vec{w}}{\norm{\vec{w}}^2}(\vec{w}\cdot\vec{w}) = \vec{v}\cdot\vec{w} - \vec{v}\cdot\vec{w} = 0 \)
Thus our proposed choice of \(\vec{v_2}\) does satisfy \(\vec{v_2}\perp\vec{w}\text{.}\) Finally, we have
\(\displaystyle \vec{v_1} + \vec{v_2} = \proj_{\vec{w}}(\vec{v}) + \left(\vec{v} - \proj_{\vec{w}}(\vec{v})\right) = \vec{v}\text{.} \)
So far we have proved that the proposed choices of \(\vec{v_1}\) and \(\vec{v_2}\) have all the properties we wanted. Now we turn to showing that these are the only choices that work. To do that, suppose that we have another decomposition, say \(\vec{v} = \vec{z_1} + \vec{z_2}\text{,}\) where \(\vec{z_1}\) is parallel to \(\vec{w}\) and \(\vec{z_2}\) is orthogonal to \(\vec{w}\text{.}\) Then we have
\(\displaystyle \vec{v_1} + \vec{v_2} = \vec{v} = \vec{z_1} + \vec{z_2}\text{,} \)
so
\(\displaystyle \vec{v_1} - \vec{z_1} = \vec{z_2} - \vec{v_2}\text{.} \)
The vector \(\vec{v_1}-\vec{z_1}\) is the difference of two vectors each of which is parallel to \(\vec{w}\text{,}\) so it is also parallel to \(\vec{w}\text{.}\) On the other hand, \(\vec{z_2} - \vec{v_2}\) is the difference of two vectors that are each orthogonal to \(\vec{w}\text{,}\) so it is also orthogonal to \(\vec{w}\) (you should verify this fact!). But these two vectors are the same, so this one vector is both parallel to \(\vec{w}\) and orthogonal to \(\vec{w}\text{.}\) In Lemma 2.2.25 we saw that the only such vector is \(\vec{0}\text{.}\) Therefore \(\vec{v_1} - \vec{z_1} = \vec{0}\text{,}\) so \(\vec{z_1} = \vec{v_1}\text{.}\) Similarly, \(\vec{z_2} - \vec{v_2} = \vec{0}\text{,}\) so \(\vec{z_2} = \vec{v_2}\text{.}\)
Example 2.2.27.
Write the vector \(\begin{bmatrix}1\\3\\-1\end{bmatrix}\) as the sum of a vector that is parallel to \(\begin{bmatrix}1\\1\\0\end{bmatrix}\) and a vector that is orthogonal to \(\begin{bmatrix}1\\1\\0\end{bmatrix}\text{.}\)
Solution. Let \(\vec{v} = \begin{bmatrix}1\\3\\-1\end{bmatrix}\) and \(\vec{w} = \begin{bmatrix}1\\1\\0\end{bmatrix}\text{.}\) By Theorem 2.2.26 there is only one way to do this, namely:
\(\displaystyle \vec{v} = \proj_{\vec{w}}(\vec{v}) + \operatorname{perp}_{\vec{w}}(\vec{v}) \)
Now it's just a matter of calculating those two vectors.
\(\displaystyle \proj_{\vec{w}}(\vec{v}) = \frac{\vec{v}\cdot\vec{w}}{\norm{\vec{w}}^2}\vec{w} = \frac{4}{2}\begin{bmatrix}1\\1\\0\end{bmatrix} = \begin{bmatrix}2\\2\\0\end{bmatrix} \)
And
\(\displaystyle \operatorname{perp}_{\vec{w}}(\vec{v}) = \vec{v} - \proj_{\vec{w}}(\vec{v}) = \begin{bmatrix}1\\3\\-1\end{bmatrix} - \begin{bmatrix}2\\2\\0\end{bmatrix} = \begin{bmatrix}-1\\1\\-1\end{bmatrix} \)
Here is the desired expression:
\(\displaystyle \begin{bmatrix}1\\3\\-1\end{bmatrix} = \begin{bmatrix}2\\2\\0\end{bmatrix} + \begin{bmatrix}-1\\1\\-1\end{bmatrix} \)
You could verify directly that \(\begin{bmatrix}-1\\1\\-1\end{bmatrix} \perp \begin{bmatrix}1\\1\\0\end{bmatrix}\text{,}\) but we don't actually need to, because in Theorem 2.2.26 we proved that the method we used here always works.
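The same decomposition can also be checked numerically; a brief sketch, assuming NumPy:

```python
import numpy as np

v = np.array([1.0, 3.0, -1.0])
w = np.array([1.0, 1.0, 0.0])

p = (np.dot(v, w) / np.dot(w, w)) * w  # proj_w(v)
q = v - p                              # perp_w(v)

print(p)                      # [2. 2. 0.]
print(q)                      # [-1.  1. -1.]
print(np.dot(q, w))           # 0.0, confirming orthogonality
print(np.allclose(p + q, v))  # True
```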
Exercises 2.2.5 Exercises
1. Compute \(\vec{u} \cdot \vec{v}\) where:
(a) \(\vec{u} = \begin{bmatrix} 2 \\ -1 \\ 3\end{bmatrix} \) and \(\vec{v} = \begin{bmatrix} -1 \\ 1 \\ 1 \end{bmatrix}\text{.}\) Answer: \(0\text{.}\)
(b) \(\vec{u} = \begin{bmatrix} 1 \\ 2 \\ -1\end{bmatrix} \) and \(\vec{v} = \vec{u}\text{.}\) Answer: \(6\text{.}\)
(c) \(\vec{u} = \begin{bmatrix} 1 \\ 1 \\ -3\end{bmatrix} \) and \(\vec{v} = \begin{bmatrix} 2 \\ -1 \\ 1 \end{bmatrix}\text{.}\) Answer: \(-2\text{.}\)
(d) \(\vec{u} = \begin{bmatrix} 3 \\ -1 \\ 5\end{bmatrix} \) and \(\vec{v} = \begin{bmatrix} 6 \\ -7 \\ -5 \end{bmatrix}\text{.}\) Answer: \(0\text{.}\)
(e) \(\vec{u} = \begin{bmatrix} x \\ y \\ z\end{bmatrix} \) and \(\vec{v} = \begin{bmatrix} a \\ b \\ c \end{bmatrix}\text{.}\) Answer: \(x\cdot a + y\cdot b + z\cdot c\text{.}\)
(f) \(\vec{u} = \begin{bmatrix} x \\ y \\ z\end{bmatrix} \) and \(\vec{v} = \vec{0}\text{.}\) Answer: \(0\text{.}\)
2. Compute \(\norm{\vec{v}}\) if \(\vec{v}\) equals:
(a) \(\begin{bmatrix} 2 \\ -1 \\ 2\end{bmatrix}\text{.}\) Answer: \(\norm{\vec{v}}=\sqrt{\vec{v}\cdot\vec{v}} = 3\text{.}\)
(b) \(\begin{bmatrix} 1 \\ -1 \\ 2\end{bmatrix}\text{.}\) Answer: \(\norm{\vec{v}}=\sqrt{\vec{v}\cdot\vec{v}} = \sqrt{6}\text{.}\)
(c) \(\begin{bmatrix} 1 \\ 0 \\ -1\end{bmatrix}\text{.}\) Answer: \(\norm{\vec{v}}=\sqrt{\vec{v}\cdot\vec{v}} = \sqrt{2}\text{.}\)
(d) \(\begin{bmatrix} -1 \\ 0 \\ 2\end{bmatrix}\text{.}\) Answer: \(\norm{\vec{v}}=\sqrt{\vec{v}\cdot\vec{v}} = \sqrt{5}\text{.}\)
(e) \(2\begin{bmatrix} 1 \\ -1 \\ 2\end{bmatrix}\text{.}\) Answer: \(\norm{\vec{v}}=\sqrt{\vec{v}\cdot\vec{v}} = \sqrt{4\cdot 6}=2\sqrt{6}\text{.}\)
(f) \(-3\begin{bmatrix} 1 \\ 1 \\ 2\end{bmatrix}\text{.}\) Answer: \(\norm{\vec{v}}=\sqrt{\vec{v}\cdot\vec{v}} = \sqrt{54}=3\sqrt{6}\text{.}\)
3. Find a unit vector in the direction of:
(a) \(\begin{bmatrix} 7 \\ -1 \\ 5\end{bmatrix}\text{.}\) Solution: We first compute the square of the norm of the vector: \(7^2 + (-1)^2 + 5^2 = 75\text{,}\) so that \(\norm{\begin{bmatrix} 7 \\ -1 \\ 5\end{bmatrix}} = \sqrt{75}=5\sqrt{3}\text{.}\) Thus, a unit vector in the same direction is given by \(\begin{bmatrix} \frac{7}{5\sqrt{3}} \\ \frac{-1}{5\sqrt{3}} \\ \frac{5}{5\sqrt{3}}\end{bmatrix}\text{.}\)
(b) \(\begin{bmatrix} -2 \\ -1 \\ 2\end{bmatrix}\text{.}\) Solution: We first compute the square of the norm of the vector: \((-2)^2 + (-1)^2 + 2^2 = 9\text{,}\) so that \(\norm{\begin{bmatrix} -2 \\ -1 \\ 2\end{bmatrix}} = \sqrt{9}=3\text{.}\) Thus, a unit vector in the same direction is given by \(\begin{bmatrix} \frac{-2}{3} \\ \frac{-1}{3} \\ \frac{2}{3}\end{bmatrix}\text{.}\)
4. Find the distance between the following pairs of points:
(a) \(P = ( 3 , -1 , 0)\) and \(Q = ( 2 , -1 , 1 )\text{.}\) Answer: \(\norm{\vec{PQ}} = \sqrt{2}\text{.}\)
(b) \(P = ( 2 , -1 , 2)\) and \(Q = ( 2 , 0 , 1)\text{.}\) Answer: \(\norm{\vec{PQ}} = \sqrt{2}\text{.}\)
(c) \(P = ( -3 , 5 , 2)\) and \(Q = ( 1 , 3 , 3 )\text{.}\) Answer: \(\norm{\vec{PQ}} = \sqrt{21}\text{.}\)
(d) \(P = ( 4 , 0 , -2)\) and \(Q = ( 3 , 2 , 0 )\text{.}\) Answer: \(\norm{\vec{PQ}} = 3\text{.}\)
We should point out that, in each of the above cases, we could just as well have computed \(\norm{\vec{QP}}\) and would have gotten the same result.
5. Find \(\cos(\theta)\) where \(\theta\) is the angle between the vectors
Solution: According to the Cosine Formula 2.2.5.5.1, we have to compute the norms of the vectors and their dot product.
6. Find the angle between the vectors
Solution: According to the Cosine Formula 2.2.5.5.1, we have to compute the norms of the vectors and their dot product.
7. Find all real numbers \(x\) such that:
(a) \(\begin{bmatrix} 2 \\ -1 \\ 3 \end{bmatrix} \) and \(\begin{bmatrix} x \\ -2 \\ 1 \end{bmatrix} \) are orthogonal. Answer: \(x=\frac{-5}{2}\text{.}\)
Solution: We want the dot product of the two vectors to be zero, i.e. we want \(2x + 2 + 3 = 0\text{.}\) Thus, we need \(x=\frac{-5}{2}\text{.}\)
(b) \(\begin{bmatrix} 2 \\ -1 \\ 1 \end{bmatrix} \) and \(\begin{bmatrix} 1 \\ x \\ 2 \end{bmatrix} \) are at an angle of \(\frac{\pi}{3}\text{.}\) Answer: \(x=1\) and \(x=-17\text{.}\)
Solution: To use the Cosine Formula 2.2.5.5.1, we have to compute the norms of the vectors and their dot product: the norms are \(\sqrt{6}\) and \(\sqrt{5+x^2}\text{,}\) and the dot product is \(4-x\text{.}\) Since \(\cos (\frac{\pi}{3}) = \frac{1}{2}\text{,}\) we can rewrite the angle condition as \(\frac{4-x}{\sqrt{6}\sqrt{5+x^2}} = \frac{1}{2}\text{.}\) If we square both sides of this equation, we get \(4(4-x)^2 = 6(5+x^2)\text{,}\) which simplifies to \(x^2 + 16x - 17 = 0\text{,}\) i.e. \((x-1)(x+17)=0\text{.}\) Therefore, the answer is: for \(x=1\) and \(x=-17\text{,}\) the angle between the two vectors is \(\frac{\pi}{3}\text{.}\)
8. Find \(\operatorname{proj}_{\vec{v}}(\vec{w})\) where
9. Decompose the vector \(\vec{v}\) into \(\vec{v} = \vec{a}+\vec{b}\text{,}\) where \(\vec{a}\) is parallel to \(\vec{u}\) and \(\vec{b}\) is perpendicular to \(\vec{u}\text{.}\)
10. Show that, of the four diagonals of a cube, no pair is perpendicular. Show that each diagonal is perpendicular to the face diagonals it does not meet.
11. Show that \(\norm{\vec{u}} = \norm{\vec{v}}\) if and only if \(\vec{u} + \vec{v}\) and \(\vec{u} - \vec{v}\) are perpendicular. Give an example in \(\mathbb{R}^2\text{.}\)
Solution: Since we are making a claim about the dot product of \(\vec{u} + \vec{v}\) and \(\vec{u} - \vec{v}\text{,}\) let us use the properties of the dot product to simplify it:
\(\displaystyle (\vec{u}+\vec{v})\cdot(\vec{u}-\vec{v}) = \vec{u}\cdot\vec{u} - \vec{u}\cdot\vec{v} + \vec{v}\cdot\vec{u} - \vec{v}\cdot\vec{v} = \norm{\vec{u}}^2 - \norm{\vec{v}}^2 \)
The two vectors being perpendicular means exactly that their dot product is zero, i.e. \(\norm{\vec{u}}^2 - \norm{\vec{v}}^2 = 0\text{,}\) which happens if and only if \(\norm{\vec{u}} = \norm{\vec{v}}\text{,}\) as claimed. For example, the vectors \(\vec{u} = \begin{bmatrix} 3 \\ 4 \end{bmatrix} \) and \(\vec{v} = \begin{bmatrix} 5 \\ 0 \end{bmatrix} \) have the same norm since
\(\displaystyle \norm{\vec{u}} = \sqrt{3^2+4^2} = 5 = \sqrt{5^2+0^2} = \norm{\vec{v}}\text{.} \)
We check that indeed
\(\displaystyle (\vec{u}+\vec{v})\cdot(\vec{u}-\vec{v}) = \begin{bmatrix} 8 \\ 4 \end{bmatrix}\cdot\begin{bmatrix} -2 \\ 4 \end{bmatrix} = -16 + 16 = 0\text{.} \)
12. If the diagonals of a parallelogram have equal length, show that the parallelogram is a rectangle.