The Physics of Waves and Oscillation

by Peter Wolfenden

Introduction

This article is an elementary treatment of the physics of sound. It requires no background in physics, but the reader is assumed to have studied calculus, trigonometry, and linear algebra. An appendix is provided for those readers who are unfamiliar with differential equations and/or who need to review their trigonometry identities.

The text is broken into 5 sections, which are summarized below:

The first section defines sound and gives a qualitative description of its properties. Doppler shifting and sonic booms are explained, and the concept of linearity is introduced.

The second section is an introduction to harmonic oscillators and differential equations. Linearity is more fully explained in mathematical terms, and non-linear systems are dismissed as too horrible to include within the scope of this article. A somewhat complex explanation of resonance in harmonic oscillators concludes the section.

The third section describes how the simple systems analyzed in the second section may be extended into the continuum. The mathematics associated with such an extension are quite laborious, and the reader is given an instructive taste of tedium.

The fourth section uses the ideal string model to derive the one-dimensional homogeneous wave equation. This equation is then applied to the hypothetical propagation of sound through air in a one-dimensional universe. Higher dimensions are extrapolated from the one-dimensional case with some hand-waving, and an argument using energy conservation is given to explain why sound loses intensity with increasing distance from its source in 2 and 3 dimensions, but not in 1 dimension.

(not yet implemented)

The fifth section is an introduction to Fourier analysis, an important signal processing tool.

 

Section I

In the most general sense, sound is the propagation of density waves through some medium. The medium most commonly encountered by most human beings is air, but sound also travels through water, rubber, steel, and tofu. In fact, most homogeneous substances conduct sound. The density waves are typically created by the vibration of some object immersed in the medium, such as a string, membrane, or chamber. The waves propagate outwards from their point of origin, and set up sympathetic vibrations in other nearby objects immersed in the medium, such as eardrums, wineglasses, and microphones. The speed at which density waves travel through a given medium depends entirely on the physical properties of the medium, and is independent of the manner in which the waves are produced.

This last property is remarkable, and should not strike the reader as obvious unless he or she has studied wave phenomena. After all, we know from everyday experience that velocities are additive. Imagine, for example, that while driving down I-95 at 60 kph, a strange and arguably homicidal individual pitches a baseball forwards at 60 kph with respect to her/his windshield. At the instant of release (before air resistance reduces the ball's speed), the ball is moving forwards at 120 kph with respect to the road, and 180 kph with respect to the windshield of a vehicle coming towards her/him at 60 kph in the other lane. (fig 1)

Naturally, if the ball were pitched backwards instead of forwards at 60 kph with respect to the car, it would move with zero horizontal velocity with respect to the road. So it would fall straight down, bounce, and come to rest (provided that the road is perfectly flat and level, and the ball has no spin).

But waves, and sound in particular, behave differently. The noise produced by a car's engine, which takes the form of density waves travelling through air, does not travel forwards from the front bumper any faster than it travels backwards from the rear bumper. In air of homogeneous sea-level density, sound travels at about 1,210 kilometers per hour regardless of the velocity of its source. But a moving vehicle chases the sound leaving its nose and runs away from the sound leaving its tail. This causes the sound waves to pile up ahead of the vehicle and stretch out behind. (fig 2)

This is the commonly observed Doppler effect, which to a stationary pedestrian causes an observed drop in the pitch of a blaring car horn as it zooms by. The faster the car moves, the more the sound moving forwards is squeezed and the more the sound moving backwards is stretched, and the greater the observed drop in pitch.
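To make this concrete, here is a small numerical sketch (not part of the original discussion) using the standard textbook Doppler formula for a moving source and a stationary listener; the horn frequency and car speed below are invented for illustration:

```python
# Standard Doppler formula for a stationary observer and a moving source:
# f_observed = f_source * v_sound / (v_sound - v_source) when approaching,
# and with a plus sign in the denominator when receding.
V_SOUND = 336.0   # speed of sound in m/s, roughly the 1210 kph quoted above

def doppler_observed(f_source, v_source, approaching=True):
    """Frequency heard by a stationary observer for a source moving at v_source (m/s)."""
    sign = -1.0 if approaching else 1.0
    return f_source * V_SOUND / (V_SOUND + sign * v_source)

f_horn = 440.0    # Hz, a made-up horn pitch
v_car = 30.0      # m/s, about 108 kph

print(doppler_observed(f_horn, v_car, approaching=True))    # ~483 Hz: higher while approaching
print(doppler_observed(f_horn, v_car, approaching=False))   # ~404 Hz: lower while receding
```

The drop the pedestrian hears as the car passes is the jump from the first number to the second.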

(What is pitch? Explain.)

But how do we know that the rise in pitch heard as the car approaches is not due to additive velocities, as in the case of the baseball? If sound wave velocities were additive, the sound waves leaving the front bumper would be moving faster than those leaving the rear, and an observer on the ground would sense them going by faster, and notice a rise in pitch.

But there are two experiments you can perform to convince yourself that the speed of sound is constant. Unfortunately both require rather specialized equipment, so most people rely on published results. First, you can set up a long, flat racetrack somewhere out in the Midwest, and arrange to have a microphone at one end record the instant at which it hears a pistol shot fired by the driver of a fast car. If the driver and microphone have their watches synchronized, and they try the experiment at various speeds, they should find that the time lapse between the shot being fired and the shot being heard depends only on the distance between the car and microphone at the moment when the shot is fired, and not on the speed at which the car is travelling.

The second experiment involves moving (flying, usually) at the speed of sound, which is about 1210 kph at sea level on a windless day. If a vehicle moves at the speed of sound, the noise produced by its engine is added to itself repeatedly at the front bumper, creating an extremely loud noise called a sonic boom. The many relatively small waves of sound produced by the humming (or roaring, or singing, or whatever) of the engine are squeezed together into one big shock wave which builds in intensity as the vehicle cruises along at 1210 kph while making noise. This is why aircraft try to make the transition from subsonic to supersonic speed or vice versa as quickly and as far away from population centers and avalanche areas as possible.

But sound waves within the limits of reasonable pitch and volume (unlike sonic booms) have some nice properties that make the mathematical analysis of sound particularly rewarding. One of the most important of these properties is linearity, which may be described as follows: The effect on an eardrum or microphone when the first note of a symphony is played is the same as the sum of the individual effects of each instrument on the eardrum or microphone. This means that two separate recordings of two separate parts "added" together in a mixer give the same result as a single recording of the two parts played together.

Similarly, a differential equation is linear if any number of solutions to the equation may be added together to form another solution to the equation. This may not seem relevant to the reader unless she has studied physics, but the next section is an attempt to make it so. We shall see how some basic physical systems may be described by a simple set of mathematical equations, and how the solutions to those equations describe the ways in which the system may behave. In particular, we shall see that solutions to the equations for systems that obey Hooke’s Law are linear.

The simplicity with which linear "mixing" may be described mathematically has made it relatively easy to build machines that manipulate sound electronically. But had nature made sound non-linear, it would be quite difficult to build speakers, and it would probably have been necessary to invent quite powerful computers before building anything like a recording studio.
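As a concrete sketch of the mixing idea (my own, and heavily idealized), the snippet below treats each recording as a list of pressure samples; the two "instruments" are invented sine tones, and "mixing" is nothing more than sample-wise addition:

```python
# Linearity in miniature: adding two recorded signals sample by sample.
import math

RATE = 8000                                  # samples per second (arbitrary)
t = [n / RATE for n in range(RATE)]          # one second of time stamps

part1 = [0.3 * math.sin(2 * math.pi * 440 * s) for s in t]   # hypothetical instrument 1
part2 = [0.5 * math.sin(2 * math.pi * 220 * s) for s in t]   # hypothetical instrument 2

# "Mixing" the two separate recordings is just sample-wise addition...
mixed = [a + b for a, b in zip(part1, part2)]

# ...and linearity says an ideal single recording of both parts played together
# would produce the same list of samples.
print(mixed[:3])
```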

Section II

The next few pages will be an introduction to the nature of (idealized) physical vibration. This section, though perhaps not obviously relevant to music, should provide a good foundation for the derivation of the wave equation in section 4. We begin with Hooke's law and a one-dimensional simple harmonic oscillator.

Let our simple harmonic oscillator system consist of a spring of length L attached at one end to an object of mass M and at the other end to an immovable object (see fig 1).

At this point we will make some simplifying assumptions about our system, namely:

1) The object with mass M is perfectly rigid (so it won't wobble or shake).

2) The mass moves without friction or air resistance back and forth in a perfectly straight line, as though it were sliding along a track aligned with the axis of the spring. This makes it possible to express the position of the moving mass in terms of a single coordinate, x.

3) The spring itself has no mass.

4) External forces (gravity, for example) may be ignored.

The above assumptions are made with the technical equivalent of artistic license. Since they greatly simplify the system while preserving much of the essential nature of physical oscillation, they are useful for purposes of explanation, and introductory physics texts almost invariably use them. Real behavior may be more precisely simulated using more complicated models, but the associated differential equations are more difficult to solve. Since the objective of this section is to cover only the basic concepts of harmonic oscillation, we will consider only the simplest scenario.

If the system is at rest and there are no unbalanced forces, then the system will remain at rest indefinitely, and is said to be at equilibrium. This happens only when the spring is at its favorite, or equilibrium, length and the mobile mass happens to have zero velocity. The mass is then at a certain position along the x-axis, which we will label x0.

(fig 2)

If we displace the mass by pulling or pushing it to the right or left, the spring is distorted, and like most stable physical objects it seeks to regain its original shape (provided we didn't crush, break, fold, spindle, mutilate or otherwise permanently affect the spring). The spring's resistance to distortion creates a restoring force in the direction opposite to the displacement.

For an ideal spring, the restoring force is given by the following linear relationship, called Hooke's Law:

F = -k * (x - x0)

Note: on the x-axis, positive values of x lie to the right of x0 and negative values of x lie to the left.

This means that the force with which the spring resists being distorted is directly proportional and opposite in sign to the displacement of the spring from its equilibrium position. The constant of proportionality "k" is called the spring constant, and it tells us how "stiff" our ideal spring is (the larger k is, the larger the restoring force for a given displacement).

The spring in our simple harmonic oscillator is an ideal spring with spring constant k. So if at equilibrium our mobile mass is located at x0 on the x-axis, then the displacement from equilibrium is (x - x0), and by Hooke's Law we know that the restoring force exerted by the spring is the following function of x : F(x) = -k * (x - x0).

What happens to this restoring force? A stretched ideal spring pulls both ends inwards with equal force, and a squashed ideal spring pushes both ends outwards with equal force. The immovable object is unaffected by any and all forces, so there is no motion at that end of the spring. The mobile mass, however, experiences an acceleration given by Newton's equation Force = Mass*acceleration, or F = ma. Before we actually solve the differential equation of motion for this harmonic oscillator, let's take a look at the potential and kinetic energy associated with a given displacement of the mass from equilibrium.

Work = ∫Force dx, which (neglecting friction) for the spring is ∫(k*x)dx = k*x^2/2. This is the work required to draw the mass a distance x away from equilibrium. Do not be disturbed by the fact that the work has a positive sign regardless of the sign of x. The spring resists equally in both positive and negative displacements from equilibrium, and work is a scalar quantity, so it is reasonable to expect positive work done in moving a distance -Δx when the mass is displaced to the left of x0. This work is stored in the spring and becomes potential energy, since it can be used again to perform work. The spring does an amount of work equal to k*x^2/2 in moving the mass back from a displacement x to equilibrium. In the absence of outside forces, this work goes into the kinetic energy of the mass. The kinetic energy of an object with mass m is m*v^2/2. So if we draw the mass away from equilibrium by Δx, thereby putting k*(Δx)^2/2 into the potential energy of the spring, and then release the mass, the potential energy in the spring is transferred to the kinetic energy of the object as the mass is pulled back towards equilibrium. The velocity of the mass when it reaches equilibrium should then be Δx*(k/m)^(1/2), in the direction opposite to the original displacement Δx. The momentum of the mass then carries it beyond equilibrium, and the spring does an amount of work k*(Δx)^2/2 to bring it to rest again at -Δx. This cycle repeats itself over and over, as the energy is transferred from the spring to the mass and back again, until friction dissipates the energy and the mass comes to rest.

We expect the mass to move back and forth, but how exactly will it move back and forth? Or, as physicists say, how does the system evolve in time? We can determine this precisely from the equation of motion. Acceleration is simply the second derivative of position with respect to time. So if we take the displacement of the mass from equilibrium to be some unknown function of time x(t), and equate the force exerted by the spring with Ma, we get the following differential equation for x(t):

velocity of M = x' = dx(t)/dt,

acceleration of M = x'' = d( dx(t)/dt )/dt

Mx'' = -k(x-x0)

If we introduce a new quantity X = (x - x0), then X(t) = (x(t) - x0), and we can rewrite the above differential equation as:

MX'' = -kX

or X'' + (k/M)X = 0

If we introduce another new quantity w = (k/M)^(1/2), we can again rewrite the equation :

X'' + w^2*X = 0 (the harmonic oscillator differential equation)

It is easy to verify that X(t) = Acos(wt) is a solution to the last equation (where A is an arbitrary constant), and likewise X(t) = Asin(wt). Physically this means that the mass moves back and forth (oscillates) sinusoidally with an amplitude which is arbitrary but does not change in time. The period of this oscillatory motion, or the amount of time required for the mass to complete one cycle, is 2π/w (throughout this tutorial we will use radians rather than degrees - see the appendix), and the frequency of the oscillation, or the number of cycles per second, is w/(2π). Note that the motion is isochronous, which is simply to say that the frequency does not depend on the amplitude. This makes harmonic oscillators useful as timekeepers, since they don't (in theory) slow down as they lose energy. Of course, no perfect harmonic oscillators exist, but some crystals (like those in many chronometers) vibrate very nearly perfectly.
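For readers who like to check such claims numerically, here is a rough sketch (not part of the text proper) that integrates M*X'' = -k*X directly and compares the result with A*cos(wt); the values of k, M, the time step, and the amplitudes are arbitrary:

```python
# Numerical check that M*X'' = -k*X gives X(t) = A*cos(w*t), with a period
# that does not depend on the amplitude A (isochronism).
import math

K, M, DT = 4.0, 1.0, 1e-4
W = math.sqrt(K / M)                      # w = (k/M)^(1/2) = 2 rad/sec here

def x_numeric(A, t_end):
    """Integrate X'' = -(K/M)*X with the velocity-Verlet scheme, starting at rest at X = A."""
    x, v = A, 0.0
    for _ in range(int(round(t_end / DT))):
        a = -(K / M) * x
        x_new = x + v * DT + 0.5 * a * DT * DT
        v += 0.5 * (a + (-(K / M) * x_new)) * DT
        x = x_new
    return x

t_check = 2.0
for A in (0.1, 1.0, 5.0):
    print(A, x_numeric(A, t_check), A * math.cos(W * t_check))
# Each row shows x(t_check)/A = cos(W*t_check): the waveform scales with A,
# but the timing (and hence the period 2*pi/W) does not change.
```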

Now we will make use of the definition of linearity to get more solutions to our differential equation. If an equation is linear, any linear combination of solutions is also a solution. A linear combination of mathematical objects, such as vectors or functions, is a sum in which each term is multiplied by a constant. For example, if v1, v2, and v3 are functions of t, then all linear combinations may be written in the form A*v1 + B*v2 + C*v3, and a specific linear combination might be v1 + 2*v2 + (.1)*v3 (where A=1, B=2, C=.1), or just v2 (where A=0, B=1, C=0), or even 0, where all coefficients are zero.

It is easy to verify that our harmonic oscillator equation is linear:

Let's say that X1 and X2 are two arbitrary solutions. In other words:

X1'' + w^2*X1 = 0 and X2'' + w^2*X2 = 0, so obviously A*X1 and B*X2, where A and B are arbitrary constants, are also solutions (derivatives, being linear operators, don't affect coefficients). Is (AX1+BX2) also a solution?

(AX1 + BX2)'' = AX1'' + BX2'', and w^2*(AX1 + BX2) = w^2*AX1 + w^2*BX2, therefore

(AX1 + BX2)'' + w^2*(AX1 + BX2) = AX1'' + BX2'' + w^2*AX1 + w^2*BX2 =

[AX1'' + w^2*AX1] + [BX2'' + w^2*BX2] = 0 + 0 = 0

So (AX1 + BX2) is a solution. It follows that any linear combination of solutions to our differential equation is a solution.

So how does this help us? We know two solutions, X1 = Acos(wt) and X2 = Bsin(wt). We can therefore write a solution of the form X = Acos(wt) + Bsin(wt). It turns out that this new X is a general solution to our differential equation, which means all solutions can be written in this form. This is because the sine and cosine functions are linearly independent, which should sound familiar if the reader has studied linear algebra. Two vectors, v1 and v2, are linearly independent if the only way to make the linear combination Av1 + Bv2 = 0 is to make A = B = 0. Similarly, two functions of a single variable x, f1(x) and f2(x), are linearly independent if the only way to make Af1 + Bf2 = 0 for all x is to make A and B zero. So, in fact, two functions are linearly dependent (i.e. not linearly independent) precisely when one is a constant multiple of the other. This is also true for vectors. Sine and cosine are clearly linearly independent, since neither is a constant multiple of the other (their zeros don't even coincide).

To show that Acos(wt) + Bsin(wt) is a general solution requires some facts taken from the study of ordinary differential equations. First, a linear homogeneous differential equation (in one variable, say x) of order n has at most n linearly independent solutions. Given n linearly independent solutions to our nth order diff. eq., all solutions may then be written as some linear combination of these n linearly independent solutions. The n linearly independent solutions span the solution space of the equation, in much the same way as a set of basis vectors spans geometric space. (If these assertions strike the reader as particularly offensive or interesting, a book on ODEs might be worthwhile.) Sine and cosine are linearly independent, and our harmonic oscillator differential equation is of order 2, therefore X = Acos(wt) + Bsin(wt) is a general solution. Of course, there are infinitely many general solutions. Why? Because cos(wt + p1) and cos(wt + p2) are both solutions, and are linearly independent as long as p1 and p2 do not differ by an integer multiple of π. This means we have an infinite number of linearly independent pairs of solutions to our differential equation, and each pair corresponds to a general solution of the form :

X = Acos(wt + p1) + Bcos(wt + p2).

sin(wt) and cos(wt) are a special case where p1 = 0 and p2 = -π/2.

Our general solution can be rewritten once more (using trig identities from the appendix) as :

X = Dcos(wt + φ),

where D = (A^2 + B^2)^(1/2) = amplitude, and φ = -arctan(B/A) = phase.

In general, when we solve a second order differential equation, we expect two arbitrary constants. The first form of our general solution had A and B, which were perfectly legitimate, but amplitude and phase are more geometrically intuitive to most people.
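If the conversion between the (A, B) form and the amplitude/phase form seems suspicious, the following quick numerical check (with arbitrary A, B, and w, and nothing to do with the physics) may help; atan2 is used so that the phase lands in the correct quadrant:

```python
# Check that A*cos(w*t) + B*sin(w*t) equals D*cos(w*t + phi)
# with D = (A^2 + B^2)^(1/2) and phi = -arctan(B/A).
import math

A, B, w = 2.0, -3.0, 1.7
D = math.hypot(A, B)                  # (A^2 + B^2)^(1/2)
phi = -math.atan2(B, A)               # quadrant-aware version of -arctan(B/A)

for t in (0.0, 0.4, 1.3, 5.0):
    lhs = A * math.cos(w * t) + B * math.sin(w * t)
    rhs = D * math.cos(w * t + phi)
    print(t, lhs, rhs)                # the two columns agree to round-off
```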

The initial conditions (abbreviated as IC), or the position and velocity of the mass at t=0, determine the amplitude and phase of oscillation. If our system has a mobile mass with mass = (1 kg) and a spring with spring constant = (1 kg/sec^2), then w = (1/sec), and the period = 2π sec. Our general solution is X = Dcos(t + φ), where D and φ remain to be determined.

If, for example, we hold the mobile mass stationary at a distance of 3 meters to the right of the equilibrium position and then release it at t=0, we can specify both constants in the following manner:

X'(t) = -Dsin(t + φ), and X'(0) = 0, because the mass is stationary at t=0; therefore -Dsin(φ) = 0, and we conclude that either D=0 or sin(φ)=0 (or both).

X(t) = Dcos(t + φ), and X(0) = 3 meters, so D cannot be zero; taking the amplitude D to be positive, sin(φ)=0 and X(0)>0 force φ to be zero. So X(0) = Dcos(0) = D = 3 meters, and we have specified both constants. Amplitude = 3 meters, and phase = 0. The angular frequency is 1 radian per second (a period of 2π seconds), so we may predict the behavior of the system for all t>0. The mass will come to rest at t = 0, π, 2π, ... and x = 3m, -3m, 3m, ... and will have velocity equal to -3m/sec, 3m/sec, -3m/sec at t = π/2, 3π/2, 5π/2, ....

(picture here)

If, on the other hand, we hold the mass at a distance of 1 meter to the left of the equilibrium position and give it a leftward velocity of 2 meters per second at t=0, the mass will oscillate with amplitude=... (is this second example really necessary?).
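Rather than grind through a second example by hand, here is a small sketch (mine, not the text's) that turns any initial position X0 and velocity V0 into the amplitude D and phase φ of X(t) = Dcos(wt + φ); it reproduces the 3 meter example above and also handles the 1 meter, 2 m/sec case:

```python
# From X(0) = D*cos(phi) and X'(0) = -D*w*sin(phi) we get
#   D = (X0^2 + (V0/w)^2)^(1/2)   and   phi = atan2(-V0/w, X0).
import math

def amplitude_and_phase(X0, V0, w):
    D = math.hypot(X0, V0 / w)
    phi = math.atan2(-V0 / w, X0)
    return D, phi

w = 1.0                                    # rad/sec, as in the example above
print(amplitude_and_phase(3.0, 0.0, w))    # (3.0, 0.0): the worked example
print(amplitude_and_phase(-1.0, -2.0, w))  # D = 5**0.5 ~ 2.24 m, phi ~ 2.03 rad
```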

It is also sometimes useful to abstract the notion of force to mean minus the derivative of energy with respect to distance. In our system, distance is simply displacement from equilibrium, and energy is simply the potential energy stored in the spring when it is stretched or squashed. So if we define a potential function U(x) = -∫F(x)dx to represent the energy stored in the spring for a given displacement of the mass, then F(x) = -d(U(x))/dx. And for our system we know F(x) = -k*(Δx), so U(x) = ∫k*(Δx) dx = k*(Δx)^2/2 + C. The arbitrary constant C coming from the integration is the energy of the system at equilibrium, which is zero since we defined our potential as energy stored in the spring. So U(x) = k*(Δx)^2/2.

We are now in a position to verify that our oscillator is a conservative system. The kinetic energy of our system is given by T = (1/2)m*v^2, and along a given motion it, too, may be written as a function of time. So the total energy of the system is T + U = (1/2)m*(X')^2 + (1/2)k*X^2, and we can substitute our solution in for X : X = Dcos(wt + φ), X' = -Dwsin(wt + φ), to get :

total energy = (1/2)m*D^2*w^2*sin^2(wt + φ) + (1/2)k*D^2*cos^2(wt + φ)

= (1/2)m*D^2*(k/m)*sin^2(wt + φ) + (1/2)k*D^2*cos^2(wt + φ)

= (1/2)k*D^2*(sin^2(wt + φ) + cos^2(wt + φ))

= (1/2)k*D^2

which is constant with respect to time.

Of course, a harmonic oscillator need not consist of a spring and mass of the sort illustrated in our diagrams up to this point. The only things required for an oscillator are a mass of some kind and a linear restoring force of some kind. For example, the spring could apply a torque (rotational force) to a moment of inertia (rotational mass) rotating about a fixed point. If the magnitude of the torque increases linearly with the magnitude of the angular displacement, the rotation will be harmonic, and we expect a plot of θ vs time to be a sinusoid. This is precisely the case in an old-fashioned speedometer, as illustrated: (fig 6)

(put in an arrow pointing to the needle)

For this situation we have a differential equation of the form :

θ'' + (k/I)θ = 0, and a general solution of the form:

θ = Acos(wt) + Bsin(wt), or

θ = Csin(wt + φ), where w = (k/I)^(1/2)

Our look at the 1-D harmonic oscillator ends with a brief look at resonance. Resonance is a dramatic increase in the vibrational amplitude of a harmonic oscillator which occurs when a periodic driving force with a frequency close to the oscillator's natural frequency is applied to the oscillator.

Mathematically, we can express this new situation in terms of a new differential equation (let's assume for the moment that the driving force is sinusoidal):

Driving force = G*cos(yt+z), G = amplitude, y = frequency, z = phase,

Total force on mobile mass = -k*Δx + G*cos(yt+z), so

m*x'' = -k*Δx + G*cos(yt+z), and our differential equation is:

x'' + (k/m)x = (G/m)*cos(yt+Z), where we have simply written the (arbitrary) phase of the driving term as Z for use below.

This is an inhomogeneous differential equation. A theorem from the study of differential equations lets us write the general solution to an inhomogeneous differential equation as a linear combination of two independent pieces. One piece is the solution to the corresponding homogeneous equation, and the other is any particular solution to the inhomogeneous equation. The hardest part of solving an inhomogeneous equation is generally finding a particular solution.

We have already solved the homogeneous case:

x'' + (k/m)x = 0, x(t) = C*cos(wt+φ)

So all that remains to be done is the hard part.

Try: x = D*cos(ht+p),

x'' = -D*h*h*cos(ht+p), (k/m)x = (k*D/m)*cos(ht+p)

x'' + (k/m)x = ((k*D/m)-D*h*h)*cos(ht+p)

So, to make x'' + (k/m)x = (G/m)*cos(yt+Z), we have :

G = (k*D - h*h*m*D), h = y, p = Z.

D = G/(k - y*y*m), and we have a particular solution of the form:

x = (G/(k - y*y*m))*cos(yt+Z)

Note that k = w*w*m, so if y=w, which happens when the driving force and the oscillator have equal frequencies, (k-y*y*m) = 0, and (G/(k-y*y*m)) becomes infinite.

Our general solution is of the form :

x(t) = C*cos(wt+φ) + (G/(k - y*y*m))*cos(yt+Z)

For y close to w, (G/(k - y*y*m)) is large. And because the two cosine terms have periods which are nearly equal, their sum "beats" as the two waves slide in and out of phase. The closer y gets to w, the slower and louder the beats become.
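The beating is easy to see numerically. The sketch below (with invented constants) evaluates the general solution above for a driving frequency slightly below w, choosing C = -G/(k - y*y*m) and zero phases so that the oscillator starts from rest at equilibrium:

```python
# Beats in the undamped driven oscillator, using the closed-form solution above.
import math

m, k, G = 1.0, 100.0, 1.0
w = math.sqrt(k / m)              # natural frequency: 10 rad/sec
y = 9.5                           # driving frequency, close to w
amp = G / (k - y * y * m)         # particular-solution amplitude; blows up as y -> w

def x(t):
    # General solution with C = -amp and both phases zero (starts from rest at equilibrium).
    return -amp * math.cos(w * t) + amp * math.cos(y * t)

for second in range(26):
    swing = max(abs(x(second + j / 100.0)) for j in range(100))
    print(second, round(swing, 3))
# The printed swing rises and falls with period 2*pi/(w - y), about 12.6 seconds: the beats.
```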

For the moment we have no way of proving that this resonance effect should also be observed for non-sinusoidal periodic driving forces, but the following physical argument should make sense. The sum of kinetic and potential energy for an undriven simple harmonic oscillator remains constant if there is no dissipation due to friction. When the driving force pushes the oscillator in the direction it is already moving, the speed of the mass increases and kinetic energy is added to the oscillator. Likewise, when the driving force opposes the motion of the oscillator, the oscillator loses energy.

If the periodic driving force has a frequency unequal to that of the oscillator, the phase of the driving force with respect to the oscillator will itself change in time, and have a frequency equal to the difference between the driving and oscillating frequencies. The actual numerical value associated with relative phase depends on the choice of origin, but we know the beat frequency and therefore the period of the relative phase.

The phase regions in which energy is added and taken from the oscillator also depend on the nature of the driving force, but the sum over a whole phase period must be zero. If, for example, the energy of the oscillator increases while the phase is between 0 and π, then the energy must correspondingly decrease while the phase is between π and 2π. But if the driving frequency is made to approach the oscillator frequency, the phase frequency approaches zero, and the oscillator gains and loses increasing amounts of energy over increasing lengths of time. If the driving frequency matches the oscillation frequency, three things may happen. If the relative phase of the oscillator and driver (which is now fixed since the phase frequency is zero) is such that the energy of the oscillator is increasing, the oscillator will go on gaining energy until it explodes or its spring loses its linearity. If the relative phase is such that the energy is decreasing, then the oscillator will slow down, reverse direction, and gain energy steadily until the oscillator breaks down. Finally, if the relative phase is sitting on a boundary between the phase regions where the driver adds and subtracts energy, there is no energy change at all, and the oscillator goes chugging along forever without traumatizing the spring. There are two such boundaries. One corresponds to the oscillator and driver moving in perfect tandem, so the spring does no work at all. At the other boundary, the center of the spring remains fixed while the mass and the driving force push and pull symmetrically on the ends.

(picture here)

When a driving frequency happens to match the frequency of an oscillator, the chances of the relative phase being on one of the magic boundaries are essentially zero. So eventually the oscillator begins to vibrate with an increasing amplitude. If the oscillator is nonlinear, its energy will generally peak somewhere and bounce around unpredictably (this is currently a "hot" topic in engineering). If the oscillator has some damping mechanism to drain away energy it may just hum to itself, like the floor when a downstairs neighbor hits a certain note on her bass. But in some cases, such as the infamous crystal wine glass, the oscillator will reach a critical vibrational amplitude and self-destruct.

So why are harmonic oscillators important? Our spring/mass scenario seems pretty contrived, but harmonic oscillator behavior is observed in a wide variety of physical phenomena. This is simply because in every stable physical configuration (a solar system, a molecule, or a piece of tofu) there is a restoring force of some kind tending to pull the system back to equilibrium. If we can describe the system in terms of one coordinate (radial displacement or squashedness), the motion of the system following a disturbance which displaces the system from equilibrium is oscillatory. If the restoring force is a complicated function of displacement, the oscillations will in general NOT be harmonic. But for small displacements, we can often approximate the restoring force using Hooke's Law (should I be explicit here and work through a simple example, like a pendulum with a rigid arm?). So for most simple systems, small amplitude vibrations are harmonic.

Unfortunately, in general things are not usually so simple. Most interesting physical systems behave not like single oscillators but like many coupled oscillators. In fact, the physical systems which will be of greatest interest to us as musicians behave this way. So to study the (ideal) vibrating string, membrane, and air column, we will need some more math.

 

Section III

Before we look at continuous systems, it may be instructive to take a quick look at the nature of coupled oscillations.

Let's imagine a system consisting of two of the ideal spring-mass oscillators discussed in section 1 connected by a third ideal spring. For simplicity, let's make the system one-dimensional, and establish a good set of coordinates.

A good set of coordinates describes all possible states of the system with maximum efficiency. A state of the system is simply a certain position or configuration with a certain velocity or motion. For example, if we wanted to establish a general set of coordinates for a system consisting of three particles moving in a plane, we could use three-dimensional Cartesian coordinates to describe the position and velocity of each particle in space. We would have x-position, y-position, z-position, x-velocity, y-velocity, and z-velocity as functions of time for all three particles. It would, however, be more efficient simply to consider the plane to which the particles are confined, and use two-dimensional coordinates. If we also know that the motion of each particle is confined to the perimeter of a circle with radius R in the plane, then we can use a polar coordinate system with its center at the midpoint of the circle to describe the state of the system yet more efficiently. (illustration) With the coordinates of a particle given by (R,θ), and R being fixed, the position of each particle may be completely specified in terms of the single coordinate θ. The velocity of the particle may be expressed as R*θ', where θ' = ω is the angular velocity of the particle. The coordinates θ1, θ2, θ3 are the most efficient way of describing the configuration of our three particle system, and therefore comprise a good coordinate system. The minimum number of coordinates needed to describe all the allowable configurations of a system is also known as the number of degrees of freedom of the system. A good coordinate system has exactly as many coordinates as the physical system it describes has degrees of freedom. The three-particle system just described has three degrees of freedom, and a completely free particle in 3 dimensions can only be described in terms of a coordinate system with at least three coordinates, so it, too, has three degrees of freedom.

For our two-mass, three-spring system, we have two moving parts. Since both masses are confined to move in one dimension, the system has two degrees of freedom, and our good coordinate system therefore has two coordinates. Let x1 be the position of m1 and x2 be the position of m2 (illustration). The system has an equilibrium position that may be found by balancing all forces. If we assume for simplicity that all three springs are at equilibrium length when the system is at rest, it is convenient to measure x1 and x2 as displacements from equilibrium. We may then calculate all forces acting on the masses in terms of their displacements using Hooke's law. Finally, we can write a system of differential equations by equating force with acceleration.

In the case of the harmonic oscillator there is only one differential equation, because that physical system has only one degree of freedom. With two degrees of freedom we shall find two differential equations, and with n degrees of freedom we would find n differential equations. The two which we will find for our coupled oscillator system cannot be solved independently, so they constitute a system of equations. The systems of equations which represent oscillating systems of reasonable complexity are seldom actually solvable. But harmonic oscillators are a special case, and systems of such oscillators can almost always be solved using techniques from linear algebra. We expect, given n degrees of freedom, to find 2*n linearly independent solutions (since the equations are second order). These are arranged in pairs which may be combined to give n linearly independent solutions, each with a phase factor. (recall how the single oscillator had two independent solutions which were combined to form a single solution with a phase factor) These independent solutions are called normal modes, and each one has a characteristic frequency.

Without further ado, let's actually set up and solve the system for our coupled oscillators.

(here we go)

F1 = force on mass 1 = -(Δx1)*k1 + (Δx2 - Δx1)*k12

F2 = force on mass 2 = -(Δx2)*k2 + (Δx1 - Δx2)*k12

(using F=ma:)

m1*(Δx1'') = -(Δx1)*k1 + (Δx2 - Δx1)*k12

m2*(Δx2'') = -(Δx2)*k2 + (Δx1 - Δx2)*k12

Since x1 and x2 are measured as displacements from equilibrium, Δx1 = x1 and Δx2 = x2, and dividing by the masses gives:

x1'' = -x1*(k1/m1) + (x2 - x1)*(k12/m1)

x2'' = -x2*(k2/m2) + (x1 - x2)*(k12/m2)

We can now rewrite the system in matrix form. Writing X for the column vector with components x1 and x2, the two equations above become X'' = A*X, where A is the 2-by-2 matrix with entries

a = -(k1 + k12)/m1 , b = k12/m1 (first row)

c = k12/m2 , d = -(k2 + k12)/m2 (second row)

We are particularly interested in the eigenvectors of this matrix. The eigenvectors turn out to correspond to the normal modes of the system, and the associated eigenvalues determine the frequencies of the normal modes (each eigenvalue is minus the square of a mode frequency). The eigenvalue equation for our matrix may be written in the following form:

A*v = λ*v

which is equivalent, writing x and y for the components of v, to the two equations ax + by = λx and cx + dy = λy. These may be rewritten as (λ - a)x/b = y and y = cx/(λ - d). Setting the y terms equal gives the following quadratic in λ : (λ - a)(λ - d) = cb, or λ^2 - (a+d)λ + (ad-cb) = 0

The roots of this equation are:

λ = ( (a+d) ± ( (a+d)^2 - 4(ad-cb) )^(1/2) )/2 = ( (a+d) ± ( (a-d)^2 + 4cb )^(1/2) )/2

The normal modes of the system are:

.... (plug lambdas into equation to get eigenvectors, normal modes)

(illustration) of normal modes..

So for this system we have two frequencies () corresponding to two normal modes (). So we can write a general solution to the system in the following manner: ()

In general we expect the number of normal modes to get larger as the number of degrees of freedom of the system increases, and the motion of the system to get more and more complicated. For a system with n degrees of freedom we must solve an nth degree polynomial in order to get the eigenvalues of the characteristic matrix. For n larger than two this can be difficult, and for n larger than four it can be impossible without the use of numerical root-finding or eigenvalue methods.
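In practice one simply hands the matrix to a numerical linear algebra routine. The sketch below (assuming numpy is available, with arbitrary sample masses and spring constants) does exactly that for our two-mass, three-spring system:

```python
# Normal modes of the two-mass, three-spring chain via numpy.
# The entries a, b, c, d are the ones written out above.
import numpy as np

m1 = m2 = 1.0
k1 = k2 = k12 = 1.0

A = np.array([[-(k1 + k12) / m1,  k12 / m1],
              [ k12 / m2,        -(k2 + k12) / m2]])

lam, vecs = np.linalg.eig(A)          # X'' = A*X, so each eigenvalue is -(frequency)^2
freqs = np.sqrt(-lam)

for f, v in zip(freqs, vecs.T):       # eigenvectors are the columns of vecs
    print("mode frequency", round(float(f), 4), "eigenvector", np.round(v, 3))
# With these sample numbers: w = 1 (masses moving together) and w = 3**0.5 (masses opposed).
```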

So, though we could build a crude model of a vibrating string using springs and masses, and take a look at the limiting behavior of the system as the masses get smaller and smaller and there are more and more of them in a given interval on the x axis, it would probably be easier to start afresh.

 

Section IV

The previous three sections of this article have been devoted to the treatment of oscillation in systems with a finite number of degrees of freedom. Such systems are said to be "discrete". In this section, we will extend our analysis of oscillation to "continuous" systems, in which oscillations and waves are one and the same. And, at last, we can hope for some insight into the nature of sound.

First we will derive the wave equation in one dimension using an idealized elastic string model. Then we will look at solutions to the wave equation.

It is easy to become confused by the many ways in which the idealized string model seems familiar, and the rather unfamiliar ways in which it is used. When confronted with the notion of a string, most people think of a violin (or guitar) string, which is fixed at both ends and which produces different notes (or harmonics) depending on how its length is shortened (by fingers or capos) and its vibrations are driven (by plucking or bowing). The wave equation will, in fact, give us a picture of such behavior when we clamp the ends, and we will see where the harmonic series comes from. But equally important is the notion of an infinite string, which has at most one end clamped (but still keeps tension somehow), and has no harmonics, properly speaking. This seemingly absurd notion is actually quite valuable in that it gives us a model for elastic media, and we shall see that "wave packets" propagate along the infinite string in a manner analogous to the way bursts of noise travel through air. Fortunately, both models use the same equation, which we will now derive.

For the derivation we will use a simple one-dimensional model, consisting of a perfectly elastic string suspended in space. It is possible to extend the derivation to higher dimensions, and find an equation describing the behavior of a vibrating membrane, or even a solid. But such a derivation is extremely involved, and is beyond the scope of this tutorial.

We will, at this point, make a number of assumptions about the string and its behavior which reduce the complexity of the problem considerably. Though these assumptions are not generally valid in the real world, they preserve the essential features of harmonic vibration and make theoretical life much more pleasant than it would be otherwise :

1) The string has perfect and uniform elasticity. "Perfect elasticity" means that a segment of the string will obey Hooke's law if stretched, and "uniform elasticity" means that the "spring constant" of a given segment is the same for any other segment of equal length, regardless of its location. Note that if a segment of length s has spring constant k, a segment of length b*s will have spring constant k/b. If this seems non-obvious, try playing with a rubber band. It is harder to stretch a short piece than a long piece. To see why this is so, imagine two identical pieces of length s and spring constant k placed end to end, forming a segment of length 2s. Displacing one end of the long segment by a given distance Δx stretches both constituent pieces Δx/2, so the restoring force is the same as for a single piece stretched by Δx/2. So a double-length segment has a spring constant half that of its constituent pieces. The same argument can be extended to show a segment of length b*s has a spring constant of k/b.

2) The string has a uniform linear density ρ (mass per unit length).

3) The equilibrium position for the string is a straight line, which for convenience we will call the x axis, and all displacements from equilibrium are transverse (perpendicular to the string's length). So if we were to make a mark at some point along the string's length and set the string vibrating, the mark would be confined to move in the plane perpendicular to the string's equilibrium position at that point. We can, of course, imagine stretching the string longitudinally (that's how we test its stretchiness, after all), and we could even set it vibrating longitudinally. In fact, we could derive the wave equation with this sort of vibration, and ignore transverse motion entirely. The point is that we want to isolate one dimension of the vibration, so as to be able to write a simple function describing the state of the string.

To make things as simple as they can possibly be, we assume that all displacements are confined to a single 2-dimensional plane, and any motion of the string is likewise constrained. So a mark on the string actually moves back and forth in a straight line perpendicular to the string's equilibrium position at that point.

This purely transverse motion makes it possible to express the displacement of the string as a function of one variable. We will represent our string by the formula y = f(x), where y is the displacement of the string from equilibrium at x. (figure 1)

4) The string is unaffected by all external forces, such as gravity and viscous drag, and has no internal forces save its own stretchiness (described by Hooke's law below). The string therefore has no internal friction or internal source of mechanical energy. Note that rubber bands do NOT behave this way. When you stretch them, they warm up and lose tension.

5) The slope of the string at any point along its length is never large.

It is of no concern to us at this point if the ends are fixed or if the string is infinitely long. For now we will confine our attention to a small segment of the string, sitting between x and (x + Δx) along the equilibrium axis of the string. It does not matter what x and Δx are, since this segment is supposed to be completely general (Joe segment). If we can work out what the string looks like at an arbitrary value of x and Δx, then we have described the behavior of the string everywhere, which is our goal.

The slope of the string segment is described by two angles, one for each endpoint. θ1 is the angle between the positive equilibrium axis of the string and a vector (with a leftward horizontal component) tangent to the string at the left endpoint. θ2 is the angle between the positive equilibrium axis of the string and a vector (with a rightward horizontal component) tangent to the string at the right endpoint.

(figure 2 - (fix this, need x and x + Δx))

 

So, looking at our arbitrary string segment, we know:

1) Since the displacement of the string is transverse, all velocities and accelerations must therefore also be transverse. So the net horizontal force on the string segment must be zero.

2) There must be a constant horizontal tension all along the string, because the horizontal forces at each end of an arbitrary string segment must be balanced. If, for example, the horizontal tension were greater at point A than at point B, the horizontal forces on the string segment connecting A and B would be unbalanced, and the segment would be drawn towards point A. This contradicts the requirement that an arbitrary string segment have balanced horizontal forces, so the string tension at A and B must be equal. Let's call this uniform horizontal tension T'.

3) Given the slope of the string at each endpoint of our arbitrary segment, we can break the tension force at each end into horizontal and vertical components, as shown in (figure 3).

At x, the left endpoint, we have a horizontal force given by F1cos(θ1) and a vertical force given by F1sin(θ1), where F1 is the magnitude of the tension force there. Likewise at (x+Δx), the right endpoint, we have a horizontal force component F2cos(θ2) and a vertical component F2sin(θ2). The horizontal forces point in opposite directions (since presumably θ1 is between π/2 and 3π/2, while θ2 is between -π/2 and π/2).

Setting the horizontal components of the two forces to sum to zero gives:

(1) - F1cos(θ1) = F2cos(θ2) = T'.

This equation reflects the requirement that there be no longitudinal motion.

The net vertical force on the string segment is the sum of the vertical components at the two endpoints:

(2) Fupwards = F2sin(θ2) + F1sin(θ1).

[Keep in mind that sin(θ1) < 0]

Using Newton's law of motion : F = ma, we get :

F = Mass*acceleration

= (density*length)*(second derivative of displacement w.r.t time)

= (ρ*Δs)*(∂(∂y/∂t)/∂t),

where ρ = mass per unit length, Δs = the actual length of the string segment (which for large angles θ1 and θ2 could be considerably more than Δx), and ∂(∂y/∂t)/∂t is the transverse acceleration of the segment.

So, using our expression for Fupwards, we have :

(3) F2sin(θ2) + F1sin(θ1) = (ρ*Δs)*(∂(∂y/∂t)/∂t)

Dividing both sides by T' and using (1) gives :

(4) tan(θ2) - tan(θ1) = (ρ*Δs)*(∂(∂y/∂t)/∂t)/T'

Geometrically, tan(θ1) is just the slope of the string at x, and tan(θ2) is the slope at (x+Δx), so we can write:

(5) ∂y(x+Δx,t)/∂x - ∂y(x,t)/∂x = (ρ*Δs)*(∂(∂y/∂t)/∂t)/T'

Note that we are using partials (∂/∂x) instead of derivatives (d/dx) because y is a function of two variables : x and t. The slope of the string at (x,t) is ∂y(x,t)/∂x, and the velocity at (x,t) is ∂y(x,t)/∂t.

If the slope of the string is small, we may use Δx to approximate Δs. So substituting Δx for Δs, and dividing through by Δx, we get:

(6) ( ∂y(x+Δx,t)/∂x - ∂y(x,t)/∂x )/Δx = (ρ/T')*(∂(∂y/∂t)/∂t)

As Δx goes to zero, the left side approaches the derivative of ∂y/∂x with respect to x (which is really the second partial of y with respect to x). Taking the limit of the above equation as Δx goes to zero gives us:

(7) ∂(∂y/∂x)/∂x = (ρ/T')*(∂(∂y/∂t)/∂t)

which is usually written (reversed) in physics books as:

(8) ytt = c^2*yxx

[where c^2 = T'/ρ, and y(x,t) is sometimes written as u(x,t)]

________________________________________________________

This, as advertised, is the homogeneous wave equation. It is second order and linear, so we can actually solve it (convince yourself that the equation is linear by taking two arbitrary solutions y1(x,t) and y2(x,t) and showing that a linear combination is also a solution). If, after solving the homogeneous wave equation, we find ourselves at a loss for fun things to do, we can add a driving force to the string and solve the resulting nonhomogeneous form of the wave equation. But for now let's concentrate on solving the homogeneous form.
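Although we are about to solve the equation exactly, it can also be attacked numerically. The following sketch (a standard explicit finite-difference scheme, not part of the derivation) steps ytt = c^2*yxx forward in time for a string with both ends pinned; the grid, wave speed, and initial bump are arbitrary choices:

```python
# Finite-difference ("leapfrog") integration of y_tt = c^2 * y_xx with fixed ends.
import math

N, L, c = 200, 1.0, 1.0
dx = L / N
dt = 0.5 * dx / c                    # chosen so that (c*dt/dx) <= 1, which keeps the scheme stable
r2 = (c * dt / dx) ** 2

x = [i * dx for i in range(N + 1)]
y_prev = [math.exp(-((xi - 0.3) / 0.05) ** 2) for xi in x]   # initial bump...
y_curr = y_prev[:]                                           # ...released from rest

for step in range(500):
    y_next = [0.0] * (N + 1)                                 # ends stay pinned at zero
    for i in range(1, N):
        y_next[i] = (2 * y_curr[i] - y_prev[i]
                     + r2 * (y_curr[i + 1] - 2 * y_curr[i] + y_curr[i - 1]))
    y_prev, y_curr = y_curr, y_next

print(max(y_curr), min(y_curr))   # the bump splits in two, travels, and reflects off the ends
```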

In the case of the infinitely long string, the homogeneous solution will give us some insight into how waves propagate through elastic media, and in the case of the string fixed at two points we will be able to derive the harmonic series.

Section IV.5

 

The general solution to the partial differential equation utt = c^2*uxx may be found by using a change of variables. Our goal is to find u as a function of x and t. To do this we find the characteristic coordinates of the pde in terms of x and t, and substitute them for x and t. Our pde will then be in its simplest form, and hopefully solvable. The characteristic coordinates of a pde are solutions to the associated characteristic equation, which is generally of lower order than the pde itself.

Note: at this point I am assuming the reader understands the difference between partial differentiation with respect to a variable and regular differentiation. Also, the chain rule is assumed known : d(F(u,v))/dx = (∂F/∂u)(du/dx) + (∂F/∂v)(dv/dx). See the appendix for further explanation.

Fortunately, in the case of second order two-variable pdes, there exists a simple formula (found in all introductory pde textbooks) for the characteristic equation. First we rewrite our partial differential equation utt = c^2*uxx in the general form for pdes of this type:

A*utt + B*utx + C*uxx = 0, and the characteristic equation is :

(let's put in an (illustration) with the square root done properly, here)

dx/dt = ( B ± sqrt(B*B - 4*A*C) )/(2*A), which is simply the quadratic formula.

The two roots (±c in the case of our pde, where A = 1, B = 0, and C = -c^2) correspond to two differential equations which may be solved to get two characteristic coordinates.

1) dx = c*dt

Integrate both sides to get x = ct + ξ, or (ξ = x - ct)

2) dx = -c*dt,

which becomes x = -ct + η, or (η = x + ct)

The arbitrary constants introduced in the integration are the characteristic coordinates, ξ = x - ct and η = x + ct.

Now we want to rewrite the wave equation in terms of ξ and η.

ux = uξ*(∂ξ/∂x) + uη*(∂η/∂x) = uξ + uη

uxx = uξξ*(∂ξ/∂x) + uξη*(∂η/∂x) + uηξ*(∂ξ/∂x) + uηη*(∂η/∂x)

= uξξ + uξη + uξη + uηη = (uξξ + 2uξη + uηη)

ut = uξ*(∂ξ/∂t) + uη*(∂η/∂t) = -c*uξ + c*uη

utt = -c*uξξ*(∂ξ/∂t) - c*uξη*(∂η/∂t) + c*uηξ*(∂ξ/∂t) + c*uηη*(∂η/∂t)

= c^2*uξξ - c^2*uξη - c^2*uηξ + c^2*uηη = c^2*(uξξ - 2uξη + uηη)

plugging everything in :

utt - c^2*uxx = c^2*(uξξ - 2uξη + uηη) - c^2*(uξξ + 2uξη + uηη)

= -4c^2*uξη

utt - c^2*uxx = 0, by the wave equation, so -4c^2*uξη = 0

We assume that waves actually propagate, so c ≠ 0, therefore uξη = 0

Integrate uξη with respect to η to get uξ = ψ(ξ), an arbitrary function of ξ. Any function with no dependence on η will give zero if differentiated with respect to η. Since the variables ξ and η are independent, any function of the variable ξ only will give zero if differentiated by η. Therefore ψ(ξ)η = 0, and uξη = 0 as required.

Integrate uξ with respect to ξ to get u = ∫ψ(ξ)dξ + Φ(η), where Φ, the arbitrary constant introduced in the integration, is an arbitrary function of η. Φ(η)ξ = 0, so uξ = ψ(ξ) as required.

If we let Ψ(ξ) = ∫ψ(ξ)dξ, then we get u(ξ,η) = Ψ(ξ) + Φ(η).

Substitute ξ = x - ct and η = x + ct into u(ξ,η) to get:

(9) u(x,t) = Ψ(x-ct) + Φ(x+ct)

So the general solution to the wave equation consists of two arbitrary functions of x moving (or propagating) at speed c in opposite directions along the x axis.

Note that the second derivatives with respect to x and t of both arbitrary functions must be well defined, so Ψ and Φ must be twice continuously differentiable. A solution picked at random might not even be continuous, so we must be careful. A random solution also might not satisfy the physical constraints which allowed us to derive the equation in the first place. The only real danger here, provided that Ψ and Φ are twice continuously differentiable, is that the solution might not have small slopes. For the moment, let's not worry about the magnitude of the slopes associated with solutions. If we find that a certain solution has dangerously large slopes, we can simply multiply it by a scaling factor to bring all the slope magnitudes into line without changing the nature of the solution. In fact, the illustrations in this section will exaggerate slopes so as to make the shape of the string more clear.

We may now construct any and all solutions to the wave equation on an infinite string out of arbitrary functions. At this point, a few examples may be helpful.

(give an illustration here)

 

(Do some examples with packets and (illustration)s of waves)
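In lieu of the promised illustrations, here is a small sketch (shapes and numbers invented) that evaluates u(x,t) = Ψ(x-ct) + Φ(x+ct) for two Gaussian wave packets and prints the string's profile at a few times:

```python
# Two wave packets propagating in opposite directions along an infinite string.
import math

c = 1.0

def packet(s, center, width=0.2):
    return math.exp(-((s - center) / width) ** 2)

def u(x, t):
    # Psi(x - c*t): a right-moving packet starting near x = -2;
    # Phi(x + c*t): a left-moving packet starting near x = +2.
    return packet(x - c * t, -2.0) + packet(x + c * t, 2.0)

for t in (0.0, 1.0, 2.0, 3.0):
    profile = [round(u(-3 + 0.5 * i, t), 2) for i in range(13)]   # samples from x = -3 to x = 3
    print("t =", t, profile)
# The two bumps drift towards each other, pass through one another around t = 2, and separate.
```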

But how to deduce the shape of the string given initial conditions?

If, at some t which for convenience we will call 0, we know f(x) to be the transverse displacement of the string from equilibrium, and g(x) to be the transverse velocity of the string, then we have:

u(x,t) = Ψ(x-ct) + Φ(x+ct)

u(x,0) = Ψ(x) + Φ(x) = f(x) (1)

ut(x,t) = (-c)Ψ'(x-ct) + (c)Φ'(x+ct)

ut(x,0) = (-c)Ψ'(x) + (c)Φ'(x) = g(x) (2)

Now, integrate equation 2 with respect to x along a segment of the string from x1 to x:

-c*Ψ(x) + c*Φ(x) = ∫g(s)ds (integral taken from x1 to x) + K, where K is a constant.

Combining this last expression with equation 1 and solving gives us, for Ψ:

Ψ(x) = (1/2)*f(x) - (1/(2c))*∫g(s)ds (from x1 to x) - K/(2c)

And for Φ:

Φ(x) = (1/2)*f(x) + (1/(2c))*∫g(s)ds (from x1 to x) + K/(2c)

Combining these in u(x,t) = Ψ(x-ct) + Φ(x+ct), the K terms cancel and the two integrals join end to end, giving a single expression for u(x,t):

u(x,t) = (1/2)*( f(x-ct) + f(x+ct) ) + (1/(2c))*∫g(s)ds (from x-ct to x+ct)

This is our general formula (d'Alembert's formula) for the position of the string at time t given its position and velocity at t=0. Note that to obtain the velocity of the string we need only differentiate the above expression with respect to t (do this). And if the functions f(x) and g(x) are given at some t1 ≠ 0, simply replace t with t-t1 in the equations.
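The formula is easy to put to work numerically. The sketch below (mine, with an invented initial bump and a crude Riemann sum for the integral) evaluates it directly:

```python
# d'Alembert's formula: u(x,t) = (f(x-ct) + f(x+ct))/2 + (1/(2c)) * integral of g over [x-ct, x+ct].
import math

c = 1.0

def f(x):                         # initial displacement: a small bump
    return math.exp(-x * x)

def g(x):                         # initial transverse velocity: zero in this example
    return 0.0

def u(x, t, n=1000):
    a, b = x - c * t, x + c * t
    ds = (b - a) / n
    integral = sum(g(a + (j + 0.5) * ds) for j in range(n)) * ds
    return 0.5 * (f(a) + f(b)) + integral / (2 * c)

print(u(0.0, 0.0))                # 1.0: the height of the initial bump
print(u(2.0, 2.0), u(-2.0, 2.0))  # ~0.5 each: the bump has split into two half-height copies
```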

Boundary Conditions

What happens if we fix the position of one end of the string, say by attaching it to an immovable object? (illustration) Imagine a string extending out to infinity on the left, and fixed to an infinitely massive and rigid (and non-gravitating) mass on the right. At the boundary between the string and the mass (which we may for convenience place at x=0) the position and velocity of the string are fixed at zero, but the slope may vary. The only way to fix the slope at the boundary is by introducing some stiffness into the string, which would violate our elasticity assumption. If a wave packet Ψ(x-ct) is incident from the left on the boundary at x=0, every (transverse) force which the packet exerts on the boundary is exactly matched by the inertia of the infinite mass, which opposes motion. We could simulate this effect by extending the string to infinity on the right, and having another wave packet Φ(x+ct) which is Ψ(x-ct) flipped backwards and upside down and incident from the right on x=0. (illustration) As the two wave packets pass through each other, they exert exactly equal and opposite transverse forces on the point x=0, and the point remains fixed just as though the infinite mass had been there. The wave packets then continue onwards, Ψ(x-ct) moving off towards x=+∞ and Φ(x+ct) towards x=-∞. So it is as though the incident packet is reflected and inverted by the point x=0.

Another type of boundary permits an end of the string to slide transversely without friction on a track (picture). When an incident wave packet hits this type of boundary, it finds no resistance since the force which normally must be exerted to get the string to move from its equilibrium position is gone (there being no more string). The slope of the string at this type of boundary is always zero, (though near the boundary the string may have nonzero slope), but the transverse position of the end may vary. (the only way to get a nonzero slope at the boundary is to have some friction) We may simulate this type of boundary by extending the string to infinity on the right as we did before, and introducing another wave packet which is simply the mirror image of the original. (picture) The two packets meet at x=0, and preserve zero slope at x=0 as they pass through each other. This is because as mirror images they exert transverse force in the same direction. After crossing x=0, the two packets continue to propagate in their respective directions, so it is as though the incident packet has been reflected. (illustration)
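Both image tricks are easy to verify numerically. In the sketch below (packet shape and numbers invented), the fixed-end image is flipped and inverted, Φ(s) = -Ψ(-s), while the free-end image is a plain mirror, Φ(s) = Ψ(-s); the printout checks that the displacement at x=0 stays zero in the first case and that the slope at x=0 stays zero in the second:

```python
# Simulating a boundary at x = 0 with image wave packets.
import math

c = 1.0

def psi(s):                                     # incident packet, peaked near s = -2
    return math.exp(-((s + 2.0) / 0.3) ** 2)

def u_fixed(x, t):
    return psi(x - c * t) - psi(-(x + c * t))   # flipped-and-inverted image: u(0,t) = 0

def u_free(x, t):
    return psi(x - c * t) + psi(-(x + c * t))   # mirror image: slope at x = 0 is 0

for t in (0.0, 1.0, 2.0, 3.0):
    slope = (u_free(0.001, t) - u_free(-0.001, t)) / 0.002
    print("t =", t, "fixed-end u(0,t) =", round(u_fixed(0.0, t), 6),
          "free-end slope at 0 ~", round(slope, 6))
```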

 

Now for the case of the string fixed at both ends, we can look at the problem in two ways. First, we can think of any displacement along the length of the string as a moving wave, which will be reflected back and forth between the fixed boundaries. This makes sense if, as in the previous examples, the displacement is a clearly localized "wave packet" over the time interval of interest. If, however, the displacement involves a large portion of the whole string at once (a plucked cello string, for example), then this approach becomes confusing. If the string just vibrates back and forth, no moving wave packets are clearly visible. But as it turns out, if two periodic waves with the same period are moving in opposite directions towards each other, a standing wave is created wherever they cross.

For example, let's take two sine waves with period 2π. Let Ψ(x-ct) be sin(x-ct) and let Φ(x+ct) be sin(x+ct). So u(x,t) = Ψ(x-ct) + Φ(x+ct) = sin(x-ct) + sin(x+ct). We could at this point plot u(x,t) on a computer and get some pictures (possible illustration). But let's do things mathematically first.

Recall the addition formulas for sine and cosine:

sin(a+b) = sin(a)cos(b) + cos(a)sin(b)

cos(a+b) = cos(a)cos(b) - sin(a)sin(b)

Also sin(-a) = -sin(a) and cos(-a) = cos(a), so

sin(x-ct) = sin(x)cos(ct) - cos(x)sin(ct), sin(x+ct) = sin(x)cos(ct) + cos(x)sin(ct), therefore

u(x,t) = 2sin(x)cos(ct)

This is already a solution to utt = c^2*uxx (it is a sum of two solutions). For convenience we can shift the time origin by adding a phase factor of -π/2 to cos(ct), giving cos(ct - π/2), or sin(ct). This just means setting the clock back π/2c seconds, which shouldn't cause any problems, and it matches the form we will use below. u(x,t) = 2sin(x)sin(ct) is also a solution to the wave equation, but it doesn't propagate to the right or left. The time dependence is expressed in the sin(ct) factor, which simply causes the amplitude of the sine wave to oscillate from -2 to 2 and back again every 2π/c seconds. The period of the x dependent term is 2π, so the sine wave crosses zero at 0, ±π, ±2π, ±3π,... regardless of t. These points where u(x,t) is always zero are known as nodes. If we hold a standing wave fixed at two of its nodes, the vibration should not be affected, and we have a solution to the case of a vibrating string fixed at both ends.

Or instead we can go back to the wave equation and use linearity to construct a general solution for the fixed string. This is analogous to the procedure we used to find a general solution for the simple harmonic oscillator.

Let's take the case of a string of length L fixed at both ends. Let the leftmost end be located at x=0 and the rightmost end at x=L. It is easy to show that u(x,t) = Asin(w1t)sin(πx/L) is a solution to utt = c^2*uxx, where A is an arbitrary constant and w1 = c(π/L) is the frequency (in radians per second in time) of the sin(w1t) term. Looking at the sin(πx/L) term, we see that sin(0)=0 and sin(π)=0. So u(0,t)=0 and u(L,t)=0 for all t, and the ends remain fixed as required.

(illustration)

Notice that we can get a similar solution simply by halving the period of the sin(πx/L) term, or doubling its frequency (in radians per unit length), to get sin(2πx/L). This also keeps the ends fixed, since sin(2π)=0. But we then have to double the frequency of the sin(w1t) term as well in order to satisfy

utt = c²uxx.

So u(x,t) = Asin(2w1t)sin((2π/L)x) works as a solution, where 2w1 = c(2π/L), or 2(cπ/L).

In fact, any function of the form:

u(x,t) = Asin((nπc/L)t)sin((nπ/L)x) will work, where n = 1,2,3,... This gives us an infinite number of solutions, each with a different frequency.

(illustration)

For a given n, the nodes (counting the two fixed ends) are spaced L/n apart along the x axis, and the frequency of the vibration is nc/(2L) cycles per second, or Hz. Notice also that the solutions associated with negative values of n (-1,-2,-3,...) are the same functions as those for positive n, since sin(-a)sin(-b) = sin(a)sin(b), so nothing new is gained by allowing negative n.
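
To make the pattern of frequencies concrete, here is a short sketch listing the first few normal-mode frequencies; the length and wave speed below are illustrative values chosen for this example, not numbers taken from the text:

import math

L = 0.65      # string length in metres (illustrative value)
c = 143.0     # wave speed on the string in m/s (illustrative value)

for n in range(1, 6):
    omega_n = n * math.pi * c / L    # angular frequency of mode n, in rad/s
    f_n = n * c / (2 * L)            # frequency of mode n, in Hz
    print(n, round(omega_n, 1), round(f_n, 1))

With these particular numbers the fundamental comes out at 110 Hz and the next modes at 220, 330, 440, and 550 Hz, integer multiples of the fundamental.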

We may easily demonstrate that these solutions are linearly independent. And since the wave equation is linear, any linear combination of solutions of the form Asin((nπc/L)t)sin((nπ/L)x) will also be a solution.

In fact, these linearly independent solutions are the normal modes of the vibrating string. The frequency (in time) of the normal mode associated with n=1 is called the fundamental frequency, and the frequency of the nth normal mode, for n>1, is known as the nth harmonic. (harmonic series)

(Show that these may be constructed using two moving waves, added to make a standing wave - use pictures from the plot program)
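
Pending those pictures, the decomposition can at least be checked symbolically: the standing wave Asin(wt)sin(kx) is identically (A/2)[cos(kx - wt) - cos(kx + wt)], a sum of two equal waves moving in opposite directions. A minimal sympy check (sympy assumed, as before):

import sympy as sp

x, t, k, w, A = sp.symbols('x t k w A', real=True)

standing = A * sp.sin(w*t) * sp.sin(k*x)

# two equal waves travelling in opposite directions
travelling = (A/2) * (sp.cos(k*x - w*t) - sp.cos(k*x + w*t))

print(sp.simplify(sp.expand_trig(travelling) - standing))   # 0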

Now talk about media, energy conservation as we increase dimensionality and how that accounts for the drop in volume as distance from sound source increases.

Derive the wave equation in 2 and 3 dimensions, making no attempt to describe the solutions, but hinting that they also obey the general criteria for linearity and are therefore fourierizable in some way.

 

Appendix

This appendix contains a section devoted to elementary trigonometry identities, and a section on differential equations which includes some basic calculus review.

 

Trigonometry Review:

First, the definition of sine and cosine :

(picture of unit circle)

 

(graphs of sin, cos, tan)

(identities, sin²(x) + cos²(x) = 1, plus angle addition formulas, half angle ...)

Differential Equations:

 

Functions and variables:

In calculus textbooks, functions of a single variable are commonly written as letters followed by a variable name in parentheses: f(x), g(y), or u(z), for example. The variable names themselves are, of course, not important. They are significant only insofar as they are associated with the definition of the function. If, for example, we know that f(x) = -sin(x), then we know that f(6321.02) = -sin(6321.02), and f(y) = -sin(y). So if we made a plot of f(x) against the x-axis, it would look exactly the same as a plot of f(w) against the w-axis.

Functions of several variables are written similarly, but with a list of variables separated by commas inside the parentheses: v(x,y,z), u(t,s), f(l,m,n,o,p). It is important that the variables be distinguishable from one another, and that they be in a well-defined order. But otherwise, the names are arbitrary. For example, if u(x,y,z) = sin(x)*cos(y) + z, we know that u(a,b,c) = sin(a)*cos(b) + c, and u(z,x,b) = sin(z)*cos(x) + b. Obviously, u(a,b,x) is not, in general, the same as u(x,b,a) (but setting u(a,b,x) equal to u(x,b,a) and solving for a, b, and x might well produce an interesting locus of points).
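
The same positional convention appears in programming languages, which may make it more familiar; a throwaway Python illustration of the example above:

import math

def u(x, y, z):
    # only the order of the arguments matters, not their names
    return math.sin(x) * math.cos(y) + z

print(u(1.0, 2.0, 3.0))   # sin(1)*cos(2) + 3
print(u(3.0, 2.0, 1.0))   # sin(3)*cos(2) + 1, generally a different number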

Domain and Range:

The domain of a function is the set of all values which may be "plugged in" to the function to produce a legal value.

For example :

1) the domain of sin(x) is the set of real numbers

2) the domain of tan(x) is the set of real numbers minus the set of zeros of cos(x): numbers of the form π/2 + n*π, where n is an integer. The function tan(x) is undefined at values of x which correspond to zeros of cos(x), so these points are not included in the domain of tan(x).

3) the domain of sqrt(x) is the set of real numbers >= 0

The range of a function is the set of all values which may be produced by plugging values into the function.

For example :

1) the range of sin(x) is the closed interval [-1,1]

2) the range of tan(x) is the real numbers

3) the range of sqrt(x) is the set of real numbers >= 0

A function is sometimes referred to as a "mapping". A function maps its domain to its range.

(picture)

Continuity:

A function g is continuous if, roughly speaking, its graph has no sudden jumps or breaks:

the value of g at a point p1 can be made as close as we like to the value of g at a point p2 by taking p1 sufficiently close to p2. A convenient condition which guarantees this (and which is in fact somewhat stronger than continuity) is the following: given any two points p1 and p2 in the domain of g, the value of g at p1 differs from the value of g at p2 by no more than K*d, where K is some fixed number over the whole domain of g, and d is the distance between the points p1 and p2. The function sin(x), for example, satisfies this with K = 1.

Single variable differentiation:

Differentiation is an operation defined for a function of a single variable. For example, the derivative of f(x) is d(f(x))/dx, or f'(x), and the third derivative of f(x) is d(d(d(f(x))/dx)/dx)/dx, or d³f(x)/dx³, or f'''(x). Derivatives higher than the third are often written with a parenthesized number as a superscript: f(4)(x) = f''''(x). And since f is a function of one variable, it doesn't really matter what name we give that variable. In fact, the (x) is sometimes omitted entirely; in this case, f' and f'' and f(8) are understood to mean differentiation with respect to the single variable used in defining f.

The chain rule for simple differentiation:
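
The rule in question is d/dx f(g(x)) = f'(g(x))g'(x). A minimal sympy spot-check, with an arbitrarily chosen pair of functions:

import sympy as sp

x = sp.symbols('x', real=True)

g = x**2 + 1                            # an arbitrary inner function
composition = sp.sin(g)                 # f(g(x)) with f = sin

lhs = sp.diff(composition, x)           # differentiate the composition directly
rhs = sp.cos(g) * sp.diff(g, x)         # f'(g(x)) * g'(x), since f' = cos
print(sp.simplify(lhs - rhs))           # 0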

 

Partial differentiation:

When differentiating a function of several variables, it is necessary to specify which variable is relevant to the differentiation. For example, if g is a function of a, b, and c, the expression g' is ambiguous. g' could refer to differentiation with respect to a, b, c or some linear combination of a, b and c.
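
For instance, taking the function u(x,y,z) = sin(x)*cos(y) + z from the previous section, each choice of variable gives a different derivative; a quick sympy check:

import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
u = sp.sin(x) * sp.cos(y) + z

print(sp.diff(u, x))   # cos(x)*cos(y)
print(sp.diff(u, y))   # -sin(x)*sin(y)
print(sp.diff(u, z))   # 1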

Consider a continuous function u of two variables, and imagine a surface in 3-space (with coordinate axes x, y, and z) formed by assigning the value u(x,y) to the z coordinate above each point (x,y) in the xy plane. The continuity of u guarantees that the surface will be unbroken, with no jumps or tears and no points where it shoots off to infinity.

(picture)

We can imagine taking derivatives in any number of ways, for example:

1) Define a straight line in the xy plane somewhere, and call it the k axis. If the line is described by y = mx + b, we can parameterize the line in terms of variable k this way:

x(k) = k

y(k) = b + mk

z(k) = 0

So each real number k corresponds to a point on the k axis with x, y, and z coordinates given by the above parametric equations.

This k axis in the xy plane corresponds to a curve on the surface u, which is the projection of the k axis up or down along the z axis onto the surface u(x,y).

(picture)

If k is a number describing a point somewhere along the k axis, we may then take the values of x and y generated by the parametric equations and plug them into u: u(x(k),y(k)). And since the curve on the surface of u is now expressed entirely in terms of k, we may (with a slight abuse of notation) write the curve as u(k).
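
To make this construction concrete, here is a small sympy sketch that builds u(x(k),y(k)) for an arbitrarily chosen surface and line and then differentiates it with respect to k; the particular u, slope m, and intercept b are illustrative choices, not values from the text:

import sympy as sp

k, m, b = sp.symbols('k m b', real=True)

def u(x, y):
    # an arbitrary illustrative surface
    return sp.sin(x) * sp.cos(y)

x_of_k = k           # x(k) = k
y_of_k = b + m*k     # y(k) = b + m*k

u_of_k = u(x_of_k, y_of_k)    # the curve on the surface, written in terms of k alone
print(sp.diff(u_of_k, k))     # its derivative along the k axis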

The chain rule for partial differentiation: