Driving vibrations with sawtooth waves

The previous post looked at driving a vibrating system with square waves rather than the more customary sine waves.

You could think of a square wave as a crude approximation to a sine wave. A sawtooth wave is another crude approximation to a sine wave, and so it would be interesting to see how systems driven by a sawtooth forcing function differ from those driven by a square or sinusoidal forcing function.

Sine wave, sawtooth wave, and square wave

With a square wave, the differences were most pronounced at low frequencies, i.e. when the driving frequency was low relative to the natural frequency. The same is true for sawtooth waves, but with some interesting differences.

As before we will look at the equation

u'' + u = sin(ωt)

but with the sine forcing function replaced by a variation, this time a sawtooth wave rather than a square wave. We will use initial conditions u(0) = u'(0) = 1.
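
The plots below come from solving this equation numerically. Here's a minimal sketch of the kind of code that can reproduce them, assuming SciPy's solve_ivp and scipy.signal.sawtooth for the forcing function; the post doesn't show its own code, so the time grid and solver settings here are my own choices.

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.signal import sawtooth
    import matplotlib.pyplot as plt

    ω = 1.0  # driving frequency

    # u'' + u = sawtooth(ωt), written as a first-order system in (u, u')
    def system(t, z):
        u, up = z
        return [up, sawtooth(ω*t) - u]

    t = np.linspace(0, 100, 1000)
    # u(0) = u'(0) = 1; a small max_step keeps the solver from skating over the jumps
    sol = solve_ivp(system, [0, 100], [1, 1], t_eval=t, max_step=0.1)
    plt.plot(sol.t, sol.y[0])
    plt.xlabel("$t$")
    plt.ylabel("$u$")
    plt.show()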

We start with ω = 1, i.e. driving the system at its natural frequency.

Resonance with sawtooth forcing

We see the resonance we’d expect when driving a system at its natural frequency. The amplitudes are lower for the sawtooth forcing function than for the sinusoidal forcing function.

When we back away a little from the resonant frequency, setting ω = 0.9, we see beats.

Sawtooth wave beats

When we reduce ω to 0.5, we get resonance again for the sawtooth-driven solution.

Resonance for sawtooth-driven solution

At ω = 0.4 we see something like beats again.

Sawtooth driven beats

And at ω = 1/3 it looks like we have resonance again for the sawtooth-driven system.

Sawtooth driven resonance

We also see resonance at ω = 1/n for all integer n. I’ve tried this for n up to 12. But at other frequencies that are not reciprocals of integers I see periodic solutions.

I have only investigated this numerically. Do any of you know of analytical results that apply here?

Update: There’s a simple reason why driving frequencies of 1/n do indeed create resonance: The Fourier series for the driving function has a component at the resonant frequency. Thanks to gmvh for pointing this out.
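
To make that concrete: a sawtooth wave with the same period and amplitude as sin(ωt) has the standard Fourier series

\mathrm{sawtooth}(\omega t) = \frac{2}{\pi} \sum_{k=1}^\infty (-1)^{k+1}\, \frac{\sin k\omega t}{k}

so when ω = 1/n, the k = n term oscillates at frequency 1, the natural frequency of the system.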

The square wave has a similar resonance at driving frequency 1/n, but only when n is an odd number. I missed that because I didn’t happen to try any frequencies of that form. Here’s an example with n = 5.

Driving oscillations with square waves

What happens when you drive a vibrating system with a square wave rather than a sine wave? Do you still see the same kinds of behavior, such as beats and resonance? When does the difference between a square wave and a sine wave matter most? Those are the questions this post will address.

Background

Basic mechanical oscillations are modeled by the equation

m u'' + γ u' + k u = F sin(ωt + φ)

You can think of m, γ, and k as mass, damping, and spring constant. The same equation describes non-mechanical oscillations, such as electrical systems. Sometimes the terms such as “mass” are still used when they are metaphorical rather than literal. I wrote a four-part series of posts on mechanical vibrations a while back starting here.

The term on the right hand side is the forcing function. F is the amplitude of the driving force and ω is its frequency.

The natural frequency ω₀ of a system modeled by the equation above is given by

ω₀² = k/m.

When F = 0, the solutions to the differential equation will have this frequency. When F is not zero, the solutions have a component at the natural frequency and a component at the driving frequency. When the two frequencies are different, you get beats. When the two frequencies are the same, you get resonance. More on beats and resonance here.
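
For the undamped special case explored below (m = k = F = 1, γ = φ = 0), this can be made concrete. A particular solution of u'' + u = sin(ωt) is

u_p(t) = \frac{\sin \omega t}{1 - \omega^2} \quad (\omega \neq 1), \qquad\qquad u_p(t) = -\frac{t \cos t}{2} \quad (\omega = 1)

The homogeneous solution adds terms at the natural frequency 1. The interference between the two components produces beats when ω is close to 1, and the linearly growing t cos t term is the resonance.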

In this post we will look at solutions to the equation above where the forcing function is a square wave rather than a sine wave. That is, we will replace

sin(ωt + φ)

with

sign( sin(ωt + φ) )

where

sign(x)

is 1 when x is positive and -1 when x is negative.

Exploration

To simplify things a little, we will set the damping term γ to zero, and set the phase φ in the driving function to zero. Also, we will set m, k, and F all equal to 1. We will focus on varying only the driving frequency ω.

We need initial conditions for our differential equation, so let’s pick u(0) = 1 and u'(0) = 0.
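
The plots below come from numerical solutions. Here's a minimal sketch of how they can be reproduced with SciPy's solve_ivp; the post doesn't show its own code, so the solver settings are my own choices.

    import numpy as np
    from scipy.integrate import solve_ivp
    import matplotlib.pyplot as plt

    ω = 1.0  # driving frequency

    # u'' + u = sign(sin(ωt)), written as a first-order system in (u, u')
    def system(t, z):
        u, up = z
        return [up, np.sign(np.sin(ω*t)) - u]

    t = np.linspace(0, 100, 1000)
    # u(0) = 1, u'(0) = 0; a small max_step resolves the jumps in the forcing function
    sol = solve_ivp(system, [0, 100], [1, 0], t_eval=t, max_step=0.1)
    plt.plot(sol.t, sol.y[0])
    plt.show()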

When we drive the system at its natural frequency, i.e. with ω = 1, we get the same kind of resonance from a sine wave and a square wave.

Square wave resonance

Update: There are more resonant frequencies [1].

When we lower the driving frequency to ω = 0.5 we see more of a difference.

Square beats omega = 0.5

In general we see more difference as ω gets smaller, and more difference when 1/ω, the ratio of the driving period to the natural period, is not an integer. For example, here is ω = 0.25, where 1/ω = 4:

Square wave beats omega = 0.25

And here is ω = 0.3, where 1/ω ≈ 3.33.

Square wave beats, omega = 0.3

Here’s an example of driving a system with a square wave at a frequency higher than the natural frequency.

Square wave beats, omega = 1.7

Here the solutions for the square wave and the sine wave are closer together. Apparently the higher driving frequency matters more than the fact that 1/ω is not an integer.

In the examples above, the solution with a square wave forcing function starts out flat. That’s because of our initial conditions u(0) = 1 and u'(0) = 0. If we change the initial velocity condition to u'(0) = 1, we get smoother solutions.

Here are the solutions with u'(0) = 1 and ω = 0.25

Square wave beats, omega = 0.25, initial velocity 1

and ω = 0.3.

Square wave beats, omega = 0.30, initial velocity 1

Changing the initial velocity made more of a difference when 1/ω was an integer.

See the next post for systems driven by a sawtooth wave rather than a square wave.

Related posts

[1] Sine-driven systems exhibit resonance if and only if the driving frequency matches the natural frequency. While writing the next post on sawtooth forcing functions, I accidentally discovered resonance at lower frequencies. The same can happen here with square-wave-driven systems if ω = 1/(2n + 1). For example, here’s a plot with ω = 1/5. The reason resonance shows up at these frequencies is that the square wave has Fourier components at every odd harmonic.
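
For reference, the square wave’s Fourier series makes the odd-harmonic claim explicit:

\mathrm{sign}(\sin \omega t) = \frac{4}{\pi}\left(\sin \omega t + \frac{\sin 3\omega t}{3} + \frac{\sin 5\omega t}{5} + \cdots\right)

When ω = 1/(2n + 1), the term sin((2n + 1)ωt) oscillates at the natural frequency 1, which is what drives the resonance.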

Symplectic Euler

This post will look at simple numerical approaches to solving the predator-prey (Lotka-Volterra) equations. It turns out that the simplest approach does poorly, but a slight variation does much better.

Following [1] we will use the equations

u' = u (v – 2)
v' = v (1 – u)

Here u represents the predator population over time and v represents the prey population. When the prey v increase, the predators u increase, leading to a decrease in prey, which leads to a decrease in predators, etc. The exact solutions are periodic.

Euler’s method replaces the derivatives with finite difference approximations to compute the solution in increments of time of size h. The explicit Euler method applied to our example gives

u(t + h) = u(t) + h u(t) (v(t) – 2)
v(t + h) = v(t) + h v(t) (1 – u(t)).

The implicit Euler method gives

u(t + h) = u(t) + h u(t + h) (v(t + h) – 2)
v(t + h) = v(t) + h v(t + h) (1 – u(t + h)).

This method is implicit because the unknowns, the values of the solution at the next time step, appear on both sides of the equations. This means we either need to do some algebra by hand, if possible, to solve for the values at time t + h, or use a root-finding method at each time step (a sketch of that approach appears after the symplectic Euler code below).

Implicit methods are more difficult to implement, but they can have better numerical properties. See this post on stiff equations for an example where implicit Euler is much more stable than explicit Euler. I won’t plot the implicit Euler solutions here, but the implicit Euler method doesn’t do much better than the explicit Euler method in this example.

It turns out that a better approach than either explicit Euler or implicit Euler in our example is a compromise between the two: use explicit Euler to advance one component and use implicit Euler on the other. This is known as symplectic Euler for reasons I won’t get into here but would like to discuss in a future post.

If we use explicit Euler on the predator equation but implicit Euler on the prey equation we have

u(t + h) = u(t) + h u(t) (v(t + h) – 2)
v(t + h) = v(t) + h v(t + h) (1 – u(t)).

Conversely, if we use implicit Euler on the predator equation but explicit Euler on the prey equation we have

u(t + h) = u(t) + h u(t + h) (v(t) – 2)
v(t + h) = v(t) + h v(t) (1 – u(t + h)).

Let’s see how explicit Euler compares to either of the symplectic Euler methods.

First some initial setup.

    import numpy as np

    h  = 0.08  # Step size
    u0 = 6     # Initial condition
    v0 = 2     # Initial condition
    N  = 100   # Number of time steps

    u = np.empty(N)
    v = np.empty(N)
    u[0] = u0
    v[0] = v0

Now the explicit Euler solution can be computed as follows.

    for n in range(N-1):
        u[n+1] = u[n] + h*u[n]*(v[n] - 2)
        v[n+1] = v[n] + h*v[n]*(1 - u[n])

The two symplectic Euler solutions are

    
    for n in range(N-1):
        v[n+1] = v[n]/(1 + h*(u[n] - 1))
        u[n+1] = u[n] + h*u[n]*(v[n+1] - 2)

and

    for n in range(N-1):
        u[n+1] = u[n] / (1 - h*(v[n] - 2))
        v[n+1] = v[n] + h*v[n]*(1 - u[n+1])
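
For completeness, here's a sketch of the fully implicit Euler method mentioned earlier, using scipy.optimize.fsolve to solve the two nonlinear equations at each step. It reuses the arrays and step size from the setup above; the choice of root finder is mine, not the post's.

    from scipy.optimize import fsolve

    for n in range(N-1):
        def F(z):
            # residual of the implicit Euler equations for (u(t+h), v(t+h))
            un, vn = z
            return [un - u[n] - h*un*(vn - 2),
                    vn - v[n] - h*vn*(1 - un)]
        u[n+1], v[n+1] = fsolve(F, [u[n], v[n]])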

Now let’s see what our solutions look like, plotting (u(t), v(t)). First explicit Euler applied to both components:

And now the two symplectic methods, applying explicit Euler to one component and implicit Euler to the other.

Next, let’s make the step size 10x smaller and the number of steps 10x larger.
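
Concretely, that means changing two lines in the setup above and re-running it along with the loops:

    h  = 0.008  # Step size, 10x smaller
    N  = 1000   # Number of time steps, 10x larger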

Now the explicit Euler method does much better, though the solutions are still not quite periodic.

The symplectic method solutions hardly change. They just become a little smoother.

More differential equations posts

[1] Ernst Hairer, Christian Lubich, Gerhard Wanner. Geometric Numerical Integration: Structure-Preserving Algorithms for Ordinary Differential Equations.

Leapfrog integrator

The so-called “leapfrog” integrator is a numerical method for solving differential equations of the form

x'' = f(x)

where x is a function of t. Typically x is position and t is time.

This form of equation is common for differential equations coming from mechanical systems. The form is more general than it may seem at first. It does not allow terms involving first-order derivatives, but these terms can often be eliminated via a change of variables. See this post for a way to eliminate first order terms from a linear ODE.

The leapfrog integrator is also known as the Störmer-Verlet method, or the Newton-Störmer-Verlet method, or the Newton-Störmer-Verlet-leapfrog method, or …

The leapfrog integrator has some advantages over, say, Runge-Kutta methods, because it is specialized for a particular (but important) class of equations. For one thing, it solves the second order ODE directly. Typically ODE solvers work on (systems of) first order equations: to solve a second order equation you turn it into a system of first order equations by introducing the first derivative of the solution as a new variable.

For another thing, it is reversible: if you advance the solution of an ODE from its initial condition to some future point, make that point your new initial condition, and reverse time, you step back to where you started. The only discrepancy is floating point error; in exact arithmetic the round trip returns you exactly to your starting point.

Another advantage of the leapfrog integrator is that it approximately conserves energy: the energy error stays bounded rather than drifting. Over long time spans the leapfrog integrator can therefore perform better than a method that is more accurate on any individual step.

Here is the leapfrog method in a nutshell with step size h.

\begin{align*} x_{i+1} &= x_i + v_i h + \frac{1}{2} f(x_i) h^2 \\ v_{i+1} &= v_i + \frac{1}{2}\left(f(x_i) + f(x_{i+1})\right) h \end{align*}

And here’s a simple Python demo.

    import numpy as np
    import matplotlib.pyplot as plt
    
    # Solve x" = f(x) using leapfrog integrator
    
    # For this demo, x'' + x = 0
    # Exact solution is x(t) = sin(t)
    def f(x):
        return -x
    
    k = 5               # number of periods
    N = 16              # number of time steps per period
    h = 2*np.pi/N       # step size
    
    x = np.empty(k*N+1) # positions
    v = np.empty(k*N+1) # velocities
    
    # Initial conditions
    x[0] = 0
    v[0] = 1
    anew = f(x[0])
    
    # leapfrog method
    for i in range(1, k*N+1):
        aold = anew
        x[i] = x[i-1] + v[i-1]*h + 0.5*aold*h**2
        anew = f(x[i])
        v[i] = v[i-1] + 0.5*(aold + anew)*h
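
The code above doesn't include the plotting. Here's one way to plot the solution and to check the bounded energy error mentioned earlier; the axis labels and the printout are my own additions.

    t = h*np.arange(k*N+1)
    plt.plot(t, x)
    plt.xlabel("$t$")
    plt.ylabel("$x$")
    plt.show()

    # Energy (x^2 + v^2)/2 is exactly 1/2 for the true solution.
    # For leapfrog it oscillates slightly but does not drift.
    H = 0.5*(x**2 + v**2)
    print(H.min(), H.max())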

Here’s a plot of the solution over five periods.

There’s a lot more I hope to say about the leapfrog integrator and related methods in future posts.

More on ODE solvers

Counterexample to Dirichlet principle

Let Ω be an open set in some Euclidean space and v a real-valued function on Ω.

Dirichlet principle

Dirichlet’s integral for v, also called the Dirichlet energy of v, is

\int_\Omega \frac{1}{2} | \nabla v |^2

Among functions with specified values on the boundary of Ω, Dirichlet’s principle says that minimizing Dirichlet’s integral is equivalent to solving Laplace’s equation.

In a little more detail, let g be a continuous function on the boundary ∂Ω of the region Ω. A function u has minimum Dirichlet energy, subject to the requirement that u = g on ∂Ω, if and only if u solves Laplace’s equation

\Delta u = 0

subject to the same boundary condition.

Dirichlet’s principle requires some hypotheses not stated here, as Hadamard’s example below shows.

Hadamard’s example

Let g(θ) be the function [1]

g(\theta) = \sum_{n=1}^\infty \frac{\sin n!\theta}{n^2}

The function g is continuous and so there exists a unique solution to Laplace’s equation on the unit disk with boundary values given by g, but the Dirichlet energy of the solution diverges.

The solution, in polar coordinates, is

u(r, \theta) = \sum_{n=1}^\infty r^{n!} \,\,\frac{\sin n!\theta}{n^2}

The Laplace operator in polar coordinates is

\frac{1}{r} \frac{\partial }{\partial r}\left(r \frac{\partial u}{\partial r} \right) + \frac{1}{r^2} \frac{\partial^2 u}{\partial \theta^2}

and you can differentiate u term by term to show that it satisfies Laplace’s equation.

Dirichlet’s integral in polar coordinates is

\int_0^{2\pi} \int_0^1 \frac{1}{2} \left\{ \left( \frac{\partial u}{\partial r}\right)^2 + \frac{1}{r^2}\left(\frac{\partial u}{\partial \theta}\right)^2 \right\} \, r\,dr\,d\theta

Integrating term-by-term, the nth term in the series for the Dirichlet energy in Hadamard’s example is

\frac{(n!)^2}{2n^4(2n! - 1)}

and so the series rapidly diverges, since the nth term grows roughly like n!/(4n⁴).

Dirichlet’s principle requires that there be at least one function satisfying the specified boundary conditions that has finite Dirichlet energy. In the example above, the solution to Laplace’s equation with boundary condition g has infinite Dirichlet energy. It turns out the same is true for every function satisfying the same boundary condition, whether it satisfies Laplace’s equation or not.

Related posts

[1] What is the motivation for this function? The function is given by a lacunary series, a Fourier series with increasingly large gaps between the frequency components. The corresponding series for u cannot be extended to an analytic function outside the closed unit circle. If it could be so extended, Dirichlet’s principle would apply and the example wouldn’t work.

NASA’s favorite ODE solver

NASA’s Orbital Flight Handbook, published in 1963, is a treasure trove of technical information, including a section comparing the strengths and weaknesses of several numerical methods for solving differential equations.

The winner was a predictor-corrector scheme known as Gauss-Jackson, a method I have not heard of outside of orbital mechanics, but one apparently particularly well suited to NASA’s needs.

The Gauss-Jackson second-sum method is strongly recommended for use in either Encke or Cowell [approaches to orbit modeling]. For comparable accuracy, it will allow step-sizes larger by factors of four or more than any of the fourth order methods. … As compared with unsummed methods of comparable accuracy, the Gauss-Jackson method has the very important advantage that roundoff error growth is inhibited. … The Gauss-Jackson method is particularly suitable on orbits where infrequent changes in the step-size are necessary.

Here is a table summarizing the characteristics of each of the solvers.

Notice that Gauss-Jackson is the only method whose roundoff error accumulation is described as “excellent.”

A paper from 2004 [1] implies that the Gauss-Jackson method was still in use at NASA at the time of writing.

The Gauss-Jackson multi-step predictor-corrector method is widely used in numerical integration problems for astrodynamics and dynamical astronomy. The U.S. space surveillance centers have used an eighth-order Gauss-Jackson algorithm since the 1960s.

I could imagine a young hotshot explaining to NASA why they should use some other ODE solver, only to be told that the agency had already evaluated the alternatives half a century ago, and that the competitors didn’t have the same long-term accuracy.

More math and space posts

[1] Matthew M. Berry and Liam M. Healy. Implementation of the Gauss-Jackson Integration for Orbit Propagation. The Journal of the Astronautical Sciences, Vol 52, No 3, July-September 2004, pp. 311–357.

ODE solver landscape

Many methods for numerically solving ordinary differential equations are either Runge-Kutta methods or linear multistep methods. These methods can either be explicit or implicit.

The table below shows the four combinations of these categories and gives some examples of each.

                 Runge-Kutta    Linear multistep
    Explicit     ERK            Adams-Bashforth
    Implicit     (S)DIRK        Adams-Moulton, BDF

Runge-Kutta methods advance the solution of a differential equation one step at a time. That is, these methods approximate the solution at the next time step using only the solution at the current time step and the differential equation itself.

Linear multistep methods approximate the solution at the next time step using the computed solutions at the latest several time steps.

Explicit methods express the solution at the next time step as an explicit function of other information, not including the solution itself. The solution at the next time step appears on only one side of the equations.

Implicit methods express the solution at the next time step as a function of other information including the solution itself. The solution at the next time step appears on both sides of the equations. Additional work needs to be done to solve for the solution.
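
As a concrete illustration, here are two linear multistep methods for y' = f(t, y): the explicit two-step Adams-Bashforth method and the implicit one-step Adams-Moulton method, better known as the trapezoidal rule.

\begin{align*} y_{n+2} &= y_{n+1} + h\left(\tfrac{3}{2} f(t_{n+1}, y_{n+1}) - \tfrac{1}{2} f(t_n, y_n)\right) \\ y_{n+1} &= y_n + \tfrac{h}{2}\left(f(t_{n+1}, y_{n+1}) + f(t_n, y_n)\right) \end{align*}

In the first, the unknown value appears only on the left side; in the second it appears on both sides, so each step requires solving an equation for y_{n+1}.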

More on explicit vs implicit methods here.

In the table above, ERK stands for, not surprisingly, explicit Runge-Kutta methods. DIRK stands for diagonally implicit Runge-Kutta. SDIRK stands for singly diagonally implicit Runge-Kutta. BDF stands for backward difference formulas.

More posts on ODE solvers

New math for going to the moon

spacecraft rendezvous

Before I went to college, I’d heard that it took new math and science for Apollo to get to the moon. Then in college I picked up the idea that Apollo required a lot of engineering, but not really any new math or science. Now I’ve come full circle and have some appreciation for the math research that was required for the Apollo landings.

Celestial mechanics had been studied long before the Space Age, but that doesn’t mean the subject was complete. According to One Giant Leap,

In the weeks after Sputnik, one Langley [Research Center] scientist went looking for books on orbital mechanics—how to fly in space—and in the Langley technical library he found exactly one: Forest R. Moulton’s [1] An Introduction to Celestial Mechanics. In 1958 Langley was in possession of one of the most recent editions of Moulton: the 1914 update of the 1902 edition.

I have a quibble with part of the quote above. The author describes orbital mechanics as “how to fly in space.” More technically, at the time, orbital mechanics was “how things fly through space.” Orbital mechanics was passive. You wanted to know how, for example, Titan moves around Saturn. Nobody asked about the most efficient way to change the orbit of Titan so that it ends up at a certain place at a certain time.

NASA needed active orbital mechanics. It had to do more than simply describe existing orbits; it had to design orbits. And it had to control orbits. None of the terms in your equations are known to infinite precision, so it is not enough to understand the exact equations under ideal circumstances. You have to understand how uncertainties in the parts impact the whole, and how to adjust for them.

And all this has to be done in a computer with about 500 kilobits of ROM [2]. Because the computer memory was limited, NASA had to know which terms in the equations could be dropped, what approximations could be made, etc. Understanding how to approximate a system well with limited resources is much harder than working with exact equations [3].

Nobody at NASA would have said “We’ve got the math in the bag. Now we just need the engineers to get busy.”

Related posts

[1] This is the same Moulton of Adams-Moulton and Adams-Bashforth-Moulton numerical methods for solving differential equations. Presumably Mr. Moulton’s interest in numerical solutions to differential equations came out of his interest in celestial mechanics. See where Adams-Moulton fits into the ODE solver landscape in the next post.

[2] Each word in the Apollo Guidance Computer was 15 bits of data plus one check bit. There were 2048 words of RAM and 36,864 words of ROM. This amounts to 552,960 bits of ROM, excluding check bits, or about 68 kilobytes using 8-bit bytes.

[3] Not that the “exact” equations are actually exact. When you write down the equations of motion for three point masses, for example, you’ve already done a great deal of simplification.

A spring, a rubber band, and chaos

Suppose you have a mass suspended by the combination of a spring and a rubber band. A spring resists being compressed but a rubber band does not. So the rubber band resists motion as the mass moves down but not as it moves up. In [1] the authors use this situation to motivate the following differential equation:

y'' + 0.01 y' + a y^+ - b y^- = 10 + \lambda \sin \mu t

where

\begin{align*} y^+ =& \max\{\phantom{-}y, 0\} \\ y^- =& \max\{-y, 0\} \end{align*}

If a = b then we have a linear equation, an ordinary damped, driven harmonic oscillator. But the asymmetry of the behavior of the rubber band causes a and b to be unequal, and that’s what makes the solutions interesting.

For some parameters the system exhibits essentially sinusoidal behavior, but for other parameters the behavior can become chaotic.

Here’s an example of complex behavior.

Motion of mass suspended by spring and rubber band

Here’s the Python code that produced the plot.

    import numpy as np
    from scipy.integrate import solve_ivp
    import matplotlib.pyplot as plt

    def pos(x):
        return max(x, 0)

    def neg(x):
        return max(-x, 0)

    a, b, λ, μ = 17, 1, 15.4, 0.75

    def system(t, z):
        y, yp = z # yp = y'
        return [yp, 10 + λ*np.sin(μ*t) - 0.01*yp - a*pos(y) + b*neg(y)]

    t = np.linspace(0, 100, 300)

    sol = solve_ivp(system, [0, 100], [1, 0], t_eval=t)
    plt.plot(sol.t, sol.y[0])
    plt.xlabel("$t$")
    plt.ylabel("$y$")

In a recent post I said that I never use non-ASCII characters in programming, so in the code above I did. In particular, it was nice to use λ as a variable; you can’t use lambda as a variable name because it’s a reserved keyword in Python.

Update: Here’s a phase portrait for the same system.

phase plot
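
The phase portrait can be generated from the same system by plotting y′ against y; the denser time grid below is my own choice, made only to get a smooth curve.

    t = np.linspace(0, 100, 3000)
    sol = solve_ivp(system, [0, 100], [1, 0], t_eval=t)
    plt.plot(sol.y[0], sol.y[1])
    plt.xlabel("$y$")
    plt.ylabel("$y'$")
    plt.show()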

More posts on differential equations

[1] L. D. Humphreys and R. Shammas. Finding Unpredictable Behavior in a Simple Ordinary Differential Equation. The College Mathematics Journal, Vol. 31, No. 5 (Nov., 2000), pp. 338-346

A brief comment on hysteresis

You might hear hysteresis described as a phenomenon where the solution to a differential equation depends on its history. But that doesn’t make sense: the solution to a differential equation always depends on its history. The solution at any point in time depends (only) on its immediately preceding state. You can take the state at any particular time, say position and velocity, as the initial condition for the solution going forward.

But sometimes there’s something about the solution to a differential equation that does not depend on its history. For a driven linear oscillator, the frequency of the steady-state solution depends only on the frequency of the driving function. But this is not necessarily true of a nonlinear oscillator. The same driving frequency might correspond to two different solution frequencies, depending on whether the frequency of the driving function increased to its final value or decreased to that value.

Hysteresis is interesting, but it’s more remarkable that a linear system does not exhibit hysteresis. Why should the solution frequency depend only on the driving frequency and not on the state of the system?

The future of a system with hysteresis still depends only on its immediately preceding state. But there may be a thin set of states that lead to solutions with a given frequency, and the easiest way to arrive in such a set may be to sneak up on it slowly.

Related posts