Duffing equation for nonlinear oscillator

The Duffing equation is an ordinary differential equation describing a nonlinear damped driven oscillator.

\frac{d^2x}{dt^2} + h\frac{dx}{dt} + \Omega_0^2 x + \mu x^3 = F \cos \omega t

If the parameter μ were zero, this would be a damped driven linear oscillator. It’s the cubic term μx³ that makes the system nonlinear and interesting.

Using an analog computer in 1961, Yoshisuke Ueda discovered that this system is chaotic. It was the first discovery of chaos in a second-order non-autonomous periodically forced system. Lorenz discovered his famous Lorenz attractor around the same time, though his system is third order.

Ueda’s system has been called “Ueda’s strange attractor” or the “Japanese attractor.”

In the linear case, when μ = 0, the phase portrait simply spirals outward from the origin towards its steady state.
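For comparison, here’s a minimal Mathematica sketch of the linear case. It drops the cubic term but otherwise mirrors the parameter values in the nonlinear code below; the particular values are illustrative.

(* Linear case: same damping, stiffness, and forcing, no cubic term *)
lin[t_] = 
    NDSolve[{x''[t] + 0.05 x'[t] + x[t] == 7.5 Cos[t], 
             x[0] == 0, x'[0] == 1}, x[t], {t, 0, 100}][[1, 1, 2]]
ParametricPlot[{lin[t], lin'[t]}, {t, 0, 50}, AspectRatio -> 1]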

But things get much more interesting when μ = 1. Here’s Mathematica code to numerically solve the differential equation and plot the phase portrait.

(* Solve the Duffing equation numerically and extract the solution
   as an interpolating function of t *)
duff[t_] = 
    NDSolve[{x''[t] + 0.05 x'[t] + x[t] + x[t]^3 == 7.5 Cos[t], 
             x[0] == 0, x'[0] == 1}, x[t], {t, 0, 100}][[1, 1, 2]]

(* Plot the phase portrait (x, x') *)
ParametricPlot[{duff[t], duff'[t]}, {t, 0, 50}, AspectRatio -> 1]

Here’s the same system integrated out further, to t = 200 rather than 50.
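A sketch of that longer run; note that the equation has to be re-solved over the larger time interval before plotting.

(* Same parameters as above, integrated and plotted out to t = 200 *)
duff2[t_] = 
    NDSolve[{x''[t] + 0.05 x'[t] + x[t] + x[t]^3 == 7.5 Cos[t], 
             x[0] == 0, x'[0] == 1}, x[t], {t, 0, 200}][[1, 1, 2]]
ParametricPlot[{duff2[t], duff2'[t]}, {t, 0, 200}, AspectRatio -> 1]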

Source: Wanda Szemplińska-Stupnicka. Chaos, Bifurcations and Fractals Around Us.

Iterating sines

Pick a starting point x₀ and define x₁ = f(x₀), x₂ = f(x₁), x₃ = f(x₂), and so on. For which functions and which starting points does the sequence converge? If the sequence converges, how quickly does it converge? In general these are hard questions, but this post will answer them in a couple of special cases.

If f(x) = sin(0.9x) and x₀ = 0.5 + i, here’s what the iterations look like:
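Here’s a minimal Mathematica sketch that generates the iterates and plots them in the complex plane; the plotting options are my choices, not taken from the original.

(* Iterate f(z) = sin(0.9 z) from z = 0.5 + i and plot the orbit
   as points (Re z, Im z) in the complex plane *)
f[z_] := Sin[0.9 z]
iterates = NestList[f, 0.5 + I, 20];
ListPlot[ReIm /@ iterates, Joined -> True, PlotMarkers -> Automatic]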

If the Taylor series for f(x) looks like a₁x + a₂x² + a₃x³ + … the iterates will converge if |a₁| < 1 and your starting point is any complex number in a sufficiently small disk around the origin. The sequence might converge for starting points outside that disk, but there’s no guarantee.

The reason I chose sin(0.9x) for the example above rather than sin(x) is that the former has a₁ = 0.9 and satisfies the hypothesis of the theorem. The series for sin(x) has leading coefficient 1, so the theorem doesn’t apply. For sin(x), the sequence of iterates does in fact converge starting at x₀ = 0.5 + i, but it diverges for other points, such as any purely imaginary starting point.
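The divergence on the imaginary axis is easy to see numerically: sin(iy) = i sinh(y), and sinh(y) > y for y > 0, so the iterates stay on the imaginary axis and grow. A quick check:

(* Iterates of sin from a purely imaginary point: Sin[I y] == I Sinh[y],
   so the sequence stays imaginary and grows without bound *)
NestList[Sin, 0.5 I, 20]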

How fast do the iterates of f(x) converge if |a₁| < 1? There is some constant b with |a₁| < b < 1 such that |xₙ| < bⁿ|x₀|. In other words, the sequence converges geometrically.
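As a quick numerical check with f(x) = sin(0.9x), where a₁ = 0.9 (the starting point here is just for illustration), the ratios |xₙ|/0.9ⁿ level off rather than blowing up, consistent with geometric convergence at a rate near 0.9:

(* If convergence is geometric with rate near 0.9, the ratio
   |x_n| / 0.9^n should remain bounded as n grows *)
iterates = NestList[Sin[0.9 #] &, 0.5 + I, 60];
Table[Abs[iterates[[n + 1]]]/0.9^n, {n, 0, 60, 10}]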

If we look at iterates of sin(x), the sequence converges for any starting value on the real line. The convergence is slower, on the order of 1/√n. And for points off the real line, the sequence may or may not converge.
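The 1/√n rate is easy to see numerically. In fact xₙ ~ √(3/n), so √n·xₙ settles near √3 ≈ 1.732. A sketch:

(* For real iterates of sin, x_n ~ Sqrt[3/n],
   so Sqrt[n] x_n should approach Sqrt[3] ≈ 1.732 *)
iterates = NestList[Sin, 0.5, 10000];
Table[Sqrt[n] iterates[[n + 1]], {n, {10, 100, 1000, 10000}}]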

Source: N. G. de Bruijn, Asymptotic Methods in Analysis (Dover)

Update: See Mike Croucher’s blog post for a plot of the starting points where the sine iterates converge, replete with Mathematica code.
