A friend asked me a question the other day that came out of a graphics application. He needed to trace out an ellipse in such a way that the arc length traced out each second was constant. For a circle, the problem is simple: (cos(*t*), sin(*t*)) will trace out a circle covering a constant amount of arc length per unit time. The analogous parameterization for an ellipse, (*a* cos(*t*), *b* sin(*t*)), will move faster near the longer semi-axis and slower near the shorter one.
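To see the nonuniform speed concretely, here is a minimal sketch (the semi-axes a = 5 and b = 2 are made-up values): the speed |*p*′(*t*)| of the standard parameterization swings between *b* and *a* as the point moves around the ellipse.

```python
import math

# Speed of the standard ellipse parameterization p(t) = (a cos t, b sin t).
# p'(t) = (-a sin t, b cos t), so |p'(t)| = sqrt(a^2 sin^2 t + b^2 cos^2 t).
# The semi-axes below are illustrative values, not from the original problem.
a, b = 5.0, 2.0

def speed(t):
    return math.hypot(a * math.sin(t), b * math.cos(t))

print(speed(0.0))            # b = 2.0, the minimum speed
print(speed(math.pi / 2))    # a = 5.0, the maximum speed
```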

There’s a general solution to the problem for any curve. Given a parameterization *p*(*t*), where *p* is vector-valued, the length covered from time 0 to time *t* is

*s*(*t*) = ∫₀ᵗ |*p*′(τ)| *d*τ.

If you change the time parameterization by inverting this function, solving for *t* as a function of *s*, then the total length of curve traversed by *p*(*t*(*s*)) up to time *s* is *s*. This is called either the **unit speed parameterization** or **parameterization by arc length**.

**The hard part is inverting *s*(*t*).** If you had to find a unit speed parameterization in a calculus class, the problem would be carefully designed so that the function *s*(*t*) is easy to invert. Real applications don’t usually work out so easily.

## Digression on elliptic integrals and elliptic functions

My response to my friend’s question was that there probably is a closed-form unit speed parameterization, but it would probably involve elliptic functions. He didn’t need much resolution, and decided to do something ad hoc.

Starting with the parameterization *p*(*t*) = (*a* cos(*t*), *b* sin(*t*)), the arc length *s*(*t*) is given by a special function known as an “incomplete elliptic integral of the second kind.” I remembered that the Jacobi elliptic functions were related to the inverses of elliptic integrals, so my hunch was that you could make a unit speed parameterization using Jacobi elliptic functions. Maybe you can, but it’s not as easy as I thought because the Jacobi functions are related to the inverses of elliptic integrals of the *first* kind.
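As a sketch of the elliptic integral connection, assuming SciPy is available and a ≥ b: for m = 1 − (b/a)², a change of variable turns the arc length integral into *s*(*t*) = *a* [E(m) − E(π/2 − *t* | m)], where E is SciPy’s `ellipeinc`. The semi-axes below are made-up values, and the closed form is checked against direct numerical integration.

```python
import math
from scipy.special import ellipe, ellipeinc  # SciPy assumed available

# Arc length of p(t) = (a cos t, b sin t) via the incomplete elliptic
# integral of the second kind, E(phi | m). For a >= b and m = 1 - (b/a)^2,
# substituting theta = pi/2 - tau in the arc length integral gives
#   s(t) = a * (E(m) - E(pi/2 - t | m)).
a, b = 5.0, 2.0
m = 1.0 - (b / a) ** 2

def arc_length(t):
    return a * (ellipe(m) - ellipeinc(math.pi / 2 - t, m))

# Sanity check: trapezoid-rule integration of |p'| over [0, t].
def arc_length_numeric(t, n=10_000):
    h = t / n
    f = lambda u: math.hypot(a * math.sin(u), b * math.cos(u))
    return h * (f(0) / 2 + sum(f(k * h) for k in range(1, n)) + f(t) / 2)

print(arc_length(1.0), arc_length_numeric(1.0))  # the two should agree closely
```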

**Elliptic integrals** are so named because they were first identified by computing arc length for a (portion of) an ellipse. **Elliptic functions** were discovered by inverting elliptic *integrals*, but not the same class of elliptic integrals that give the arc length of an ellipse. (There may well be a transformation that makes these more directly related, but I’m not aware of it.)

Incidentally, **elliptic curves** are related to elliptic *functions*, but they are not ellipses! There is a connection from ellipses to elliptic curves, but it’s historical and indirect.

What if we had a more general curve than an ellipse, say something parameterized by **cubic splines**? Cubic splines are piecewise cubic polynomials, patched together in such a way that the first and second derivatives are continuous across the patches. We can find the length of a spline by finding the length of each polynomial patch.

If *p*(*t*) is the parameterization of a curve in 2 or 3 dimensions (or really any number of dimensions) and each component of *p* is a cubic polynomial, then each component of the derivative of *p* is a quadratic polynomial, and so the sum of the squares of the components is a fourth degree polynomial. So finding the arc length involves integrating the square root of a fourth degree polynomial. This makes it an **elliptic integral**!
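A quick sketch of that degree count, with arbitrary illustrative coefficients: each derivative component is quadratic, and multiplying and adding coefficient lists shows that |*p*′(*t*)|² comes out as a quartic.

```python
# For a cubic patch p(t) = (x(t), y(t)), each derivative component is
# quadratic, and |p'(t)|^2 is their sum of squares: a quartic polynomial.
# The coefficients below are arbitrary illustrative values.

def poly_mul(p, q):
    """Multiply coefficient lists (lowest degree first)."""
    r = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

def poly_add(p, q):
    n = max(len(p), len(q))
    p = p + [0.0] * (n - len(p))
    q = q + [0.0] * (n - len(q))
    return [u + v for u, v in zip(p, q)]

# x(t) = 1 + 2t - t^2 + 3t^3  =>  x'(t) = 2 - 2t + 9t^2
# y(t) = 2 - t + 4t^2 + t^3   =>  y'(t) = -1 + 8t + 3t^2
dx = [2.0, -2.0, 9.0]
dy = [-1.0, 8.0, 3.0]

speed_sq = poly_add(poly_mul(dx, dx), poly_mul(dy, dy))
print(len(speed_sq) - 1)  # degree of |p'(t)|^2: 4
```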

Unfortunately, knowing that the arc length of a cubic spline corresponds to an elliptic integral is not so useful because it could be any type of elliptic integral, depending on its parameters. You’d have to do some work first to put it into a form where you could call on elliptic integrals to finish your problem.

## Numerically computing arc length and unit speed parameterization

The elliptic integral path is something of a dead end. It could still be useful if you needed high accuracy, or if you had some restriction on the class of curves you’re interested in. But in general, you’d need to use numerical integration to find the arc length.

You could also find unit speed parameterizations numerically, using root-finding to invert *s*(*t*) at particular points. Since *s* is an increasing function of *t*, you could use a bisection method, which is not the most efficient approach but is very robust.
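Here is a minimal sketch of that bisection, using the ellipse as the example curve (the semi-axes and the target arc length are assumed values), with *s*(*t*) computed by the trapezoid rule:

```python
import math

# Invert s(t) by bisection: find t such that s(t) equals a target arc length.
# Ellipse example with assumed semi-axes a = 5, b = 2.
a, b = 5.0, 2.0

def speed(t):
    return math.hypot(a * math.sin(t), b * math.cos(t))

def arc_length(t, n=2000):   # trapezoid rule for s(t)
    h = t / n
    return h * (speed(0) / 2 + sum(speed(k * h) for k in range(1, n)) + speed(t) / 2)

def t_of_s(target, t_hi=2 * math.pi, tol=1e-9):
    lo, hi = 0.0, t_hi
    while hi - lo > tol:     # s is increasing, so bisection always converges
        mid = (lo + hi) / 2
        if arc_length(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

t_star = t_of_s(3.0)
print(arc_length(t_star))    # close to the target 3.0
```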

It takes a fair amount of computation to carry out root finding when each function evaluation requires computing a numerical integral. But this would work, and depending on your context it could be efficient enough.

If you needed more efficiency, say for a real-time embedded system, you could take a different approach. Your spline is probably an approximation to something else, and so your solution only needs to be as accurate as the spline approximation. This gives you the wiggle room to do something more efficient. You might change your parameterization slightly to make the arc length calculations easier, or come up with a way to approximately invert the arc length function, something that takes advantage of your particular problem and error tolerance.

From the numerical perspective, it seems that we can proceed roughly as follows. We need s(t+δ) − s(t) to be equal to 1, so approximate it as s(t+δ) − s(t) ≈ δ |p'(t)| and take δ = 1 / |p'(t)|. That may be badly inaccurate, but we can take a more detailed Riemann sum, at the cost of also expanding |p'(t+δ)|: s(t+δ) − s(t) ≈ (δ/2) |p'(t)| + (δ/2) |p'(t+δ)| ≈ (δ/2) |p'(t)| + (δ/2) |p'(t)| + (δ²/2) (p'(t) · p''(t)) / |p'(t)|, and then solve the quadratic for δ. More terms would require more derivatives of |p'(t)| and lead to polynomials of higher order.
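A sketch of solving that quadratic for δ, on the ellipse (the semi-axes a, b and the arc step h are made-up values; for the ellipse, p'(t) · p''(t) = (a² − b²) sin t cos t):

```python
import math

# Second-order step: choose delta so that
#   delta*|p'(t)| + (delta^2/2) * (p'(t).p''(t)) / |p'(t)|  ~  h,
# the desired arc-length step. Ellipse example with assumed a, b, h.
a, b, h = 5.0, 2.0, 0.1

def step(t):
    v = math.hypot(a * math.sin(t), b * math.cos(t))             # |p'(t)|
    c2 = (a * a - b * b) * math.sin(t) * math.cos(t) / (2 * v)   # quadratic coeff
    if abs(c2) < 1e-12:
        return h / v                     # degenerate case: first-order step
    disc = v * v + 4 * c2 * h
    return (-v + math.sqrt(disc)) / (2 * c2)   # root that tends to h/v

def arc(t0, d, n=4000):   # accurate arc length over [t0, t0+d], for checking
    w = d / n
    v = lambda u: math.hypot(a * math.sin(u), b * math.cos(u))
    return w * (v(t0) / 2 + sum(v(t0 + k * w) for k in range(1, n)) + v(t0 + d) / 2)

t = 0.7
d = step(t)
err = abs(arc(t, d) - h)   # third-order error: should be tiny
```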

I feel that there should be some iterative approach, like taking δ = 1 / |p'(t)| as a rough guess, evaluating s(t+δ) − s(t) for this δ with a more robust numerical scheme, seeing whether we overshoot or undershoot, and using that to guess a better δ.
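One way such an iteration might look, as a sketch on the ellipse (a, b, and the step h are assumed values): rescale δ by the overshoot/undershoot ratio until the arc actually covered matches h.

```python
import math

# Iterative refinement: start from delta = h/|p'(t)|, measure the arc
# actually covered with an accurate quadrature, and rescale delta by the
# ratio of desired to actual. Ellipse example with assumed a, b, h.
a, b, h = 5.0, 2.0, 0.25

def speed(t):
    return math.hypot(a * math.sin(t), b * math.cos(t))

def covered(t, d, n=200):   # trapezoid rule for s(t+d) - s(t)
    w = d / n
    return w * (speed(t) / 2 + sum(speed(t + k * w) for k in range(1, n)) + speed(t + d) / 2)

def refine_step(t, iters=8):
    d = h / speed(t)                 # rough first guess
    for _ in range(iters):
        d *= h / covered(t, d)       # shrink on overshoot, grow on undershoot
    return d

d = refine_step(0.3)
```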

What I’ve often done for this is compute a bunch of (t_i, s_i) pairs for t_i in the range I need using numerical integration, then compute t(s) for any s using cubic interpolation on the data (s_i,t_i). The s_i aren’t going to be evenly spaced, but that’s often not a problem.
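A sketch of that table-based approach on the ellipse (semi-axes assumed; linear interpolation is used here for brevity where the comment suggests cubic):

```python
import math
from bisect import bisect_left

# Tabulate (t_i, s_i) once with numerical integration, then invert by
# interpolating on (s_i, t_i). Ellipse example with assumed semi-axes.
a, b = 5.0, 2.0

def speed(t):
    return math.hypot(a * math.sin(t), b * math.cos(t))

# cumulative trapezoid rule over a fine grid of t
N = 2000
ts = [2 * math.pi * k / N for k in range(N + 1)]
ss = [0.0]
for k in range(N):
    ss.append(ss[-1] + (ts[1] - ts[0]) * (speed(ts[k]) + speed(ts[k + 1])) / 2)

def t_of_s(s):
    i = bisect_left(ss, s)            # ss is increasing since speed > 0
    if i <= 0:
        return ts[0]
    if i >= len(ss):
        return ts[-1]
    w = (s - ss[i - 1]) / (ss[i] - ss[i - 1])
    return ts[i - 1] + w * (ts[i] - ts[i - 1])

# Sample the curve at 100 equal arc-length steps.
total = ss[-1]
pts = [t_of_s(j * total / 100) for j in range(101)]
```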

I’d take the super lazy approach: quantize the display by the pixel, and if we put new pixels on the screen at a constant rate, the curve is drawn at a constant rate. You could get fancy and say pixels placed diagonally are sqrt(2) slower, but for small pixels you might not even notice that.

I assume that p(t) is not terribly hard to compute (i.e., faster than the rate we want the curve drawn) and that delta_t is small enough that we never skip pixels. So for every t, compute the pixels that need to be turned on, put them in a queue, and then have the drawing function pull pixels out of the queue at a fixed rate and put them on the screen.
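A sketch of the producer side of that idea (the ellipse, pixel scale, and delta_t are assumed values): rasterize with a small fixed delta_t, drop consecutive repeats, and let a consumer pop one pixel per tick.

```python
import math
from collections import deque

# Pixel-queue sketch: walk p(t) with a small fixed delta_t, enqueue each new
# pixel, and let a drawing loop pop pixels at a fixed rate. The ellipse,
# grid scale, and delta_t below are assumed values.
a, b, scale = 5.0, 2.0, 40   # scale: pixels per unit length

queue = deque()
last = None
t, dt = 0.0, 1e-4            # dt small enough that no pixel is skipped
while t < 2 * math.pi:
    px = (round(a * math.cos(t) * scale), round(b * math.sin(t) * scale))
    if px != last:           # only enqueue when we enter a new pixel
        queue.append(px)
        last = px
    t += dt

# consumer side: pop one pixel per tick with queue.popleft() and draw it
```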

Instead of taking ds/dt = |p'(t)| and finding s(t) by integration, we can take dt/ds=1/|p'(t)| and find t(s) by numerically solving this ODE.

For the ellipse, the Euler method gives:

x = a*cos(t), y = b*sin(t), t = t + h / sqrt((a/b*y)^2 + (b/a*x)^2),

where h is the step size for s.

You can start at t = 0, for example, and stop just before t reaches 2*pi.
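Putting that Euler step into code as a sketch (the semi-axes and the arc-length step h are assumed values), consecutive points come out approximately a constant distance h apart:

```python
import math

# Euler's method for dt/ds = 1/|p'(t)| on the ellipse: each step advances
# t by h / sqrt((a*y/b)^2 + (b*x/a)^2), where (x, y) = (a cos t, b sin t).
# The semi-axes and the arc-length step h are assumed values.
a, b, h = 5.0, 2.0, 0.01

t = 0.0
points = []
while t < 2 * math.pi:
    x, y = a * math.cos(t), b * math.sin(t)
    points.append((x, y))
    # a*y/b = a sin t and b*x/a = b cos t, so the hypot below is |p'(t)|
    t += h / math.hypot(a * y / b, b * x / a)

# successive points are spaced (approximately) h apart along the curve
```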

MikeD’s idea is related to the midpoint circle algorithm [1] for rasterizing circles (and other conic sections).

[1] https://en.wikipedia.org/wiki/Midpoint_circle_algorithm