Leapfrog integrator

The so-called “leapfrog” integrator is a numerical method for solving differential equations of the form

x'' = f(x)

where x is a function of t. Typically x is position and t is time.

This form of equation is common for differential equations coming from mechanical systems. The form is more general than it may seem at first. It does not allow terms involving first-order derivatives, but these terms can often be eliminated via a change of variables. See this post for a way to eliminate first order terms from a linear ODE.

The leapfrog integrator is also known as the Störmer-Verlet method, or the Newton-Störmer-Verlet method, or the Newton-Störmer-Verlet-leapfrog method, or …

The leapfrog integrator has some advantages over, say, Runge-Kutta methods, because it is specialized for a particular (but important) class of equations. For one thing, it solves the second order ODE directly. Typically ODE solvers work on (systems of) first order equations: to solve a second order equation you turn it into a system of first order equations by introducing the first derivative of the solution as a new variable.

For another thing, it is reversible: if you advance the solution of an ODE from its initial condition to some future point, make that point your new initial condition, and reverse time, you step back to exactly where you started. In exact arithmetic the return is exact; in floating point arithmetic you return to within rounding error of where you started.

Another advantage of the leapfrog integrator is that it approximately conserves energy. Because of this, over a long integration it can perform better than a method that is more accurate on each individual step but lets the energy drift.

Here is the leapfrog method in a nutshell with step size h.

\begin{align*} x_{i+1} &= x_i + v_i h + \frac{1}{2} f(x_i) h^2 \\ v_{i+1} &= v_i + \frac{1}{2}\left(f(x_i) + f(x_{i+1})\right) h \end{align*}

And here’s a simple Python demo.

    import numpy as np
    import matplotlib.pyplot as plt
    
    # Solve x'' = f(x) using leapfrog integrator
    
    # For this demo, x'' + x = 0
    # Exact solution is x(t) = sin(t)
    def f(x):
        return -x
    
    k = 5               # number of periods
    N = 16              # number of time steps per period
    h = 2*np.pi/N       # step size
    
    x = np.empty(k*N+1) # positions
    v = np.empty(k*N+1) # velocities
    
    # Initial conditions
    x[0] = 0
    v[0] = 1
    anew = f(x[0])
    
    # leapfrog method
    for i in range(1, k*N+1):
        aold = anew
        x[i] = x[i-1] + v[i-1]*h + 0.5*aold*h**2
        anew = f(x[i])
        v[i] = v[i-1] + 0.5*(aold + anew)*h
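
One way to produce the plot mentioned below is to append a few plotting lines to the script. This is just a sketch; plotting the exact solution sin(t) alongside the numerical one is an addition for comparison.

    # Time grid corresponding to the steps, for plotting
    t = h*np.arange(k*N + 1)
    plt.plot(t, x, label="leapfrog")
    plt.plot(t, np.sin(t), "--", label="exact")
    plt.xlabel("$t$")
    plt.legend()
    plt.show()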

Here’s a plot of the solution over five periods.

There’s a lot more I hope to say about the leapfrog integrator and related methods in future posts.

More on ODE solvers

Counterexample to Dirichlet principle

Let Ω be an open set in some Euclidean space and v a real-valued function on Ω.

Dirichlet principle

Dirichlet’s integral for v, also called the Dirichlet energy of v, is

\int_\Omega \frac{1}{2} | \nabla v |^2

Among functions with specified values on the boundary of Ω, Dirichlet’s principle says that minimizing Dirichlet’s integral is equivalent to solving Laplace’s equation.

In a little more detail, let g be a continuous function on the boundary ∂Ω of the region Ω. A function u has minimum Dirichlet energy, subject to the requirement that u = g on ∂Ω, if and only if u solves Laplace’s equation

\Delta u = 0

subject to the same boundary condition.

Dirichlet’s principle requires some hypotheses not stated here, as Hadamard’s example below shows.

Hadamard’s example

Let g(θ) be the function [1]

g(\theta) = \sum_{n=1}^\infty \frac{\sin n!\theta}{n^2}

The function g is continuous and so there exists a unique solution to Laplace’s equation on the unit disk with boundary values given by g, but the Dirichlet energy of the solution diverges.

The solution, in polar coordinates, is

u(r, \theta) = \sum_{n=1}^\infty r^{n!} \,\,\frac{\sin n!\theta}{n^2}

The Laplace operator in polar coordinates is

\frac{1}{r} \frac{\partial }{\partial r}\left(r \frac{\partial u}{\partial r} \right) + \frac{1}{r^2} \frac{\partial^2 u}{\partial \theta^2}

and you can differentiate u term by term to show that it satisfies Laplace’s equation.

Dirichlet’s integral in polar coordinates is

\int_0^{2\pi} \int_0^1 \frac{1}{2} \left\{ \left( \frac{\partial u}{\partial r}\right)^2 + \frac{1}{r^2}\left(\frac{\partial u}{\partial \theta}\right)^2 \right\} \, r\,dr\,d\theta

Integrating term-by-term, the nth term in the series for the Dirichlet energy in Hadamard’s example is

\frac{(n!)^2}{2n^4(2n! - 1)}

and so the series rapidly diverges.
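A quick numerical sketch makes the divergence concrete. The code below simply evaluates the term formula above for the first several values of n; the terms soon grow roughly like n!/n⁴.

    from math import factorial

    # nth term of the Dirichlet energy series in Hadamard's example,
    # using the formula given above
    def energy_term(n):
        nf = factorial(n)
        return nf**2 / (2 * n**4 * (2*nf - 1))

    for n in range(1, 10):
        print(n, energy_term(n))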

Dirichlet’s principle requires that there be at least one function satisfying the specified boundary conditions that has finite Dirichlet energy. In the example above, the solution to Laplace’s equation with boundary condition g has infinite Dirichlet energy. It turns out the same is true for every function satisfying the same boundary condition, whether it satisfies Laplace’s equation or not.

Related posts

[1] What is the motivation for this function? The function is given by a lacunary series, a Fourier series with increasingly large gaps between the frequency components. The corresponding series for u cannot be extended to an analytic function outside the closed unit circle. If it could be so extended, Dirichlet’s principle would apply and the example wouldn’t work.

NASA’s favorite ODE solver

NASA’s Orbital Flight Handbook, published in 1963, is a treasure trove of technical information, including a section comparing the strengths and weaknesses of several numerical methods for solving differential equations.

The winner was a predictor-corrector scheme known as Gauss-Jackson, a method I have not heard of outside of orbital mechanics, but one apparently particularly well suited to NASA’s needs.

The Gauss-Jackson second-sum method is strongly recommended for use in either Encke or Cowell [approaches to orbit modeling]. For comparable accuracy, it will allow step-sizes larger by factors of four or more than any of the fourth order methods. … As compared with unsummed methods of comparable accuracy, the Gauss-Jackson method has the very important advantage that roundoff error growth is inhibited. … The Gauss-Jackson method is particularly suitable on orbits where infrequent changes in the step-size are necessary.

Here is a table summarizing the characteristics of each of the solvers.

Notice that Gauss-Jackson is the only method whose roundoff error accumulation is described as “excellent.”

A paper from 2004 [1] implies that the Gauss-Jackson method was still in use at NASA at the time of writing.

The Gauss-Jackson multi-step predictor-corrector method is widely used in numerical integration problems for astrodynamics and dynamical astronomy. The U.S. space surveillance centers have used an eighth-order Gauss-Jackson algorithm since the 1960s.

I could imagine a young hotshot explaining to NASA why they should use some other ODE solver, only to be told that the agency had already evaluated the alternatives half a century ago, and that the competitors didn’t have the same long-term accuracy.

More math and space posts

[1] Matthew M. Berry and Liam M. Healy. Implementation of the Gauss-Jackson Integration for Orbit Propagation. The Journal of the Astronautical Sciences, Vol 52, No 3, July-September 2004, pp. 311–357.

ODE solver landscape

Many methods for numerically solving ordinary differential equations are either Runge-Kutta methods or linear multistep methods. These methods can either be explicit or implicit.

The table below shows the four combinations of these categories and gives some examples of each.

\begin{tabular}{|l|ll|} \hline  & Runge-Kutta & Linear multistep\\ \hline Explicit & ERK & Adams-Bashforth\\ Implicit & (S)DIRK & Adams-Moulton, BDF\\ \hline \end{tabular}

Runge-Kutta methods advance the solution of a differential equation one step at a time. That is, these methods approximate the solution at the next time step using only the solution at the current time step and the differential equation itself.

Linear multistep methods approximate the solution at the next time step using the computed solutions at the latest several time steps.
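
For example, the two-step Adams-Bashforth method, an explicit linear multistep method, advances the solution using the derivative values from the two most recent steps:

y_{n+1} = y_n + \frac{h}{2}\left( 3 f(t_n, y_n) - f(t_{n-1}, y_{n-1}) \right)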

Explicit methods express the solution at the next time step as an explicit function of other information, not including the solution itself. The solution at the next time step appears on only one side of the equations.

Implicit methods express the solution at the next time step as a function of other information including the solution itself. The solution at the next time step appears on both sides of the equations. Additional work needs to be done to solve for the solution.
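
To make the explicit/implicit distinction concrete, here is a sketch contrasting one step of forward Euler (explicit) and backward Euler (implicit) for y' = f(t, y). The test equation and numbers are arbitrary; the point is that the implicit step requires solving an equation.

    from scipy.optimize import fsolve

    def f(t, y):
        return -10*y    # a simple test equation

    h, t0, y0 = 0.1, 0.0, 1.0

    # Explicit (forward) Euler: the new value appears only on the left side
    y_explicit = y0 + h*f(t0, y0)

    # Implicit (backward) Euler: the new value appears on both sides,
    # so we must solve for it (here with a general-purpose root finder)
    y_implicit = fsolve(lambda y: y - y0 - h*f(t0 + h, y), y0)[0]

    print(y_explicit, y_implicit)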

More on explicit vs implicit methods here.

In the table above, ERK stands for, not surprisingly, explicit Runge-Kutta methods. DIRK stands for diagonally implicit Runge-Kutta. SDIRK stands for singly diagonally implicit Runge-Kutta. BDF stands for backward difference formulas.

More posts on ODE solvers

New math for going to the moon


Before I went to college, I’d heard that it took new math and science for Apollo to get to the moon. Then in college I picked up the idea that Apollo required a lot of engineering, but not really any new math or science. Now I’ve come full circle and have some appreciation for the math research that was required for the Apollo landings.

Celestial mechanics had been studied long before the Space Age, but that doesn’t mean the subject was complete. According to One Giant Leap,

In the weeks after Sputnik, one Langley [Research Center] scientist went looking for books on orbital mechanics—how to fly in space—and in the Langley technical library he found exactly one: Forest R. Moulton’s [1] An Introduction to Celestial Mechanics. In 1958 Langley was in possession of one of the most recent editions of Moulton: the 1914 update of the 1902 edition.

I have a quibble with part of the quote above. The author describes orbital mechanics as “how to fly in space.” More technically, at the time, orbital mechanics was “how things fly through space.” Orbital mechanics was passive. You wanted to know how, for example, Titan moves around Saturn. Nobody asked about the most efficient way to change the orbit of Titan so that it ends up at a certain place at a certain time.

NASA needed active orbital mechanics. It had to do more than simply describe existing orbits; it had to design orbits. And it had to control orbits. None of the terms in your equations are known to infinite precision, so it is not enough to understand the exact equations under ideal circumstances. You have to understand how uncertainties in the parts impact the whole, and how to adjust for them.

And all this has to be done in a computer with about 500 kilobits of ROM [2]. Because the computer memory was limited, NASA had to know which terms in the equations could be dropped, what approximations could be made, etc. Understanding how to approximate a system well with limited resources is much harder than working with exact equations [3].

Nobody at NASA would have said “We’ve got the math in the bag. Now we just need the engineers to get busy.”

Related posts

[1] This is the same Moulton of Adams-Moulton and Adams-Bashforth-Moulton numerical methods for solving differential equations. Presumably Mr. Moulton’s interest in numerical solutions to differential equations came out of his interest in celestial mechanics. See where Adams-Moulton fits into the ODE solver landscape in the next post.

[2] Each word in the Apollo Guidance Computer was 15 bits of data plus one check bit. There were 2,048 words of RAM and 36,864 words of ROM. This amounts to 552,960 bits of ROM, excluding check bits, or about 68 kilobytes using 8-bit bytes.

[3] Not that the “exact” equations are actually exact. When you write down the equations of motion for three point masses, for example, you’ve already done a great deal of simplification.

A spring, a rubber band, and chaos

Suppose you have a mass suspended by the combination of a spring and a rubber band. A spring resists being compressed but a rubber band does not. So the rubber band resists motion as the mass moves down but not as it moves up. In [1] the authors use this situation to motivate the following differential equation:

y'' + 0.01 y' + a y^+ - b y^- = 10 + \lambda \sin \mu t

where

\begin{align*} y^+ &= \max\{\phantom{-}y, 0\} \\ y^- &= \max\{-y, 0\} \end{align*}

If a = b then we have a linear equation, an ordinary damped, driven harmonic oscillator. But the asymmetry of the behavior of the rubber band causes a and b to be unequal, and that’s what makes the solutions interesting.

For some parameters the system exhibits essentially sinusoidal behavior, but for other parameters the behavior can become chaotic.

Here’s an example of complex behavior.

Motion of mass suspended by spring and rubber band

Here’s the Python code that produced the plot.

    from numpy import linspace, sin
    from scipy.integrate import solve_ivp
    import matplotlib.pyplot as plt

    def pos(x):
        return max(x, 0)

    def neg(x):
        return max(-x, 0)

    a, b, λ, μ = 17, 1, 15.4, 0.75

    def system(t, z):
        y, yp = z # yp = y'
        return [yp, 10 + λ*sin(μ*t) - 0.01*yp - a*pos(y) + b*neg(y)]

    t = linspace(0, 100, 300)

    sol = solve_ivp(system, [0, 100], [1, 0], t_eval=t)
    plt.plot(sol.t, sol.y[0])
    plt.xlabel("$t$")
    plt.ylabel("$y$")

In a recent post I said that I never use non-ASCII characters in programming, but in the code above I did. In particular, it was nice to use λ as a variable name; you can’t use lambda because it’s a reserved keyword in Python.

Update: Here’s a phase portrait for the same system.

phase plot
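
One way to produce a phase portrait like this from the solution object in the script above is to plot velocity against position. This is a sketch; it helps to pass a denser t_eval grid to solve_ivp so the curve comes out smooth.

    # Phase portrait: plot y' against y using the solution computed above
    plt.figure()
    plt.plot(sol.y[0], sol.y[1])
    plt.xlabel("$y$")
    plt.ylabel("$y'$")
    plt.show()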

More posts on differential equations

[1] L. D. Humphreys and R. Shammas. Finding Unpredictable Behavior in a Simple Ordinary Differential Equation. The College Mathematics Journal, Vol. 31, No. 5 (Nov., 2000), pp. 338-346

A brief comment on hysteresis

You might hear hysteresis described as a phenomenon where the solution to a differential equation depends on its history. But that doesn’t make sense: the solution to a differential equation always depends on its history. The solution at any point in time depends (only) on its immediately preceding state. You can take the state at any particular time, say position and velocity, as the initial condition for the solution going forward.

But sometimes there’s something about the solution to a differential equation that does not depend on its history. For a driven linear oscillator, the frequency of the solution depends only on the frequency of the driving function. But this is not necessarily true of a nonlinear oscillator. The same driving frequency might correspond to two different solution frequencies, depending on whether the frequency of the driving function increased to its final value or decreased to that value.

Hysteresis is interesting, but it’s more remarkable that a linear system does not exhibit hysteresis. Why should the solution frequency depend only on the driving frequency and not on the state of the system?

The future of a system with hysteresis still depends only on its immediately preceding state. But there may be a thin set of states that lead to solutions with a given frequency, and the easiest way to arrive in such a set may be to sneak up on it slowly.

Related posts

A better adaptive Runge-Kutta method

This is the third post in a series on Runge-Kutta methods. The first post in the series introduces Runge-Kutta methods and Butcher tableau. The next post looked at Fehlberg’s adaptive Runge-Kutta method, first published in 1969. This post looks at a similar method from Dormand and Prince in 1980.

Like Fehlberg’s method, the method of Dormand and Prince can be summarized in a big, intimidating tableau, displayed below. Rather than dwell on the tableau, we will discuss three differences between the methods:

  • Order 4(5) vs 5(4)
  • Derivative reuse
  • Precision / computation ratio

Dormand Prince tableau

Here’s the Butcher tableau for the Dormand-Prince method in all its glory:

Dormand-Prince tableau

The only detail of the table that will be important below is that the 7th and 8th rows are identical.

Order 4(5) vs order 5(4)

Fehlberg’s method, a.k.a. RKF45, computes each update to the solution using a 4th order Runge-Kutta method, and uses a 5th order Runge-Kutta method to estimate the error.

The method of Dormand and Prince also uses 4th and 5th order Runge-Kutta methods, but in the opposite way. The fifth order method is used to advance the solution, and the 4th order method is used for comparison to estimate error.

Derivative reuse

The work in solving

y' = f(t, y)

by a Runge-Kutta method is roughly proportional to the number of stages. Dormand-Prince is a 7-stage method while Fehlberg is a 6-stage method, so it would seem that the latter is more efficient. However, if you look back at the Dormand-Prince tableau, the last row above the horizontal line equals the first row below the line. That means the last evaluation of f at one step can be reused as the first evaluation of f at the next step.

Precision per unit work

In their book Solving Ordinary Differential Equations I, Hairer et al. compare several adaptive Runge-Kutta methods, including Fehlberg (RKF45) and Dormand-Prince, and conclude that the latter produces more precision per unit of work.

We again see that the [Fehlberg] method underestimates the local error. Further, with the use of local extrapolation, the advantage of RKF4(5) melts away to a large extent. The best method of all these is without a doubt the coefficient set of Dormand and Prince.
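
If you want to use the Dormand-Prince pair without transcribing the tableau yourself, SciPy’s solve_ivp uses it as its default method under the name RK45. A minimal sketch:

    import numpy as np
    from scipy.integrate import solve_ivp

    # y' = -y with y(0) = 1; exact solution is exp(-t)
    sol = solve_ivp(lambda t, y: -y, [0, 5], [1.0],
                    method="RK45", rtol=1e-6, atol=1e-9)

    print(len(sol.t), "accepted steps")          # step count chosen adaptively
    print(abs(sol.y[0, -1] - np.exp(-5)))        # error at the final time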

More differential equation posts

How to estimate ODE solver error

This post brings together several themes I’ve been writing about lately: caching function evaluations, error estimation, and Runge-Kutta methods.

A few days ago I wrote about how Runge-Kutta methods can all be summarized by a set of numbers called the Butcher tableau. These methods solve

y' = f(t, y)

by evaluating f at some partial step, then evaluating f again at some new partial step and some linear combination of values from previous steps, etc. In the preface to J. C. Butcher’s book on differential equations, J. M. Sanz-Serna describes this structure as follows.

Runge-Kutta schemes … are highly nonlinear with a remarkable Matrioshka doll structure, where the vector field has to be evaluated at an expression that involves the vector field evaluated at an expression that involves the vector field …

Once all the “Matrioshka dolls” are lined up, i.e. all the intermediate results have been calculated, the final estimate is a linear combination of these values.

Here’s the clever idea behind adaptive solvers: Create two Runge-Kutta methods of different orders that depend on the same intermediate results. Then both can be computed without new function evaluations, and the results compared. The difference between the results can be used as an estimate of the local error. Then you can adjust your step size accordingly and try again.

I’m going to present two adaptive Runge-Kutta schemes. I’ll go over Fehlberg’s method in this post and a variation in the next post that has some advantages over Fehlberg’s method.

Runge-Kutta-Fehlberg (RKF45)

Fehlberg’s method, commonly known as RKF45, starts with a six-stage Runge Kutta method whose coefficients are given by the following tableau.

\begin{array} {c|ccccccc} 0\\ 1/4 & 1/4 \\ 3/8 & 3/32 & 9/32 \\ 12/13 & 1932/2197 & -7200/2197& 7296/2197 \\ 1 & 439/216 & -8 & 3680/513 & -845/4104 \\ 1/2 & -8/27 & 2 & -3544/2565 & 1859/4104 & -11/40 \\ \hline y_1 & 25/216 & 0 & 1408/2565 & 2197/4104 & -1/5 & 0 & \\ \end{array}

The meaning of the tableau is described here. (Imagine the effort it took Erwin Fehlberg to derive this in 1969. Presumably he had little more than a desktop calculator to help. Maybe not even that.)

What’s important about the tableau for this post is that the coefficients above the horizontal line are used to create six numbers, the k’s in the notation of the post referenced above. The k’s are multiplied by the coefficients below the horizontal line to produce the solution of the differential equation at the next step. This is the “4” of RKF45.

RKF45 then applies another method which reuses all the k’s in a different linear combination. This is summarized in the following variation of the Butcher tableau.

\begin{array} {c|ccccccc} 0\\ 1/4 & 1/4 \\ 3/8 & 3/32 & 9/32 \\ 12/13 & 1932/2197 & -7200/2197 & 7296/2197 \\ 1 & 439/216 & -8 & 3680/513 & -845/4104 \\ 1/2 & -8/27 & 2 & -3544/2565 & 1859/4104 & -11/40 \\ \hline y_1 & 25/216 & 0 & 1408/2565 & 2197/4104 & -1/5 & 0 & \\ \hline \hat{y} & 16/135 & 0 & 6656/12825 & 28561/56430 & -9/50 & 2/55 \end{array}

This is essentially two tableau combined into one. The first is as above. The second is like the one above but with a different bottom row. The bottom row gives the coefficients for a Runge-Kutta method corresponding to the “5” part of RKF45.

The 4 stands for 4th order, i.e. the global error for a step size h is O(h⁴), with local error O(h⁵) per step. The 5 stands for 5th order, with global error O(h⁵). RKF45 is two different methods, but they share so much computation that the second one almost comes for free; it does not require any new function evaluations, only taking a linear combination of six numbers.

However, RKF45 is only “free” if you’ve gone to the effort of using a six-stage method. The amount of computation is roughly proportional to the number of stages, so we do about 50% more work to get RKF45 with an error estimate than to run the most common 4th order RK method. So if you knew exactly what step size to use, basic RK would be more efficient. But how could you know the optimal step size a priori?

By guiding us to choose the right step size, the extra work in RKF45 more than pays for itself. It could save a lot of computation that would come from using too small a step size, or prevent inaccurate results due to using too large a step size. Or maybe both: maybe a differential equation needs small steps at one period of time and can use larger steps at another period of time.
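
Here is a sketch of a single RKF45 step for a scalar equation, using the coefficients from the tableaux above. Following the description above, the 4th order value advances the solution and the difference between the two values estimates the local error. The step-size update noted at the end is one common rule of thumb, not the only choice.

    def rkf45_step(f, t, y, h):
        # Six stages, coefficients from the Fehlberg tableau
        k1 = f(t, y)
        k2 = f(t + h/4, y + h*k1/4)
        k3 = f(t + 3*h/8, y + h*(3*k1/32 + 9*k2/32))
        k4 = f(t + 12*h/13, y + h*(1932*k1 - 7200*k2 + 7296*k3)/2197)
        k5 = f(t + h, y + h*(439*k1/216 - 8*k2 + 3680*k3/513 - 845*k4/4104))
        k6 = f(t + h/2, y + h*(-8*k1/27 + 2*k2 - 3544*k3/2565
                               + 1859*k4/4104 - 11*k5/40))

        # 4th and 5th order estimates built from the same six stages
        y4 = y + h*(25*k1/216 + 1408*k3/2565 + 2197*k4/4104 - k5/5)
        y5 = y + h*(16*k1/135 + 6656*k3/12825 + 28561*k4/56430
                    - 9*k5/50 + 2*k6/55)

        err = abs(y5 - y4)      # local error estimate
        return y4, err          # advance with the 4th order value

    # Example: one step of y' = -y from y(0) = 1 with h = 0.1.
    # A common heuristic for the next step size, given a tolerance tol,
    # is h_new = 0.9 * h * (tol/err)**0.2.
    y_new, err = rkf45_step(lambda t, y: -y, 0.0, 1.0, 0.1)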

For more information on Fehlberg’s method, see Solving Ordinary Differential Equations I by Hairer et al.

Related posts

Runge-Kutta methods and Butcher tableau

If you know one numerical method for solving ordinary differential equations, it’s probably Euler’s method. If you know two methods, the second is probably 4th order Runge-Kutta. It’s standard in classes on differential equations or numerical analysis to present Euler’s method as a conceptually simple but inefficient introduction, then to present Runge-Kutta as a complicated but efficient alternative.

Runge-Kutta methods are a huge family of numerical methods with a wide variety of trade-offs: efficiency, accuracy, stability, etc. Euler’s method is a member of the Runge-Kutta family as are countless other variations. You could devote a career to studying Runge-Kutta methods, and some people have.

Beneath the complexity and variety, all Runge-Kutta methods have a common form that can be summarized by a matrix and two vectors. For explicit Runge-Kutta methods (ERK) the matrix is triangular, and for implicit Runge-Kutta methods (IRK) the matrix is full.

This summary of an RK method is known as a Butcher tableau, named after J. C. Butcher who classified RK methods.

“The” Runge-Kutta method

For example, let’s start with what students often take to be “the” Runge-Kutta method. This method approximates solutions to a differential equation of the form

y' = f(t, y)

by

y_{n+1} = y_n + \frac{h}{6}\left( k_{n1} + 2k_{n2} + 2k_{n3} + k_{n4}\right)

where

\begin{align*} k_{n1} &= f(t_n, y_n) \\ k_{n2} &= f(t_n + 0.5h, y_n + 0.5hk_{n1}) \\ k_{n3} &= f(t_n + 0.5h, y_n + 0.5hk_{n2}) \\ k_{n4} &= f(t_n + h, y_n + hk_{n3}) \end{align*}

The Butcher tableau for this ERK method is

\begin{array} {c|cccc} 0\\ 1/2 & 1/2\\ 1/2 &0 &1/2 \\ 1& 0& 0& 1\\ \hline & 1/6 & 1/3 & 1/3 &1/6 \end{array}

The numbers along the left side are the coefficients of h in the first argument of f.

The numbers along the bottom are the coefficients of the ks in the expression for the value of y at the next step.

The numbers in the middle of the array are the coefficients of the ks in the second argument of f. Because this is an explicit method, each k only depends on the previous ks, and so the table of coefficients has a triangular form.
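
Reading the tableau off row by row translates directly into code. Here is a sketch of one step of “the” Runge-Kutta method for a scalar equation y' = f(t, y).

    def rk4_step(f, t, y, h):
        # Left column of the tableau: offsets in t.
        # Triangular array: coefficients of the previous k's.
        k1 = f(t, y)
        k2 = f(t + 0.5*h, y + 0.5*h*k1)
        k3 = f(t + 0.5*h, y + 0.5*h*k2)
        k4 = f(t + h, y + h*k3)
        # Bottom row of the tableau: weights 1/6, 1/3, 1/3, 1/6
        return y + h*(k1 + 2*k2 + 2*k3 + k4)/6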

Runge-Kutta 3/8 rule

The method above is the most common 4th order ERK rule, but there is another known as the 3/8 rule. It is a little less efficient and a little more accurate. A step of this rule is given by

y_{n+1} = y_n + \frac{h}{8}\left( k_{n1} + 3k_{n2} + 3k_{n3} + k_{n4}\right)

where

\begin{align*} k_{n1} &= f(t_n, y_n) \\ k_{n2} &= f(t_n + \frac{h}{3}, y_n + \frac{h}{3}k_{n1}) \\ k_{n3} &= f(t_n +\frac{2h}{3}, y_n -\frac{h}{3}k_{n1} + h k_{n2}) \\ k_{n4} &= f(t_n + h, y_n + h k_{n1} - h k_{n2} + hk_{n3}) \end{align*}

This method is summarized in the following Butcher tableau.

\begin{array} {c|cccc} 0\\ 1/3 & 1/3\\ 2/3 & -1/3 &1 \\ 1& 1& -1 & 1\\ \hline & 1/8 & 3/8 & 3/8 &1/8 \end{array}

This example makes it a little easier to see what’s going on since none of the coefficients in the triangular array are zero. Full detail is given in the section below.

General Explicit Runge-Kutta

The most general form of an ERK rule with s stages is

y_{n+1} = y_n + h \sum_{i=1}^s b_i k_{ni}

where

k_{ni} = f\left(t_n + c_i h, y_n + h \sum_{j=1}^{i-1} a_{ij} k_{nj}\right)

and the Butcher tableau is

\begin{array} {c|ccccc} 0\\ c_2 & a_{21}\\ c_3 & a_{31} & a_{32} \\ \vdots & \vdots & & \ddots \\ c_s& a_{s1}& a_{s2} & \cdots & a_{s,s-1}\\ \hline & b_1 & b_2 & \cdots & b_{s-1} & b_s \end{array}
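
The general formula above turns into a generic stepper that takes the Butcher tableau as data. Here is a sketch for a scalar equation; the tableau in the example is the classical RK4 one from earlier in the post.

    import numpy as np

    def erk_step(f, t, y, h, A, b, c):
        # One step of an explicit Runge-Kutta method with tableau (A, b, c),
        # where A is strictly lower triangular.
        s = len(b)
        k = np.zeros(s)
        for i in range(s):
            k[i] = f(t + c[i]*h, y + h*np.dot(A[i, :i], k[:i]))
        return y + h*np.dot(b, k)

    # Classical RK4 tableau as an example
    A = np.array([[0, 0, 0, 0],
                  [1/2, 0, 0, 0],
                  [0, 1/2, 0, 0],
                  [0, 0, 1, 0]])
    b = np.array([1/6, 1/3, 1/3, 1/6])
    c = np.array([0, 1/2, 1/2, 1])

    y_new = erk_step(lambda t, y: -y, 0.0, 1.0, 0.1, A, b, c)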

General implicit Runge-Kutta

With explicit (ERK) methods, each k depends only on its predecessors. With implicit (IRK) methods each k potentially depends on each of the others. The matrix in the tableau is full, not triangular, and one must solve for the ks.

Now

k_{ni} = f\left(t_n + c_i h, y_n + h \sum_{j=1}^s a_{ij} k_{nj}\right)

with the sum going all the way up to s, and the Butcher tableau is

\begin{array} {c|ccccc} c_1 & a_{11} & a_{12} & \cdots & a_{1s} \\ c_2 & a_{21} & a_{22} & \cdots & a_{2s} \\ \vdots & \vdots & & \ddots & \vdots \\ c_s& a_{s1}& a_{s2} & \cdots & a_{s,s}\\ \hline & b_1 & b_2 & \cdots & b_{s} \end{array}

Implicit methods are more complicated to implement, and require more computation for a given step size. However, they are more stable for stiff differential equations and may allow larger steps. Implicit methods are less efficient when they’re not needed, and more efficient when they are needed.
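
To show what the extra work looks like, here is a sketch of a generic implicit Runge-Kutta step for a scalar equation, solving for all the ks at once with a general-purpose root finder. The example tableau is the one-stage backward Euler tableau given at the end of the post.

    import numpy as np
    from scipy.optimize import fsolve

    def irk_step(f, t, y, h, A, b, c):
        # One step of an implicit Runge-Kutta method: the ks are defined
        # implicitly, so solve the (generally nonlinear) system for them.
        s = len(b)
        def residual(k):
            return [k[i] - f(t + c[i]*h, y + h*np.dot(A[i], k))
                    for i in range(s)]
        k = fsolve(residual, np.zeros(s))
        return y + h*np.dot(b, k)

    # Backward Euler as a one-stage implicit method
    A = np.array([[1.0]])
    b = np.array([1.0])
    c = np.array([1.0])

    y_new = irk_step(lambda t, y: -y, 0.0, 1.0, 0.1, A, b, c)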

Back to Euler’s method

I said at the top of the post that Euler’s method was a special case of Runge-Kutta. The Butcher tableau for the explicit (forward) Euler method is simply

 \begin{array} {c|c} 0 & 0\\ \hline & 1\end{array}

and the tableau for the implicit (backward) Euler method is just

\begin{array} {c|c} 1 & 1\\ \hline & 1\end{array}

In this post I say more about these two methods and compare their stability.

More on differential equations