Computing discrete logarithms with baby-step giant-step algorithm

At first “discrete logarithm” sounds like a contradiction in terms. Logarithms aren’t discrete, not as we usually think of them. But you can define and compute logarithms in modular arithmetic.

What is a logarithm? It’s the solution to an exponential equation. For example, the logarithm base 10 of 2 is the solution to the equation 10^x = 2, and so x = 0.30103. Similarly, you could look for the logarithm base 10 of 2 modulo 19. This is an integer value of x such that 10^x = 2 mod 19, and you can show that 17 is a solution.

If we work modulo an integer n, the discrete logarithm base a of a number y is an integer x such that a^x = y. For some values of a there may not be a solution, but there will be a solution if a is a generator of the integers mod n.

Brute force is the simplest algorithm for finding discrete logarithms: try x = 1, 2, 3, …, n until you find a value of x satisfying a^x = y. The problem with brute force is that the expected run time is on the order of n, and n is often very large in applications.
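
Here is a minimal brute-force sketch in Python. The function name is my own, and the code assumes a solution exists; otherwise the loop would never terminate.

    def discrete_log_brute(a, y, n):
        # Try x = 1, 2, 3, ... until a^x = y mod n.
        x, power = 1, a % n
        while power != y % n:
            x += 1
            power = power * a % n
        return x

    print(discrete_log_brute(10, 2, 19))   # prints 17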

Discrete logarithms are expensive to compute, but we can do better than brute force. (Cryptographic applications take advantage of the fact that discrete logarithms are hard to compute.) There’s a simple algorithm by Daniel Shanks, known as the baby-step giant-step algorithm, that reduces the run time from order n to order roughly √n. (Actually O(√n log n) for reasons we’ll see soon.)

Let s be the ceiling of the square root of n. The idea is to search for x by taking giant steps forward (multiples of s) and baby (single) steps backward.

Let G be the set of pairs (a^(gs) mod n, gs) where g ranges from 1 to s. These are the giant steps.

Let B be the set of pairs (y a^b mod n, b) where b ranges from 0 to s-1. These are the baby steps.

Now look for a pair in G and a pair in B with matching first components, i.e. y a^b = a^(gs). Then x = gs - b is the required solution.

Constructing the sets G and B requires O(s) operations, and matching the two sets takes O(s log s) operations.

Here’s an example, going back to the problem above of finding the discrete log base 10 of 2 mod 19, using a little Python code to help with the calculations.

The square root of 19 is between 4 and 5 so s = 5.

     >>> [(2*10**r % 19, r) for r in range(5)]
     [(2, 0), (1, 1), (10, 2), (5, 3), (12, 4)]
     >>> [(10**(5*t) % 19, 5*t) for t in range(1,6)]
     [(3, 5), (9, 10), (8, 15), (5, 20), (15, 25)]

The number 5 appears as the first element of a pair in both B and G. We have (5, 3) in B and (5, 20) in G so x = 20 – 3 = 17.
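
Putting the pieces together, here is a sketch of the whole algorithm in Python. The function name is my own, and as with the brute-force sketch above, the code assumes a solution exists. A dictionary makes each lookup in the matching step fast; sorting both sets and matching them, as described above, is what gives the O(√n log n) bound.

    from math import ceil, sqrt

    def discrete_log_bsgs(a, y, n):
        s = ceil(sqrt(n))
        # Baby steps: map y * a^b mod n to b for b = 0, 1, ..., s-1.
        baby = {y * pow(a, b, n) % n: b for b in range(s)}
        # Giant steps: look for a^(g*s) mod n among the baby steps.
        for g in range(1, s + 1):
            value = pow(a, g * s, n)
            if value in baby:
                return g * s - baby[value]

    print(discrete_log_bsgs(10, 2, 19))   # prints 17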


Related: Applied number theory

Beta reduction: The difference typing makes

Beta reduction is essentially function application. If you have a function described by what it does to x and apply it to an argument t, you rewrite the xs as ts. The formal definition of β-reduction is more complicated than this in order to account for free versus bound variables, but this informal description is sufficient for this blog post. We will first show that β-reduction holds some surprises, then explain how these surprises go away when you add typing.

Suppose you have an expression (λx.x + 2)y, which means apply to y the function that takes its argument and adds 2. Then β-reduction rewrites this expression as y + 2. In a more complicated expression, you might be able to apply β-reduction several times. When you do apply β-reduction several times, does the process always stop? And if you apply β-reduction to parts of an expression in a different order, can you get different results?

Failure to normalize

You might reasonably expect that if you apply β-reduction enough times you eventually get an expression you can’t reduce any further. Au contraire!

Consider the expression (λx.xx)(λx.xx). Beta reduction says to replace each x in the body of the first function with the argument, which is the second copy of (λx.xx). But when we do that, we get the original expression (λx.xx)(λx.xx) back. Beta reduction gets stuck in an infinite loop.
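
To see the non-termination concretely, here is a rough analogue in Python, which evaluates arguments eagerly. The self-application never produces a value; Python reports a RecursionError rather than looping forever.

    omega = lambda x: x(x)    # analogue of λx.xx

    try:
        omega(omega)          # analogue of (λx.xx)(λx.xx)
    except RecursionError:
        print("evaluation never terminates")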

Next consider the expression L = (λx.xxy) (λx.xxy). Applying β-reduction the first time gives (λx.xxy)(λx.xxy)y, or Ly. Applying β-reduction again yields Lyy. Beta “reduction” doesn’t reduce the expression at all but makes it bigger.

The technical term for what we’ve seen is that β-reduction is not normalizing. A rewrite system is strongly normalizing if applying the rules in any order eventually terminates. It’s weakly normalizing if there’s at least some sequence of applying the rules that terminates. Beta reduction is neither strongly nor weakly normalizing in the context of (untyped) lambda calculus.

Types to the rescue

In simply typed lambda calculus, we assign types to every variable, and functions have to take the right type of argument. This additional structure prevents examples such as those above that fail to normalize. If x is a function that takes an argument of type A and returns a value of type B then you can’t apply x to itself. This is because x takes something of type A, not something of type function from A to B. You can prove that not only does this rule out specific examples like those above, it rules out all possible examples that would prevent β-reduction from terminating.

To summarize, β-reduction is not normalizing, not even weakly, in the context of untyped lambda calculus, but it is strongly normalizing in the context of simply typed lambda calculus.


Although β-reduction is not normalizing for untyped lambda calculus, the Church-Rosser theorem says it is confluent. That is, if an expression P can be transformed by β-reduction two different ways into expressions M and N, then there is an expression T such that both M and N can be reduced to T. This implies that if β-reduction does terminate, then it terminates in a unique expression (up to α-conversion, i.e. renaming bound variables). Simply typed lambda calculus is confluent as well, and so in that context we can say β-reduction always terminates in a unique expression (again up to α-conversion).

Reversing WYSIWYG

The other day I found myself saying that I preferred org-mode files to Jupyter notebooks because with org-mode, what you see is what you get. Then I realized I was using “what you see is what you get” (WYSIWYG) in exactly the opposite of the usual sense. Jupyter notebooks are WYSIWYG in the same sense that a Word document is. You work with a nicely formatted file and the changes you make are immediately reflected visually.

The source file for a Jupyter notebook is a JSON document containing formatting instructions, encoded images, and the notebook content. Looking at a notebook file in a text editor is analogous to unzipping a Word document and looking at the XML inside. Here’s a sample:

    "    if (with_grid_):\n",
    "        plt.grid (which=\"both\",ls=\"-\",color=\"0.65\")\n",
    "    "
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {
    "collapsed": false
   "outputs": [
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAawAAAESCAYAAA...

It’s hard to diff two Jupyter notebooks because content changes don’t map simply to changes in the underlying file.

We can look a little closer at WYSIWYG and ask what you see where and what you get where. Word documents and Jupyter notebooks are visually WYSIWYG: what you see in the interactive environment (browser) is what you get visually in the final product. Which is very convenient, except for version control and comparing changes.

Org-mode files are functionally WYSIWYG: what you see in the interactive environment (editor) is what you get functionally in the final code.

You could say that HTML and LaTeX are functionally or causally WYSIWYG whereas Word is visually WYSIWYG. Word is visually WYSIWYG in the sense that what you see on your monitor is essentially what you’ll see coming out of a printer. But Word doesn’t always let you see what’s causing your document to look as it does. Why can’t I delete this? Why does it insist on putting that here? HTML and LaTeX work at a lower level, for better and for worse. You may not be able to anticipate how things will look while editing the source, e.g. where a line will break, but cause and effect are transparent.

Everybody wants WYSIWYG, but they have different priorities regarding what they want to see and are concerned about different aspects of what they get.

Floating point: between blissful ignorance and unnecessary fear

Most programmers are at one extreme or another when it comes to floating point arithmetic. Some are blissfully ignorant that anything can go wrong, while others believe that danger lurks around every corner when using floating point.

The limitations of floating point arithmetic are something to be aware of, and ignoring these limitations can cause problems, like crashing airplanes. On the other hand, floating point arithmetic is usually far more reliable than the application it is being used in.

It’s well known that if you multiply two floating point numbers, a and b, then divide by b, you might not get exactly a back. Your result might differ from a by one part in ten quadrillion (10^16). To put this in perspective, suppose you have a rectangle approximately one kilometer on each side. Someone tells you the exact area and the exact length of one side. If you solve for the length of the missing side using standard (IEEE 754) floating point arithmetic, your result could be off by as much as the width of a helium atom.
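
Here is a quick Python illustration. The particular inputs are my own choice, and whether the equality fails depends on the values, but the size of the discrepancy is typical of IEEE 754 double precision.

    a, b = 0.1, 0.3
    print(a * b / b == a)       # may well be False
    print(abs(a * b / b - a))   # tiny, on the order of 1e-17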

Most of the problems attributed to “round off error” are actually approximation error. As Nick Trefethen put it,

If rounding errors vanished, 90% of numerical analysis would remain.

Still, despite the extreme precision of floating point, in some contexts rounding error is a practical problem. You have to learn in what context floating point precision matters and how to deal with its limitations. This is no different than anything else in computing.

For example, most of the time you can imagine that your computer has an unlimited amount of memory, though sometimes you can’t. It’s common for a computer to have enough memory to hold the entire text of Wikipedia (currently around 12 gigabytes). This is usually far more memory than you need, and yet for some problems it’s not enough.


Proofs and programs

Here’s an interesting quote comparing writing proofs and writing programs:

Building proofs and programs are very similar activities, but there is one important difference: when looking for a proof it is often enough to find one, however complex it is. On the other hand, not all programs satisfying a specification are alike: even if the eventual result is the same, efficient programs must be preferred. The idea that the details of proofs do not matter—usually called proof irrelevance—clearly justifies letting the computer search for proofs …

The quote is saying “Any proof will do, but among programs that do the same job, more efficient programs are better.” And this is certainly true, depending on what you want from a proof.

Proofs serve two main purposes: to establish that a proposition is true, and to show why it is true. If you’re only interested in the existence of a proof, then yes, any proof will do. But proofs have a sort of pedagogical efficiency just as programs have an execution efficiency. Given two proofs of the same proposition, the more enlightening proof is better by that criterion.

I assume the authors of the above quote would not disagree because they say it is often enough to find one, implying that for some purposes simpler proofs are better. And existence does come first: a complex proof that exists is better than an elegant proof that does not exist! Once you have a proof you may want to look for a simpler proof. Or not, depending on your purposes. Maybe you have an elegant proof that you’re convinced must be essentially true, even if you doubt some details. In that case, you might want to complement it with a machine-verifiable proof, no matter how complex.

Source: Interactive Theorem Proving and Program Development
Coq’Art: The Calculus of Inductive Constructions

Big Logic

As systems get larger and more complex, we need new tools to test whether these systems are correctly specified and implemented. These tools may not be new per se, but they may be applied with new urgency.

Dimensional analysis is a well-established method of error detection. Simply checking that you’re not doing something like adding things that aren’t meaningful to add is surprisingly useful for detecting errors. For example, if you’re computing an area, does your result have units of area? This seems like a ridiculously basic question to ask, and yet it is more effective than it would seem.
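
As a toy illustration, here is a Python sketch of a quantity type that refuses to add incompatible units. The Quantity class and its unit representation are invented for this example, not taken from any particular library.

    class Quantity:
        """A value tagged with units, e.g. {"m": 2} for square meters."""
        def __init__(self, value, units):
            self.value = value
            self.units = units

        def __add__(self, other):
            if self.units != other.units:
                raise ValueError(f"cannot add {self.units} and {other.units}")
            return Quantity(self.value + other.value, self.units)

        def __mul__(self, other):
            units = dict(self.units)
            for unit, power in other.units.items():
                units[unit] = units.get(unit, 0) + power
            return Quantity(self.value * other.value, units)

    length = Quantity(3.0, {"m": 1})
    width = Quantity(4.0, {"m": 1})
    area = length * width    # units are {"m": 2}, as an area should have
    # length + area          # raises ValueError: meters plus square meters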

Type theory and category theory are extensions of the idea of dimensional analysis. Simply asking of a function “Exactly what kind of thing does it take in? And what sort of thing does it put out?” is a powerful way to find errors. And in practice it may be hard to get answers to these questions. When you’re working with a system that wasn’t designed to make such things explicit and clear, it may be that nobody knows for certain, or people believe they know but have incompatible ideas. Type theory and category theory can do much more than nudge you to define functions well, but even that much is a good start.
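
To make the question of what goes in and what comes out concrete, here is a small Python sketch using type hints. The functions are invented for illustration; a type checker such as mypy would flag the commented-out call, which passes a float to a function expecting an int.

    def monthly_payment(principal: float, rate: float, months: int) -> float:
        return principal * rate / (1 - (1 + rate) ** (-months))

    def describe(count: int) -> str:
        return f"{count} payments"

    payment = monthly_payment(10000.0, 0.01, 36)
    # describe(payment)   # rejected by a type checker: float where int is expected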

Specifying types is more powerful than it seems. With dependent types, you can incorporate so much information in the type system that showing that an instance of a type exists is equivalent to proving a theorem. This is a consequence of the Curry-Howard correspondence (“Propositions as types”) and is the basis for proof assistants like Coq.

I’d like to suggest the term Big Logic for the application of logic on a large scale, using logic to prove properties of systems that are too complex for a typical human mind to thoroughly understand. It’s well known that systems have gotten larger, more connected, and more complex. It’s not as well known how much the tools of Big Logic have improved, making formal verification practical for more projects than it used to be.

Primitive recursive functions and enumerable sets

The set of primitive recursive (PR) functions is the smallest set of functions of several integer arguments satisfying five axioms:

  1. Constant functions are PR.
  2. The function that picks the ith element of a list of n arguments is PR.
  3. The successor function S(n) = n+1 is PR.
  4. PR functions are closed under composition.
  5. PR functions are closed under primitive recursion.

The last axiom obviously gives PR functions their name. So what is primitive recursion? Given a PR function f that takes k arguments, and another PR function g that takes k+2 arguments, the primitive recursion of f and g is a function h of k+1 arguments satisfying two properties:

  1. h(0, x1, …, xk) = f(x1, …, xk)
  2. h(S(y), x1, …, xk) = g(y, h(y, x1, …, xk), x1, …, xk)
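
To make the definition concrete, here is a small Python sketch of primitive recursion as a higher-order function, with addition built from the successor function. The names are mine, chosen for illustration.

    def primitive_recursion(f, g):
        # h(0, x1, ..., xk) = f(x1, ..., xk)
        # h(S(y), x1, ..., xk) = g(y, h(y, x1, ..., xk), x1, ..., xk)
        def h(y, *xs):
            value = f(*xs)
            for i in range(y):
                value = g(i, value, *xs)
            return value
        return h

    # Addition by primitive recursion: add(0, x) = x and add(S(y), x) = S(add(y, x)).
    add = primitive_recursion(lambda x: x, lambda y, value, x: value + 1)
    print(add(3, 4))   # prints 7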

Not every computable function is primitive recursive. For example, Ackermann’s function is a general recursive function, but not a primitive recursive function. General recursive functions are Turing complete. Turing machines, lambda calculus, and general recursive functions are all equivalent models of computation, but the first two are better known.

For this post, the main thing about general recursive functions is that, as the name implies, they are more general than primitive recursive functions.

Now we switch from functions to sets. The characteristic function of a set A is the function that is 1 for elements of A and zero everywhere else. In other areas of math, there is a sort of duality between functions and sets via characteristic functions. For example, the indicator function of a measurable set is a measurable function. And the indicator function of a convex set is a convex function. But in recursive functions, there’s an unexpected wrinkle in this analogy.

A set of integers is recursively enumerable if it is either empty or the image of a general recursive function. But there’s a theorem, due to Alonzo Church, that a non-empty recursively enumerable set is actually the image of a primitive recursive function. So although general recursive functions are more general, you can’t tell that from looking at function images. For example, although the Ackermann function is not primitive recursive, there is a primitive recursive function with the same image.

Kalman filters and functional programming

A few weeks ago I started a series of posts on various things you could do with a functional fold. In the first post I mentioned that the idea came from a paper by Brian Beckman on Kalman filters and folds:

This post was inspired by a paper by Brian Beckman (in progress) that shows how a Kalman filter can be implemented as a fold. From a Bayesian perspective, the thing that makes the Kalman filter work is that a certain multivariate normal model has a conjugate prior. This post shows that conjugate models more generally can be implemented as folds over the data. That’s interesting, but what does it buy you? Brian’s paper discusses this in detail, but one advantage is that it completely separates the accumulator function from the data, so the former can be tested in isolation.

At the time Brian was working on one big paper in private. This has since been split into several papers and they’re now public.


Formal methods let you explore the corners

I heard someone say the other day that the advantage of formal software validation methods is that they let you explore the corners, cases where intuition doesn’t naturally take you.

This made me think of corners in the geometric sense. If you have a sphere in a box in high dimensions, nearly all the volume is in the corners, i.e. outside the sphere. This is more than a metaphor. You can think of software options geometrically, with each independent choice corresponding to a dimension. Paths through a piece of software that are individually rare may account for nearly all use when considered together.

With a circle inside a square, nearly 78.5% of the area is inside the circle. With a ball sitting inside a 3-D box, 52.4% of the volume is inside the ball. As the dimension increases, the proportion of volume inside the sphere rapidly decreases. For a 10-dimensional sphere sitting in a 10-dimensional box, 0.25% of the volume is in the sphere. Said another way, 99.75% of the volume is in the corners.

When you go up to 100 dimensions, the proportion of volume inside the sphere is about 2 parts in 10^70, a 1 followed by 70 zeros [1]. If 100 dimensions sounds like pure fantasy, think about a piece of software with more than 100 features. Those feature combinations multiply like geometric dimensions [2].

Here’s a little Python code you could use to see how much volume is in a sphere as a function of dimension.

    from scipy.special import gamma
    from math import pi

    def unit_sphere_volume(n): 
        return pi**(0.5*n)/gamma(0.5*n + 1)

    def unit_cube_volume(n): 
        return 2**n

    def ratio(n):
        return unit_sphere_volume(n) / unit_cube_volume(n)

    print( [ratio(n) for n in range(1, 20)] )

* * *

[1] There are names for such extremely large numbers. These names are hardly ever used—scientific notation is much more practical—but they’re fun to say. 10^70 is ten duovigintillion in American nomenclature, ten undecilliard in European.

[2] Geometric dimensions are perfectly independent, but software feature combinations are not. In terms of logic, some combinations may not be possible. Or in terms of probability, the probability of exploring some paths is conditional on the probability of exploring other paths. Even so, there are inconceivably many paths through any large software system. And in large-scale operations, events that should “never happen” happen regularly.

New Twitter account for functional programming and categories

I’m starting a new Twitter account @FunctorFact for functional programming and category theory.

These two subjects have a lot of overlap, and some tweets will combine both, but many will be strictly about one or the other. So some content will be strictly about programming, some strictly about math, and some combining ideas from both.

FunctorFact icon

ASCII art diagrams in Emacs org-mode

Yesterday I wrote about ASCII art diagrams and gave four reasons you might want to use this ancient approach to creating simple diagrams:

  • It could be quicker than creating a graphical image.
  • You can paste them into plain text documents like source code files.
  • They can be produced programmatically.
  • There is software to turn ASCII art into more polished images.

Today I’ll post a few notes about how to create graphical versions of ASCII diagrams in Emacs with org-mode.

Running code inside org-mode

You can embed and execute source code in org-mode files. I wrote a couple posts about this, one showing how to run Python and R inside org-mode and another showing how to mix languages in org-mode. The latter shows Perl calling R and Python, all in 14 lines of code.

There are currently 39 programming languages that org-mode can call by default. In addition to conventional programming languages like the ones listed above, org-mode also supports ditaa, the language that treats ASCII art as a specification language to produce graphics.

You can edit code blocks just as you would other text in an org-mode file. But if you’d like to edit a code block in its language’s mode, type C-c ' from inside the code block. If you’re editing Python code, for example, this will open a new window, in Python mode, containing just that code block. If you type C-c ' inside a ditaa code block, Emacs opens a new window in “artist mode,” a mode specifically for editing ASCII art.

You can run code inside org-mode two ways: interactively and for export. With your cursor inside a block of code, type C-c C-c to execute the code and report the results. You can also export an entire org-mode document and have the results of code execution embedded in the final document. This works much the same as “*weave” projects like Sweave, Pweave, and Hweave. But while each of these is specific to a particular programming language (R, Python, and Haskell respectively), org-mode works with dozens of languages, including ditaa.

Running ditaa inside org-mode

You embed ditaa code just like you’d embed any other code. In the first post mentioned above, I gave this example of calling R:

    #+begin_src R

Here’s the analogous code for ditaa:

    #+BEGIN_SRC ditaa :file foo.png
    +-------+
    | Hello |
    +-------+
    #+END_SRC

The markers to begin and end a source code segment can be upper or lower case. I used lower case in my previous post, but it may be more readable to use upper case so that the markers stand out better from their arguments.

The R example above didn’t use any header arguments, though it could have. With ditaa, you must provide a header argument: the name of the file to save the graphics in.

If you run the ditaa code by navigating inside the code block and running C-c C-c, Emacs will add a couple lines after the code block:

    #+RESULTS:
    [[file:foo.png]]

This is the literal text, what you’d see if you opened the file in another editor. But org-mode uses the double brackets for links. You wouldn’t see the double brackets. Instead you’d see a hyperlink with text file:foo.png. Clicking on that link opens the image.

Hello image produced by ditaa

You can export the org-mode file with the command C-c C-e. This brings up a menu of export options: h for HTML, l for LaTeX, etc. Within each of these are further options. For example, you could type l for LaTeX and then type l again to select export to LaTeX or type p to have org-mode run LaTeX on the file and produce a PDF. If you know you want a PDF, you could do this all in one command: C-c C-e l p.

You can control whether org-mode exports code or the results of running code (or both or neither). Org-mode exports the results of ditaa code, i.e. graphics files, by default. This makes sense: your PDF file will have a nice image version of your diagram, not the ASCII art used as its source code.

Configuration and troubleshooting

By default, the only programming language you can run inside org-mode is Emacs Lisp. This makes sense. You want to be deliberate about what code you run, but if you don’t want to run Emacs Lisp you’d better not run Emacs!

Inside your Emacs configuration file, you specify what languages org-mode is allowed to run. Here’s an example allowing Python and ditaa:

    (org-babel-do-load-languages 'org-babel-load-languages '(
        (python . t)
        (ditaa . t)))

Recent versions of Emacs are supposed to ship with ditaa, but the ditaa jar file was missing from the two computers I experimented with. Emacs complained

    org-babel-execute:ditaa: Could not find ditaa.jar at ...

and so I copied ditaa.jar to the place where Emacs said it was looking for it. That worked, but it’s kind of a kludge because now I had two copies of the jar file. A better solution is to tell Emacs where you already have ditaa.

I like to use the exact same init.el file on every machine I use and so I added a section where I put OS-specific and machine-specific configuration. Here I put a different path to ditaa for each OS.

    ;; OS specific
    (cond
        ((string-equal system-type "windows-nt") ; Microsoft Windows
            (setq-default ispell-program-name "C:/bin/Aspell/bin/aspell.exe")
            ;; etc.
            (setq org-ditaa-jar-path "c:/bin/ditaa/ditaa.jar"))
        ((string-equal system-type "gnu/linux") ; Linux
            (setq x-select-enable-clipboard t)
            ;; etc.
            (setq org-ditaa-jar-path "/usr/bin/ditaa")))

ASCII art diagrams

“Technology is additive.” — Kevin Kelly

Old technologies never die. Instead, their range of application shrinks. Or maybe it grows when conditions change.

ASCII art, drawing pictures with fixed-width plain text characters, is no longer how many people want to produce diagrams. Just fire up Adobe Illustrator and you get incomparably more resolution of expression.

And yet there are times when ASCII art comes in handy. You can, for example, paste it into source code files. Someone more familiar with Emacs than Illustrator may be able to produce a simple diagram in the former faster than the latter. And it can be relatively easy to programmatically produce a large number of ASCII art diagrams, depending on the nature of the diagrams.

It’s also possible to use ASCII art as a way to specify nicely rendered images. I’ll show how to do that with ditaa below.

Here’s an ASCII version of the conjugate prior diagram I made some time ago:


                          +-------------+
                          |             |
                          | Exponential |
                          |             |
                          +-------------+
                                 |
                          lambda |
                                 v
+-------------+           +-------------+           +-------------+
|             |   tau     |             |   lambda  |             |
|  Lognormal  |---------->|    Gamma    |<----------|   Poisson   |
|             |           |             |---+       |             |
+-------------+           +-------------+   |       +-------------+
      |                          ^   ^      | beta
      |                          |   |      |
      |                          |   +------+
      |                      tau |           
      |                          |           
      |                   +-------------+   
      |        mu         |             |   
      +------------------>|    Normal   |
                          |             |----+
                          +-------------+    | 
                                     ^       | mu
                                     |       |
                                     +-------+

And here’s the image produced by ditaa processing the ASCII diagram above:

Conjugate prior diagram produced by ditaa

Update: See my next post on how to create ASCII art diagrams and their graphic version from ditaa using Emacs org mode.

Update: When I first made the diagram above, I tried using Greek letters, e.g. using β rather than “beta,” but this didn’t work. I thought “OK, I suppose it’s not really ASCII art if you use non-ASCII characters.” But someone told me Unicode characters worked for him, so I tried again when I wrote the follow up post and it worked.

My first attempt, from a Windows laptop, calling ditaa from the command line, did not work. My second attempt, running inside org-mode from a Windows desktop, did work. My third attempt, running from Emacs on Linux also worked.

ditaa diagram using Unicode symbols

Insertion sort as a fold

I’ve written several posts lately about various algorithms that can be expressed as functional folds.

These have all been numerical algorithms. Insertion sort is an example of a non-numerical algorithm that could be implemented as a fold.

Insertion sort is not the fastest sorting algorithm. It takes O(n²) operations to sort a list of size n while the fastest algorithms take O(n log n) operations. But insertion sort is simple, and as it works its way down a list, the portion it has processed is sorted. If you interrupt it, you have correct results given the input so far. If you interrupt something like a quicksort, you don’t have a sorted list. If you’re receiving a stream of data points and want to sort them as they come, you have to use insertion sort rather than something like quicksort.

The function to fold over a list looks like this:

    f as a = [x | x <- as, x < a] ++ [a] ++ [x | x <- as, x >= a]

Given a sorted list as and a value a, return a new sorted list that has inserted the element a in the proper position. Our function f does this by joining together the list of the elements less than a, the list containing only a, and the list of elements at least as big as a.

Here’s how we could use this to alphabetize the letters in the word “example.”

    foldl f [] "example"

This returns "aeelmpx".

Haskell takes our function f and an empty list of characters [] and returns a new list of characters by folding f over the list of characters making up the string "example".

You can always watch how foldl works by replacing it with scanl to see intermediate results.

    scanl f [] "example"


    ["", "e", "ex", "aex", "aemx", "aempx", "aelmpx", "aeelmpx"]

Computing higher moments with a fold

Folds in functional programming are often introduced as a way to find the sum or product of items in a list. In this case the fold state has the same type as the list items. But more generally the fold state could have a different type, and this allows more interesting applications of folds. Previous posts look at using folds to update conjugate Bayesian models and numerically solve differential equations.

This post uses a fold to compute mean, variance, skewness, and kurtosis. See this earlier post for an object-oriented approach. The code below seems cryptic out of context. The object-oriented post gives references for where these algorithms are developed. The important point for this post is that we can compute mean, variance, skewness, and kurtosis all in one pass through the data even though textbook definitions appear to require at least two passes. It’s also worth noting that the functional version is less than half as much code as the object-oriented version.

(Algorithms that work in one pass through a stream of data, updating for each new input, are sometimes called “online” algorithms. This is unfortunate now that “online” has come to mean something else.)

The Haskell function moments below returns the number of samples and the mean, but does not directly return variance, skewness and kurtosis. Instead it returns moments from which these statistics can easily be calculated using the mvsk function.

    moments (n, m1, m2, m3, m4) x = (n', m1', m2', m3', m4') where
            n' = n + 1
            delta = x - m1
            delta_n = delta / n'
            delta_n2 = delta_n**2
            t = delta*delta_n*n
            m1' = m1 + delta_n
            m4' = m4 + t*delta_n2*(n'*n' - 3*n' + 3) + 6*delta_n2*m2 - 4*delta_n*m3
            m3' = m3 + t*delta_n*(n' - 2) - 3*delta_n*m2
            m2' = m2 + t

    mvsk (n, m1, m2, m3, m4) = (m1, m2/(n-1.0), (sqrt n)*m3/m2**1.5, n*m4/m2**2 - 3.0)                         

Here’s an example of how you would use this Haskell code to compute statistics for the list [2, 30, 51, 72]:

    ghci>  mvsk $ foldl moments (0,0,0,0,0) [2, 30, 51, 72]
    (38.75, 894.25,-0.1685, -1.2912)

The foldl applies moments first to its initial value, the 5-tuple of zeros. Then it iterates over the data, taking data points one at a time and visiting each point only once, returning a new state from moments each time. Another way to say this is that after processing each data point, moments returns the 5-tuple that it would have returned if that data only consisted of the values up to that point.

For a non-numerical example of folds, see my post on sorting.