Eight fallacies of declarative computing

Erik Meijer listed eight fallacies of declarative programming in his keynote address at YOW in Melbourne this morning:

  1. Exceptions do not exist.
  2. Statistics are precise.
  3. Memory is infinite.
  4. There are no side-effects.
  5. Schemas don’t change.
  6. There is one developer.
  7. Compilation time is free.
  8. The language is homogeneous.

To put these in some context, Erik made several points about declarative programming in his talk. First, “declarative” is relative. For example, if you’re an assembly programmer, C looks declarative, but if you program in some higher-level language, C looks procedural. Then he argued that SQL is not as declarative as people say and that in some ways SQL is quite procedural. Finally, the fallacies listed above correspond to things that can cause a declarative abstraction to leak.
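
As one illustration of the procedural side of SQL (my own example, not one from the talk): a recursive common table expression is legal, standard SQL, yet it reads very much like a loop, with an initialization step, a body, and a termination condition.

    -- A recursive CTE: declarative syntax that reads like a loop.
    WITH RECURSIVE countdown(n) AS (
        SELECT 10                      -- initialize the "loop variable"
        UNION ALL
        SELECT n - 1 FROM countdown    -- the "loop body"
        WHERE n > 1                    -- the "termination condition"
    )
    SELECT n FROM countdown;           -- returns 10, 9, ..., 1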

(The videos of the YOW presentations should be available in January. I haven’t heard anyone say when the slides will be posted, but I imagine they will be available sooner, maybe in a few days.)

5 thoughts on “Eight fallacies of declarative computing”

  1. Could you elaborate on what you mean by #8? I feel like a lot of languages offer the promise of adhering strictly to a single paradigm, but we’ve often found that a mix of paradigms is both necessary and pragmatic.

  2. Erik’s comment for #8 was that declarative programming assumes there is a single language in play, one place to “declare” what you want. But most substantial projects involve multiple programming languages.

  3. I think what he had in mind by “statistics are precise” is the assumption that there is no measurement error and no variability.

    Suppose you own a store and the cashier says 300 people came to your store today. Maybe it wasn’t exactly 300 people (measurement error). But even if it were, it would be foolish to assume that exactly 300 people will come every day (variability).

  4. Coming from the DB research world, when I hear “statistics are precise” I assume he is referring to the database catalog stats, the basis of cost-based optimization (table cardinalities, column value ranges, distribution histograms, etc.). In practice, statistics can be costly to compute and are often stale. But I didn’t see the talk!
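
    A small PostgreSQL sketch of the kind of staleness I mean (the orders table is just a stand-in): the planner’s stored row-count estimate lives in the catalog and is only refreshed when the table is analyzed, so it can drift away from the true count.

        -- reltuples is the planner's stored row-count estimate; it is
        -- refreshed by ANALYZE, not on every insert or delete, so it
        -- can lag behind the table's actual size.
        SELECT reltuples FROM pg_class WHERE relname = 'orders';

        SELECT count(*) FROM orders;   -- the actual, current count

        ANALYZE orders;                -- recompute the catalog statistics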
