Striving for simplicity, arriving at complexity

This post is a riff on a line from Mathematics without Apologies, the book I quoted yesterday.

In an all too familiar trade-off, the result of striving for ultimate simplicity is intolerable complexity; to eliminate too-long proofs we find ourselves “hopelessly lost” among the too-long definitions. [emphasis added]

It’s as if there’s some sort of conservation of complexity, but not quite in the sense of a physical conservation law. Conservation of momentum, for example, means that if one part of a system loses 5 units of momentum, other parts of the system have to absorb exactly 5 units of momentum. But perceived complexity is psychological, not physical, and the accounting is not the same. By moving complexity around we might increase or decrease the overall complexity.

The opening quote suggests that complexity is an optimization problem, not an accounting problem. It also suggests that driving the complexity of one part of a system to its minimum may disproportionately increase the complexity of another part. Striving for the simplest possible proofs, for example, could make the definitions much harder to digest. There’s a similar dynamic in programming languages and programs.

Larry Wall said that Scheme is a beautiful programming language, but every Scheme program is ugly. Perl, on the other hand, is ugly, but it lets you write beautiful programs. Scheme can be simple because it requires libraries and applications to implement functionality that is part of more complex languages. I had similar thoughts about COM. It was an elegant object system that led to hideous programs.

Scheme is a minimalist programming language, and COM is a minimalist object framework. By and large the software development community prefers complex languages and frameworks in hopes of writing smaller programs. Additional complexity in languages and frameworks isn’t felt as strongly as additional complexity in application code. (Until something breaks. Then you might have to explore parts of the language or framework that you had blissfully ignored before.)

The opening quote deals specifically with the complexity of theorems and proofs. In context, the author was saying that the price of Grothendieck’s elegant proofs was a daunting edifice of definitions. (More on that here.) Grothendieck may have taken this to extremes, but many mathematicians agree with the general approach of pushing complexity out of theorems and into definitions. Michael Spivak defends this approach in the preface to his book Calculus on Manifolds.

… the proof of [Stokes’] theorem is, in the mathematician’s sense, an utter triviality — a straightforward calculation. On the other hand, even the statement of this triviality cannot be understood without a horde of definitions … There are good reasons why the theorems should all be easy and the definitions hard. As the evolution of Stokes’ theorem revealed, a single simple principle can masquerade as several difficult results; the proofs of many theorems involve merely stripping away the disguise. The definitions, on the other hand, serve a twofold purpose: they are rigorous replacements for vague notions, and machinery for elegant proofs. [emphasis added]
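
To see the trade-off concretely, here is the modern statement Spivak has in mind (a sketch; every symbol in it leans on one of those hard definitions): if M is a compact oriented n-dimensional manifold with boundary and ω is a smooth (n−1)-form on M, then

$$\int_M d\omega = \int_{\partial M} \omega.$$

One line of theorem, resting on chapters of definitions: manifold, boundary, orientation, differential form, exterior derivative, and the integration of forms.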

Mathematicians like to push complexity into definitions just as software developers like to push it into languages and frameworks. Both strategies can make life easier on professionals while making it harder on beginners.

Related post: A little simplicity goes a long way

14 thoughts on “Striving for simplicity, arriving at complexity”

  1. “There are good reasons why the definitions should all be easy and the definitions hard.”

    Think the second “definitions” in there isn’t quite right.

  2. I tend to have similar thoughts when I see a line of code that iterates over all members in a collection and applies some function to them. The author may not have written a loop, but in all likelihood, the compiler had to.
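
    For instance, a minimal Python sketch of what I mean (the names are made up for illustration):

    ```python
    numbers = [1, 2, 3, 4]

    # One line, no visible loop: the iteration is hidden inside map.
    squares = list(map(lambda x: x * x, numbers))

    # Roughly what gets done on our behalf behind that one line:
    squares_explicit = []
    for x in numbers:
        squares_explicit.append(x * x)

    assert squares == squares_explicit
    ```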

  3. Your post makes me think of the Curry-Howard correspondence, which is the observation that you can think of types in programming languages as theorems and programs as proofs of those theorems. Certain mathematically inclined computer scientists seem to have the corresponding preference for complex types and simple programs.
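
    A minimal sketch of the correspondence in Python’s type-hint notation (the names are mine, just for illustration): read a function’s type as a proposition, and any implementation that type-checks as a proof of it.

    ```python
    from typing import Callable, TypeVar

    A = TypeVar("A")
    B = TypeVar("B")

    # The type (A -> B, A) -> B reads as the theorem ((A implies B) and A) implies B,
    # i.e. modus ponens. The body is the proof: the only way to produce a B
    # from these arguments is to apply f to a.
    def modus_ponens(f: Callable[[A], B], a: A) -> B:
        return f(a)
    ```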

  4. Hernan Echegoyemberry

    Perhaps I can explain this one in terms of entropy, Kolmogorov-Chaitin complexity, and evolution.

    According to algorithmic information theory, there is no way, “in general”, to prove that a program is of minimal length.

    But we can know it in some particular cases. For example, the minimal program expressing the function of a single instruction is the instruction itself. “In general,” though, means entering the space of all possible combinations, and there we can no longer know which program of minimum length computes a given function, because we cannot depart far from the axioms without “running it,” i.e. bumping into the Halting Problem.

    So “in general” means any possible combination of axioms/instructions. It is about the future, with the problems and intentions the future brings with it (ultimately related to evolution, context, space, and time…).

    The future is unpredictable, even for programs; that is what the Halting Problem is about.

    And here is the trick: years after we have designed any formal axiom set, instruction set, or language, we become aware, through practice and an always-changing context, that there are now other, more useful functions, more useful axioms, and so on. Then we decide to encode those fashionable, “pop” programs of the domain into a new axiom set / instruction set / language / framework / library / whatever, now with an appropriate representation (appropriate in terms of the current entropy: a new Huffman coding for the current fashion).

    All new!…except its fate…

    Over time this code will end up producing long and ugly programs. Not long after we have created the “new code,” we will realize that it brings a huge world of new possible combinations, new problems, and a new context; that is, we are back in the same place (as in the Red Queen hypothesis).

    Every set of axioms or domain language we decide to use, one that minimizes the size of some particular expressions, programs, or proofs, will cause other expressions (outside the intended domain) to be longer. This holds no matter which axioms we use, because it ultimately depends on what you want to prove, and that is very intention-dependent and space-time-dependent (not to mention that intention depends on context, evolution, and the rest of the universe).

    So there is something predictable after all: we will not stop. We will keep doing this again and again; we are very good at growing entropy, and we will always have new features that provide us with new enjoyments and new sufferings.

  5. Scheme has too little syntax in my opinion. Something like Clojure would be easier to read.

    You want the right number of symbols, not too many and not too few. Of course it’s hard to say just how big that set should be. For example, the approximately 100 symbols you can type on an English keyboard seem to be a sweet spot, more convenient than Morse code with two symbols or Chinese with thousands.

  6. If I’m understanding Hernan’s post correctly, this means that in terms of languages, the more abstractions we build into a language, the more abstractions we discover and encapsulate into the language and its frameworks, in effect increasing their complexity. I can agree with this, and wonder if McCarthy had the right idea with Lisp: relatively light on syntax, but the ability to modify the language at runtime and define macros has prevented the language from growing in complexity.

  7. There is a way to think about complexity in software engineering in terms of graph cuts and contractions. A well-encapsulated module may be a complex hairball inside, but present a simple interface. It can be contracted into a single graph node that is not high-degree.

    Even a grand edifice of definitions need not be cognitively complex if it is nicely hierarchically encapsulated – if the dependency graph looks more like a tree than something that the cat threw up.
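
    A toy Python sketch of that contraction (module and function names are invented for illustration): the internals may be a hairball, but from the caller’s side the module collapses to a single low-degree node.

    ```python
    # stats_util.py -- tangled inside, one public edge outside.

    def _drop_missing(xs):
        # Internal helper: callers never see this node.
        return [x for x in xs if x is not None]

    def _sorted_copy(xs):
        # Internal helper: another hidden node.
        return sorted(xs)

    def median(xs):
        """The only function other modules import; everything above contracts into it."""
        ys = _sorted_copy(_drop_missing(xs))
        n = len(ys)
        mid = n // 2
        return ys[mid] if n % 2 else (ys[mid - 1] + ys[mid]) / 2
    ```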

  8. Thanks for another interesting read, John.

    We are indeed pushing more and more complexity into our frameworks and libraries, but I think that in comparison to mathematics, we’re pushing in the wrong direction. Strong typing and algebraic data types seem more analogous to the definitions of mathematics than frameworks do. I suspect this is more than an analogy, but I don’t have the requisite math background to fully understand the relationship.

    Our obsession with frameworks seems more like creating an enormous book of theorems, from which, with enough patience, you might be able to cobble together a proof for your particular problem. Or, if not, you write your own and add it to the book.

    Thanks again for a thought-provoking read.

  9. Interesting post! It reminds me of how in physics it’s easier to give an elegant explanation to an expert who has lots of background concepts, while for the explanation to be understandable to a non-expert it often needs to be inelegant.

  10. I was linked to this post, so I am a bit short on context. Unfortunately, I am also not entirely comfortable commenting on mathematical proofs and definitions. However, when I look at your analogy to programming languages I run into a problem, as if there is something missing. In my head I think of the entropy between interfaces. If a programming language is block A and the program/code is block B, then there is probably (from my intuition of real-world systems) an interface with its own energetic characteristics. Maybe Scheme is minimalist and leads to ugly programs…but perhaps you can achieve a similar minimalism without creating such a rough interface.

    At the risk of realizing that I am missing part of (if not the entire) point, here is why I think this is important. Perhaps there _is_ a conservation of complexity somewhere in the system, but you need to be very careful about whether you are actually defining the correct bounds to observe it. Otherwise you risk making a blanket statement that is not necessarily productive and is hard to argue against.

    Perhaps this is just my knee-jerk train of thought (a.k.a. my defense mechanism) in response to quotes that harmonize with intuition.

    MBF
