Rich Hickey argued in a recent talk that simplicity is objective but easiness is subjective. Something is simple if it is singular: it does one thing, it is made of one thing, etc. Something is easy if it is close at hand, i.e. familiar.
I think this is a useful distinction, though simplicity is a little harder to pin down than the talk implies. Simplicity is relative and requires context. Rich Hickey’s context is programming languages, and in that context it may be fairly objective to say one construction is simpler than another because it does less.
For example, Hickey says one complication of Lisp is that it uses parentheses for function calls and for grouping. It would be simpler if one symbol did one thing. Mathematica does something like this. Parentheses are for grouping only. Function calls are delimited by square brackets. The square brackets are inconsistent with standard mathematical notation, so they’re not as easy (i.e. familiar), but they are simpler.
Mnemonics often complicate things to make them easier. For example, consider this mnemonic for pi:
How I want a drink, alcoholic of course, after the heavy lectures involving quantum mechanics.
This sentence is easier for most people to remember than 3.14159265358979. But the sentence is also more complex. A computer can represent the number in 8 bytes but the sentence takes 94 bytes of ASCII, more in Unicode.
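The size comparison can be checked directly. A quick sketch in Python; the 8-byte figure assumes the number is stored as an IEEE 754 double:

```python
import struct

# An IEEE 754 double stores these 15 significant digits of pi in 8 bytes.
pi = 3.14159265358979
print(len(struct.pack("d", pi)))  # 8

# The mnemonic sentence takes one byte per ASCII character.
sentence = ("How I want a drink, alcoholic of course, "
            "after the heavy lectures involving quantum mechanics.")
print(len(sentence.encode("ascii")))  # 94
```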
Sometimes complex is better than simple, at least in some context. It’s easier to memorize coherent sentences than numbers. But imagine if we got so excited by this mnemonic that we decided to represent all numbers by sentences. This would be amusing for a little while but would quickly become painful.
Some things are objectively simple but inhuman. Counting seconds since some event (e.g. Unix time) is much simpler than our system of keeping time with days, weeks, months, and years. But our human experience is profoundly influenced by the rotations and revolutions of our planet. Even weeks, which have no astronomical significance, seem to be aligned with human nature. So we keep our complex calendars while our computers count seconds.
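The two views of time sit side by side in most programming environments: one number for the machine, six calendar fields for the human. A sketch using Python’s standard library; the particular timestamp is arbitrary:

```python
import datetime

# The machine's view: a single count of seconds since 1970-01-01 00:00:00 UTC.
unix_seconds = 1_000_000_000

# The human view needs a year, month, day, hour, minute, and second.
dt = datetime.datetime.fromtimestamp(unix_seconds, tz=datetime.timezone.utc)
print(dt.isoformat())  # 2001-09-09T01:46:40+00:00
```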
I believe Hickey’s main point is that we need to reevaluate what we believe is simple. Maybe what we think is simple is complex but familiar. Maybe something new that is objectively simpler would become even easier once we’re used to it. (In particular, Hickey would like for us to try his programming language.) Once you practice thinking this way, you’ll see that many familiar things could be made simpler.
Related post: A little simplicity goes a long way
Reminds me of when I was in college — my freshman year I wrote a theme and variations based on pi and tonal interval enumerations. It actually was quite melodic: Mi-Do-Fa-Do-Sol-Re(+8va)-Re(orig. 8va)-La.
I thought about doing e next, but it just wasn’t as melodic.
I’m also reminded of a Guy Steele quote on our preference for brevity over what is actually “simpler” as far as the machine is concerned. It also makes one pause and realize that we often equate simplicity with brevity just as we associate easiness with brevity, so while simplicity may be objective, it is difficult to sort through the mess to get to the “objective” behind our native dispositions.
I’m also glad you commented on why Clojure has [] as well as (). I’ve only just looked at it recently and that seemed like a very “un-Lispy” thing to do.
Regarding simplicity and brevity, Hickey gives the example of hundreds of strings hanging straight down, all separate, versus three or four strings knotted together. It’s not the number of strings that makes the situation complex but the interactions between the strings.
Also, the McCabe complexity measure for software counts branches in a function: add up the number of times you see “for”, “if”, “while”, etc. By this measure, a 10-line function with nested logic is more complex than a 100-line function that simply executes top to bottom with no branches. I think that’s a pretty good measure of complexity. It approximates the amount of psychological effort to understand a function.
This was an incredible talk.
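The branch-counting idea can be sketched crudely. This keyword count is only an approximation of McCabe’s measure, which is properly defined on the control-flow graph; the keyword list and regex here are illustrative assumptions:

```python
import re

def rough_mccabe(source: str) -> int:
    """Crude cyclomatic-complexity estimate: 1 plus the number of
    branching keywords found. Real tools build a control-flow graph."""
    branches = re.findall(r"\b(if|elif|for|while|and|or|except)\b", source)
    return 1 + len(branches)

# A straight-line function versus one with nested branches.
straight = "x = 1\ny = 2\nz = x + y\n"
nested = "if a:\n    for i in r:\n        if b:\n            pass\n"
print(rough_mccabe(straight))  # 1
print(rough_mccabe(nested))    # 4
```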
Also, it seems like what you’re touching on here is what’s known as Kolmogorov complexity: a measure of the computational resources needed to represent something.
One of the conclusions is that the Kolmogorov complexity of any string cannot be more than a few bytes larger than the length of the string itself. That is interesting, because it gives a maximum complexity for any string in any context. The tricky part is finding a decent context that can represent all of your domain.
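The “few bytes larger” bound comes from the fact that a program which simply prints the string verbatim is only a constant amount longer than the string itself. A sketch in Python; the exact overhead depends on the language chosen as context:

```python
# Any string s can be produced by the program print('s') (with escaping),
# so its complexity exceeds len(s) by at most a small constant.
s = "3.14159265358979"
program = f"print({s!r})"
print(program)                 # print('3.14159265358979')
print(len(program) - len(s))   # constant overhead: 9 characters here
```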
Perhaps this is where Clojure (and other lisp-like systems) come into play. You can easily (simply) use a domain-specific sublanguage to represent things, instead of trying to shoehorn them into a more general programming language.
Subjectivity or not, it’s a very useful distinction and can help you rethink your tools. Rather, it can help fight “when all you have is a hammer….”
I’ve been having trouble remembering the value of Pi in my physics classes, and every morning I wake up feeling terrible. So my question is:
Can I have a quick, vegetable if edible, means for cured ailments diurnally brought oenically?
@John
But at the same time, what might appear simple may very well be a well-designed API. I’ve personally been in at least one situation where I thought I was choosing a simpler solution, when, in fact, the library I was using contained a Gordian knot.
My biggest concern about McCabe complexity is that I feel that it is unnecessarily biased against loops. Judicious use of looping constructs can actually dramatically increase readability, comprehensibility, and maintainability of code.
Just as an aside : 94 bytes in ASCII corresponds to 94 bytes in unicode, at least with the most common UTF-8 encoding, which is ASCII compatible.
Actually, the sentence takes just as much space in Unicode if you use a sensible transformation format, like UTF-8.
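This is easy to verify; the check below relies only on UTF-8 encoding every ASCII character as the same single byte:

```python
sentence = ("How I want a drink, alcoholic of course, "
            "after the heavy lectures involving quantum mechanics.")
ascii_bytes = sentence.encode("ascii")
utf8_bytes = sentence.encode("utf-8")

# UTF-8 is ASCII-compatible, so the two encodings are byte-for-byte identical.
print(ascii_bytes == utf8_bytes)  # True
print(len(utf8_bytes))            # 94
```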