The AirConf events will be broadcast via G+ hangouts.
Michael Fogus posted on Twitter this morning:
Computing: the only industry that becomes less mature as more time passes.
The immaturity of computing is used to excuse every ignorance. There’s an enormous body of existing wisdom but we don’t care.
I don’t know whether computing is becoming less mature, though it may very well be on average, even if individual developers become more mature.
One reason is that computing is a growing profession, so people are entering the field faster than they are leaving. That lowers average maturity.
Another reason is chronological snobbery, alluded to in Fogus’s second tweet. Chronological snobbery is pervasive in contemporary culture, but especially in computing. Tremendous hardware advances give the illusion that software development has advanced more than it has. What could I possibly learn from someone who programmed back when computers were 100x slower? Maybe a lot.
The classical education model is based on the trivium of grammar, logic, and rhetoric. See, for example, Dorothy Sayers’ essay The Lost Tools of Learning.
The grammar stage of the trivium could be literal language grammar, but it also applies more generally to absorbing the basics of any subject and often involves rote learning.
The logic stage is more analytic, examining the relationships between the pieces gathered in the grammar stage. Students learn to construct sound arguments.
The rhetoric stage is focused on eloquent and persuasive expression. It is more outwardly focused than the previous stages, more considerate of others. Students learn to create arguments that are not only logically correct, but also memorable, enjoyable, and effective.
It would be interesting to see a classical approach to teaching programming. Programmers often don’t get past the logic stage, writing code that works (as far as they can tell). The rhetoric stage would train programmers to look for solutions that are not just probably correct, but so clear that they are persuasively correct. The goal would be to write code that is testable, maintainable, and even occasionally eloquent.
Parthenon replica in Nashville, TN.
This morning Aycan Gulez shared on Twitter this quote from Peopleware:
For the majority of the bankrupt projects we studied, there was not a single technological issue to explain the failure.
Gerald Weinberg said something similar in his Second Law of Consulting:
No matter how it looks at first, it’s always a people problem.
The other day Travis Oliphant pointed out an interesting paper: A Comparison of Programming Languages in Economics. The paper benchmarks several programming languages on a computational problem in economics.
All the usual disclaimers about benchmarks apply, your mileage may vary, etc. See the paper for details.
Here I give my summary of their summary of their results. The authors ran separate benchmarks on Mac and Windows. The results were qualitatively the same, so I just report the Windows results here.
Times in the table below are relative to the fastest C++ run.
| Language                  | Time relative to C++ |
|---------------------------|----------------------|
| Python with Numba         | 1.57                 |
| R using compiler package  | 243.38               |
The most striking result is that the authors were able to run their Python code 100x faster, achieving performance comparable to C++, by using Numba.
Comment by Simon Peyton Jones in an interview:
People often dislike static type systems because they’ve only met weak ones. A weak or not very expressive type system gets in your way all the time. It prevents you from writing functions you want to write that you know are fine. … The solution is not to abandon the type system but to make the type system more expressive.
In particular, he mentions Haskell’s polymorphic types and type inference as ways to make strong static typing convenient to use.
Look-behind is one of those advanced/obscure regular expression features that I don’t use frequently enough to remember the syntax, but just frequently enough that I wish I could remember it.
Look-behind can be positive or negative. Positive look-behind says “match this position only if the preceding text matches the following pattern”; negative look-behind says “match only if the preceding text does not match it.”
The syntax in Perl and similar regular expression implementations is `(?<= … )` for positive look-behind and `(?<! … )` for negative look-behind. For the longest time I couldn’t remember whether the next symbol after the `?` was the direction (i.e. `<` for behind) or the polarity (`=` for positive, `!` for negative). I was more likely to guess wrong unless I’d used the syntax recently.
The reason I was tempted to get these wrong is that I thought “positive look-behind” and “negative look-behind.” That’s how these patterns are described. But this means the words and symbols come in a different order. If you think look-behind positive and look-behind negative, then the words and the symbols come in the same order: `(?<=` is look-behind positive and `(?<!` is look-behind negative.
Maybe this syntax comes more naturally to people who speak French and other languages where adjectives follow the thing they describe. English word order was tripping me up.
By the way, the syntax for look-ahead patterns is simpler: just leave out the `<`. The default direction for look-around patterns is forward. You don’t have to remember whether the symbol for direction or polarity comes first because there is no symbol for direction.
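To make the syntax concrete, here is a short Python session; the strings `foobar` and `bazbar` are just made-up examples.

```python
import re

# Positive look-behind: match "bar" only when it is preceded by "foo".
print(re.findall(r"(?<=foo)bar", "foobar bazbar"))  # ['bar']

# Negative look-behind: match "bar" only when it is NOT preceded by "foo".
print(re.findall(r"(?<!foo)bar", "foobar bazbar"))  # ['bar']

# Look-ahead drops the '<': (?=...) is positive, (?!...) is negative.
print(re.findall(r"foo(?=bar)", "foobar foobaz"))  # ['foo']
```

Note that each `findall` above returns exactly one match: the look-around assertion filters out the other candidate without consuming any characters itself.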
In the context of programming languages, “magic” is often a pejorative term for code that does something other than what it appears to do.
Programmers seem to have a love/hate relationship with magic. Even people who say they don’t like magic (e.g. because it’s hard to debug) end up using it. The Haskell community prides itself on having a transparent language with no magic, and yet monads are slightly magical. The whole purpose of a monad is to hide explicit data flow, though in a principled way. Haskell’s `do` notation is more magical, and templates are more magical still. (However, I do hear some Haskellers express disdain for templates.)
People who like magic tend to use the word “automagic” instead. It means about the same thing as “magic” but with a positive connotation.
To conclude with a couple of sweeping generalizations: magic fans tend to be tool-oriented (such as Microsoft developers), while magic detractors tend to be language-oriented (such as Haskell developers).
Update: Someone asked me on Twitter about the difference between abstraction and magic. I’d say abstraction hides details, but magic is actively misleading or ironic.
From Leslie Lamport:
Every time code is patched, it becomes a little uglier, harder to understand, harder to maintain, bugs get introduced.
If you don’t start with a spec, every piece of code you write is a patch.
Which means the program starts out from Day One being ugly, hard to understand, and hard to maintain.
There’s an old joke from Henny Youngman:
I told the doctor I broke my leg in two places. He told me to quit going to those places.
Sometimes tech choices are that easy: if something is too hard, stop doing it. A great deal of pain comes from using a tool outside its intended use, and often that’s avoidable.
For example, when regular expressions get too hard, I stop using regular expressions and write a little procedural code. Or when Python is too slow, I try some simple ways of speeding it up, and if that’s not good enough I switch from Python to C++. If something is too hard to do in Windows, I’ll do it in Linux, and vice versa.
Sometimes there’s not a better tool available and you just have to slog through with what you have. And sometimes you don’t have the freedom to use a better tool even though one is available. But a lot of technical pain is self-imposed. If you keep breaking your leg somewhere, stop going there.
Here’s a nice quip from Luke Gorrie on Twitter:
Monads are hard because there are so many bad monad tutorials getting in the way of finally finding Wadler’s nice paper.
Here’s the paper by Philip Wadler that I expect Luke Gorrie had in mind: Monads for functional programming.
Here’s the key line from Wadler’s paper:
Pure functional languages have this advantage: all flow of data is made explicit. And this disadvantage: sometimes it is painfully explicit.
That’s the problem monads solve: they let you leave implicit some of the repetitive code otherwise required by functional programming. That simple but critical point is left out of many monad tutorials.
Dan Piponi wrote a blog post You Could Have Invented Monads! (And Maybe You Already Have) very much in the spirit of Wadler’s paper. He starts with the example of adding logging to a set of functions. You could have every function return not just the return value that you’re interested in but also the updated state of the log. This quickly gets messy. For example, suppose your basic math functions write to an error log if you call them with illegal arguments. That means your square root function, for example, has to take as input not just a real number but also the state of the error log before it was called. Monads give you a way of effectively doing the same thing, but conceptually separating the code that takes square roots from the code that maintains the error log.
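Piponi’s logging example can be sketched in a few lines of Python. (The names `bind`, `unit`, `double`, and `increment` are my own illustrative choices, not from his post.) Each function returns a `(value, log)` pair, and `bind` threads the log through so that callers don’t have to.

```python
def bind(pair, f):
    """Apply f to the value in a (value, log) pair, appending f's log."""
    value, log = pair
    new_value, new_log = f(value)
    return new_value, log + new_log

def unit(x):
    """Wrap a plain value with an empty log."""
    return x, []

def double(x):
    return 2 * x, ["doubled %d" % x]

def increment(x):
    return x + 1, ["incremented %d" % x]

result = bind(bind(unit(3), double), increment)
print(result)  # (7, ['doubled 3', 'incremented 6'])
```

The arithmetic functions never touch the log of previous calls; the plumbing lives entirely in `bind`. That separation of concerns is the conceptual point of the monad.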
As for “so many bad monad tutorials,” see Brent Yorgey on the monad tutorial fallacy.
By the way, this post is not Yet Another Monad Tutorial. It’s simply an advertisement for the tutorials by Philip Wadler and Dan Piponi.
I found this line from Software Foundations amusing:
… we can ask Coq to “extract,” from a Definition, a program in some other, more conventional, programming language (OCaml, Scheme, or Haskell) with a high-performance compiler.
Most programmers would hardly consider OCaml, Scheme, or Haskell “conventional” programming languages, but they are conventional relative to Coq. As the authors said, these languages are “more conventional,” not “conventional.”
I don’t mean to imply anything negative about OCaml, Scheme, or Haskell. They have their strengths — I briefly mentioned the advantages of Haskell just yesterday — but they’re odd birds from the perspective of the large majority of programmers who work in C-like languages.
I’m reading Real World Haskell because one of my clients’ projects is written in Haskell. Some would say that “real world Haskell” is an oxymoron because Haskell isn’t used in the real world, as illustrated by a recent xkcd cartoon.
It’s true that Haskell accounts for a tiny portion of the world’s commercial software and that the language is more popular in research. (There would be no need to put “real world” in the title of a book on PHP, for example. You won’t find a lot of computer science researchers using PHP for its elegance and nice theoretical properties.) But people do use Haskell on real projects, particularly when correctness is a high priority. In any case, Haskell is “real world” for me since one of my clients uses it. As I wrote about before, applied is in the eye of the client.
I’m not that far into Real World Haskell yet, but so far it’s just what I was looking for. Another book I’d recommend is Graham Hutton’s Programming in Haskell. It makes a good introduction to Haskell because it’s small (184 pages) and focused on the core of the language, not so much on “real world” complications.
A very popular introduction to Haskell is Learn You a Haskell for Great Good. I have mixed feelings about that one. It explains most things clearly and the informal tone makes it easy to read, but the humor becomes annoying after a while. It also introduces some non-essential features of the language up front that could wait until later or be left out of an introductory book.
Everyone would say that it’s important for their software to be correct. But in practice, correctness isn’t always the highest priority, nor should it necessarily be. As the probability of error approaches zero, the cost of development approaches infinity. You have to decide what probability of error is acceptable given the consequences of the errors.
It’s more important that the software embedded in a pacemaker be correct than the software that serves up this blog. My blog fails occasionally, but I wouldn’t spend $10,000 to cut the error rate in half. Someone writing pacemaker software would jump at the chance to reduce the probability of error so much for so little money.
On a related note, see Maybe NASA could use some buggy software.