A sort of opposite of Parkinson’s Law

Parkinson’s Law says that work expands to fill the time allowed. I’ve seen that play out over and over. However, I’ve also seen a sort of opposite of Parkinson’s Law: sometimes work gets done faster when you have more time for it.

Sometimes when I’ve planned a large block of uninterrupted time, say by going into the office when hardly anyone else is there, I get through my to-do list in the first hour of the day. Knowing that I have plenty of time, I think more clearly and end up not needing the extra time. When that happens, I sometimes think “If I’d known this would just take 30 minutes to solve, I would have done it sooner.” But it’s not that simple. Just because it took 30 minutes on a good day doesn’t mean that it could have been done during just any 30-minute time slot earlier.

In his book Symmetry and the Monster, Mark Ronan shares a story along these lines. Ronan tells how John Conway worked on a famous mathematical problem. Conway and his wife agreed that he would carve out Saturdays from noon to midnight and Wednesdays from 6 PM to midnight for working on this challenge. He started one Saturday and cracked the problem that evening. Perhaps Conway would have been able to solve his problem by working on it an hour at a time here and there. But it seems reasonable that having a large block of time, and knowing that other large blocks were scheduled, helped Conway think more clearly.

My guess is that Parkinson’s Law applies best to projects involving several people and to one-person projects that are not well defined. But for well-defined projects, especially projects requiring creative problem solving, having more time might lead to not needing so much time.

Opening black boxes

Rookie programmers don’t know how to reuse code. They write too much original code because they either don’t know about libraries or they don’t know how to use them. And if they do reuse someone else’s code, they copy and paste it, creating maintenance problems.

The next step in professional development is learning to reuse code. Encapsulation! Black boxes! Buy, don’t build! etc.

But this emphasis on reuse and black boxes can go too far. We can be intimidated by these black boxes and afraid to open them. We can come to believe the black boxes were created by superior beings. We can spend more time inferring the behavior of the black boxes than it would take to open them up or rewrite them. Then we pile leaky abstraction on top of leaky abstraction when we treat our own code as black boxes.

Joe Armstrong said in Coders at Work:

Over the years I’ve kind of made a generic mistake … to not open the black box. … It’s worthwhile seeing if the direct route is quicker than the packaged route.

Several of the programmers interviewed in the book made similar remarks. They attribute part of their success to being unafraid of black boxes. They gained experience and confidence by taking things apart to see how they work.

Donald Knuth once said in an interview:

I also must confess to a strong bias against the fashion for reusable code. To me, “re-editable code” is much, much better than an untouchable black box or toolkit. I could go on and on about this. … you’ll never convince me that reusable code isn’t mostly a menace.

Knuth returns to this theme in Coders at Work.

There’s this overemphasis on reusable software where you never get to open up the box … It’s nice to have these black boxes but, almost always, if you can look inside the box you can improve it …

Well, Knuth can almost always improve any code he finds. Less talented programmers need to be more humble. But too often programmers who are talented enough to make improvements are reluctant to do so. As Yeats said in his poem The Second Coming,

The best lack all conviction, while the worst are full of passionate intensity.

In any discussion of opening black boxes, someone will bring up the analogy of cars: Not everyone needs to know how a car works inside. I would agree that drivers no longer need to understand how a car works, but automotive engineers do. The problem isn’t users who don’t understand how software works, it’s software developers who don’t understand how software works.

Of course software libraries are extremely valuable. Knuth goes too far when he says reusable code is usually a menace. But I see a disturbing lack of curiosity among programmers. They are far too willing to use code they don’t understand.

Related post: Reusable code versus re-editable code

The opening chord of “A Hard Day’s Night”

The opening chord of the Beatles song “A Hard Day’s Night” has been something of a mystery. Guitarists have tried to reproduce the chord with limited success. Turns out there’s a good reason why they haven’t figured it out: the chord cannot be played on a guitar alone.

Jason Brown has digitally analyzed the chord using Fourier analysis and determined that there must have been a piano in the recording studio playing along with the guitars. He has also worked out which notes each member of the Beatles was playing.
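
If you want to experiment with this kind of analysis yourself, here is a minimal sketch of the basic step, assuming a short clip of the chord saved as chord.wav (a file name I’m making up) and the NumPy/SciPy stack. It is not Brown’s actual analysis; it just lists the strongest frequency components, which you could then try to match to notes and instruments.

    # A rough illustration, not Jason Brown's analysis: list the dominant
    # frequencies in a short clip of the chord so they can be matched to notes.
    import numpy as np
    from scipy.io import wavfile   # assumes SciPy is installed

    rate, data = wavfile.read("chord.wav")   # hypothetical clip of the chord
    if data.ndim > 1:                        # mix stereo down to mono
        data = data.mean(axis=1)

    spectrum = np.abs(np.fft.rfft(data))
    freqs = np.fft.rfftfreq(len(data), d=1.0 / rate)

    # Print the ten strongest components between 60 Hz and 2 kHz.
    band = (freqs > 60) & (freqs < 2000)
    strongest = np.argsort(spectrum[band])[-10:]
    for f, a in sorted(zip(freqs[band][strongest], spectrum[band][strongest])):
        print(f"{f:8.1f} Hz   relative amplitude {a:.0f}")

The hard part, of course, is deciding which combination of instruments could have produced those peaks; that is where Brown’s analysis comes in.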

I heard Jason Brown’s story on the Mathematical Moments podcast. In addition to the chord discussed above, Brown talks about other things he has discovered about the Beatles and about the relationship between music and math in general. Unfortunately, Mathematical Moments does not make it easy to link to individual episodes. Here is a link to a PDF file of show notes with the audio embedded. The file is slow to download, and your PDF viewer may not support the embedded audio. Here’s a link directly to just the MP3 audio file.

The Mathematical Moments podcast also does not make it obvious that you can subscribe to the podcast; they only provide links to individual episodes with fat PDF files. However, you can subscribe by using the URL http://www.ams.org/rss/mathmoments.rss.

Update: Here’s a paper that goes into some of the details.

Maybe NASA could use some buggy software

In Coders at Work, Peter Norvig quotes NASA administrator Dan Goldin saying:

We’ve got to do the better, faster, cheaper. These space missions cost too much. It’d be better to run more missions and some of them would fail but overall we’d still get more done for the same amount of money.

NASA has extremely rigorous processes for writing software. They supposedly develop bug-free code; I doubt that’s true, though I’m sure they do have exceptionally low bug rates. But this quality comes at a high price. Rumor has it that space shuttle software costs $1,500 per line to develop. When asked about the price tag, Norvig said “I don’t know if it’s optimal. I think they might be better off with buggy software.” At some point it’s certainly not optimal. If it doubles the price of a project to increase your probability of a successful mission from 98% to 99%, it’s not worth it; you’re better off running two missions with a 98% chance of success each.
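
To make the arithmetic in that last sentence explicit, here is the expected-value comparison. The numbers are the illustrative ones above, not figures from the book.

    # Illustrative numbers only: a fixed budget buys either one 99%-reliable
    # mission at double cost or two 98%-reliable missions at base cost.
    p_cheap, p_expensive = 0.98, 0.99

    expected_one_expensive = 1 * p_expensive   # 0.99 expected successful missions
    expected_two_cheap = 2 * p_cheap           # 1.96 expected successful missions

    print(expected_one_expensive, expected_two_cheap)

For the same money, the cheaper and slightly riskier approach delivers nearly twice the expected number of successful missions.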

Few people understand that software quality is all about probabilities of errors. Most people think the question is whether you’d rather have bug-free software or buggy software. I’d rather have bug-free software, thank you. But bug-free is not an option. Nothing humans do is perfect. All we can do is lower the probabilities of bugs. But as the probability of bugs goes to zero, the development costs go to infinity. (Actually it’s not all about probabilities of errors. It’s also about the consequences of errors. Sending back a photo with a few incorrect pixels is not the same as crashing a probe.)

Norvig’s comment makes sense regarding unmanned missions. But what about manned missions? Since one of the possible consequences of error is loss of life, the stakes are obviously higher. But does demanding flawless software increase the probability of a safe mission? One of the consequences of demanding extremely high quality software is that some tasks are too expensive to automate and so humans have to be trained to do those tasks. But astronauts make mistakes just as programmers do. If software has a smaller probability of error than an astronaut would have for a given task, it would be safer to rely on the software.

Related post: Software in space

Finding embarrassing and unhelpful error messages

Every time your software displays an error message, you risk losing credibility with your users. If the message is grammatically incorrect, your credibility definitely goes down a notch. And if the message is unhelpful, your credibility goes down at least one more notch. The same can be said for any message, but error messages are particularly important for three reasons.

  1. Users are in a bad mood when they see error messages; this is not the time to make things worse.
  2. Programmers are sloppy with error messages because they’re almost certain the messages will never be displayed.
  3. Error conditions are unusual by their very nature, and so it’s practically impossible to discover them all by black-box testing.

The best way to find error messages is to search the source code for text potentially displayed to users. I’ve advocated this practice for years and usually I encounter indifference or resistance. And yet nearly every time I extract the user-visible text from a software project I find dozens of spelling errors, grammar errors, and incomprehensible messages.

Last year I wrote an article for CodeProject on this topic and provided a script to strip text strings from source code. See PowerShell Script for Reviewing Text Shown to Users. The script looks for all quoted text and text inside XML entities. Then it tries to filter out text strings that are not displayed to users. For example, a string with no spaces is more likely to be a file name or some other code fragment than a user message. The script produces a report that a copy editor can then review. In addition to checking spelling and grammar, an editor can judge whether a message would be comprehensible and useful.

I admit that the parsing in the script is crude. It could miss some strings, and it could filter out some strings that it should keep. But the script is very simple, less than 100 lines. And it works on multiple source code types: C++, C#, XML, VB, etc. Writing a sophisticated parser for each of those languages would be a tremendous amount of work, but a quick-and-dirty script may be 98% as effective. Since most projects review 0% of their source code text, reviewing 98% is an improvement.
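
The CodeProject script is written in PowerShell. As a rough sketch of the same idea (not the script from the article), here is what the core might look like in Python; the regular expression and the “keep it only if it contains a space” heuristic are simplifications I’m assuming for illustration.

    # A crude string extractor in the spirit described above, not the
    # PowerShell script from the article: pull out double-quoted literals and
    # keep only those that look like user-visible text.
    import re
    import sys

    QUOTED = re.compile(r'"(?:[^"\\]|\\.)*"')   # double-quoted literals, deliberately crude

    def candidate_messages(path):
        with open(path, encoding="utf-8", errors="ignore") as f:
            for match in QUOTED.finditer(f.read()):
                text = match.group(0)[1:-1]     # drop the surrounding quotes
                if " " in text:                 # skip file names, format codes, etc.
                    yield text

    if __name__ == "__main__":
        for source_file in sys.argv[1:]:
            for message in candidate_messages(source_file):
                print(message)

Pointed at a handful of source files, it prints a rough list of candidate messages that a copy editor could then review.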

In addition to improving the text that a user sees, a text review gives some insight into a program’s structure. For example, if messages are repeated in multiple files, most likely the code has a lot of “clipboard inheritance,” code copied and pasted rather than isolated into reusable functions. A text review could also determine whether a program is concatenating strings to build SQL statements rather than calling stored procedures, possibly indicating a security vulnerability.

Mathematical genealogy

The Mathematics Genealogy Project keeps track of mathematics PhD students and their advisors. I was surprised to find that such information has been preserved for hundreds of years. I was able to trace my mathematical lineage back to Marin Mersenne (1588–1648) of Mersenne prime fame.

I did my PhD under Ralph Showalter, who studied under Tsuan Wu Ting, and so on back to Siméon Poisson (1781–1840).

Then things start to become more complicated. Poisson had two advisors: Joseph Louis Lagrange and Pierre-Simon Laplace. Lagrange also had two advisors: Leonhard Euler and Johann Bernoulli. Etc. One line goes back to Mersenne. Another line goes back to Demetrios Kydones (1324–1397).

Update: Thanks to Frederik Hermans for creating a graph by crawling the Mathematics Genealogy Project site and using Graphviz. It’s too big to view as an ordinary image; the graph gets very bushy in the 16th century. Here’s a PDF version that lets you zoom in and out to see the whole thing.

I was surprised to see Erasmus on the graph. I didn’t run across him when I was just clicking around the website.
