The most subtle of the seven deadly sins

Six of the seven deadly sins are easy to define, but one is more subtle. The seven deadly sins are

  1. lust
  2. gluttony
  3. greed
  4. sloth
  5. wrath
  6. envy
  7. pride.

Sloth is the subtle one.

I discovered recently that I didn’t know what sloth meant. When I first heard of the seven deadly sins, I thought it was odd that sloth was on the list. How would you know whether you’re sufficiently active to avoid sloth? It turns out that the original idea of sloth was only indirectly related to activity.

The idea of a list of deadly sins started in the 4th century and has changed over time. The word in the middle of the list was “acedia” before it became “sloth,” and the word “sloth” has taken on a different meaning since then. So what is acedia? According to Wikipedia,

Acedia is a word from ancient Greek describing a state of listlessness or torpor, of not caring or not being concerned with one’s position or condition in the world. It can lead to a state of being unable to perform one’s duties in life. Its spiritual overtones make it related to but distinct from depression.

In short, “sloth” did not mean inactivity but rather a state of apathy. As Os Guinness says in his book The Call:

… sloth must be distinguished from idling, a state of carefree living that can be admirable, as in friends lingering over a meal … [Sloth] can reveal itself in frenetic activism as easily as in lethargy … It is a condition of explicitly spiritual dejection … inner despair at the worthwhileness of the worthwhile …

Sloth and rest could look the same externally while proceeding from opposite motivations. One person could be idle because he lacked the faith to do anything, while another person could be idle because he had faith that his needs would be met even if he rested a while. The key to avoiding sloth is not the proper level of activity but the proper attitude of the heart.

Questioning the Hawthorne effect

The Hawthorne effect is the idea that people perform better when they’re being studied. The name comes from studies conducted at Western Electric’s Hawthorne Works facility. Increasing the lighting in the plant improved productivity. Later, lowering the lighting also improved productivity. The Hawthorne effect says that the productivity gains weren’t due to the lighting changes per se but to either the novelty of changing something about the plant or the attention the workers got from being measured, a sort of placebo effect.

The Alternative Blog has a post this morning entitled Hawthorne effect debunked. The original Hawthorne effect was apparently due to a flaw in the study design; correcting for that flaw eliminates the effect.

The term “debunked” in the post title may imply too much. The effect in the original studies may have been debunked, but that does not necessarily mean there is no Hawthorne effect. Perhaps there are good examples of the Hawthorne effect elsewhere. On the other hand, I expect closer examination of the data could debunk other reported instances of the Hawthorne effect as well.

The Hawthorne effect makes sense. It has been ingrained in pop culture. I heard a reference to it on a podcast just this morning before reading the blog post mentioned above. Everyone knows it’s true. And maybe it is. But at a minimum, there is at least one example suggesting the effect is not as widespread as previously thought.

It would be interesting to track the popularity of the Hawthorne effect in scholarly literature and in pop culture. If the effect becomes less credible in scholarly circles, will it also become less credible in pop culture? And if so, how quickly will pop culture respond?

The Unix Programming Environment

Joel Spolsky recommends a short list of books to self-taught programmers who apply to his company and need to fill in some gaps in their training.

The one that has me scratching my head is The Unix Programming Environment, first published in 1984. After listening to Joel’s podcast, I thumbed through my old copy of the book and thought “Man, I could never work like this.” Of course I could work like that, because I did, back around 1990. But the world has really changed since then.

I appreciate history and old books. I see the value in learning things you might not directly apply. But imagine telling twentysomething applicants to go read an operating system book that was written before they were born. Most would probably think you’re insane.

Update (16 November 2010): On second thought, I could see recommending that someone read The Unix Programming Environment these days even though technology has changed so much, but I’d still expect resistance.

Upcoming Y2K-like problems

The world’s computer systems kept working on January 1, 2000 thanks to billions of dollars spent on fixing old software. Two wrong conclusions to draw from Y2K are

  1. The programmers responsible for Y2K bugs were losers.
  2. That’s all behind us now.

The programmers who wrote the Y2K bugs were highly successful: their software lasted longer than anyone imagined it would. The two-digit dates were only a problem because their software was still in use decades later. (OK, some programmers were still writing Y2K bugs as late as 1999, but I’m thinking about COBOL programmers from the 1970’s.)

Y2K may be behind us, but we will be facing Y2K-like problems for years to come. Twitter just faced a Y2K-like problem last night, the so-called Twitpocalypse. Twitter messages were indexed with a signed 32-bit integer, which means the original software was implicitly designed with a limit of around two billion messages. Like the COBOL programmers mentioned above, Twitter was more successful than anticipated. Twitter fixed the problem without any disruption, except that some third-party Twitter clients need to be updated.
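
To see where the two billion figure comes from: a signed 32-bit integer spends one bit on the sign, leaving 31 bits for the value. A quick check in a shell (bash arithmetic, purely as an illustration):

echo $(( 2**31 - 1 ))   # largest value a signed 32-bit integer can hold
# prints 2147483647, a little over two billion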

We are running out of Internet addresses because these addresses also use 32-bit integers, which allow only about 4.3 billion distinct values. To make matters worse, an Internet address has an internal structure that greatly reduces the number of usable 32-bit addresses. IPv6 will fix this by using 128-bit addresses.

The US will run out of 10-digit phone numbers at some point, especially since not all 10-digit combinations are possible phone numbers. For example, the first three digits are a geographical area code. One area code can run out of 7-digit numbers while another has numbers left over.

At some point the US will run out of 9-digit social security numbers.

The original Unix systems counted time as the number of seconds since January 1, 1970, stored in a signed 32-bit integer. On January 19, 2038, that count will exceed the capacity of such an integer. On systems that still store time this way, the counter will wrap around to a negative number, throwing the date back to 1901 or causing some other misbehavior, depending on how the overflow is handled. This is more insidious than the Y2K problem because there are many software date representations in common use, including the old Unix method. Some (parts of) software will have problems in 2038 while others will not, depending on the whim of the programmer who picked a way to represent dates.
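
If you want to see the exact rollover moment, the date command can convert a count of seconds into a calendar date. A minimal sketch, assuming GNU date (BSD and macOS date use -r instead of -d):

date -u -d @2147483647   # 2^31 - 1 seconds after January 1, 1970
# prints a time on Tuesday, January 19, 2038 (03:14:07 UTC)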

There will always be Y2K-like problems. Computers are finite. Programmers have to guess at limitations for data. Sometimes these limitations are implicit, and so we can pretend they are not there, but they are. Sometimes programmers guess wrong because their software succeeds beyond their expectations.

Timed exams

I ran across a blog post this morning that makes some excellent points about timed exams. Here are three points from Jon Dron’s blog post What exams have taught me:

  • that slow, steady, careful work is not worth the hassle — a bit of cramming (typically one-three days seemed to work for me) in a mad rush just before the event works much more effectively and saves a lot of time
  • the corollary — adrenalin is necessary to achieve anything worth achieving
  • that the most important things in life generally take around three hours to complete

As Marshall McLuhan said, the medium is the message. That is, the context of a message may speak louder than its content. Still, I’d like to defend timed exams in a limited context. You need to have quick recall of some facts. There are some skills you need to practice to the point that they are second nature. Not because these things are ultimately important but so you don’t have to think about them and can move on to other things.

Joel Spolsky gave an example along these lines in his recent podcast. He said that Serge Lang once began a calculus class with an algebra quiz, one expression to simplify. Thirty seconds into the quiz, he made everyone stop and turn in their work. At the end of the year, he compared the final grades to the grades on his algebra quiz. The students who got A’s in freshman calculus were almost exactly the same students who were able to simplify the algebra expression quickly. (The story begins around 8:12 in the audio file. It’s also on the transcript wiki.)

There are a couple ways to interpret this anecdote. One is that Lang’s exams measured quick reaction time and that students who were able to do algebra quickly were also able to do calculus quickly and thus succeed on Lang’s exams. There may be some truth to that. But I think more fundamentally, those who had mastered algebra were able to pay attention to the new material. Because algebra was second nature to these students, they could think about calculus.

I agree that typical hour-long exams are artificial and create some perverse incentives. I see a place for leisurely evaluation: take-home exams, projects, portfolios, etc. But I also see a place for timed evaluation, even quiz-show-like rapid recall, though such evaluation need not factor into assigning grades. I think Jon Dron’s criticism is that timed exams are usually not created deliberately. I don’t think he would necessarily find fault with someone explicitly identifying a list of fundamental skills and explaining that these need to be performed quickly. I believe his criticism is that everything is evaluated in a rush by default.

Thanks to Daniel Lemire for pointing out Jon Dron’s post. Read Daniel’s commentary here.


Create offline, analyze online

Sitting at a computer changes the way you think. You need to know when to walk away from the computer and when to come back.

I think mind mapping software is a bad idea. Mind maps are supposed to capture free associations. But the very act of sitting down at a computer puts you in an analytical frame of mind. In other words, mind mapping is a right-brain activity, but sitting at a computer encourages left-brain thinking. Mind mapping software might be a good way to digitize a map after you’ve created it on paper, but I don’t think it’s a good way to create a map.

When I need to sort out projects and priorities, I do it on paper. After that I may type up the results. I like to capture ideas on paper or on my voice recorder but then store them online.

When I do math, I scribble on paper, then type up my results in LaTeX. Scribbling helps me generate ideas; LaTeX helps me find errors. I’ve found that fairly short cycles of scribbling and typing work best for me, a few cycles a day.

In the past, we did a lot of things on paper because we had no choice. Today we do a lot of things on computers just because we can. It’s going to take a while to sift through the new options and decide which ones are worthwhile and which are not.

Recommended books

Daniel Pink’s book A Whole New Mind has a good discussion of left-brain versus right-brain thinking. As he points out, the specialization between the left and right hemispheres of the brain is more complicated than once thought. However, the terms “left-brain” and “right-brain” are still useful metaphors even if they’re not precise neuroscience.

Also, to read more on how computers influence our thinking, see Andy Hunt’s book Pragmatic Thinking and Learning.


A couple thoughts on typography

Font embedding not such a good idea?

The most recent Boag World podcast interviewed Mark Boulton. Boulton has a contrarian opinion on font embedding. Nearly all web designers are excited about font embedding (the ability to have fonts download on the fly if a page uses a font not installed on the user’s computer). Boulton’s not so sure this is a good idea. Fonts are designed for a purpose, and most fonts were designed for print. The handful of fonts that were designed first for online viewing (Verdana, Georgia, etc.) are widely installed. If font embedding were a way to broaden the palette of fonts designed for use on a computer monitor, that would be great. But the most likely use of font embedding would be to let designers use more fonts online that were never designed to be viewed online.

Comic Sans and dyslexia

Comic Sans is terribly overused. It’s not a bad font, but it’s often used in inappropriate contexts and has become a cliché for poor typographical taste.

However, I heard somewhere that people with dyslexia can read Comic Sans more easily than most other fonts. I think the explanation was that the font breaks some typical symmetries. For example, a “p” is not an exact mirror image of a “q.” (The former has a more pronounced serif on top.) On the other hand, the “b” and “d” do look like near mirror images. I wonder whether anyone has designed a font specifically to help people with dyslexia. Maybe such fonts would exaggerate the asymmetries that were accidental in the design of Comic Sans. Delivering such fonts would be a good application of font embedding.

Update: Karl Ove Hufthammer left a comment pointing out Andika, a font with “easy-to-perceive letterforms that will not be readily confused with one another.”


Comparing the Unix and PowerShell pipelines

This is a blog post I’ve intended to write for some time now. I intended to come up with a great example, but I’ve decided to go ahead and publish it and let you come up with your own examples. Please share your examples in the comments.

One of the great strengths of Unix is the shell pipeline. Unix has thousands of little utilities that can be strung together via a pipeline. The output of one program can be the input to another. But in practice, things don’t go quite so smoothly. Suppose the conceptual pattern is

A | B | C

meaning the output of A goes to B, and the output of B goes to C. This is actually implemented as

A | <grubby text munging> | B | <grubby text munging> | C

because B doesn’t really take the output of A. There’s some manipulation going on to prepare the output of A as the input of B: strip these characters from these columns, replace this pattern with this other pattern, etc. The key point is that Unix commands spit out text. Maybe at a high level you care about programs A, B, and C, but in between are calls to utilities like grep, sed, or awk to bridge the gaps between output formats and input formats.
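
Here’s a hypothetical example of the kind of munging I mean; the process name and the column number are placeholders, nothing more. Conceptually the pipeline is “list processes | pick out the web server | total its memory,” but the middle stages only work because you happen to know which columns ps prints:

# grep and awk are the grubby text munging: they pick rows and columns out of ps text output
# the [h] trick keeps grep from matching its own entry in the process list
ps aux | grep '[h]ttpd' | awk '{ sum += $6 } END { print sum " KB resident" }'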

The PowerShell pipeline is different because PowerShell commands spit out objects. For example, if the output of a PowerShell command is a date, then the command returns a .NET object representing a date, not a text string. The command may display a string on the command line, but that string is just a human-readable representation. But the string representation of an object is not the object. If the output is piped to another command, the latter command receives a .NET date object, not a string. This is the big idea behind PowerShell. Commands pass around objects, not strings. The grubby, error-prone text munging between commands goes away.
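
As a small illustration, using only stock cmdlets: pipe Get-Date into Get-Member and PowerShell reports that the thing traveling down the pipe is a System.DateTime object with properties and methods, not a formatted string; and sorting files by size needs no column parsing at all.

Get-Date | Get-Member                     # the pipeline carries a System.DateTime object, not text
# sort by the Length property of each file object; no column parsing needed
Get-ChildItem | Sort-Object Length -Descending | Select-Object -First 3 Name, Length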

Not all problems go away just because commands pass around objects. For example, maybe one command outputs a COM object and another takes in a .NET object. This is where more PowerShell magic comes in. PowerShell does a lot of work behind the scenes to implicitly convert output types to input types when possible. This sort of magic makes me nervous when I’m programming. I like to know exactly what’s going on, especially when debugging. But when using a shell, magic can be awfully convenient.

Starting a business not as risky as people say

Check out this article from Jason Cohen: Starting a business isn’t as crazy and risky as they say. According to Cohen, popular ideas about failure rates for start-ups are based on misleading analysis of data. Statistics about business failures are muddled by two fundamental questions: (1) What is a business? and (2) What is a failure?

What is a business? There’s a big difference between a side business (a hobby or a casual source of extra income) and a primary business (one’s main source of income), and yet statistics often lump the two together. Presumably failures are more common among side businesses, inflating the sense of how often serious businesses fail.

What is a failure? Common ideas about the frequency of failures are based on figures that simply track when a business goes out of existence. But a company can disappear for numerous reasons that are not failures. Maybe the company got bought out to the delight of the owner. Maybe the owner grew tired of the business and wanted to do something else. Maybe the owner retired. The figures are more encouraging when you sort out genuine failures from businesses that folded agreeably.

Related post: Plane crashes, software crashes, and business crashes

Abundance, scarcity, and blueberries

My wife and I took our family to the Chmielewski Blueberry Farm this morning. There were blueberries everywhere, so my wife and I decided we’d only pick the ones that were harder to reach, and we’d look for the biggest ones that were perfectly ripe.

While we were picking, my wife and I talked about how much fun this was since there was an abundance of berries. And we talked about how it would be no fun if you had a scarcity mindset, competing with everyone else and trying to pick every berry possible. Someone a row over from us commented that it was a little competitive because the berries were somewhat picked over. I don’t know what he meant by picked over: we picked over 14 pounds of blueberries while being selective about what we picked.

As we were leaving the farm, I struck up a conversation with an unpleasant older woman. She was carrying less than a pint of berries and complaining that the berries were so scarce. It’s hard to imagine we had just come from the same farm.