Following an idea to its logical conclusion

Following an idea to its logical conclusion might be extrapolating a model beyond its valid range.

Suppose you have a football field with area A. If you make two parallel sides twice as long, then the area will be 2A. If you double the length of the sides again, the area will be 4A. Following this reasoning to its logical conclusion, you could double the length of the sides as many times as you wish, say 15 times, and each time the area doubles.

Except that’s not true. By the time you’ve doubled the length of the sides 15 times, you have a shape so big that it is far from being a rectangle. The fact that Earth is round matters a lot for a figure that big.
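A quick back-of-the-envelope calculation shows how big. The sketch below assumes a field about 100 meters long and doubles the same pair of sides each time; both assumptions are mine, for illustration.

    # How large is the figure after 15 doublings?
    # Assumes a field about 100 m long and that the same pair
    # of sides is doubled each time (illustrative assumptions).
    length_m = 100.0
    for _ in range(15):
        length_m *= 2  # for a Euclidean rectangle, the area doubles too

    earth_circumference_m = 40_075_000  # approximate, at the equator

    print(f"side length: {length_m / 1000:,.0f} km")
    print(f"fraction of Earth's circumference: {length_m / earth_circumference_m:.0%}")
    # side length: 3,277 km
    # fraction of Earth's circumference: 8%

A side over 3,000 kilometers long spans roughly 30 degrees of latitude. At that scale the curvature of the Earth is no longer a rounding error.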

Euclidean geometry models our world really well for rectangles the size of a football field, or even rectangles the size of Kansas. But eventually it breaks down. If the top extends to the north pole, the two sides meet there and your rectangle becomes a spherical triangle.

The problem in this example isn’t logic; it’s geometry. If you double the length of the sides of a Euclidean rectangle 15 times, you do double the area 15 times. A football field is not exactly a Euclidean rectangle, though it’s close enough for all practical purposes. Even Kansas is a Euclidean rectangle for most practical purposes. But a figure on the surface of the earth with sides thousands of miles long is definitely not Euclidean.

Models are based on experience with data within some range. The surprising thing about Newtonian physics is not that it breaks down at a subatomic scale and at a cosmic scale. The surprising thing is that it is usually adequate for everything in between.

Most models do not scale up or down over anywhere near as many orders of magnitude as Euclidean geometry or Newtonian physics. If a dose-response curve, for example, is linear based on observations in the range of 10 to 100 milligrams, nobody in his right mind would expect the curve to remain linear for doses up to a kilogram. It wouldn’t be surprising to find out that linearity breaks down before you get to 200 milligrams.
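A toy computation makes this concrete. The saturating curve and its parameters below are invented for illustration; the point is only that a line fit on 10 to 100 milligrams can go badly wrong outside that range.

    # Hypothetical dose-response curve: looks roughly linear over
    # 10-100 mg but saturates at higher doses (made-up parameters).
    import numpy as np

    def true_response(dose_mg):
        return 100 * dose_mg / (dose_mg + 400)  # saturating, Emax-style

    doses = np.linspace(10, 100, 10)  # observed range only
    slope, intercept = np.polyfit(doses, true_response(doses), 1)

    for dose in [50, 200, 1000]:
        linear = slope * dose + intercept
        print(f"{dose:4d} mg: linear fit {linear:6.1f}, actual {true_response(dose):5.1f}")

Within the observed range the line is fine. By 200 mg it is noticeably off, and at 1000 mg it predicts nearly twice the maximum response the curve can ever produce.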

“Any sufficiently advanced logic is indistinguishable from stupidity.” — Alex Tabarrok

Customizing conventional wisdom

From Solitude and Leadership by William Deresiewicz:

I find for myself that my first thought is never my best thought. My first thought is always someone else’s; it’s always what I’ve already heard about the subject, always the conventional wisdom. It’s only by concentrating, sticking to the question, being patient, letting all the parts of my mind come into play, that I arrive at an original idea. By giving my brain a chance to make associations, draw connections, take me by surprise. And often even that idea doesn’t turn out to be very good. I need time to think about it, too, to make mistakes and recognize them, to make false starts and correct them, to outlast my impulses, to defeat my desire to declare the job done and move on to the next thing.

Conventional wisdom summarizes the experience of many people. As a result, it’s often a good starting point. But like a blurred photo, it has gone through a sort of averaging process, losing resolution along the way. It takes hard work to decide how, or even whether, conventional wisdom applies to your particular circumstances.

Bureaucracies are infuriating because they cannot deliberate on particulars the way Deresiewicz recommends. In order to scale up, they develop procedures that work well under common scenarios.

The context of Deresiewicz’s advice is a speech he gave at West Point. His audience will spend their careers in one of the largest and most bureaucratic organizations in the world. Deresiewicz is aware of this irony and gives advice for how to be a deep thinker while working within a bureaucracy.

Scalability and immediate appeal

Paul Graham argues that people take bad jobs for the same reasons they eat bad food. The advantages of both are immediately apparent: convenience and immediate satisfaction. The disadvantages take longer to realize. Bad jobs drag down your soul the way bad food drags down your body.

I first read Graham’s essay You Weren’t Meant to Have a Boss when he wrote it three years ago. I read it again this morning when I saw a link to it on Hacker News. I found his thesis less convincing this time around. But he makes two general points that I think I missed the first time.

  1. Watch out for things that are immediately appealing but harmful in the longer term.
  2. Watch out for being part of someone else’s scalability plans.

The first point is familiar advice, but worth being reminded of. The second point is more subtle.

Companies sell bad food for the same reason they offer bad jobs: it scales. It’s easy to create bland food and bland jobs on a large scale. Fresh food and creative jobs don’t scale so well.

When you choose to eat junk food, you more or less consciously choose convenience or immediate satisfaction over long-term benefit. But it may not be obvious when your range of options has been selected for scalability. For example, few students realize how much the educational system has been designed for the convenience of administrators. Being aware of an organization’s scalability needs can help you interact with it more intelligently.

Appropriate scale

“Scale” became a popular buzzword a couple decades ago. Suddenly everyone was talking about how things scale. At first the term was used to describe how software behaved as problems became larger or smaller. Then the term became more widely used to describe how businesses and other things handle growth.

Now when people say something “won’t scale” they mean that it won’t perform well as things get larger. “Scale” most often means “scale up.” But years ago the usage was more symmetric. For example, someone might have said that a software package didn’t scale well because it took too long to solve small problems, too long relative to the problem size. We seldom use “scale” to discuss scaling down, except possibly in the context of moving something to smaller electronic devices.
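To make the older, symmetric usage concrete, here is a sketch (my own illustration, not from any particular package): a pure-Python merge sort scales up well, but its overhead means it can lose to naive insertion sort on small inputs. It scales up but doesn’t scale down.

    # An algorithm that scales up but not down: merge sort wins on
    # large inputs, yet insertion sort often beats it on small ones
    # because it carries less overhead.
    import random
    import timeit

    def insertion_sort(xs):
        xs = list(xs)
        for i in range(1, len(xs)):
            key, j = xs[i], i
            while j > 0 and xs[j - 1] > key:
                xs[j] = xs[j - 1]
                j -= 1
            xs[j] = key
        return xs

    def merge_sort(xs):
        if len(xs) <= 1:
            return list(xs)
        mid = len(xs) // 2
        left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
        out, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                out.append(left[i])
                i += 1
            else:
                out.append(right[j])
                j += 1
        return out + left[i:] + right[j:]

    for n in (8, 64, 512):
        data = [random.random() for _ in range(n)]
        t_ins = timeit.timeit(lambda: insertion_sort(data), number=100)
        t_mrg = timeit.timeit(lambda: merge_sort(data), number=100)
        print(f"n={n:4d}: insertion {t_ins:.4f} s, merge {t_mrg:.4f} s")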

This asymmetric view of scaling can be harmful. For example, little companies model themselves after big companies because they hope to scale (up). But running a small software business, for example, as a Microsoft in miniature is absurd. A small company’s procedures might not scale up well, but neither do a large company’s procedures scale down well.

I’ve been interested in the idea of appropriate scale lately, both professionally and personally.

I’ve realized that some of the software I’ve been using scales in a way that I don’t need it to scale. These applications scale up to handle problems I don’t have, but they’re overly complex for addressing the problems I do have. They scale up, but they don’t scale down. Or maybe they don’t scale up in the way I need them to.

I’m learning to make better use of fewer tools. This quote from Hugh MacLeod suggests that other people may come to the same point as they gain experience.

Actually, as the artist gets more into her thing, and gets more successful, the number of tools tends to go down.

On a more personal level, I think that much frustration in life comes from living at an inappropriate scale. Minimalism is gaining attention because minimalists are saying “Scale down!” while the rest of our culture is saying “Scale up!” Minimalists provide a valuable counterweight, but they can be a bit extreme. As Milton Glaser pointed out, less isn’t more; just enough is more. Instead of simply scaling up or down, we should find an appropriate scale.

How do you determine an appropriate scale? The following suggestion from Andrew Kern is a good starting point:

There is an appropriate scale to every human activity and it is the scale of personal responsibility.

Update: See the follow-up post Arrogant ignorance.

Hanlon’s razor and corporations

Hanlon’s razor says

Never attribute to malice that which is adequately explained by stupidity.

At first it seems just an amusing little aphorism, something you might read on a bumper sticker, but I believe it’s profound. It’s a guide to understanding so much of the world. Here I’ll focus on what it says about corporations.

I hear a lot of complaints that corporations are evil. Sometimes corporations in general, but more often specific corporations like Apple, Google, or Microsoft. I don’t deny that large, powerful corporations have the potential to do harm. But many accusations of malice are misattributed frustrations with stupidity. As Grey’s law says, any sufficiently advanced incompetence is indistinguishable from malice.

Corporations aren’t evil; they’re stupid. Not stupid in general, but in a specific way: they don’t handle edge cases well.

Organizations scale by creating procedures to replace human judgment. This is mostly a good thing. For example, electronic devices are affordable in part because companies can hire unskilled teenagers rather than electrical engineers to sell them. But if you have a question or problem that’s off the beaten path, you’re out of luck. Many complaints about evil corporations come from outliers, the 1% that corporations strategically decide to ignore. It’s not that the concerns of the outliers are not legitimate; it’s that they are not profitable to satisfy. When some people say that a corporation is evil, they should just say that they are outside the company’s market.

Large organizations have similar problems internally. Policies written to handle the most common situations don’t handle edge cases well. For example, an HR department told me that my baby girl couldn’t be added to my insurance because she wasn’t born in a hospital. Fortunately I was able to argue with enough people to resolve the problem despite her falling outside the usual procedures. It’s harder to deal with corporate rigidity as an employee than as a customer because it’s harder to change jobs than to change brands.

Stupidity scales

I’m fed up with conversations that end something like this.

Yes, that would be the smart thing to do, but it won’t scale. The stupid approach is better because it scales.

We can’t treat people like people because that doesn’t scale well.

We can’t use common sense because it doesn’t fit on a form.

We can’t use a simple approach to solve the problem in front of us unless the same approach would also work on a problem 100x larger that we may never have.

If the smart thing to do doesn’t scale, maybe we shouldn’t scale.

Little programs versus big programs

From You Are Not a Gadget:

Little programs are delightful to write in isolation, but the process of maintaining large-scale software is always miserable. … Technologists wish every program behaved like a brand-new, playful little program, and will use any available psychological strategy to avoid thinking about computers realistically.

Organizational scar tissue

Here’s a quote from Jason Fried I found recently.

Policies are organizational scar tissue. They are codified overreactions to unlikely-to-happen-again situations.

Of course that’s not always true, but quite often it is. Policies can be a way of fighting the last war, defending the Maginot Line.

[Image: the entrance to Ouvrage Schoenenbourg along the Maginot Line in Alsace. Public domain photo from Wikipedia.]

When you see a stupid policy, don’t assume a stupid person created it. It may have been the decision of a very intelligent person. It probably sounded like a good idea at the time given the motivating circumstances. Maybe it was a good idea at the time. But the letter lives on after the spirit dies. You can make a game out of this. When you run into a stupid policy, try to imagine circumstances that would have motivated an intelligent person to make such a policy. The more stupid the policy, the more challenging the game.

Large organizations will accumulate stupid policies like scar tissue over time. It’s inevitable. Common sense doesn’t scale well.

The scar tissue metaphor reminds me of Michael Nielsen’s metaphor of organizational immune systems. Nielsen points to organizational immune systems as one factor in the decline of newspapers. The defense mechanisms that allowed newspapers to thrive in the past are making it difficult for them to survive now.

Computer processes, human processes, and scalability

Jeff Atwood had a good post recently about database normalization and denormalization. A secondary theme of his post is scalability, how well software performs as inputs increase. A lot of software developers worry too much about scalability, or they worry about the wrong kind of scalability.

In my career, scalability of computer processes has usually not been the biggest problem, even though I’ve done a lot of scientific computing. I’ve more often run into problems with the scalability of human processes. When I use the phrase “this isn’t going to scale,” I usually mean something like “You’re not going to be able to remember all that” or “We’re going to go crazy if we do a few more projects this way.” 

Million dollar cutoff for software technique

I listened to a podcast recently interviewing Rob Page from Zope. At one point he talked about having SQL statements in your code vs. having accessor classes, and how as your code gets bigger there’s more need for OO design. No surprise. But then he said something interesting: if your project is smaller than $1M then straight SQL is OK, and over $1M you need accessors.
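For illustration, here is a minimal sketch of the two styles in Python with sqlite3. The schema and the class name are stand-ins of my own; Page didn’t give code in the podcast.

    # Two styles of database access in a small program.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
    conn.execute("INSERT INTO users (email) VALUES ('ada@example.com')")

    # Style 1: straight SQL inline. Quick and clear in a small project.
    row = conn.execute("SELECT email FROM users WHERE id = ?", (1,)).fetchone()
    print(row[0])

    # Style 2: an accessor class that hides the SQL behind one interface,
    # so a schema change touches one class rather than the whole code base.
    class UserStore:
        def __init__(self, conn):
            self.conn = conn

        def email_for(self, user_id):
            row = self.conn.execute(
                "SELECT email FROM users WHERE id = ?", (user_id,)
            ).fetchone()
            return row[0] if row else None

    print(UserStore(conn).email_for(1))

The tradeoff is the one Page describes: the accessor layer is machinery a small project may never repay, but past some size it pays for itself.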

I don’t know whether I agree with the $1M cutoff, but I agree that there is a cutoff somewhere. I appreciate that Page was willing to take a stab at where the cutoff is. Also, I found it interesting that he measured size by dollars rather than, for example, lines of code. I’d like to see more pundits qualify their recommendations as a function of project budget.

Almost all advice on software engineering is about scaling up: bigger code bases, more users, etc. No one talks about the problem of scaling down. The implicit assumption is that you should concentrate on scaling up because scaling down is easy. I disagree. Over the last few years I’ve managed hundreds of small software projects, and I know that scaling down presents unique challenges. Maybe “scaling down” is the wrong term. My projects have scaled up in a different way: more projects rather than bigger projects.

One challenge of small projects is that they may become large projects; the opposite never happens. Sometimes the transition is so gradual that the project becomes too big for its infrastructure before anyone notices. Having some rule like the $1M cutoff could serve as a prompt for reflection along the way: “Hey, now that we’re a $1M project, maybe we should refactor now to avoid a complete rewrite down the road.”