Appropriate scale

“Scale” became a popular buzzword a couple of decades ago. Suddenly everyone was talking about how things scale. At first the term was used to describe how software behaved as problems became larger or smaller. Then the term became more widely used to describe how businesses and other things handle growth.

Now when people say something “won’t scale” they mean that it won’t perform well as things get larger. “Scale” most often means “scale up.” But years ago the usage was more symmetric. For example, someone might have said that a software package didn’t scale well because it took too long to solve small problems, too long relative to the problem size. We seldom use “scale” to discuss scaling down, except possibly in the context of moving something to smaller electronic devices.

This asymmetric view of scaling can be harmful. For example, little companies model themselves after big companies because they hope to scale (up). But running a small software business, for example, as a Microsoft in miniature is absurd. A small company’s procedures might not scale up well, but neither do a large company’s procedures scale down well.

I’ve been interested in the idea of appropriate scale lately, both professionally and personally.

I’ve realized that some of the software I’ve been using scales in a way that I don’t need it to scale. These applications scale up to handle problems I don’t have, but they’re overly complex for addressing the problems I do have. They scale up, but they don’t scale down. Or maybe they don’t scale up in the way I need them to.

I’m learning to make better use of fewer tools. This quote from Hugh MacLeod suggests that other people may come to the same point as they gain experience.

Actually, as the artist gets more into her thing, and gets more successful, the number of tools tends to go down.

On a more personal level, I think that much frustration in life comes from living at an inappropriate scale. Minimalism is gaining attention because minimalists are saying “Scale down!” while the rest of our culture is saying “Scale up!” Minimalists provide a valuable counterweight, but they can be a bit extreme. As Milton Glaser pointed out, less isn’t more, just enough is more. Instead of simply scaling up or down, we should find an appropriate scale.

How do you determine an appropriate scale? The following suggestion from Andrew Kern is a good starting point:

There is an appropriate scale to every human activity and it is the scale of personal responsibility.

Update: See the follow-up post Arrogant ignorance.

Digital workflow

William Turkel has a nice four-part series of blog posts entitled A Workflow for Digital Research Using Off-the-Shelf Tools. His four points are

  1. Start with a backup and versioning strategy.
  2. Make everything digital.
  3. Research 24/7 (using RSS feeds).
  4. Make local copies of everything.

Also by William Turkel, The Programming Historian, “an open-access introduction to programming in Python, aimed at working historians (and other humanists) with little previous experience.”

Related post: Create offline, analyze online

Thomas Jefferson and preparing for meetings

Here’s an interesting historical anecdote from Karl Fogel’s Producing Open Source Software on the value of preparing for meetings.

In his multi-volume biography of Thomas Jefferson, Jefferson and His Time, Dumas Malone tells the story of how Jefferson handled the first meeting held to decide the organization of the future University of Virginia. The University had been Jefferson’s idea in the first place, but (as is the case everywhere, not just in open source projects) many other parties had climbed on board quickly, each with their own interests and agendas.

When they gathered at that first meeting to hash things out, Jefferson made sure to show up with meticulously prepared architectural drawings, detailed budgets for construction and operation, a proposed curriculum, and the names of specific faculty he wanted to import from Europe. No one else in the room was even remotely as prepared; the group essentially had to capitulate to Jefferson’s vision, and the University was eventually founded more or less in accordance with his plans.

The facts that construction went far over budget, and that many of his ideas did not, for various reasons, work out in the end, were all things Jefferson probably knew perfectly well would happen. His purpose was strategic: to show up at the meeting with something so substantive that everyone else would have to fall into the role of simply proposing modifications to it, so that the overall shape, and therefore schedule, of the project would be roughly as he wanted.

Some programmers really are 10x more productive

One of the most popular posts on this site is Why programmers are not paid in proportion to their productivity. In that post I mention that it’s not uncommon to find some programmers who are ten times more productive than others. Some of the comments discussed whether there was academic research in support of that claim.

I’ve seen programmers who were easily 10x more productive than their peers. I imagine most people who have worked long enough can say the same. I find it odd to ask for academic support for something so obvious. Yes, you’ve seen it in the real world, but has it been confirmed in an artificial, academic environment?

Still, some things are commonly known that aren’t so. Is the 10x productivity difference exaggerated folklore? Steve McConnell has written an article reviewing the research behind this claim: Origins of 10x—How valid is the underlying research? He concludes:

The body of research that supports the 10x claim is as solid as any research that’s been done in software engineering.

Dumb and gets things done

Someone once asked Napoleon how he decided where to assign soldiers. Napoleon’s reply was that it’s simple: soldiers are either smart or dumb, lazy or energetic.

  • The smart and energetic I make field commanders. They know what to do and can rally the troops to do it.
  • The smart and lazy I make generals. They also know what to do, but they’re not going to waste energy doing what doesn’t need to be done.
  • The dumb and lazy I make foot soldiers.

But what about the dumb and energetic? “Those,” Napoleon replied, “I shoot.”

The Napoleon joke comes to mind when I hear praise for somebody because they can “get things done.” Should we make them a field commander or shoot them?

Joel Spolsky says that the ideal programmer is someone who is smart and gets things done. But what about people who are dumb and get things done?

When Ross Perot ran for president in 1992, his supporters exclaimed “He can get things done!” So I’d ask “What does he want to get done that you’d like to see happen?” I don’t recall ever getting an answer. What he wanted to get done didn’t matter. (I’m not saying that Perot’s platform was dumb. I’ll stay out of that discussion. I’m only saying that it could have been dumb and some people would not know or care.)

One time I heard someone praised as a good teacher. Not knowledgeable, but a good teacher. I objected: if someone is ignorant but a good teacher, doesn’t that mean they’re effective at conveying their ignorance? Wouldn’t that be a bad thing? No, all that mattered was that he was a good teacher.

Computer programs consist of lines of code, and lines of code consist of characters. So it’s good for a programmer to be proficient in producing lines of code and characters. Of course it’s more important that they produce lines of code that are correct, maintainable, and that accomplish something worthwhile.

Why would someone support a presidential candidate without knowing their positions? Why would someone want their children to have an ignorant but effective teacher? Why would someone want a programmer who is proficient at producing bad code?

I don’t think anyone wants these things, but people do lose sight of their goals. People like charismatic presidents, good teachers, and productive programmers. But it’s too easy to fall into reductionism, focusing on elemental components and losing sight of the big picture.

Leaders need to make things happen. Teachers need to teach. Programmers need to write code. These basic skills are necessary, but they are not enough.

There’s an active conversation here (59 comments currently, several of which arrived as I composed this post) on how much typing speed matters. I believe the discussion is lively in part because it touches on the issues in this post, basic skills versus larger goals. Participants are coming from varying levels of abstraction, from keystrokes to software engineering. Some are arguing bottom-up, some top-down. I find the dynamic of the discussion more interesting than its content.

How much does typing speed matter?

How important is it to be able to type quickly? Jeff Atwood has said numerous times that a programmer must be a good typist. For example, a few weeks ago he said

I can’t take slow typists seriously as programmers. When was the last time you saw a hunt-and-peck pianist?

But programming is not like playing piano. Programming is more like composing music than performing music. Most composers can play piano well, but some cannot.

What if you write prose rather than programs? In his book On Writing, Stephen King recommends writing 1000 words per day. If writing were only a matter of typing, how long would that take? Half an hour at a modest rate of 30 words per minute. Say you have to type 2000 words to keep 1000 due to corrections. Now we’re up to an hour. People who write for a living do not literally spend most of their time writing. They spend most of their time thinking.
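The figures above are easy to check. A quick sketch, using only the numbers from the paragraph above (1000 words per day, 30 words per minute, doubled for corrections):

```python
# Pure typing time for Stephen King's recommended 1000 words/day,
# at a modest 30 words per minute.
words_per_day = 1000
wpm = 30

typing_minutes = words_per_day / wpm           # about 33 minutes
with_corrections = 2 * words_per_day / wpm     # type 2000 to keep 1000: about 67 minutes

print(typing_minutes, with_corrections)
```

So even with generous corrections, the typing itself fits in roughly an hour; the rest of a writing day is thinking.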

Clearly it’s good to be able to type quickly. As I’ve argued here, the primary benefit of quick typing is not the time saved in data entry per se, it’s the increased chance that your hands can keep up with your brain.

However, a slow typist can still be productive. Consider the physicist Stephen Hawking. He is only able to communicate to the world via a computer, ALS having destroyed nearly all of his motor control. For years he controlled his computer via a switch he could toggle with his hand; he now uses a camera that detects blinks. He says he can type 15 words per minute. Still, he has managed to write a few things: 194 publications from 1965 to 2008.

Learning to type well is a good investment for those who are physically able to do so, but it’s not that important. Once you reach moderate proficiency, improving your speed will not improve your productivity much. If a novelist writing 1000 words per day were able to type infinitely fast, he or she could save maybe an hour per day.

You may not be able to increase your typing speed much no matter how hard you try. According to the Guinness Book of World Records, Barbara Blackburn was the world’s fastest English language typist. She could sustain 150 words per minute. That means she was only 10x faster than Stephen Hawking. Most of us are somewhere between Stephen Hawking and Barbara Blackburn. In other words, nearly everyone types at the same speed, within an order of magnitude.

Maybe you only need it because you have it

Some cities need traffic lights because they have traffic lights. If one traffic light goes out, it causes a traffic jam. But sometimes when all traffic lights go out, say due to a storm, traffic flows better than before.

Some buildings need air conditioning because they have air conditioning. Because they were designed to be air conditioned, they have no natural ventilation and would be miserable to inhabit without air conditioning.

Some people need to work because they work. A family may find that their second income is going entirely to expenses that would go away if one person stayed home.

It’s hard to tell when you’ve gotten into a situation where you need something because you have it. I knew someone who worked for a company that sold expensive software development tools. He said that one of the best perks of his job was that he could buy these tools at a deep discount. But he didn’t realize that without his job, he wouldn’t need these tools! He wasn’t using them to develop software. He was only using them so he could demonstrate and sell them.

It may be even harder for an organization to realize it has been caught in a cascade of needs. Suppose a useless project adds staff. These staff need to be managed, so they hire a manager. Then they hire people for IT, accounting, marketing, etc. Eventually they have their own building. This building needs security, maintenance, and housekeeping. No one questions the need for the security guard, but the guard would not have been necessary without the original useless project.

When something seems absolutely necessary, maybe it’s only necessary because of something else that isn’t necessary.

Related post: Defining minimalism

Mathematically correct but psychologically wrong

The snowball strategy says to pay off your smallest debt first, then the next smallest, and so on until you’re out of debt.

When I first heard of this I thought it was silly. Clearly the optimal strategy is to pay off the debt with the highest interest rate first. That assessment is mathematically correct, but psychologically wrong. The snowball strategy provides a sense of accomplishment and encouragement by reducing the number of debts as soon as possible. Ideally someone would be able to pay off at least one debt before their determination to get out of debt wanes.

My point here isn’t to give financial advice. I bring up the snowball strategy because it is an example of a problem with an obvious but naive solution. If someone is overwhelmed by debt, they need encouragement more than a mathematically optimal strategy. However, the snowball strategy may not be psychologically optimal for everyone. This further illustrates the idea that optimal real-life strategies are more complicated than mathematical models.
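The trade-off can be made concrete with a toy simulation. This is a sketch under made-up assumptions (the balances, monthly rates, and budget below are invented for illustration, not financial advice): the entire budget attacks one debt at a time, and we compare attacking the smallest balance first (snowball) with attacking the highest rate first.

```python
def simulate(debts, budget):
    """debts: list of (balance, monthly_rate), attacked in order.
    Each month every remaining balance accrues interest, then the
    whole budget goes to the front debt (overflow rolls to the next).
    Returns (month the first debt is cleared, total interest paid)."""
    debts = [list(d) for d in debts]
    month, first_cleared, total_interest = 0, None, 0.0
    while debts:
        month += 1
        for d in debts:                      # interest accrues on every debt
            interest = d[0] * d[1]
            d[0] += interest
            total_interest += interest
        remaining = budget
        while debts and remaining > 0:       # budget attacks the front debt
            pay = min(remaining, debts[0][0])
            debts[0][0] -= pay
            remaining -= pay
            if debts[0][0] < 1e-9:
                debts.pop(0)
                if first_cleared is None:
                    first_cleared = month
    return first_cleared, round(total_interest, 2)

debts = [(500, 0.01), (5000, 0.02)]          # (balance, monthly rate), invented
snowball = simulate(sorted(debts), 300)                        # smallest balance first
avalanche = simulate(sorted(debts, key=lambda d: -d[1]), 300)  # highest rate first

print("snowball:", snowball)    # clears a debt within a couple of months
print("avalanche:", avalanche)  # pays less interest overall
```

With these numbers the snowball order retires its first debt far sooner, while the highest-rate-first order pays less total interest: the mathematically optimal strategy and the psychologically encouraging one really are different.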

Many things that don’t look optimal are in fact optimal once you take the necessary constraints into account. For example, software that seems poorly designed may in fact have been brilliantly designed when you consider its economic and historical constraints. (This may even be the norm. Nobody complains about how badly obscure software was designed. We complain about software that has been successful enough to criticize.)

Micro distractions

Why are long articles easier to read on paper than on a screen? The explanations I’ve heard most often involve resolution or other properties of screens. But the culprit may not be the screen per se. It may be links, notifications, and other distractions.

Obviously if you follow a link you won’t finish reading your original article as quickly (or possibly ever). But even when you don’t follow any links, you have to decide not to follow each link. These decisions are not as obvious a distraction as, say, construction noise or flickering lights, but they are still distractions and they take a toll. That is the explanation Nicholas Carr gives in his new book The Shallows. (Sorry for the distraction.)

Paper books don’t offer readers many options, and that may be their strength. If you’re aware of things you could do to interact with an e-reader, you have to decide whether to take these actions. E-readers are expected to get better screen technology as well as ads in the near future. The ads may harm reading efficiency more than increased screen resolution will help.

Two kinds of multitasking

People don’t task switch like computers do.

The earliest versions of Windows and Mac OS used cooperative multitasking. A Windows program would do some small unit of work in response to a message and then relinquish the CPU to the operating system until the program got another message. That worked well, as long as all programs were written with consideration for other programs and had no bugs. An inconsiderate (or inexperienced) programmer might do too much work in a message handling routine and monopolize the CPU. A bug resulting in an infinite loop would keep the program from ever letting other programs run.
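The message-loop style described above can be sketched in a few lines. This is a toy model, not any real Windows or Mac API: each task is a Python generator that yields after one unit of work, and a round-robin scheduler runs them in turn.

```python
from collections import deque

log = []  # records the order in which tasks run

def task(name, steps):
    """A cooperative task: after each unit of work it yields,
    voluntarily handing the CPU back to the scheduler."""
    for i in range(steps):
        log.append(f"{name}:{i}")
        yield

def run(tasks):
    """Round-robin cooperative scheduler."""
    queue = deque(tasks)
    while queue:
        t = queue.popleft()
        try:
            next(t)           # let the task do one unit of work
            queue.append(t)   # then it goes to the back of the line
        except StopIteration:
            pass              # task finished

run([task("A", 2), task("B", 3)])
print(log)  # tasks interleave: ['A:0', 'B:0', 'A:1', 'B:1', 'B:2']
```

Note that if a task ever loops without reaching its `yield`, the `next(t)` call never returns and no other task runs again, which is exactly the failure mode of a buggy program under cooperative multitasking.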

Now desktop operating systems use preemptive multitasking. Unix used this form of multitasking from the beginning. Windows started using preemptive multitasking with Windows NT and Windows 95. The Macintosh gained preemptive multitasking with OS X. The operating system preempts programs to tell them it’s time to give another program a turn with the CPU. Programmers don’t have to think about handing over control of the CPU, and so programs are easier to write. And if a program runs into an infinite loop, it only hurts itself.

Computers work better with preemptive multitasking, but people work better with cooperative multitasking.

If you want to micro-manage people, if you don’t trust them and want to protect yourself against their errors, treat them like machines. Interrupt them whenever you want. Preemptive task switching works great for machines.

But people take more than a millisecond to regain context. (See Mary Czerwinski’s comments on context re-acquisition.) People do much better if they have some control over when they stop one thing and start another.

Related post: Inside the multitasking and marijuana study