Another reason we don’t apply the 80-20 rule

I’ve written about the 80-20 rule several times because it keeps coming up. I’d like to believe that each time I revisit it I understand it a little better.

In its simplest form the 80-20 rule says 80% of your outputs come from 20% of your inputs. You might find that 80% of your revenue comes from 20% of your customers, or 80% of your headaches come from 20% of your employees, or 80% of your sales come from 20% of your sales reps. The exact numbers 80 and 20 are not important, though they work surprisingly well as a rule of thumb.

The more general principle is that a large portion of your results come from a small portion of your inputs. Maybe it’s not 80-20 but something like 90-5, meaning 90% of your results come from 5% of your inputs. Or 90-13, or 95-10, or 80-25, etc. Whatever the proportion, it’s usually the case that some inputs are far more important than others. The alternative, assuming that everything is equally important, is usually absurd.
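
To make the unevenness concrete, here is a toy simulation, my own illustration rather than anything from the original post. It draws “customer revenues” from a Pareto distribution with shape parameter near 1.16, the classical value associated with an 80/20 split, and measures the share contributed by the top 20% of customers.

    import numpy as np

    rng = np.random.default_rng(42)
    # Shape ~1.16 is the classical Pareto index that yields roughly 80/20.
    revenue = 1.0 + rng.pareto(1.16, size=100_000)
    revenue.sort()
    top_fifth = revenue[-20_000:]  # the top 20% of customers
    print(f"Top 20% share of revenue: {top_fifth.sum() / revenue.sum():.0%}")

With a tail this heavy the printed share varies from run to run, but it hovers in the neighborhood of 80%.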

The 80-20 rule sounds too good to be true. If 20% of inputs are so much more important than the others, why don’t we just concentrate on those? In an earlier post, I gave three reasons. These were:

  1. We don’t look for 80/20 payoffs. We don’t see 80/20 rules because we don’t think to look for them.
  2. We’re not clear about criteria for success. You can’t concentrate your efforts on the 20% with the biggest returns until you’re clear on how you measure returns.
  3. We enjoy less productive activities more than more productive ones. We concentrate on what’s fun rather than what’s effective.

I’d like to add another reason to this list: we may find it hard to believe just how unevenly distributed the returns on our efforts are. We may have an idea of how things are ordered in importance, but we don’t appreciate just how much more important the most important things are. We mentally compress the range of returns on our efforts.

Making a list of options suggests the items on the list are roughly equally effective, say within an order of magnitude of each other. But it may be that the best option would be 100 times as effective as the next best option. (I’ve often seen that, for example, in optimizing software. Several ideas would reduce runtime by a few percent, while one option could reduce it by a couple orders of magnitude.) If the best option also takes the most effort, it may not seem worthwhile because we underestimate just how much we get in return for that effort.
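
Here is a contrived example of that gap, mine rather than the post’s: polishing code around a linear scan might buy a few percent, while switching to a different data structure changes the complexity class entirely.

    import timeit

    items = list(range(100_000))
    item_set = set(items)  # the order-of-magnitude option: a different data structure

    scan = timeit.timeit(lambda: 99_999 in items, number=1_000)       # O(n) list scan
    lookup = timeit.timeit(lambda: 99_999 in item_set, number=1_000)  # O(1) hash lookup
    print(f"list scan: {scan:.3f} s, set lookup: {lookup:.6f} s")

At this size the set lookup wins by several orders of magnitude, not by a few percent.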

Too easy

When people sneer at a technology for being too easy to use, it’s worth trying out.

If the only criticism is that something is too easy or “OK for beginners” then maybe it’s a threat to people who invested a lot of work learning to do things the old way.

The problem with the “OK for beginners” put-down is that everyone is a beginner sometimes. Professionals are often beginners because they’re routinely trying out new things. And being easier for beginners doesn’t exclude the possibility of being easier for professionals too.

Sometimes we assume that harder must be better. I know I do. For example, when I first used Windows, it was so much easier than Unix that I assumed Unix must be better for reasons I couldn’t articulate. I had invested so much work learning to use the Unix command line, it must have been worth it. (There are indeed advantages to doing some things from the command line, but not the work I was doing at the time.)

There often are advantages to doing things the hard way, but something isn’t necessarily better because it’s hard. The easiest tool to pick up may not be the best tool for long-term use, but then again it might be.

Most of the time you want to add the easy tool to your toolbox, not take the old one out. Just because you can use specialized professional tools doesn’t mean that you always have to.

Related post: Don’t be a technical masochist

Uniformitarian or Paretoist

A uniformitarian view is that everything is equally important. For example, there are 118 elements in the periodic table, so all 118 are equally important to know about.

The Pareto principle would say that importance is usually very unevenly distributed. The universe is essentially hydrogen and helium, with a few other elements sprinkled in. From an earthly perspective things aren’t quite so extreme, but still a handful of elements make up the large majority of the planet. The most common elements are orders of magnitude more abundant than the least.

The uniformitarian view is a sort of default, not often a view someone consciously chooses. It’s a lazy option. No need to think. Just trudge ahead with no particular priorities.

The uniformitarian view is common in academia. You’re given a list of things to learn, and they all count the same. For example, maybe you have 100 vocabulary words in your Spanish class. Each word contributes one point to your grade on a quiz. The quiz measures what portion of the list you’ve learned, not what portion of that language you’ve learned. A quiz designed to test the latter would weight words according to their frequency.
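
Here is a small sketch of the two grading schemes, with a word list and frequencies invented purely for illustration.

    # Hypothetical frequencies: the fraction of running Spanish text each word covers.
    freq = {"de": 0.05, "que": 0.04, "no": 0.03, "berenjena": 0.0005}
    known = {"de", "que", "no"}  # words the student answered correctly

    uniform_score = len(known) / len(freq)  # every word counts the same
    weighted_score = sum(freq[w] for w in known) / sum(freq.values())
    print(f"uniform: {uniform_score:.0%}, frequency-weighted: {weighted_score:.1%}")

Missing the word for eggplant costs 25 points on the uniform quiz but barely registers on the frequency-weighted one; missing “de” would be another story.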

It’s easy to slip into a uniformitarian mindset, or a milder version of the same, underestimating how unevenly things are distributed. I’ve often fallen into the latter. I expect things to be unevenly distributed, but then I’m surprised just how uneven they are once I look at some data.

Optimism can be discouraging

Here’s an internal dialog I’ve had several times.

“What will happen when you’re done with this project?”

“I don’t know. Maybe not much. Maybe great things.”

“How great? What’s the best outcome you could reasonably expect?”

“Hmm … Not that great. Maybe I should be doing something else.”

It’s a little paradoxical to think that asking an optimistic question — What’s the best thing that could happen? — could discourage us from continuing to work on a project, but it’s not too hard to see why this is so. As long as the outcome is unexamined, we can implicitly exaggerate the upside potential. When we look closer, reality may come shining through.

Iterative linear solvers as metaphor

Gaussian elimination is a systematic way to solve systems of linear equations in a finite number of steps. Iterative methods for solving linear systems require an infinite number of steps in theory, but may find solutions faster in practice.

Gaussian elimination tells you nothing about the final solution until it’s almost done. The first phase, factorization, takes O(n^3) steps, where n is the number of unknowns. This is followed by the back-substitution phase, which takes O(n^2) steps. The factorization phase tells you nothing about the solution. The back-substitution phase starts filling in the components of the solution one at a time. In applications, n is often so large that the time required for back-substitution is negligible compared to factorization.

Iterative methods start by taking a guess at the final solution. In some contexts, this guess may be fairly good. For example, when solving differential equations, the solution from one time step gives a good initial guess at the solution for the next time step. Similarly, in sequential Bayesian analysis the posterior distribution mode doesn’t move much as each observation arrives. Iterative methods can take advantage of a good starting guess while methods like Gaussian elimination cannot.
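
Here is a minimal sketch of the idea using Jacobi iteration, one of the simplest iterative methods; the example is mine, not from any particular application. Note that the starting guess x0 is an explicit input, which is exactly where a warm start pays off.

    import numpy as np

    def jacobi(A, b, x0, tol=1e-10, max_iter=10_000):
        """Refine the guess x0 toward the solution of Ax = b.
        Converges when A is diagonally dominant."""
        D = np.diag(A)          # diagonal entries of A
        R = A - np.diagflat(D)  # off-diagonal part of A
        x = x0.astype(float)
        for _ in range(max_iter):
            x_new = (b - R @ x) / D
            if np.linalg.norm(x_new - x) < tol:
                break  # close enough; stop early
            x = x_new
        return x_new

    A = np.array([[4.0, 1.0], [2.0, 5.0]])  # diagonally dominant
    b = np.array([1.0, 2.0])
    x = jacobi(A, b, x0=np.zeros(2))  # a better x0 means fewer iterations

Every pass through the loop yields a usable approximation, which is the property the rest of this post leans on.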

Iterative methods take an initial guess and refine it to a better approximation to the solution. This sequence of approximations converges to the exact solution. In theory, Gaussian elimination produces an exact answer in a finite number of steps, but iterative methods never produce an exact solution after any finite number of steps. But in actual computation with finite precision arithmetic, no method, iterative or not, ever produces an exact answer. The question is not which method is exact but which method produces an acceptably accurate answer first. Often the iterative method wins.

Successful projects often work like iterative numerical methods. They start with an approximate solution and iteratively refine it. All along the way they provide a useful approximation to the final product. Even if, in theory, there is a more direct approach to a final product, the iterative approach may work better in practice.

Algorithms iterate toward a solution because that approach may reach a sufficiently accurate result sooner. That may apply to people, but more important for people is the psychological benefit of having something to show for yourself along the way. Also, iterative methods, whether for linear systems or human projects, are robust to changes in requirements because they are able to take advantage of progress made toward a slightly different goal.

Related post: Ten surprises from numerical linear algebra

Mental calluses

Describing the writing of his second book, Tom Leinster says:

… I’m older and, I hope, more able to cope with stress: just as carpenters get calloused hands that make them insensitive to small abrasions, I like to imagine that academics get calloused minds that allow them not to be bothered by small stresses and strains.

Mental calluses are an interesting metaphor. Without the context above, “calloused minds” would have a negative connotation. We say people are calloused or insensitive if they are unconcerned for other people, but Leinster is writing of people unperturbed by distractions.

You could read the quote above as implying that only academics develop mental discipline, though I’m sure that’s not what was intended. Leinster is writing a personal post about the process of writing books. He’s an academic, and so he speaks of academics.

Not only do carpenters become more tolerant of minor abrasions, they also become better at avoiding them. I’m not sure that I’m becoming more tolerant of stress and distractions as I get older, but I do think I’m getting a little better at anticipating and avoiding stress and distractions.

Time and productivity

Contractors were working on my house all last week. I needed to be home to let them in, to answer questions, etc., but the noise and interruptions meant that home wasn’t a good place for me to work. In addition, my Internet connection was out for most of the week and I had a hard disk failure.

Looking back on the week, my first thought was that the week had been an almost total loss, neither productive nor relaxing. But that’s not right. The work I did do made a difference, reinforcing my belief that effort and results are only weakly correlated. (See Weinberg’s law of twins.)

Sometimes you have a burst of insight or creativity, accomplishing more in a few minutes than in an ordinary day. But that didn’t happen last week.

Sometimes your efforts are unusually successful, either because of the preparation of previous work or for unknown reasons. That did happen last week.

Sometimes you simply work on more important tasks out of necessity. Having less time to work gives focus and keeps work from expanding to fill the time allowed. That also happened last week.

* * *

I did get out of the house last Tuesday and wrote about it in my previous post on quality over quantity. This turned out to be the theme of the week.

Reducing development friction

Diomidis Spinellis gave an insightful list of ways to reduce software development friction in the Tools of the Trade podcast episode The Frictionless Development Environment Scorecard.

The first item on his list grabbed my attention:

Are my personal settings and preferences consistent on all the computers I’m using? Are they stored under version control? Can I install them on a new computer using a single command?

Listening to the podcast provoked me to finally sync my .emacs files so that I now have the exact same file on every computer, maintained under version control. (Xah Lee gave me some sample code for creating the branching logic I needed for a few differences between Windows and Linux.)

Here is a small sample of questions from the podcast.

  • Are my files getting backed up? Is the backup tested, accessible, off site, in multiple media, with regularly retained copies?
  • Can I use the same editor for all my code and documentation editing tasks?
  • Can I get context-sensitive help and code completion?
  • Can I search recursively down a directory tree? Ignoring case? Only in a subset of files? With a regular expression?
  • Can I open a shell from the graphical file explorer and vice versa?
  • Can I quickly build the application I’m working on after a change? Can I test the application with a single command?
  • Can I automatically check my code for common or tricky errors? Are these checks run by default? Are they clean?
  • Does my application log its actions?
  • Is documentation for the tools and APIs I use readily available? Is it hyperlinked? Available offline?

The last question from the podcast summarizes the whole list:

Do I regularly evaluate my development environment to pinpoint and eliminate the sources of friction? Do I help my colleagues do the same?

The difference between machines and tools

From “The Inheritance of Tools” by Scott Russell Sanders:

I had botched a great many pieces of wood before I mastered the right angle with a saw, botched even more before I learned to miter a joint. The knowledge of these things resides in my hands and eyes and the webwork of muscles, not in the tools. There are machines for sale—powered miter boxes and radial arm saws, for instance—that will enable any casual soul to cut proper angles in boards. The skill is invested in the gadget instead of the person who uses it, and this is what distinguishes a machine from a tool.

Related post: Software exoskeletons

Some things can’t be done slowly

Keith Perhac mentioned in a podcast that a client told him he accomplished more in three days than the client had accomplished in six months. That sounds like hyperbole, but it’s actually plausible.

Sometimes a consultant can accomplish in a few days what employees will never accomplish, not because the consultant is necessarily smarter, but because the consultant can give a project a burst of undivided attention.

Some projects can only be done so slowly. If you send up a rocket at half of escape velocity, it’s not going to take twice as long to get where you want it to go. It’s going to take infinitely longer.
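
A quick energy check, my addition rather than the post’s: escape velocity from radius r is v_e = sqrt(2GM/r), so a rocket launched at v_e/2 has kinetic energy (1/2)(v_e/2)^2 = GM/4r, only a quarter of the GM/r needed to escape. Conservation of energy, GM/4r − GM/r = −GM/r_max, gives r_max = 4r/3. The rocket coasts up to a third again its starting radius, then falls back; it never arrives at all.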

Related post: A sort of opposite of Parkinson’s law

Slabs of time

From Some Remarks: Essays and Other Writing by Neal Stephenson:

Writing novels is hard, and requires vast, unbroken slabs of time. Four quiet hours is a resource I can put to good use. Two slabs of time, each two hours long, might add up to the same four hours, but are not nearly as productive as an unbroken four. … Likewise, several consecutive days with four-hour time-slabs in them give me a stretch of time in which I can write a decent book chapter, but the same number of hours spread out across a few weeks, with interruptions in between them, are nearly useless.

I haven’t written a novel, and probably never will, but Stephenson’s remarks describe my experience doing math and especially developing software. I can do simple, routine work in short blocks of time, but I need larger blocks of time to work on complex projects or to be more creative.

Related post: Four hours of concentration

Randomized studies of productivity

A couple days ago I wrote a blog post quoting Cal Newport suggesting that four hours of intense concentration a day is as much as anyone can sustain. That post struck a chord and has gotten a lot of buzz on Hacker News and Twitter. Most of the feedback has been agreement, but a few people have complained that this four-hour limit is based only on anecdotes, not scientific data.

Realistic scientific studies of productivity are often not feasible. For example, people often claim that programming language X makes them more productive than language Y. How could you conduct a study where you randomly assign someone a programming language to use for a career? You could do some rinky-dink study where you have 30 CS students do an artificial assignment using language X and 30 using Y. But that’s not the same thing, not by a long shot.

If someone, for example Rich Hickey, says that he’s three times more productive using one language than another, you can’t test that assertion scientifically. But what you can do is ask whether you think you are similar to that person and whether you work on similar problems. If so, maybe you should give their recommended language a try.

Suppose you wanted to test whether people are more productive when they concentrate intensely for two hours in the morning and two hours in the afternoon. You couldn’t just randomize people to such a schedule. That would be like randomizing some people to run a four-minute mile. Many people are not capable of such concentration. They either lack the mental stamina or the opportunity to choose how they work. So you’d have to start with people who have the stamina and opportunity to work the way you want to test. Then you’d randomize some of these people to working longer, fractured work days. Is that even possible? How would you keep people from concentrating? Harrison Bergeron, anyone? And if it’s possible, would it be ethical?

Real anecdotal evidence is sometimes more valuable than artificial scientific data. As Tukey said, it’s better to solve the right problem the wrong way than to solve the wrong problem the right way.

Four hours of concentration

As I’ve blogged about before, and mentioned again in my previous post, the great mathematician and physicist Henri Poincaré put in two hours of work in the morning and two in the evening.

Apparently this is a common pattern. Cal Newport mentions this in his interview with Todd Henry.

Now we also know that if you study absolute world class, best virtuoso violin players, none of them put in more than about four or so hours of practice in a day, because that’s the cognitive limit. And this limit actually shows up in a lot of different fields where people do intense training, that you really can’t do about more than four or so hours of this type of really mental strain.

And they often break this into two sessions, of two hours and then two hours. So there’s huge limits here. I think if you’re able to do three, maybe four hours of this sort of deep work in a typical day, you’re hitting basically the mental speed limit, the amount of concentration your brain is actually able to give.

He goes on to say that you may be able to work 15 hours a day processing email and doing other less demanding work, but nobody can sustain more than about four hours of intense concentration per day.

Update: The comments add examples of authors and physicists who had a similar work schedule.

Related post: Increasing your chances of entering flow

Increasing your chances of entering flow

I recently ran across a tip from Mark Hepburn that caught my eye. The content of the tip isn’t important here but rather his justification of the tip:

It sounds trivial, but it can really help keep you in the flow.

This line jumped out at me because I’ve been thinking about my work habits lately. Now that I’m self-employed, I have the opportunity to develop new habits. My excuses for not trying different ways of working have been stripped away. Maybe my excuses weren’t valid before, but it’s obvious that they are not valid now.

Small customizations like the one Mark mentioned are under-appreciated in part because they are trivial, at least when viewed one at a time. But the cumulative effect of numerous trivial customizations could be substantial. Together they increase the probability that you can act on an idea before it slips your mind and before you lose the will to pursue it.

Small customizations are also very personal, and so they don’t make good blog posts. I suspect that productivity bloggers primarily write about things they don’t actually do. They write about things that a wide audience will find entertaining if not useful. The little things that make a difference to the blogger may be boring or embarrassing to write about.

* * *

Instead of giving a simple list of related links at the bottom as I usually do, I’ll give some links along with commentary.

Henri Poincaré had a radical work schedule: one two-hour sprint in the morning and another in the afternoon. Some people look at that and think he put in half a normal work day. But if he got four hours of concentrated focus, I imagine he accomplished as much as four typical work days.

Here are posts on changing how you type and how you use a text editor.

And here are three posts on how mundane things are undervalued.

And finally, here’s a post on customizing conventional wisdom to your circumstances.

Fractured work

Vivek Haldar’s recent post Quantum of Work points out something obvious in retrospect: programming is intrinsically fractured. It does little good to tell a programmer to unplug and concentrate. He or she cannot work for more than a few minutes before needing to look something up online or interact with someone.

A quantum of work is the theoretical longest amount of time you can work purely on your own without needing to break out into looking up something on the web or your mail or needing input from another person. For most modern workers this quantum of work is measured in minutes.

At least that’s the default, the path of least resistance. But it’s not the only way to work. Software developer Joey Hess describes how he works here.

[My home] is nicely remote, and off the grid, relying on solar power. I only get 50 amp-hours of juice on a sunny day, and often less than 15 amp-hours on a bad day. … I seem to live half the time out of range of broadband, and still use dialup since bouncing the Internet off a satellite has too much latency, and no better total aggregate bandwidth. So I’m fully adapted to asynchronous communication.

Joey Hess cannot possibly work the way Vivek Haldar describes. It sounds like his quantum of work is measured in hours if not days. That would not be optimal or even feasible for some kinds of work, but it does suggest that we may not need to be as connected as we are. Maybe your optimal quantum of work is somewhere between the extremes discussed above.

If your quantum of work is 10 minutes, maybe you could increase that. This would require making some changes. Keeping your same way of working but trying to ration your time online would be frustrating and counterproductive. I think it’s significant that Hess says he adapted to working asynchronously. For example, I assume he keeps reference material on his local hard drive that others would access online.

Even if working offline is less efficient, it’s a good idea to be prepared to work that way when necessary. I was reminded of that this weekend. I was using some desktop software that depends on a server component. There was a failure on the vendor’s server and nobody at work to fix it, so I was stuck.

What are some ways to increase your quantum of work and to work less synchronously?
