Some programmers really are 10x more productive

One of the most popular posts on this site is Why programmers are not paid in proportion to their productivity. In that post I mention that it’s not uncommon to find programmers who are ten times as productive as others. Some of the comments discussed whether there is academic research supporting that claim.

I’ve seen programmers who were easily 10x more productive than their peers. I imagine most people who have worked long enough can say the same. I find it odd to ask for academic support for something so obvious. Yes, you’ve seen it in the real world, but has it been confirmed in an artificial, academic environment?

Still, some things that are commonly known aren’t so. Is the 10x productivity difference exaggerated folklore? Steve McConnell has written an article reviewing the research behind the claim: Origins of 10x—How valid is the underlying research?. He concludes:

The body of research that supports the 10x claim is as solid as any research that’s been done in software engineering.

26 thoughts on “Some programmers really are 10x more productive”

  1. I think it is quite obvious why some would object. If your claim is that with the right process, all you need is a bunch of monkey-coders implementing your vision… the thought that there might be orders of magnitude differences between the monkeys invalidates your work.

    Of course, the difference is not 10x in the real world. The difference is that without the right talent, you’ll never get the right software. And as Spolsky explained, the difference in salary is never 10x, so the best programmers are bargains.

  2. Question: How can I check my own programming productivity?

    Perhaps an odd question, but I am a lone coder. I am a mathematician by training and a science technician by job title, but about 50% of my job is coding (LAMP). I am self-taught and have never coded with anyone, so I have had no one to gauge my skills against.

    (I know the best idea would be adding some code to an OSS project, but I unfortunately do not have the time now.)

  3. Daniel,

    I have seen approximately 10x differences in the real world. Last year, my group at work hired one guy who had a background in numerical computation and in translating MATLAB code to C++, with the idea that he would be able to debug some numerical errors in certain software we had developed, and ideally implement some newer algorithms from our researchers. When we fired this guy 10 or 12 weeks later, he had come up with two optimizations — both hoisting some constant subexpressions out of a loop in a way very similar to what the compiler would do.

    The developer who took over that project took less than two months to rewrite the algorithms, eliminating the numerical errors while also making them faster than before, and another three weeks to implement a new (relatively more complex) algorithm. It’s hard to say that the first guy would have taken twenty or thirty months to reach the same point, because the company didn’t want to wait that long, but he did not seem to be making his way up a learning curve or otherwise becoming more productive over his several-month stint.
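
    To be concrete, the kind of optimization he produced would look something like this (a minimal C++ sketch; the function and variable names are hypothetical):

        #include <cstddef>

        // Before: a*b + c is recomputed on every iteration, even though
        // a, b, and c never change inside the loop.
        void scale_naive(double* x, const double* y, std::size_t n,
                         double a, double b, double c) {
            for (std::size_t i = 0; i < n; ++i)
                x[i] = y[i] * (a * b + c);
        }

        // After hoisting the loop-invariant subexpression. An optimizing
        // compiler typically performs this transformation (loop-invariant
        // code motion) on its own, which is why the change bought so little.
        void scale_hoisted(double* x, const double* y, std::size_t n,
                           double a, double b, double c) {
            const double k = a * b + c;
            for (std::size_t i = 0; i < n; ++i)
                x[i] = y[i] * k;
        }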

    I have seen enough candidates come through to know that there is a huge difference in productivity, and worked with enough people to know that — for medium- to large-sized efforts — only about a factor of two is due to familiarity with the problem or the code base. Most of the productivity difference is due to practice writing code and better judgment about programming problems. However, even among programmers who write code faster than their peers, some have cultivated habits that make their code more robust and easier to debug; these differences make them more productive than their highly productive peers.

    Wil W, it is hard to compare productivity over the Internet, especially on OSS projects; it is hard to judge how much time someone else spends on the code unless they are in the same work area as you. For the most part, where you fall on the relative productivity spectrum is less important than how well you meet your customers’ needs, and if you want to improve your productivity, the approach is largely independent of how productive you are. (Improvement begins with paying attention to how you spend your time and what kind of errors you make frequently; those factors usually let you identify specific areas that have the most scope for improvement.)

  4. @Wil W

    Some would say that Stephen King is a highly productive writer. Fine. But most people couldn’t write an entertaining novel given two years of free time. So, actually, Stephen King’s productivity is infinite compared to most people.

    I think that “productivity” cannot be measured on the dimension that matters. The really big difference shows up in the projects where most programmers would simply choke.

    (1) Could you implement LZ77 compression just as efficiently as the available commercial tools, starting from scratch?
    (2) Could you implement your own Turing complete language from scratch?
    (3) Could you write a word processor using only ECMAScript?

    And so on. Of course, you have to set a time limit. There are many programming projects such that if you were to hire the average graduate of a Computer Science program, they wouldn’t deliver. (A minimal sketch of challenge (1) appears at the end of this comment.)

    So I think you can certainly challenge yourself. How challenging a programming project can you complete in 24 hours? Could you write an entire text editor from scratch? Could you write a small game? Could you write a blog engine?
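
    As for challenge (1), here is a minimal C++ sketch of the core LZ77 idea, just to fix ideas: a naive quadratic matcher, nowhere near the efficiency of commercial tools, and with the decoder omitted:

        #include <cstddef>
        #include <string>
        #include <vector>

        // One LZ77 token: copy `len` bytes starting `dist` bytes back,
        // then emit the literal `next`.
        struct Token { int dist, len; char next; };

        std::vector<Token> lz77(const std::string& in, int window = 4096) {
            std::vector<Token> out;
            std::size_t i = 0;
            while (i < in.size()) {
                int bestLen = 0, bestDist = 0;
                std::size_t start = i > (std::size_t)window ? i - window : 0;
                // Naive O(n * window) search; real tools use hash chains
                // or suffix structures to find matches quickly.
                for (std::size_t j = start; j < i; ++j) {
                    int len = 0;
                    while (i + len + 1 < in.size() && in[j + len] == in[i + len])
                        ++len;
                    if (len > bestLen) { bestLen = len; bestDist = int(i - j); }
                }
                out.push_back({bestDist, bestLen, in[i + bestLen]});
                i += bestLen + 1;
            }
            return out;
        }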

  5. Daniel: I think some programmers take an infinite amount of time because they expect to take a short amount of time.

    Here’s what I have in mind. Suppose you’re asked to write a compiler by the end of the week, but you don’t know how. You just start hacking. Every week your boss pressures you to finish. One day you look up, and it has been six months and you have little to show for yourself. What would you have done if the original assignment had been to write a compiler in six months? You might have read some books on parsing etc. and finished in three months.

    Here’s a post along those lines: A sort of opposite of Parkinson’s law.

  6. @John

    Deadlines are typically used to force someone to do something he wouldn’t do otherwise.

    It is not hard to deduce that the best work, being self-directed, is done without any deadline at all.

    People who work on deadlines have poor motivation. They do poor work.

  7. @luispedro: I agree. I think the 10x figure holds better once you throw out some outliers. As Daniel pointed out, some programmers have zero productivity because they’ll never finish. (Couldn’t you say they accomplished something even if they didn’t finish? Maybe not. The person who cleans up after them may very well throw out everything they’ve done and start from scratch.) Not only do some programmers have zero productivity, some have negative productivity, creating more work than they accomplish.

    @Titus: Realistic anecdotes can be more reliable than unrealistic scientific studies. Realistic scientific studies of software engineering are very difficult if not impossible to conduct. For example, you can’t randomize people to careers.

  8. I think that the important part of academic studies is that they take measures to remove the inherent biases of the observers. So, in the general case, I would disagree that really good anecdotes are better than academic studies. The Wikipedia article about biases is more interesting than I expected: http://en.wikipedia.org/wiki/Confirmation_bias

    However, I do agree that software engineering is incredibly hard to test in a useful way, and we may never have good academic results about programmer productivity. Also, someone who has been successfully writing software and managing software teams for 30 years could be considered to have an expert opinion on the subject, and there seems to be consensus about this particular issue.

  9. “I think that the important part of academic studies is that they take measures to remove the inherent biases of the observers.”

    Academic researchers are just as susceptible to biases as anyone else. Remember that to get and preserve research grants, your ideas have to “pan out”. You cannot work on a methodology for 3 years and then conclude that it is a failure. You have to claim success no matter what, or else your grant won’t be renewed. You have a strong incentive to get positive results confirming your priors.

    The theory is that other researchers will then challenge these results and do their own experiments, possibly invalidating the previous results. However, it is almost impossible to publish a study whose sole purpose is to invalidate previous (competing) work. And it is certainly impossible to get a research grant for this purpose. Moreover, it is politically difficult for a researcher to attack other researchers.

    So, once a “fact” becomes accepted in academia, it takes an outsider to challenge it. Academia is very much a “big organization” type of milieu. It is very conservative. If influential people make a claim, it will rarely be challenged… except by outsiders.

  10. Daniel: Regarding the difficulty of overturning an established research result, two of my colleagues did manage to refute one genetics paper. But it took four years of dogged persistence. You can read more about the saga here and here.

  11. Well, one reason to seek academic confirmation is precisely that the absence of 10x differences in pay is prima facie (suggestive, not definitive) evidence for the absence of 10x differences in productivity. We could be fooling ourselves as to what we think we’re “seeing” in the real world (what Christian Oudard is saying).

    Here is an interesting chart from oDesk statistics: http://www.odesk.com/odb/v/4985.hourly-jobs-rate-distribution-min-1000

    (From http://www.odesk.com/community/oconomy/rate_statistics )

    You would expect, if oDesk were an “efficient market” for programming capacity, that the prices for buying that capacity would come to reflect its distribution.

    For comparison look at the distributions in Lutz Prechelt’s work on productivity variance (quoted by Kaitlin “Ducky” Sherwood):
    http://blog.webfoot.com/2008/11/03/programmer-productivity-update/

    These are much flatter, whereas the oDesk distribution is sharply peaked.

    How do we explain the discrepancy?

  12. @Laurent

    Ironically, doesn’t your chart show a 10x difference in pay? The range goes from $5 to $60 an hour, which is a 12x spread.

    This being said, I don’t expect pay to reflect productivity.

    The best programmers are self-motivated. They learned programming on their own, and without concern for financial gains. Many firms attract programmers by allowing them to use exotic languages. Others, like Google, theoretically give programmers some free time to work on their own projects. Increasingly, great programmers get to work from home, sometimes hundreds of miles from the employer.

    So, if you offer lavish salaries, but relatively low freedom, you are likely to attract the wrong kind of programmers.

  13. The bit “Yes, you’ve seen it in the real world, but has it been confirmed in an artificial, academic environment?” reminded me of a talk by Clay Shirky on the difference between the solidity of an edifice and the solidity of a process: http://www.youtube.com/watch?v=Xe1TZaElTAs

    I agree with the statement about differences in productivity. I have seen it in the workplace too. I have even seen people many times more productive than the 10x figure suggests.

    @Wil W: I also measure productivity as getting stuff done. I work in Perl and web development. I learn and practice with the aim of making myself a better programmer in that area.

  14. James: I love the quote “They didn’t care that they had seen it work in practice because they already knew it couldn’t work in theory.”

  15. Two points:

    1. It’s actually Zipf all the way down. If you take the programmers who are 10x better than average and put them on a project, you’ll find that among them are still developers 10x better than the rest. I’ve seen this on multiple occasions.

    2. Compensation isn’t about engineering prowess, so the argument there is specious. However, there’s easily a 10x range, even if we ignore the startup folks who do in fact strike it rich. At large software companies, you’ve got tech VPs and distinguished engineers making $250K-1M/yr, and new IT devs making $30-50K.

  16. @Daniel, the little blip at 60 is more noise than data. At any rate, if the actionable advice we draw from the “fact” of 10x variations is that we should hire these people, there won’t be enough of them to go around, and the “solution” isn’t really one.

  17. “the little blip at 60 is more noise than data.”

    Google pays its engineers $150k a year plus benefits. At roughly 2,000 working hours a year, that’s well over $75 an hour. If you want to know what these guys would charge as freelance consultants, you must at least double that: $150 an hour. But you won’t find them on oDesk, because the people paying $150 an hour are not on oDesk. And that’s not a crazy sum: I’m sure I am vastly underestimating the going rate of top freelance engineers. Meanwhile, you can hire freelance beginners at $20 an hour.

    And honestly, many people would rather pay $200 an hour for code that rocks than $20 an hour for code that barely works.

    But back to the original point… I don’t think 10x programmers charge 10x. They are always a bargain.

  18. @Daniel: “many people would rather pay $200 an hour for code that rocks than 20$ an hour for code that barely works.” – that’s what I’d like to think too, but both John’s earlier post and my oDesk investigations have failed to turn up evidence of that. (Quoting John’s earlier post: “salaries usually fall within a fairly small range in any company. Even across the entire profession, salaries don’t vary that much”)

    Keep in mind that the oDesk bar chart I’ve posted isn’t made up solely of programming jobs: the rest of the oDesk stats indicate that their market also includes data entry jobs paid a mean rate of $3/hr. We’d find an even more peaked distribution, clustering around $15/hr, if we filtered only programming jobs in that data.

    Again, I’m not really disputing the 10x variance between extrema of the distribution: my original critique of McConnell was to say that – just as Greg Wilson was saying in his comments to John’s earlier post – calling this variance a “scientific fact” is vastly overstating the quality and quantity of the available evidence.

    You can make this variance between “best” and “worst” come out arbitrarily high by including more or fewer data points in your sample, which will then contain a proportionate number of radical outliers. So going to the trouble of obtaining evidence, and then discarding most of it by distilling it into this silly 10x number, is doing no one any good.
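
    A quick simulation makes the point; this one assumes, purely for illustration, lognormally distributed productivity with arbitrary parameters:

        #include <algorithm>
        #include <cstdio>
        #include <random>
        #include <vector>

        int main() {
            std::mt19937 rng(42);
            // Hypothetical, arbitrary model of per-programmer productivity.
            std::lognormal_distribution<double> productivity(0.0, 0.5);
            for (int n : {10, 100, 1000, 10000}) {
                std::vector<double> sample(n);
                for (double& x : sample) x = productivity(rng);
                auto mm = std::minmax_element(sample.begin(), sample.end());
                // The best/worst ratio grows with sample size alone,
                // even though the underlying distribution never changes.
                std::printf("n = %5d   best/worst = %.1fx\n",
                            n, *mm.second / *mm.first);
            }
            return 0;
        }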

    What matters is the shape of the distribution: is it flat, two-humped, sharply peaked? What matters too are the correlations: does productivity increase with job experience (as you’d intuitively suspect, though it has often been reported that it does not), and what else does it correlate with?

  19. I’ve held software engineer positions at NASA (Caltech), Microsoft, Yahoo, and Amazon. I don’t know about NASA, but at the other three it’s just not true that salaries “fall within a fairly small range,” unless you have an unusual definition of small. Most level bands span roughly ±30% of the median (e.g., if the median comp target for a level is $150K, then the range at that level goes from $105K to $195K), and the ratio from highest-compensated engineer to lowest is 5-20x, even if you restrict the comparison to SDE roles.

    I get that the industry as a whole may have a different distribution. There are many small businesses, startups, and employers outside the software domain (banks, universities, etc.), and perhaps in such places engineer compensation has less range. But then I’d suggest that developer productivity in such places probably also tends to have a more restricted range.
