100x better approach to software?

Alan Kay speculates in this talk that 99% or even 99.9% of the effort that goes into creating a large software system is not productive. Even if the ratio of overhead and redundancy to productive code is not as high as 99 to 1, it must be pretty high.

Note that we’re not talking here about individual programmer productivity. Discussions of 10x or 100x programmers usually assume that these are folks who can use basically the same approach and same tools to get much more work done. Here we’re thinking about better approaches more than better workers.

Say you have a typical system of 10,000,000 lines of code. How many lines of code would a system with the same desired features (not necessarily all the actual features) require?

  1. If the same team of developers had worked from the beginning with a better design?
  2. If the same team of developers could rewrite the system from scratch using what they learned from the original system?
  3. If a thoughtful group of developers could rewrite the system without time pressure?
  4. If a superhuman intelligence could rewrite the system, producing something approaching the Kolmogorov complexity of the problem?

Where does the wasted effort in large systems go, and how much could be eliminated? Part of the effort goes into discovering requirements. Large systems never start with a complete and immutable specification. That’s addressed in the first two questions.

I believe Alan Kay is interested in the third question: How much effort is wasted by brute force mediocrity? My impression from watching his talk is that he believes this is the biggest issue, not evolving requirements. He said elsewhere:

Most software today is very much like an Egyptian pyramid with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves.

There’s a rule that says bureaucracies follow a 3/2 law: to double productivity, you need three times as many employees. If a single developer could produce 10,000 lines of code in some period of time, you’d need to double that output 10 times to get to 10,000,000 lines of code. That would require tripling your workforce 10 times, resulting in 3^10 = 59,049 programmers. That sounds reasonable, maybe even a little optimistic.
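To make the arithmetic concrete, here is a quick sketch in Python. The 3/2 law and the figures are the ones above; the assumption that the law holds at every scale is mine.

```python
# Sketch of the arithmetic above: assume the "3/2 law" (triple the staff
# to double the output) holds at every scale.
base_loc = 10_000          # lines one developer produces in some period
target_loc = 10_000_000    # size of the large system

doublings = 0
loc = base_loc
while loc < target_loc:
    loc *= 2
    doublings += 1

print(doublings)           # 10 doublings of output needed
print(3 ** doublings)      # 3**10 = 59,049 programmers
```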

Is that just the way things must be, that geniuses are in short supply and ordinary intelligence doesn’t scale efficiently? How much better could we do with available talent if we took a better approach to developing software?

28 thoughts on “100x better approach to software?”

  1. Adding to the above wisdom…

    The big productivity gains come from recasting the problem.

    Having done my share of UI, this means picking a mental model that’s closer to the real world problem.

    Having done my share of CRUD apps, this means simplifying the underlying business processes, versus computerizing the existing process (the difference between automation and mere digitization).

  2. My experience is that I sometimes have to try a few times before I get the right design.

    The problem with group work is that you can’t keep throwing stuff away because people start to rely on it.

    Hence, you get a lot of crap.

    It seems that it has little to do with genius. Even very brilliant people don’t always hit the right design the first time around.

  3. How much better could we do using some paradigm that hasn’t been invented?

    This makes me think we should invest more in research like “Haskell vs. Ada vs. C++ vs. Awk vs. …: An Experiment in Software Prototyping Productivity,” which posed problems and asked experts for their best solutions.

    Another approach would be to evaluate the programs in 101companies to see what common issues exist in current languages and libraries. Then we could try to design a solution to the problem that has none of those issues.

  4. @Daniel Lemire

    :-) As a great man once said:

    “Where a new system concept or new technology is used, one has to build a system to throw away, for even the best planning is not so omniscient as to get it right the first time. Hence plan to throw one away; you will, anyhow.”

    — Fred Brooks, The Mythical Man-Month: Essays on Software Engineering

  5. We still don’t know how to program (paraphrasing Sussman).

    Our languages, libraries, and systems fail badly at composition; we always end up writing monolithic programs with huge baggage. If we pick a simple problem, e.g. a contacts web CRUD app with ACID semantics, most of the resulting program is unrelated to the particular problem: we end up using a web server that implements much more than we require (we don’t need multipart for our CRUD), and likewise for the database system (no need to support joins for a single-table system). The libraries that interface with these systems expose very large APIs, much of which we don’t need, and they often require configuration for those unnecessary parts. The languages don’t offer any way out.

    In an ideal system one could pick just the parts one needs and end up with the minimal program that works. Unfortunately, writing composable programs is unbelievably difficult. Even genius programmers end up writing monolithic programs that contain more than necessary, contributing to the cruft in somebody else’s programs.
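    To give a sense of scale, here is a toy sketch of the essential problem using only the Python standard library. This is illustrative only, not the commenter’s point made precise: a minimal contacts store with create and read (update and delete omitted), where sqlite’s transactions supply the ACID part.

    ```python
    import json
    import sqlite3
    from http.server import BaseHTTPRequestHandler, HTTPServer

    db = sqlite3.connect("contacts.db")
    db.execute("CREATE TABLE IF NOT EXISTS contacts"
               " (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")

    class ContactHandler(BaseHTTPRequestHandler):
        def do_GET(self):   # read: list all contacts as JSON
            rows = db.execute("SELECT id, name, email FROM contacts").fetchall()
            body = json.dumps(
                [{"id": i, "name": n, "email": e} for i, n, e in rows]).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

        def do_POST(self):  # create: insert one contact from a JSON body
            contact = json.loads(
                self.rfile.read(int(self.headers["Content-Length"])))
            with db:        # one sqlite transaction: the ACID semantics we need
                db.execute("INSERT INTO contacts (name, email) VALUES (?, ?)",
                           (contact["name"], contact["email"]))
            self.send_response(201)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), ContactHandler).serve_forever()
    ```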

  6. We know what we’re doing now is not optimal. But how much better could we reasonably expect to do? How much better could we do if we simply adopted existing practices that have been proven effective but are not widely practiced?

    Perhaps more interesting is this question: How much better could we do using some paradigm that hasn’t been invented? Of course this is pure speculation, but we implicitly answer this question by how we allocate research funding. We’re placing bets based on how big an improvement we think is out there to be found and what we think our chances are of finding it.

  7. I think this understates the value of brute force methods. And just how well do we know that what we are doing is not as optimal as it could be? That is, is this akin to someone musing about an engine with 100% efficiency?

    Reading through More Programming Pearls, I’m struck by how much nicer reuse used to be. Back then, to reuse code, it seems you were literally given the code and a chance to pull it into your program, possibly having to rewrite it, but with full visibility in case you spotted an inefficiency or realized you had made assumptions the author did not. Of course, I realize I’m likely wearing rose-tinted glasses.

    I just can’t get past the thought that the practice of reuse in programming is far removed from reuse in any other field. I wonder if improvement of the practice suffers for it.

  8. Josh: The advantage of mechanical systems is that we know what the ideal is. We know 100% efficiency isn’t possible, but we have a standard to measure efficiency against. I’m suggesting we speculate about what the analogous ideal might be for software.

    If we think we’re near the practical limit of productivity, we won’t work very hard to improve. If we think tremendous gains are possible, we’ll search harder for them. There’s anecdotal evidence to suggest huge gains are possible.

  9. My experience has been very different.

    On truly large systems, all the effort and churn goes not so much into tending the large codebase as into dealing with the emergent properties of the system in large deployments, with large amounts of data and users, and then iterating to change it.

  10. Sorry, I had meant to put more emphasis on my other point (in fact, I should have just removed the first one). I think the way we “share” code nowadays is often set up in such a way that there is too little inspection of the previous process. That is, in other fields, to realize the improvements others made you almost have to take up their process, not just use their results. Now we have such ready access to black-box products that even though many of us could get visibility into what was done, we typically don’t.

    More directly put, in reading how algorithms were analyzed and shared at a time when working code was typically not directly available, I think more scrutiny went into the processes.

  11. “Perhaps more interesting is this question: How much better could we do using some paradigm that hasn’t been invented?”

    I’m not sure how that’s interesting, given that the vast majority of programs aren’t written in the best paradigms that have already been invented. We already have languages and environments that are 10 to 100 times (or more) more efficient to develop software in than C++ or Java, yet these mundane languages are still extremely popular, especially at big companies. It’s a social problem, not a technical one.

    IME, managers usually don’t want productivity or efficiency. They want replaceability and low risk. 40 programmers is a solid team; 2 programmers are a car crash waiting to happen. Even if those 2 programmers are more efficient than the 40, they’d rather have the 40.

  12. Alex: Perhaps I could rephrase the question. Given the constraints we’re under (political, social, cultural, etc.) how much could we improve productivity? Given that managers want replaceability and low risk as you say, can we do better? Managers wanted the same things 20 years ago, and we’ve made some progress since then.

    Managers probably do place too much value on fungibility and predictability, but placing some value on these factors is quite rational. I’ve worked with people who have the opposite bias, who see no risk in having one person work entirely alone on a complex system that no one else could possibly maintain.

  13. Refactoring is often thought of as “fixing as you go,” but the original intent is that as your understanding of the problem grows, the code should change to reflect that better understanding.

    As for implementation productivity, I’m leaning toward functional programming for reuse, testability, and correctness. The “thought up front” that functional programming demands seems to me to address the actual problems to be solved more directly than OO does. OO gives a feeling of rapid progress, but I often find it illusory: inside all of those methods is so much imperative code, with all the usual pathologies. (A toy contrast appears at the end of this comment.)

    I’ve found Scala to be a useful melding of functional and OO. Does it tilt far enough toward best of both worlds vs worst of both worlds? That is up to the developer.

    Process problems, such as assuming near-complete knowledge of the requirements and solutions up front, will forever be with us. We really want to believe that we know exactly what needs to be done and how long it will take. Developers know that there is always much to learn along the way. Non-developers, well… they really, really want to believe.
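    Here is the toy contrast mentioned above, in Python for brevity (illustrative only, not a claim about any particular codebase): the pure function can be tested and reused with nothing but its inputs and output, while the stateful version hides its behavior behind mutation.

    ```python
    class OrderImperative:
        """OO/imperative style: behavior entangled with mutable state."""
        def __init__(self):
            self.items = []
            self.total = 0.0

        def add(self, price, quantity):
            self.items.append((price, quantity))
            self.total += price * quantity   # state mutates as a side effect

    def order_total(items):
        """Functional style: the result depends only on the argument."""
        return sum(price * qty for price, qty in items)

    # The pure function is trivially testable: no setup or teardown needed.
    assert order_total([(2.0, 3), (1.5, 2)]) == 9.0
    ```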

  14. Lines of code is one of the worst ways to measure productivity. Maybe just slightly better than “time in office”.

  15. I estimate a few problems come into play:

    1) addiction to bad technology stacks that inhibit productivity

    2) legacy code written in incompatible ways

    3) addiction to inferior paradigms

    4) impedance mismatches between the stacks being used

    For myself, I use Common Lisp as much as I can, and I plan to learn Erlang and Haskell to see what they can offer for better, more productive code. CL is *amazing*, but interfacing with outside systems can be troublesome if the libraries haven’t matured yet.

  16. Having spent most of the last 20 years in the trenches of Silicon Valley companies that stumble through creating their systems (and that are in a much better position with respect to this topic than the average company out there), I would say that the problem is mostly management. I’ve worked in many positions over the years, quite a few of them senior management and architecture positions, so I’ve witnessed the messy decision-making process firsthand many times.

    Management teams consistently make very poor decisions when it comes to building, maintaining, and fixing the software they are charged to manage. Their dominant feelings are uncertainty, fear, and distrust of their best employees. At times there is also a blind belief in the “magic bullet du jour” to add to the trouble.

    Management is prone to believe that young and inexperienced developers (who are also cheaper) are “hipper,” more “in on the new technology,” and generally brighter than proven, experienced developers (I’ve not personally suffered much from this bias, but I’ve witnessed it plenty and was sometimes exposed to it myself). While hand-picked, personally mentored young stars are worth their weight in gold, it is very rare to find young engineers who can successfully build large and/or complex systems without getting into serious trouble. These days any kid who can pronounce “Hadoop” can get a chance to try, even when the problem is puny and could easily be handled in memory on a single node by someone who knows what they’re doing. But chances are that management will ignore advice to that effect (I’ve witnessed this numerous times) because of the magic combination of a young engineer and the magic bullet. So you end up with a heavy, inefficient mess and a pile of code that doesn’t really do much for you.

    Management is afraid to throw away bad, bloated code bases that need wholesale replacement. Such code, created by typically inept young engineers with no serious mentor in sight, lives on forever. Then a team of “fixers” must come and pick up the mess, but typically they are not given the authority (and rarely have the guts themselves) to do the right thing and rebuild. Instead, more is added to the pile of excrement to try to get it under control.

    The consultant effect – management loves consultants. Regardless of all evidence, an outside consultant’s opinion will always trump the opinion of your best internal person. This holds even though your best internal person may have far better knowledge of what’s going on and how it might be fixed (or how to build a new system that will work well in the context of the company). If the internal expert promotes an unconventional solution and the consultant is armed with a menu of popular magic bullets (and when are they not?), the decision will invariably go to the consultant, who will quickly go away, taking far more money than they are worth and leaving the company with yet another messy system that it would not dare throw away. Note also that the expensive consultant will either come with their own staff of inept young engineers (armed with a “process”) or be given control of an internal team made up of the cheapest available resources, many of whom may be in a remote country we shall not name and whose real qualifications you know little about, except that they can (mostly) spell the name of your current magic bullet correctly.

    An earlier comment about inflexibility in changing business processes points to a variant of this: the inability of management to embrace change. Indeed, you will rarely find management agreeing to simplify or adapt the business process to the technology. If there is a team of bureaucrats or a powerful stakeholder invested in said process, the situation is hopeless and leads to an endless string of special cases, riddled with bugs and inducing coupling everywhere.

    So, bottom line – we will not have a solution until business culture learns how to manage technology. Failure ranges from using industrial-age methods (the only thing business schools seem to teach) that were developed for assembly lines, to just playing it by ear while pretending to be “agile.”

    Management teams that successfully avoid these typical traps are often more successful in keeping their internal technology under control. While in the short run they may seem confused, in the long run they run a leaner, meaner shop. They are unafraid to say “I’ve made a mistake” and unafraid to take a route they previously fought against if reality proves them wrong.

  17. Alan Kay has an NSF-funded project where he and his colleagues at VPRI are attempting to add some substance to that assertion in the context of desktop systems by trying to build a complete desktop system, including productivity apps, in under 20 KLOC. See here for more info: http://vpri.org/html/work/ifnct.htm

  18. The reference in JohnH’s comment (complete desktop system + productivity apps in 20K LOC) is interesting. It’s better to write less code, which is another way of stating that it’s better to rely on existing code for a lot of what you’re trying to do. That obviously points to higher-level languages and a rich runtime and set of libraries.

    But I think it also means that you need to spend a lot of time simplifying — taking the requirements of your application and finding a very small number of concepts that will support them. Then implement those concepts as well as you can (fast, obvious, reusable code), and keep relying on them. This is just another way of saying what Kay does in the referenced paper: “Math wins.” He goes on to reference the idea of building things with algebras, which I have found very powerful in my work over the years. (A toy sketch of the idea appears at the end of this comment.)

    I’ve been trying to apply this approach to a project. I’m writing software for a persistent, transactional ordered map, i.e. a B-tree (except that it isn’t very much like a classical B-tree). It’s taken a long time, partly because it’s a part-time project, but mainly because this approach seems to take a long time. I’ve spent a lot of time refactoring and rewriting when code drifts away from the basic principles of the system or gets too complicated. So after two years, I have only about 15K lines of code (1/3 tests). But the system is pretty close to done, and it takes 5–10% of the code size of comparable systems.

    Getting back to John’s list of four approaches (ignoring the last): Kay is right; a better design is one based on very few concepts, which are probably mathematical in nature. If your software is just one piece of code after another and another and another, all with different kinds of interactions, that’s probably not a good design. Rewrites that take lessons into account are great, but if you substitute one complex design for another, you’ll have another, slightly less complex mess. Lack of time pressure is also wonderful and can avoid the problem of code degrading because you’re in a rush, but it can still lead to a very clean implementation of a complex design. What these three approaches dance around is the need for an extremely simple design at the beginning.
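    And here is the toy sketch of “building things with algebras” promised above — my own guess at what the idea means in miniature, not code from the project described. The algebra here is a monoid (an identity element plus an associative combine), and one generic fold then serves every instance.

    ```python
    from functools import reduce

    class Monoid:
        """A tiny algebra: an identity element and an associative combine."""
        def __init__(self, empty, combine):
            self.empty = empty        # law: combine(x, empty) == x
            self.combine = combine    # law: combine is associative

        def concat(self, xs):
            # One generic operation, reused by every instance of the algebra.
            return reduce(self.combine, xs, self.empty)

    total = Monoid(0, lambda a, b: a + b)   # numbers under addition
    longest = Monoid("", lambda a, b: a if len(a) >= len(b) else b)

    print(total.concat([1, 2, 3]))                   # 6
    print(longest.concat(["map", "b-tree", "log"]))  # b-tree
    ```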

  19. “I would have written a shorter letter, but I did not have the time,” as someone famously wrote. Removing redundancy from code takes effort, so 99% less code does not mean 99% less work; if anything, the opposite.

  20. Zbigniew: Sometimes it’s worth the effort to write a short letter.

    If software is important and will live a long time, it’s worthwhile to (re)write it to be more compact. Code that isn’t there has no bugs, no security holes, doesn’t have to be maintained, etc.

  21. Yeah – I actually fully agree, believe it, and do it myself frequently :) What I was commenting on is the author’s words about “wasted effort” – it sounds as if you could write the short letter from the start and save the effort of writing the additional sentences. This is obviously not true – and probably the author did not mean it – but the post is not clear on this.

  22. You don’t offer a citation for the “3/2 rule,” but I think you may be referring to this article (or a related one from the same site): http://www.cybaea.net/Blogs/Journal/employee_productivity.html I think it yields total output proportional to N · N^(-0.68) = N^0.32, since productivity for an individual employee follows N^(-0.68) and total output is the number of employees times average productivity. From the “3/2 rule” post:

    “Fitting a power law give a slope of -0.68. This is scary. Three raised to the power of -0.68 is 0.47. This means that when you triple the number of employees, you halve their productivity. Or: When you add 10% employees the productivity of each drops by 6.3%. Of course, since 3 times half is greater than one, your total profits are typically growing. ”

    What this really implies is that doubling the output requires about 9 times the staff, since 2^(1/0.32) ≈ 8.7.

    They revisit their numbers in http://www.cybaea.net/Blogs/Journal/employee_productivity_sector.html and believe a more accurate slope for most industries is closer to -0.1, giving total output N · N^(-0.1) = N^0.9. They note:

    “In total, there is probably a downward trend with size but with a slope of perhaps -0.1 or thereabouts. That still means that when you add 10% employees you lose 1% productivity per employee, which is clearly problematic. It is a much smaller number than the one we found before, primarily because the previous data set (the S&P 500) is biased against small companies with low revenues per employee. In the current data set we still have a bias in that they are all quoted companies which implies a certain size or at least cash position, but much less biased than before. ”

    This implies that a team of 22 would double the output of a team of 10, and a team of 216 would double the output of a team of 100 (since 2^(1/0.9) ≈ 2.16).
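    A quick check of the arithmetic (my own sketch; the slopes come from the posts linked above): if productivity per employee scales as N^slope, total output scales as N^(1 + slope), so doubling output takes a headcount multiple of 2^(1/(1 + slope)).

    ```python
    def staff_multiple_to_double(slope):
        # Total output ~ N**(1 + slope); solve (k*N)**(1 + slope) == 2 * N**(1 + slope).
        return 2 ** (1 / (1 + slope))

    print(staff_multiple_to_double(-0.68))  # ~8.7: roughly 9x the staff
    print(staff_multiple_to_double(-0.1))   # ~2.16: 10 -> ~22, 100 -> ~216
    ```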

  23. Sean: Thanks for the references. I wish I could remember where I first saw that 3/2 rule.

    I imagine the exponent varies quite a bit according to industry. I doubt hiring another waiter diminishes the productivity of other waiters very much. But hiring a new programmer can really drag other programmers down.
