Time exchange rate

At some point in the past, computer time was more valuable than human time. The balance changed long ago. While everyone agrees that human time is more costly than computer time, it’s hard to appreciate just how much more costly.

You can rent time on a virtual machine for around $0.05 per CPU-hour. You could pay more or less depending on whether the machine is on-demand or reserved, running Linux or Windows, etc.

Suppose the total cost of hiring someone—salary, benefits, office space, equipment, insurance liability, etc.—is twice their wage. At the US federal minimum wage of $7.25 per hour, that comes to $14.50 per hour, which implies that a minimum wage worker in the US costs as much as roughly 300 CPUs ($14.50 / $0.05 = 290).

This also implies that programmer time is three orders of magnitude more costly than CPU time. It’s hard to imagine such a difference. If you think, for example, that it’s worth minutes of programmer time to save hours of CPU time, you’re grossly under-valuing programmer time. It’s worth seconds of programmer time to save hours of CPU time.
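
Here is a quick back-of-the-envelope sketch of that ratio in Python. The $100 per hour fully loaded programmer cost is an assumed figure for illustration; plug in your own.

```python
# Back-of-the-envelope comparison of programmer time vs. rented CPU time.
# The programmer figure is an assumption for illustration, not data from the post.
cpu_cost_per_hour = 0.05           # rented virtual machine, dollars per CPU-hour
programmer_cost_per_hour = 100.00  # assumed fully loaded cost: salary, benefits, overhead

ratio = programmer_cost_per_hour / cpu_cost_per_hour
print(f"One programmer-hour costs as much as {ratio:,.0f} CPU-hours")       # 2,000

# So saving an hour of CPU time is worth only a couple of seconds of programmer time:
print(f"{3600 / ratio:.1f} seconds of programmer time per CPU-hour saved")  # 1.8
```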

19 thoughts on “Time exchange rate”

  1. This is absolutely true, but I think there is a time metric programmers should optimize for: user time. User time is IMHO more valuable than programmer time (especially on the web), as there are perhaps 10-100x as many users as programmers. That’s a number I pulled out of a hat, sure, but I think we should really make an effort not to waste user time.

  2. @CHAD Absolutely, and that brings up the other adage of optimisation: don’t waste time optimising what you *think* is slow, but profile to measure what’s *actually* slow (as measured in time-on-the-user’s-watch and not time-on-the-CPU)
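
    A minimal Python sketch of that wall-clock vs CPU-time distinction (the sleep stands in for waiting on I/O or the network; the numbers are illustrative):

    ```python
    import time

    def fetch_and_crunch():
        time.sleep(1.0)                    # stands in for I/O or network waiting: no CPU used
        sum(i * i for i in range(10**6))   # a bit of actual CPU work

    wall_start, cpu_start = time.perf_counter(), time.process_time()
    fetch_and_crunch()
    wall = time.perf_counter() - wall_start   # time on the user's watch
    cpu = time.process_time() - cpu_start     # time the CPU actually spent

    print(f"wall clock: {wall:.2f} s, CPU: {cpu:.2f} s")
    # The user waits over a second, but a CPU-only measurement shows just the small compute slice.
    ```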

  3. As Chad said. And it’s more than that – software responsiveness has a quality all of its own. Being able to try things without breaking your flow of thought makes a big difference to how much you can get done with software. In absolute terms, a delay of 5 seconds in a program’s response a few times a day isn’t much, but if it causes you to avoid checking something or seeing more context, the knock-on costs could be vast.

    The other day I tweaked a program to roughly halve its runtime, down to less than half a second. Only I’ll ever use it, and not very often – I’ll never get the time back in absolute terms (particularly now I’ve typed this about it and this parenthetical comment …) but I did it partly out of curiosity about where the time was going, and also to make any delay in actually using it vanish for practical purposes – encouraging me to try more things.

  4. I’m only comparing the cost of hiring a programmer and the cost of renting a CPU here. It could be worthwhile to spend man-years making software only slightly faster, for example in a missile defense system. But the benefit in that case is not just saving commodity compute cycles.

  5. Doesn’t this mean that everything that is run more than 1000 times starts to break even in terms of cost? Programmer time is involved in development and maintenance, while CPU time is involved in every execution from now until those lines of code are retired (which could easily be millions of times in a medium-sized application).

  6. The lessons here are:
    1. If you are paying your programmers to optimize your software to reduce your operating costs, you are doing it wrong.
    2. The best bang for your programming buck is to get good at making your software scale out across many of these commodity CPUs.

  7. Giorgio Sironi nailed the problem: you seem to be assuming low execution counts.

    Let’s expand using a simplified real-world example:

    You have a website with 10,000,000 pageviews a day. You shave 200 ms (from 300 ms to 100 ms) off the response time with some caching tricks and refactoring some of the code, saving a total of 2,000,000 seconds of computer time a day, or 555.56 hours, or $27.78 at $0.05 per hour of machine time.

    So, if your time is worth $200/hour and you spend 8 hours working on this, you have spent $1,600 worth of effort, and your investment starts to turn a profit after 57.6 days. At the end of a year, the $1,600 investment has turned into a net savings of $8,539.

    As I said, this is a simplified example (the short script at the end of this comment redoes the arithmetic), but it shows how your logic breaks down at high execution counts.

    XKCD had something along these lines in its “time saved” chart, http://xkcd.com/1205/, illustrating nicely that the repetition count and the time saved determine how long you can spend working on the problem.
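
    The script below just reproduces the arithmetic above; every input is the assumed figure from this simplified example.

    ```python
    # All inputs are the assumed figures from the simplified example above.
    pageviews_per_day = 10_000_000
    seconds_saved_per_view = 0.200        # 300 ms -> 100 ms
    machine_cost_per_hour = 0.05          # dollars per CPU-hour
    programmer_rate = 200.00              # dollars per hour
    hours_spent = 8

    daily_savings = pageviews_per_day * seconds_saved_per_view / 3600 * machine_cost_per_hour
    effort_cost = programmer_rate * hours_spent

    print(f"daily savings:       ${daily_savings:,.2f}")                     # ~$27.78
    print(f"break-even after:    {effort_cost / daily_savings:.1f} days")    # ~57.6
    print(f"net after one year:  ${daily_savings * 365 - effort_cost:,.2f}") # ~$8,539
    ```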

  8. This makes perfect sense if you’re in the position of a manager deciding how to allocate a budget.

    As a programmer who has made this argument several times, with varying degrees of success, I’ve encountered a few objections. First, salaries are a fixed, predictable cost, and new equipment, if any, usually comes out of a budget surplus. Second, managers (the ones I’ve worked with, at least) are often hesitant to rent CPU time, because they’d like something tangible and because rental costs are more unpredictable than one-time sunk costs.

    Also, if you are working with big datasets, data transfer rates and caps can become a bottleneck when renting cloud compute time. Migrating software development from local to cloud takes time too: you have to develop either an image or a repeatable build system to set up each cloud instance. And a cloud CPU-hour tends not to be equivalent to an owned server/desktop CPU-hour, because you are sharing the host machine and in some cases paying a virtualization tax.

  9. One addition.

    I think the biggest bottleneck of all in this argument is that there is a moderate jump in application complexity when you try to go from single core to multicore, and a much bigger jump when you go to multi-machine. Since handling this often requires you to write more code (the very thing you’re trying to avoid), it’s not always clear when buying more CPU power is the solution.

  10. 1) Typically, CPU time is also the delay before the user (maybe the same or another programmer) gets a result.
    2) CPU time is multiplied by the number of invocations (typically more than one, even for “run-once” scripts).

  11. You’re right, John, in that circumstance, but when does anyone ever face a choice of “hiring a programmer” or “renting a CPU”? I’ve been in this industry for 20 years and I believe that it happens sometimes but I can’t recall a time I’ve ever faced it myself.

    For people working on consumer-facing software, the equation is completely different. If my program takes even just 10 seconds longer than the competition’s program, all the customers are going to buy from my competitor instead.

  12. CPU time is often entangled with other objectives, such as user satisfaction. Making software faster generally makes users happier, but the relationship isn’t simple. A small increase in efficiency might lead to a large increase in customer happiness. On the other hand, a large increase in efficiency might make no noticeable difference to the users. In any case, increasing user happiness is a different objective than reducing CPU time, even if they’re related.

    Sometimes there are bureaucratic barriers to buying cheap compute time, as Cory pointed out above. At a former employer I often asked programmers to spend days on a problem that could be solved much faster by spending a few bucks on virtual machines, simply because the latter was not administratively possible.

  13. It depends on how many CPUs are executing the code multiplied by how many times the code is executed.

    Suppose I am optimizing a routine, like malloc(), that virtually all machines use all the time, and suppose I can save 1 milliwatt of power per use by executing fewer instructions; that could be a good deal.

  14. I agree with the post, but the main reason to speed up code is not to save CPU time, it is to save developer time! We make our code fast to speed up the development cycle, not to reduce the total run time of the final version. At least that’s the case in academic settings (like mine), where all code is prototype code. In this context (speeding up the development cycle), it *is* rational to take ten minutes to speed up the code by ten minutes. Even taking many hours to do that can be worth it if the development cycle is run enough times.

  15. This XKCD has a nice lookup table for determining if you should do an optimization. If you consider the number of users you have as the “how often you do the task” metric, it applies nicely. http://xkcd.com/1205/
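
    A tiny sketch of that idea, assuming the five-year horizon the comic uses; substitute (number of users × task frequency) for uses_per_day:

    ```python
    # Sketch of the xkcd-1205 rule of thumb: the time you can justify spending on an
    # optimization is the time shaved per use times the number of uses over five years.
    HORIZON_DAYS = 5 * 365   # the comic's assumed horizon

    def optimization_budget_hours(seconds_shaved_per_use, uses_per_day):
        return seconds_shaved_per_use * uses_per_day * HORIZON_DAYS / 3600

    # e.g. shaving 1 second off a task performed 50 times a day across all users:
    print(f"{optimization_budget_hours(1, 50):.0f} hours")   # ~25 hours, about a day
    ```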

  16. Most computing tasks are not CPU-bound.

    As an aside, I also have to wonder how many externalities are not being factored into the $0.05 per CPU-hour figure. I suspect someone isn’t paying their carbon offsets.

  17. “Making software faster generally makes users happier, but the relationship isn’t simple.” (John D. Cook)

    Yeah, the classic example is the relationship to user context-switch costs. Going from 5 minutes to 2 seconds (a 150x improvement) can lead to a worse user experience: five minutes is long enough that running the task in the background (or taking a coffee break) is reasonable, while two seconds is a very noticeable delay yet not long enough to justify a switch, unless one has several such tasks that can be started in sequence (some web forums justify such; opening several threads in different tabs can hide the medium-scale latency). Variable delay can be even worse than medium-scale delay; there is an incentive to wait and see whether the delay will be long, and then one still needs to make a decision.

    Of course, the degree of parallelism available to the user (and how much is lost by multitasking) will vary among users and workloads.

    This question of relative cost is not that dissimilar to the differences caused by skill. Skilled people might be biased in thinking that all jobs should be done by the most skilled worker available (so a programmer might be biased to think that programming effort is better than computer effort). The stereotypical bean-counter would hire 50 code monkeys as temporaries even if 5 expert programmers could do the same task (so some people would rather throw hardware at problems than consider optimizing the software).

    Sadly, there is also a cost in making a decision and in changing direction; making a decision about whether to make a decision has costs, and changing management processes has costs (turtles all the way down ☺).

  18. One of my objections to this was already addressed: improvements to program performance affect scalability, and thus the user experience, when the program is run many times.

    The other situation where performance matters is when the CPU cost assumption is invalid from the start, as in embedded systems, where the cost of computation is much higher, and that’s even before considering that a lot of them run on batteries, which adds both a material cost and a user experience cost. A more efficient program finishes sooner, allowing the system to go into a sleep state as soon as possible. (Come to think of it, this applies even to laptops, tablets, and phones, which aren’t embedded systems anymore.)

    In my experience, in some constrained environments performance still matters, and it is worth spending the programmer time to save CPU time.

  19. Important caveat: there is a cut-over point when your “app” runs on thousands or tens of thousands of machines, after which machine time does become more expensive than engineer time. Granted, there are perhaps only a handful of places where this occurs.
