Small batch sizes

Don Reinertsen gave a great keynote address at YOW 2012 entitled The Practical Science of Batch Size. I recommend watching the video when it’s posted, probably in January. In the meantime, I want to relate one small illustration from the talk. It’s a parable of why agile methods can save money.

Someone generates a two-digit number at random, anywhere from 00 to 99. You can pay $2 for a chance to guess the number, and if you are correct, you win $200. It’s a fair bet: with 100 equally likely possibilities, both sides have net expected return zero.

Now let’s change the rules. Same reward, but now you pay $1 to guess the first digit and you find out whether you were right. If you were, you then have the option of spending another $1 to guess the second digit. Now you have a winning bet.

Your chances of winning are the same as before: there’s a 1% chance that you’ll win $200. But the cost of playing has gone down. There’s a 90% chance that you’ll spend $1 (wrong first digit) and a 10% chance you’ll spend $2. Your expected return is still $2, but your expected cost is only 0.9 × $1 + 0.1 × $2 = $1.10, so on average you make $0.90 every time you play.
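To make the arithmetic concrete, here’s a minimal Python sketch that estimates the expected value of both games by simulation. The stakes, payout, and uniform digits come straight from the example above; the function names and trial count are mine.

    import random

    def play_both_digits():
        """Pay $2 to guess a whole two-digit number (00-99) at once."""
        target = random.randrange(100)
        guess = random.randrange(100)
        return (200 if guess == target else 0) - 2

    def play_one_digit_at_a_time():
        """Pay $1 per digit, stopping if the first digit is wrong."""
        cost = 1
        if random.randrange(10) != random.randrange(10):  # first digit wrong (90%)
            return -cost
        cost += 1  # first digit right: pay another $1 for the second
        if random.randrange(10) != random.randrange(10):  # second digit wrong (90%)
            return -cost
        return 200 - cost

    trials = 1_000_000
    print(sum(play_both_digits() for _ in range(trials)) / trials)          # ~ 0.00
    print(sum(play_one_digit_at_a_time() for _ in range(trials)) / trials)  # ~ 0.90

Run it and the two averages settle near $0.00 and $0.90, matching the calculation above.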

Don argues that we often come out ahead by doing things in smaller batches: committing less money at a time, taking on smaller projects at a time, etc. Not that smaller batches are always better. As batch sizes decrease, holding costs decrease but transaction costs increase. You want to minimize the sum of holding costs and transaction costs. But we more often err on the side of making batches too large.

In the example above, the batch size is how much of the number we want to guess at one time: both digits or just one. By guessing the digits one at a time, we reduce our holding cost. And in this example, there are no transaction costs: you can guess half the digits for half the cost. But if there were some additional cost to playing the game — say you had to pay a tax to make a guess, the same tax whether you guess one or two digits — then it may or may not be optimal to guess the digits one at a time, depending on the size of the tax.
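To see where the crossover lies, here’s a short sketch of that tax scenario. I’m assuming the tax t is charged each time you submit a guess, so the digit-at-a-time player pays it once for certain and a second time only in the 10% of games where the first digit is right; the function names and sample tax values are mine.

    def ev_both_digits(t):
        # One guess covering both digits: 1% chance of $200, $2 stake, one tax.
        return 0.01 * 200 - 2 - t

    def ev_one_at_a_time(t):
        # Expected stake is $1.10, and the tax is paid 1.1 times on average.
        return 0.01 * 200 - 1.10 - 1.1 * t

    for t in (0, 5, 9, 12):  # hypothetical tax levels
        print(t, ev_both_digits(t), ev_one_at_a_time(t))

Under this assumption the two strategies break even at a $9 tax: below that, guessing one digit at a time still comes out ahead; above it, the extra transaction cost outweighs the reduced holding cost.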

Books by Don Reinertsen

5 thoughts on “Small batch sizes”

  1. You have to compute the cost of deciding to go on or not. In this case it’s trivial, but in other cases maybe not.

  2. True. But another point Don made is that the trade-off curve often has a moderately flat bottom. This means that you get about the same cost even if you’re not very close to the optimum. His advice was to start by simply cutting your batch size and see what happens, rather than trying to calculate the optimum.

  3. Some thoughts:
    This reminds me of “Little Bets”, a book by Peter Sims. It also reminds me of the value of working in areas where other people are not: for a given amount of time exploring, you have a higher likelihood of discovering something new. That seems to be the path of many Nobel laureates and MacArthur Fellows. And similarly, entering small or low-profile competitions increases your chances of winning something.

  4. Re cost cutting @Manoel: I agree with John. At worst, the smaller batch size doesn’t let you know the project should die until it is done; in the best case, it takes one small chunk to realize it doesn’t make sense rather than one multi-month or multi-year big chunk.

    Kind of a good/bad thing: at my work my boss is part-time, so I only meet with him every two weeks or so on a project that takes ~50% of my time. What often ends up happening is that the chunks of work I get aren’t sufficient to keep me busy (I find a smarter way of doing something than initially planned, we end up agreeing on a simpler solution, etc.).

    This gives me time to research different things and tackle problems elsewhere at work. Does my manager do this by design to let other tasks get some throughput, or is it a lack of proper management? Who knows? But in my experience it’s the nature of R&D-type work that you don’t really get a large product backlog, especially on the more operations-research side of things I tend to work on.

    So, tying it all up: smaller chunks that fail quickly can free you up to look at other opportunities while the coordination happens to arrange the next stakeholder meeting, rather than tying you up at 100% utilization churning on something so large that it takes you the best part of a cycle to find out it isn’t going to work. In agile land this might be an argument for lots of small stories in a sprint rather than “monolithic” one-story-per-dev-per-sprint planning.

  5. Great post. Gotta think about it some more, but I can say it’s behind a lot of counterintuitive things in management. Eliyahu Goldratt’s ‘The Goal’ gives many examples without ever stating the underlying reason so clearly.
