Just what do you mean by ‘scale’?

“Fancy algorithms are slow when n is small, and n is usually small.” — Rob Pike

Someone might object that Rob Pike’s observation is irrelevant. Everything is fast when the problem size n is small, so design your code to be efficient for large n and don’t worry about small n. But it’s not that simple.

Suppose you have two sorting algorithms, Simple Sort and Fancy Sort. Simple Sort is more efficient for lists with fewer than 50 elements, and Fancy Sort is more efficient for lists with more than 50 elements.

You could say that Fancy Sort scales better. What if n is a billion? Fancy Sort could be a lot faster.

But there’s another way a problem could scale. Instead of sorting longer lists, you could sort more lists. What if you have a billion lists of size 40 to sort?

People toss around the term “scaling” as if everyone had the same notion of what it means. But a problem can scale along more than one dimension. Whether Simple Sort or Fancy Sort scales better depends on how the problem scales.
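
To make that concrete, here is a minimal sketch in Python. The names `simple_sort` and `fancy_sort` are stand-ins for the hypothetical algorithms above (an insertion sort and a merge sort), and the sizes are chosen only for illustration; the exact crossover point depends on the implementation and the machine, so treat the numbers as an experiment to run, not a result.

```python
import random
import time

def simple_sort(xs):
    # Insertion sort: little overhead per element, but O(n^2) comparisons.
    xs = list(xs)
    for i in range(1, len(xs)):
        key, j = xs[i], i - 1
        while j >= 0 and xs[j] > key:
            xs[j + 1] = xs[j]
            j -= 1
        xs[j + 1] = key
    return xs

def fancy_sort(xs):
    # Merge sort: more bookkeeping per element, but O(n log n) comparisons.
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = fancy_sort(xs[:mid]), fancy_sort(xs[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

def total_time(sort, lists):
    start = time.perf_counter()
    for xs in lists:
        sort(xs)
    return time.perf_counter() - start

# Scaling dimension 1: one long list.
one_long_list = [[random.random() for _ in range(5_000)]]
# Scaling dimension 2: many short lists, each below the crossover point.
many_short_lists = [[random.random() for _ in range(40)] for _ in range(10_000)]

for label, workload in [("one long list", one_long_list),
                        ("many short lists", many_short_lists)]:
    print(f"{label}: simple {total_time(simple_sort, workload):.2f}s, "
          f"fancy {total_time(fancy_sort, workload):.2f}s")
```

Which column wins on which workload is exactly the point: the answer depends on the dimension along which the problem grows.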

The sorting example just has two dimensions: the length of each list and the number of lists. Software trade-offs are often much more complex. The more dimensions a problem has, the more opportunities there are for competing solutions to each claim that it scales better.


14 thoughts on “Just what do you mean by ‘scale’?”

  1. It’s nice when one method is best by all criteria. Unfortunately that doesn’t always happen.

  2. Come on! The developers I know who work on products that sell profile before optimizing.
    Are all the commentators academics?

  3. You can only profile code you’ve written. If you decide “Fancy Sort scales better” during design, that’s the only one you’ll code up. If you’re in doubt whether Fancy Sort or Simple Sort works better in your application, then yes, you may code them both up and profile them. Or, as Daniel suggests, you may have your code branch between the two algorithms depending on list size (see the sketch after the comments).

    (The sorting example here is trivial. It’s meant to be a placeholder for much more complex problems that are domain specific.)

  4. Good point: scaling is often multidimensional. Like Daniel says, this is why we need algorithms that can choose which algorithms to use!

  5. Did my comment get gobbled up? It was about how the SGI STL sort uses introsort, which is quicksort that switches to heapsort when the recursion gets too deep. The GCC STL is “this smart”: it runs introsort with a depth limit of 2*log(n) and then finishes with insertion sort on the small pieces. It seems to give the best trade-off (in particular, the best worst-case time complexity).

  6. Great example! The instinctive reaction (“create a sort algorithm which works well in both cases”) demonstrates it nicely: nice (tech) solutions, but not for the problem. The behavior of the sort algorithm is a red herring. As I see it, this is about understanding what solutions we miss when we ignore one dimension of the problem because we think we already have a solution for another dimension.

  7. Yes, ketchup. A problem like “sort 6 billion people by age” is not best solved with any sort algorithm off the top of our head. Ask a friend or family member who isn’t in tech; they may have a better solution than QuickSort! ;-)

  8. @Gabriel: The article isn’t about any specific sort algorithm; it’s about understanding the question (more specifically “scale,” though I think it’s not restricted to that concept) before committing to an answer. Simplified: read (all of it), think, act, and don’t skip a step. Please correct me if I got that wrong. :)

  9. Even more so: quicksort, for instance, breaks the array into smaller pieces, so if you use a simple algorithm for the small pieces, you win either way.

  10. How could one approach this as an optimization problem, and how would you model it? LP or NLP seems suited to many dimensions. Or maybe just limit the dimensions clearly, e.g. to algorithmic running-time costs.
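
Several comments above (3, 5, and 9) suggest switching algorithms based on size, which is the idea behind introsort-style library sorts. Here is a minimal, hypothetical sketch of that dispatch in Python; the threshold of 50 comes from the example in the post, and in practice it would be chosen by profiling on real data, as comment 2 urges.

```python
# Hypothetical crossover point taken from the example in the post; in
# practice it would be chosen by profiling, not guessed.
THRESHOLD = 50

def insertion_sort(xs):
    # Low-overhead quadratic sort, well suited to short lists.
    xs = list(xs)
    for i in range(1, len(xs)):
        key, j = xs[i], i - 1
        while j >= 0 and xs[j] > key:
            xs[j + 1] = xs[j]
            j -= 1
        xs[j + 1] = key
    return xs

def hybrid_sort(xs):
    # Dispatch on size: the simple algorithm below the threshold,
    # the library's O(n log n) sort above it.
    if len(xs) < THRESHOLD:
        return insertion_sort(xs)
    return sorted(xs)
```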
