“Fancy algorithms are slow when *n* is small, and *n* is usually small.” — Rob Pike

Someone might object that Rob Pike’s observation is irrelevant. Everything is fast when the problem size *n* is small, so design your code to be efficient for large *n* and don’t worry about small *n*. But it’s not that simple.

Suppose you have two sorting algorithms, Simple Sort and Fancy Sort. Simple Sort is more efficient for lists with fewer than 50 elements and Fancy Sort is more efficient for lists with more than 50 elements.

You could say that Fancy Sort scales better. What if *n* is a billion? Fancy Sort could be a lot faster.

But there’s another way a problem could scale. Instead of sorting *longer* lists, you could sort *more* lists. What if you have a billion lists of size 40 to sort?

People toss around the term “scaling,” assuming everyone has the same notion of scaling. But projects could scale along different dimensions. Whether Simple Sort or Fancy Sort scales better depends on how the problem scales.

The sorting example just has two dimensions: the length of each list and the number of lists. Software trade-offs are often much more complex. The more dimensions a problem has, the more opportunities there are for competing solutions to each claim that it scales better.
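To make the toy example concrete, here is a sketch in Python: insertion sort stands in for Simple Sort, merge sort stands in for Fancy Sort, and a dispatcher branches at the post's hypothetical crossover of 50 elements. Both implementations and the cutoff are illustrative, not tuned.

```python
def simple_sort(xs):
    """Stand-in for Simple Sort: insertion sort, O(n^2) but low overhead."""
    a = list(xs)
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def fancy_sort(xs):
    """Stand-in for Fancy Sort: merge sort, O(n log n) but more bookkeeping."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = fancy_sort(xs[:mid]), fancy_sort(xs[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    return out + left[i:] + right[j:]

def sort_by_size(xs, cutoff=50):
    """Branch on n; the 50 echoes the post's hypothetical crossover point."""
    return simple_sort(xs) if len(xs) < cutoff else fancy_sort(xs)
```

Which branch dominates your runtime then depends on how the problem scales: one call with *n* in the billions exercises `fancy_sort`, while a billion calls with *n* = 40 exercise only `simple_sort`.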


The tricky thing is to switch from basic to fancy algorithms automatically.

There should be a balance. Following your example, if you read about Timsort [1] you realize it’s the best in both cases, and, in the end, that’s the correct approach. The same can apply to complete applications.

[1] http://en.wikipedia.org/wiki/Timsort

It’s nice when one method is best by all criteria. Unfortunately that doesn’t always happen.

s/always/often

Come on! The developers I know who work on products that sell profile before optimizing.

Are all the commentators academics?

You can only profile code you’ve written. If you decide during design that Fancy Sort scales better, you’ll code up only Fancy Sort. If you’re in doubt whether Fancy Sort or Simple Sort works better in your application, then yes, you may code them both up and profile them. Or, as Daniel suggests, you may have your code branch between the two algorithms depending on list size.

(The sorting example here is trivial. It’s meant to be a placeholder for much more complex problems that are domain specific.)

Good point, scaling is often multidimensional. Like Daniel says, this is why we need algorithms that can choose which algorithms to use!

Did my comment get gobbled up? It was about how the SGI STL sort algorithm uses introsort, which is quicksort that switches to heapsort when the recursion depth exceeds log(n). And the GCC STL is “this smart”: it runs introsort with a depth limit of 2*log(n) and then finishes small ranges with insertion sort. It seems to give the best trade-off (particularly the best worst-case time complexity).
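For readers who haven’t seen it, the introsort idea described above can be sketched in Python. This is a rough illustration, not the actual SGI/GCC C++ code; the small-range cutoff of 16 and the 2*log2(n) depth limit are illustrative values, and real implementations differ in detail.

```python
import heapq
import math

def introsort(a):
    """In-place introsort sketch: quicksort that falls back to heapsort
    when recursion gets too deep, and to insertion sort on small ranges."""
    SMALL = 16  # illustrative cutoff for the insertion-sort finish

    def insertion(lo, hi):
        for i in range(lo + 1, hi):
            key, j = a[i], i - 1
            while j >= lo and a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key

    def heapsort_range(lo, hi):
        # Guaranteed O(n log n) escape hatch for quicksort's bad cases.
        sub = a[lo:hi]
        heapq.heapify(sub)
        a[lo:hi] = [heapq.heappop(sub) for _ in range(hi - lo)]

    def partition(lo, hi):
        # Hoare partition with the first element as pivot (a simplification).
        pivot = a[lo]
        i, j = lo - 1, hi
        while True:
            i += 1
            while a[i] < pivot:
                i += 1
            j -= 1
            while a[j] > pivot:
                j -= 1
            if i >= j:
                return j
            a[i], a[j] = a[j], a[i]

    def sort(lo, hi, depth):
        while hi - lo > SMALL:
            if depth == 0:
                heapsort_range(lo, hi)  # recursion too deep: switch
                return
            depth -= 1
            p = partition(lo, hi)
            sort(lo, p + 1, depth)  # recurse on left part
            lo = p + 1              # iterate on right part
        insertion(lo, hi)           # small range: low-overhead finish

    if len(a) > 1:
        sort(0, len(a), 2 * int(math.log2(len(a))))
    return a
```

The structure is the point: three algorithms, each used in the regime where it wins, stitched together behind one function.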

Great example! The instinctive reaction (“create a sort algorithm that works well in both cases”) demonstrates it nicely: nice (tech) solutions, but not for the problem. The behavior of the sort algorithm is a red herring. As I see it, this is about understanding what solutions we miss by ignoring problem dimensions because we think we already have a solution for another dimension of the problem.

Yes, ketchup. For example, “sort 6 billion people by age” is not best solved with any sorting algorithm off the top of our head. Ask a non-tech friend or family member; he may have a better solution than QuickSort! ;-)

@Gabriel: The article isn’t about any specific sort algorithm; it is about understanding the question (more specifically, “scaling,” though I think it’s not restricted to that concept) before committing to an answer. Simplified: read (all of it), think, act, and don’t skip a step. Please correct me if I got that wrong. :)

This comment thread reminds me of this article I recently read: http://patshaughnessy.net/2012/1/4/never-create-ruby-strings-longer-than-23-characters

It’s about how Ruby’s internal C representation of a String has a built-in switch that uses the same in-memory struct as the data source for both the simple storage mechanism and the complex storage mechanism. Really clever solution that reminds me more of mechanical engineering than programming.
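The same trick can be mimicked at a high level in Python. This toy class is not Ruby’s actual RString layout (which is a C struct with a union); it just shows the shape of the idea: one interface, two representations, chosen by the 23-character cutoff the linked article describes.

```python
class TwoFacedString:
    """Toy two-representation string. EMBED_MAX = 23 is the cutoff from the
    linked article; the 'heap' list is a stand-in for a malloc'd buffer."""
    EMBED_MAX = 23

    def __init__(self, s):
        if len(s) <= self.EMBED_MAX:
            self._embedded, self._heap = s, None        # stored "inline"
        else:
            self._embedded, self._heap = None, list(s)  # stored "on the heap"

    @property
    def is_embedded(self):
        return self._heap is None

    def __str__(self):
        # Callers never see which representation is in use.
        return self._embedded if self.is_embedded else "".join(self._heap)
```

Short strings avoid the extra allocation entirely; long strings pay for it only when they need it.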

Even more so: quicksort, for instance, breaks the array into smaller pieces, so if you use a simple algorithm for the small pieces, you win in any case.

How could one approach this as an optimization problem, and how would you model it? LP or NLP seems suited to many dimensions. Or maybe just clearly limit the dimensions, e.g. to algorithmic running-time costs.