The science of waiting in line


There’s a branch of math that studies how people wait in line: queueing theory. It’s not just about people standing in line, but about any system with clients and servers.

An introduction to queueing theory, roughly what you’d learn in one or two lectures, is very valuable for understanding how the world around you works. But after that brief introduction you quickly hit diminishing returns on your effort, unless you want to specialize in the theory or you have a particular problem you’re interested in.

The simplest models in queueing theory have nice closed-form solutions. They’re designed to be easy to understand, not to be highly realistic in application. They make simplifying assumptions, and minor variations no longer have nice solutions. But they make very useful toy models.

One of the insights from basic queueing theory is that variability in arrival and service times matters a great deal. This is unlike revenue, for example, where you can estimate total revenue simply by multiplying the estimated number of customers by the estimated average revenue per customer.
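A worked formula makes the contrast concrete. The Pollaczek–Khinchine formula gives the mean wait in a single-server queue with Poisson arrivals (M/G/1), and the service-time variance shows up explicitly. The sketch below uses made-up numbers purely for illustration.

```python
def mg1_mean_wait(arrival_rate, mean_service, service_variance):
    """Mean time spent waiting in line (not counting service) in an M/G/1 queue,
    via the Pollaczek-Khinchine formula."""
    rho = arrival_rate * mean_service                 # server utilization; must be < 1
    second_moment = service_variance + mean_service**2
    return arrival_rate * second_moment / (2 * (1 - rho))

# 0.9 customers per minute, average service time 1 minute (90% utilization).
# Same averages, different variance, noticeably different waits.
print(mg1_mean_wait(0.9, 1.0, 1.0))   # 9.0 minutes  (variance 1, e.g. exponential service)
print(mg1_mean_wait(0.9, 1.0, 2.0))   # 13.5 minutes (variance 2: averages unchanged)
```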

A cashier who can keep up with a stream of customers on average could still be in big trouble. If customers arrived at regular intervals and all required the same amount of time to serve, then being OK on average would be enough. But if there’s not much slack and there’s the slightest bit of variability in arrival or service times, long lines will form.
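Here’s a rough simulation sketch of that situation (the setup and numbers are my own, not taken from the example linked below): a single cashier running at 95% utilization, first with perfectly regular arrivals and service times, then with exponential variability in both.

```python
import random

def average_wait(interarrival, service, n=200_000):
    """Average wait over n customers via the Lindley recursion: W_next = max(0, W + S - A)."""
    wait, total = 0.0, 0.0
    for _ in range(n):
        total += wait
        wait = max(0.0, wait + service() - interarrival())
    return total / n

# One customer per minute on average; service takes 0.95 minutes on average.
print(average_wait(lambda: 1.0, lambda: 0.95))             # 0.0 -- no variability, no line
print(average_wait(lambda: random.expovariate(1.0),
                   lambda: random.expovariate(1 / 0.95)))  # roughly 18 minutes of waiting
```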

I work through a specific example here. The example also shows that when a system is near capacity, adding a new server can make a huge improvement. In that example, going from one server to two doesn’t just cut the waiting time in half, but reduces it by a couple orders of magnitude.
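As a rough sketch of that effect (with made-up numbers, not necessarily those in the linked example), the standard M/M/c waiting-time formula shows how dramatic the drop can be when a heavily loaded single server gets a second one.

```python
from math import factorial

def mmc_mean_wait(lam, mu, c):
    """Mean wait in queue for an M/M/c system: arrival rate lam, service rate mu per server."""
    a = lam / mu                                      # offered load
    rho = a / c                                       # utilization per server; must be < 1
    p_wait = a**c / (factorial(c) * (1 - rho))        # Erlang C numerator
    p_wait /= sum(a**k / factorial(k) for k in range(c)) + p_wait
    return p_wait / (c * mu - lam)

lam, mu = 5.8, 6.0        # 5.8 arrivals per hour; each server completes 6 per hour
print(mmc_mean_wait(lam, mu, 1))   # about 4.8 hours of waiting with one server
print(mmc_mean_wait(lam, mu, 2))   # about 0.05 hours (3 minutes) with two servers
```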

As I mentioned at the beginning, queueing theory doesn’t just apply to shoppers and cashiers. It also applies to diverse situations such as cars entering a highway and page requests hitting a web server. See more on the latter here.

More queueing theory posts

11 thoughts on “The science of waiting in line”

  1. My friend was the floor supervisor for cashiers at a Newark Target store. He would get flak from his managers if they either had (1) cashiers without customers at the register or (2) long lines forming. I tried to tell him he needed to lecture his managers in queuing theory, but he found a new job instead.

  2. Lemma: IF I’m shopping at any major grocery store, with 2 pints of Ben & Jerry’s ice cream in my cart, then whichever checkout line I choose WILL TURN OUT TO BE the slowest one.

  3. Thank you for this nice reminder and introduction to queuing theory. I am involved in university computer science education (bachelor’s, master’s, PhD) and realize that computer networking and its underlying queuing theory are not really appreciated. When the community needed to develop a worldwide internet from scratch and businesses needed to evaluate their cases, this theory and its “toy models” were really helpful. Once the model suggests a potential outcome, a natural next step is event-driven simulation. From this, standards and regulation might evolve, and ecosystems can be created. That’s how networking standards come to life.
    Maybe with the advent of autonomous vehicles, a new community will go back to this toy model approach. What infrastructure (2D streets, underground tunnels, airways) will be required in the future? Figuring this out with queuing theory and simulation is a reasonable approach.

  4. My dissertation, back in the dark ages, had to do with trying to combine queueing theory and scheduling theory (and a lot of simulation) to analyze (and possibly optimize) systems with random arrivals and processing times AND due dates. My brother-in-law likes to tell people that I studied the fact that the more people get in line, the longer the line gets.

    I am tickled that you chose to focus on what I have always considered the key result of elementary queueing theory — namely, the dependence of nearly every performance measure in an M/G/s queue on the variance of the processing times.

    As for applications to computer networks… it’s no accident that Leonard Kleinrock is a founding father of both disciplines.

  5. I think it is important to highlight that in, say, an M/M/64 queue, as you approach high utilization, performance will degrade much more quickly than in an M/M/4. If very busy, at ~97 or 98 percent utilization, the wait time gets close to infinite very quickly. To me, it would appear as binary, on or off: at 90% the load is well handled, while at 98% it appears that the switch got turned off.
