Beethoven, Beatles, and Beyoncé: more on the Lindy effect

This post is a set of footnotes to my previous post on the Lindy effect. This effect says that creative artifacts have lifetimes that follow a power law distribution, and hence the things that have been around the longest have the longest expected future.

Works of art

The previous post looked at technologies, but the Lindy effect would apply, for example, to books, music, or movies. This suggests the future will be something like a mirror of the present. People have listened to Beethoven for two centuries, the Beatles for about four decades, and Beyoncé for about a decade. So we might expect Beyoncé to fade into obscurity a decade from now, the Beatles four decades from now, and Beethoven a couple centuries from now.


Lindy effect estimates are crude: they consider only current survival time and no other information, and they are only probability statements. They shouldn’t be taken too seriously, but they’re still interesting.

Programming languages

Yesterday was the 25th birthday of the Perl programming language. The Go language was announced three years ago. The Lindy effect suggests there’s a good chance Perl will be around in 2037 and that Go will not. This goes against your intuition if you compare languages to mechanical or living things. If you look at a 25-year-old car and a 3-year-old car, you expect the latter to be around longer. The same is true for a 25-year-old accountant and a 3-year-old toddler.

Life expectancy

Someone commented on the original post that for a British female, life expectancy is 81 years at birth, 82 years at age 20, and 85 years at age 65. Your life expectancy goes up as you age, but your expected additional years of life do not. By contrast, imagine a pop song that has a life expectancy of 1 year when it comes out. If it’s still popular a year later, we could expect it to be popular for another couple years. And if people are still listening to it 30 years after it came out, we might expect it to have another 30 years of popularity.

Mathematical details

In my original post I looked at a simplified version of the Pareto density:

f(t) = c / t^(c+1)

starting at t = 1. The more general Pareto density is

f(t) = c a^c / t^(c+1)

and starts at t = a. A consequence is that if a random variable X has a Pareto distribution with exponent c and starting time a, then the conditional distribution of X given that X is at least b is another Pareto distribution, with the same exponent but with starting time b. The expected value of X a priori is ac/(c − 1), but conditional on having survived to time b, the expected value is bc/(c − 1). That is, the expected value goes up in proportion to the ratio of starting times, b/a.
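As a sanity check on this conditioning property, here’s a short simulation sketch. The values of c, a, and b are illustrative, not from the post. Python’s standard library `paretovariate` draws from the classical Pareto with minimum 1, so multiplying by a gives starting time a.

```python
import random

random.seed(42)
c, a, b = 3.0, 1.0, 2.0  # exponent, starting time, conditioning threshold

# Classical Pareto on [a, infinity) with exponent c.
samples = [a * random.paretovariate(c) for _ in range(200_000)]

mean_all = sum(samples) / len(samples)
survivors = [x for x in samples if x >= b]
mean_surv = sum(survivors) / len(survivors)

print(mean_all)   # close to a*c/(c-1) = 1.5
print(mean_surv)  # close to b*c/(c-1) = 3.0, i.e. scaled up by b/a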

Pure possibility

Peter Lawler wrote a blog post yesterday commenting on a quote from Walker Percy’s novel The Last Gentleman:

For until this moment he had lived in a state of pure possibility, not knowing what sort of man he was or what he must do, and supposing therefore that he must be all men and do everything. But after this morning’s incident his life took a turn in a particular direction. Thereafter he came to see that he was not destined to do everything but only one or two things. Lucky is the man who does not secretly believe that every possibility is open to him.

As Lawler summarizes,

Without some such closure — without knowing somehow that you’re “not destined to do everything but only one or two things” — you never get around to living.

It’s taken me a long time to understand that deliberately closing off some options can open more interesting options.

More creativity posts

Nobody’s going to steal your idea

When I was working on my dissertation, I thought someone might scoop my research and I’d have to start over. Looking back, that was ridiculous. For one thing, my research was too arcane for many others to care about. And even if someone had proven one of my theorems, there would still be something original in my work.

Since then I’ve signed NDAs (non-disclosure agreements) for numerous companies afraid that someone might steal their ideas. Maybe they’re doing the right thing to be cautious, but I doubt it’s necessary.

I think Howard Aiken got it right:

Don’t worry about people stealing your ideas. If your ideas are any good, you’ll have to ram them down people’s throats.

One thing I’ve learned from developing software is that it’s very difficult to transfer ideas. A lot of software projects never completely transition from the original author because no one else really understands what’s going on.

It’s more likely that someone will come up with your idea independently than that someone would steal it. If the time is ripe for an idea, and all the pieces are there waiting for someone to put them together, it may be discovered multiple times. But unless someone is close to making the discovery for himself, he won’t get it even if you explain it to him.

And when other people do have your idea, they still have to implement it. That’s the hard part. We all have more ideas than we can carry out. The chance that someone else will have your idea and have the determination to execute it is tiny.

Maybe you don’t need to

One life-lesson from math is that sometimes you can solve a problem without doing what the problem at first seems to require. I’ll give an elementary example and a more advanced example.

The first example is finding remainders. What is the remainder when 5,000,070,004 is divided by 9? At first it may seem that you need to divide 5,000,070,004 by 9, but you don’t. You weren’t asked for the quotient, only the remainder, and you can find that directly. By casting out nines, i.e. summing the digits and reducing, you can quickly see the remainder is 7.
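A minimal sketch of the trick in code (the function name is mine): repeatedly sum the digits, then map a digital root of 9 back to remainder 0.

```python
def remainder_mod_9(n: int) -> int:
    """Remainder of n divided by 9, via casting out nines (digit sums)."""
    while n > 9:
        n = sum(int(d) for d in str(n))
    return n % 9  # a digital root of 9 means the remainder is 0

print(remainder_mod_9(5_000_070_004))  # 7, without ever computing the quotient
```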

The second example is definite integrals. The usual procedure for computing definite integrals is to first find an indefinite integral (i.e. anti-derivative) and take the difference of its values at the two end points. But sometimes it’s possible to find the definite integral directly, even when you couldn’t first find the indefinite integral. Maybe you can evaluate the definite integral by symmetry, or a probability argument, or by contour integration, or some other trick.

Contour integration is an interesting example because you don’t do what you might think you need to, namely find an indefinite integral, but you do have to do something you might never imagine doing before you’ve seen the trick: convert an integral over the real line into an integral in the complex plane to make it simpler.
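A small illustration of evaluating a definite integral without an antiderivative: e^(−x²) has no elementary antiderivative, yet its integral over the whole real line is known to equal √π, by a symmetry/polar-coordinates trick or by a probability argument about the normal distribution. Here is a crude midpoint-rule check using only the standard library:

```python
import math

# e^(-x^2) has no elementary antiderivative, but its definite integral
# over the real line equals sqrt(pi). Check numerically with a midpoint
# rule on [-10, 10]; the tails beyond that are negligible (~e^-100).
n, lo, hi = 200_000, -10.0, 10.0
h = (hi - lo) / n
total = h * sum(math.exp(-(lo + (i + 0.5) * h) ** 2) for i in range(n))

print(total, math.sqrt(math.pi))  # both approximately 1.7724539
```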

What are some more examples, mathematical or not, of solving a problem without doing something that at first seems necessary?

Related posts

Being useful

Chuck Bearden posted this quote from Steve Holmes on his blog the other day:

Usefulness comes not from pursuing it, but from patiently gathering enough of a reservoir of material so that one has the quirky bit of knowledge … that turns out to be the key to unlocking the problem which someone offers.

Holmes was speaking specifically of theology. I edited out some of the particulars of his quote to emphasize that his idea applies more generally.

Obviously usefulness can come from pursuing it. But there’s a special pleasure in applying some “quirky bit of knowledge” that you acquired for its own sake. It can feel like simply walking up to a gate and unlocking it after unsuccessful attempts to storm the gate by force.

Avoiding difficult problems

The day after President Kennedy challenged America to land a man on the moon,

… the National Space Agency didn’t suit up an astronaut. Instead their first goal was to hit the moon — literally. And just over three years later, NASA successfully smashed Ranger 7 into the moon … It took fifteen ever-evolving iterations before the July 16, 1969, gentle moon landing …

Great scientists, creative thinkers, and problem solvers do not solve hard problems head-on. When they are faced with a daunting question, they immediately and prudently admit defeat. They realize there is no sense in wasting energy vainly grappling with complexity when, instead, they can productively grapple with smaller cases that will teach them how to deal with the complexity to come.

From The 5 Elements of Effective Thinking.

Some may wonder whether this contradicts my earlier post about how quickly people give up thinking about problems. Doesn’t the quote above say we should “prudently admit defeat”? There’s no contradiction. The quote advocates retreat, not surrender. One way to be able to think about a hard problem for a long time is to find simpler versions of the problem that you can solve. Or first, to find simpler problems that you cannot solve. As George Pólya said,

If you can’t solve a problem, then there is an easier problem that you can’t solve; find it.

Bracket the original problem between the simplest version of the problem you cannot solve and the fullest version of the problem you can solve. Then try to move your brackets.

How long can you think about a problem?

The main difficulty I’ve seen in tutoring math is that many students panic if they don’t see what to do within five seconds of reading a problem, maybe two seconds for some. A good high school math student may be able to stare at a problem for fifteen seconds without panicking. I suppose students have been trained implicitly to expect to see the next step immediately. Years of rote drill will do that to you.

A good undergraduate math student can think about a problem for a few minutes before getting nervous. A grad student may be able to think about a problem for an hour at a time. Before Andrew Wiles proved Fermat’s Last Theorem, he thought about the problem for seven years.

Related posts

Pushing an idea

From The 5 Elements of Effective Thinking:

Calculus may hold a world’s record for how far an idea can be pushed. Leibniz published the first article on calculus in 1684, an essay that was a mere 6 pages long. Newton and Leibniz would surely be astounded to learn that today’s introductory calculus textbook contains over 1,300 pages. A calculus textbook introduces two fundamental ideas, and the remaining 1,294 pages consist of examples, variations, and applications—all arising from following the consequences of just two fundamental ideas.

Design for outcomes

Designing a device to save lives is not enough. People may not use it, or may not use it correctly. Or be unable to maintain it. Or …

Link to video. (If you know why the embedded video doesn’t appear in some RSS readers and how to fix it, please let me know.)

I’ve seen analogous problems with statistical methods. People will not necessarily adopt a new statistical method just because it is better. And if they do use it, they may use it wrongly, just like medical devices.

(“Better” in the previous paragraph is a loaded term. Statistical methods are evaluated by many criteria: power, robustness, bias, etc. When someone says his new method is better, he means better by the criteria he cares most about. But even when there is agreement on statistical criteria, a superior statistical method may be rejected for non-statistical reasons.)

Related posts

When a good author writes a bad book

The other day I read a terribly bland book by an author I’ve previously enjoyed. (I’d rather not name the book or the author.) The book was remarkably unremarkable.

It reminded me that even the best strike out now and then. You have to evaluate someone by their best work, not their worst. If someone produces one masterpiece and a dozen clunkers, then they’ve produced a masterpiece. And that puts them ahead of people who crank out nothing but inoffensive mediocrities.

I also thought about how the author is likely to make a lot of money off his terrible book. That’s oddly encouraging. Even when you put out a clunker, not everyone will think it’s a clunker. It’s not necessary to do great work in order to make money, though doing great work is more satisfying.