Hanlon’s razor and corporations

Hanlon’s razor says

Never attribute to malice that which is adequately explained by stupidity.

At first it seems just an amusing little aphorism, something you might read on a bumper sticker, but I believe it’s profound. It’s a guide to understanding so much of the world. Here I’ll focus on what it says about corporations.

I hear a lot of complaints that corporations are evil. Sometimes corporations in general, but more often specific corporations like Apple, Google, or Microsoft. I don’t deny that large, powerful corporations have the potential to do harm. But many accusations of malice are misattributed frustrations with stupidity. As Grey’s law says, any sufficiently advanced incompetence is indistinguishable from malice.

Corporations aren’t evil; they’re stupid. Not stupid in general, but in a specific way: they don’t handle edge cases well.

Organizations scale by creating procedures to replace human judgment. This is mostly a good thing. For example, electronic devices are affordable in part because companies can hire unskilled teenagers rather than electrical engineers to sell them. But if you have a question or problem that’s off the beaten path, you’re out of luck. Many complaints about evil corporations come from outliers, the 1% that corporations strategically decide to ignore. It’s not that the concerns of the outliers are not legitimate; it’s that they are not profitable to satisfy. When some people say that a corporation is evil, it would be more accurate to say that they are outside the company’s market.

Large organizations have similar problems internally. Policies written to handle the most common situations don’t handle edge cases well. For example, an HR department told me that my baby girl couldn’t be added to my insurance because she wasn’t born in a hospital. Fortunately I was able to argue with enough people to resolve the problem despite her falling outside the usual procedures. It’s harder to deal with corporate rigidity as an employee than as a customer because it’s harder to change jobs than to change brands.

More corporate life posts

Daily tips update

RegexTip, a Twitter account for learning regular expressions, starts over today with basics and will progress to more advanced features over time.

SansMouse, an account for Windows keyboard shortcuts, started over with basics two weeks ago.

Both RegexTip and SansMouse are in a loop, progressing from most basic to more advanced features. (Or perhaps I should say progressing from most familiar to less familiar. Calling some features “basic” and others “advanced” isn’t quite right, especially for keyboard shortcuts.)

The other daily tip accounts don’t post in any particular sequence. I try to alternate elementary and advanced content to some extent, but other than that there’s no order.

Six weeks ago I started two new accounts: CompSciFact and StatFact. In a few days CompSciFact will be the most popular of the daily tip accounts if the current trend continues.

Here are all the accounts:

SansMouse, RegexTip, TeXtip, ProbFact, StatFact, AlgebraFact, TopologyFact, AnalysisFact, CompSciFact

I use Hoot Suite to schedule these accounts. I use the paid version because I have too many accounts for the free version and because the paid version has an API that lets me upload files to schedule tips in bulk. (Hoot Suite has an affiliate program, so I make a little money if you sign up through this link.)

If you have suggestions for tweets, please contact me.

Scientific results fading over time

A recent article in The New Yorker gives numerous examples of scientific results fading over time. Effects that were large when first measured become smaller in subsequent studies. Firmly established facts become doubtful. It’s as if scientific laws are being gradually repealed. This phenomenon is known as “the decline effect.” The full title of the article is The decline effect and the scientific method.

The article brings together many topics that have been discussed here: regression to the mean, publication bias, scientific fashion, etc. Here’s a little sample.

“… when I submitted these null results I had difficulty getting them published. The journals only wanted confirming data. It was too exciting an idea to disprove, at least back then.” … After a new paradigm is proposed, the peer-review process is tilted toward positive results. But then, after a few years, the academic incentives shift—the paradigm has become entrenched—so that the most notable results are now those that disprove the theory.

This excerpt happens to be talking about “fluctuating asymmetry,” the idea that animals prefer more symmetric mates because symmetry is a proxy for good genes. (I edited out references to fluctuating asymmetry from the quote to emphasize that the remarks could equally apply to any number of topics.) Fluctuating asymmetry was initially confirmed by numerous studies, but then the tide shifted and more studies failed to find the effect.

When such a shift happens, it would be reassuring to believe that the initial studies were simply wrong and that the new studies are right. But both the positive and negative results confirmed the prevailing view at the time they were published. There’s no reason to believe the latter studies are necessarily more reliable.
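The mechanics behind part of the decline effect are easy to demonstrate. Here is a toy simulation (all numbers invented, not from the article) showing how publication bias combined with regression to the mean inflates early published estimates of a modest true effect, while replications drift back toward the truth:

```python
import random

random.seed(1)

TRUE_EFFECT = 0.2   # the real (modest) effect size
NOISE = 0.5         # sampling noise in any single study

def study():
    # One study's estimated effect: truth plus sampling noise.
    return TRUE_EFFECT + random.gauss(0, NOISE)

# Publication filter: only initial results with an impressively
# large estimate (here, above 0.5) make it into print.
published_first = [e for e in (study() for _ in range(10000)) if e > 0.5]

# Follow-up studies of those published findings are unbiased draws
# from the same process, so they regress toward the true effect.
replications = [study() for _ in published_first]

avg_first = sum(published_first) / len(published_first)
avg_rep = sum(replications) / len(replications)
print(f"average published initial effect: {avg_first:.2f}")
print(f"average replication effect:       {avg_rep:.2f}")
```

The initial published average lands well above the true effect of 0.2 simply because publication selected the lucky draws; the replication average is close to 0.2. Nothing about the underlying reality changed between the first studies and the replications.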

Related posts

Your job is trivial. (But I couldn’t do it.)

Ever had a conversation that could be summarized like this?

Your job is trivial. (But I can’t do it.)

This happens in every profession. Everyone’s job has difficulties that outsiders dismiss. I’ve seen it in everything I’ve done, but especially in software development. Here are some posts along those lines.

How long computer operations take

The following table is from Peter Norvig’s essay Teach Yourself Programming in Ten Years. All times are in units of nanoseconds.

execute typical instruction             1
fetch from L1 cache memory              0.5
branch misprediction                    5
fetch from L2 cache memory              7
mutex lock/unlock                       25
fetch from main memory                  100
send 2K bytes over 1Gbps network        20,000
read 1MB sequentially from memory       250,000
fetch from new disk location (seek)     8,000,000
read 1MB sequentially from disk         20,000,000
send packet US to Europe and back       150,000,000
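One way to internalize these numbers is to express every operation as a multiple of a typical instruction. A small sketch using the figures from the table (the dictionary keys are my own naming):

```python
# Latencies from Norvig's table, in nanoseconds.
latency_ns = {
    "execute typical instruction":         1,
    "fetch from L1 cache memory":          0.5,
    "branch misprediction":                5,
    "fetch from L2 cache memory":          7,
    "mutex lock/unlock":                   25,
    "fetch from main memory":              100,
    "send 2K bytes over 1Gbps network":    20_000,
    "read 1MB sequentially from memory":   250_000,
    "fetch from new disk location (seek)": 8_000_000,
    "read 1MB sequentially from disk":     20_000_000,
    "send packet US to Europe and back":   150_000_000,
}

instruction = latency_ns["execute typical instruction"]

# Print each operation as a count of typical instructions.
for op, ns in latency_ns.items():
    print(f"{op}: {ns / instruction:,.1f} instructions")
```

For instance, a single disk seek costs the same as eight million typical instructions, and reading a megabyte from disk is about 80 times slower than reading it from memory. Ratios like these, not the raw nanosecond figures, are what stick.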


Occam’s razor and Bayes’ theorem

Occam’s razor says that if two models fit equally well, the simpler model is likely to be a better description of reality. Why should that be?

A paper by Jim Berger suggests a Bayesian justification of Occam’s razor: simpler hypotheses have higher posterior probabilities when they fit well.

A simple model makes sharper predictions than a more complex model. For example, consider fitting a linear model and a cubic model. The cubic model is more general and can fit a wider range of data. The linear model is more restrictive and hence easier to falsify. But when the linear and cubic models both fit, Bayes’ theorem “rewards” the linear model for making a bolder prediction. See Berger’s paper for details and examples.
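Here is a toy version of the argument using two coin models instead of linear and cubic fits (my example, not Berger’s). A “fair coin” model pins the heads probability at exactly 1/2, a sharp prediction; a “biased coin” model lets it range uniformly over [0, 1]. When the data land near 50/50, the marginal likelihood, and hence Bayes’ theorem, favors the simpler model:

```python
from math import comb

def marginal_fair(k, n):
    # Simple model: theta is exactly 1/2, so the marginal likelihood
    # is just the binomial probability of k heads in n tosses.
    return comb(n, k) * 0.5**n

def marginal_biased(k, n):
    # Complex model: theta uniform on [0, 1]. Integrating the binomial
    # likelihood over theta gives 1/(n+1) for every k -- a diffuse,
    # easy-to-satisfy prediction.
    return 1 / (n + 1)

k, n = 52, 100  # data close to 50/50
bayes_factor = marginal_fair(k, n) / marginal_biased(k, n)
print(f"Bayes factor (fair : biased) = {bayes_factor:.1f}")
```

With 52 heads in 100 tosses the Bayes factor comes out around 7 in favor of the fair-coin model. With lopsided data, say 90 heads in 100, the ratio flips and the more flexible model wins. That is the Bayesian cash value of “the simpler model is easier to falsify.”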

From the conclusion of the paper:

Ockham’s razor, far from being merely an ad hoc principle, can under many practical situations in science be justified as a consequence of Bayesian inference. Bayesian analysis can shed new light on what the notion of “simplest” hypothesis consistent with the data actually means.

Related links

Demand for simplicity?

From Donald Norman’s latest book Living with Complexity:

… the so-called demand for simplicity is a myth whose time has passed, if it ever existed.

Make it simple and people won’t buy. Given a choice, they will take the item that does more. Features win over simplicity, even when people realize that features mean more complexity. You do too, I’ll bet. Haven’t you ever compared two products side by side, feature by feature, and preferred the one that did more? …

Would you pay more money for a washing machine with fewer controls? In the abstract, maybe. At the store, probably not.

Donald Norman’s assessment sounds wrong at first. Don’t we all like things to be simple? Not if by “simple” we mean “fewer features.”

A general theme in Living with Complexity is that complexity is inevitable and often desirable, but it can be managed. We say we want things that are simple, but we really want things that are easy to use. The book gives several examples to illustrate how different those two ideas are.

If something is complex but familiar and well designed, it’s easy to use. If something is simple but unfamiliar or poorly designed, it’s hard to use.

Simplicity posts

Some programmers really are 10x more productive

One of the most popular posts on this site is Why programmers are not paid in proportion to their productivity. In that post I mention that it’s not uncommon to find some programmers who are ten times more productive than others. Some of the comments discussed whether there was academic research in support of that claim.

I’ve seen programmers who were easily 10x more productive than their peers. I imagine most people who have worked long enough can say the same. I find it odd to ask for academic support for something so obvious. Yes, you’ve seen it in the real world, but has it been confirmed in an artificial, academic environment?

Still, some things are commonly known that aren’t so. Is the 10x productivity difference exaggerated folklore? Steve McConnell has written an article reviewing the research behind this claim: Origins of 10x—How valid is the underlying research? He concludes:

The body of research that supports the 10x claim is as solid as any research that’s been done in software engineering.

Related posts