What is an object?

What is an “object” in object-oriented programming?

It’s a clump of name->value mappings, some functions that take such clumps as their first arguments, and a dispatch function that decides which function the programmer meant to call. This reality is usefully obscured by the language so that programmers can, without thinking, do wonderful things, blissfully pretending that the pictures in their head are what the computer is really doing.

From Functional Programming for the Object-Oriented Programmer, in the appropriately named chapter “Objects as a Software-Mediated Consensual Hallucination.”
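To make the hallucination concrete, here is a minimal sketch in Python. It is my own illustration, not from the book: the "object" is literally a dictionary of name->value mappings, the "methods" are ordinary functions taking that dictionary as their first argument, and send plays the role of the dispatch function. All the names (make_account, send, and so on) are made up for the example.

    # A "class" is just a table mapping method names to plain functions.
    # Each function takes the name->value clump as its first argument.

    def account_deposit(self, amount):
        self["balance"] += amount
        return self["balance"]

    def account_report(self):
        return f"{self['owner']}: {self['balance']}"

    ACCOUNT_METHODS = {
        "deposit": account_deposit,
        "report": account_report,
    }

    def make_account(owner):
        # An "object": nothing but a clump of name->value mappings.
        return {"owner": owner, "balance": 0}

    def send(obj, method_name, *args):
        # The dispatch function: decide which function the caller meant.
        return ACCOUNT_METHODS[method_name](obj, *args)

    acct = make_account("Jake")
    send(acct, "deposit", 100)
    print(send(acct, "report"))    # prints: Jake: 100

Writing acct.deposit(100) in a real object-oriented language does essentially this; the language hides the dictionary and the dispatch so the programmer never has to look at them.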

Avoiding difficult problems

The day after President Kennedy challenged America to land a man on the moon,

… the National Space Agency didn’t suit up an astronaut. Instead their first goal was to hit the moon — literally. And just over three years later, NASA successfully smashed Ranger 7 into the moon … It took fifteen ever-evolving iterations before the July 20, 1969, gentle moon landing …

Great scientists, creative thinkers, and problem solvers do not solve hard problems head-on. When they are faced with a daunting question, they immediately and prudently admit defeat. They realize there is no sense in wasting energy vainly grappling with complexity when, instead, they can productively grapple with smaller cases that will teach them how to deal with the complexity to come.

From The 5 Elements of Effective Thinking.

Some may wonder whether this contradicts my earlier post about how quickly people give up thinking about problems. Doesn’t the quote above say we should “prudently admit defeat”? There’s no contradiction. The quote advocates retreat, not surrender. One way to be able to think about a hard problem for a long time is to find simpler versions of the problem that you can solve. Or first, to find simpler problems that you cannot solve. As George Polya said:

If you can’t solve a problem, then there is an easier problem that you can’t solve; find it.

Bracket the original problem between the simplest version of the problem you cannot solve and the fullest version of the problem you can solve. Then try to move your brackets.

Limits of statistics

When statisticians analyze data, they don’t just look at the data you bring to them. They also consider hypothetical data that you could have brought. In other words, they consider what could have happened as well as what actually did happen.

This may seem strange, and sometimes it does lead to strange conclusions. But often it is undeniably the right thing to do. It also leads to endless debates among statisticians. The cause of the debates lies at the root of statistics.

The central dogma of statistics is that data should be viewed as realizations of random variables. This has been a very fruitful idea, but it has its limits. It’s a reification of the world. And like all reifications, it eventually becomes invisible to those who rely on it.

Data are what they are. In order to think of the data as having come from a random process, you have to construct a hypothetical process that could have produced the data. Sometimes there is near universal agreement on how this should be done. But often different statisticians create different hypothetical worlds in which to place the data. This is at the root of such arguments as how to handle multiple testing.
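Multiple testing is a good example of how the choice of hypothetical world changes the answer. Here is a small simulation, a sketch of my own rather than anything from the argument above, in which every null hypothesis is true and yet a nominal 5% test keeps finding “significant” results simply because many tests were run:

    import random

    random.seed(42)
    num_tests = 1000   # how many hypothetical tests the world contains
    n = 30             # observations per sample
    false_positives = 0

    for _ in range(num_tests):
        # Two samples from the same N(0, 1) distribution: the null is true.
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(0, 1) for _ in range(n)]
        # Crude z-test for a difference in means, with variance known to be 1.
        z = (sum(a) / n - sum(b) / n) / (2 / n) ** 0.5
        if abs(z) > 1.96:          # nominal 5% level
            false_positives += 1

    print(false_positives)         # roughly 0.05 * 1000 = 50

A Bonferroni correction would test each hypothesis at level 0.05/1000 instead, but that only pushes the question back: it depends on first deciding that the relevant hypothetical world contains exactly 1000 tests.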

You can debunk any conclusion by placing the data in a large enough hypothetical model. Suppose it’s Jake’s birthday, and when he comes home, there are Scrabble tiles on the floor spelling out “Happy birthday Jake.” You might conclude that someone arranged the tiles to leave him a birthday greeting. But if you are so inclined, you could attribute the apparent pattern to chance. You could argue that there are many people around the world who have dropped bags of Scrabble tiles, and eventually something like this was bound to happen. If that seems to be an inadequate explanation, you could take a “many worlds” approach and posit entire new universes. Not only are people dropping Scrabble tiles in this universe, they’re dropping them in countless other universes too. We’re only remarking on Jake’s apparent birthday greeting because we happen to inhabit the universe in which it happened.
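A back-of-envelope calculation shows why the chance explanation has to reach for so many hypothetical worlds. The figures below rest on my own rough simplifications: ignore spaces, tile frequencies, and how the tiles landed, and treat each of the 17 letters as an independent uniform draw from the alphabet.

    # Probability that 17 uniformly random letters spell one fixed message.
    # Ignores Scrabble tile frequencies and physical arrangement, so this
    # is only an order-of-magnitude sketch.
    message = "HAPPYBIRTHDAYJAKE"   # 17 letters
    p = (1 / 26) ** len(message)
    print(p)                        # about 9e-25
    print(1 / p)                    # about 1.1e24 drops needed on average

No plausible number of dropped tile bags in this universe covers odds like that, which is why the skeptic’s argument has to escalate to positing other universes.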

How long can you think about a problem?

The main difficulty I’ve seen in tutoring math is that many students panic if they don’t see what to do within five seconds of reading a problem, maybe two seconds for some. A good high school math student may be able to stare at a problem for fifteen seconds without panicking. I suppose students have been trained implicitly to expect to see the next step immediately. Years of rote drill will do that to you.

A good undergraduate math student can think about a problem for a few minutes before getting nervous. A grad student may be able to think about a problem for an hour at a time. Before Andrew Wiles proved Fermat’s Last Theorem, he thought about the problem for seven years.

Software fences

In reply to something I’d said on Google+, David Curran pointed out the following parable from G. K. Chesterton. Though Chesterton had other things in mind, Curran noted that it applies to software maintenance.

In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”

This paradox rests on the most elementary common sense. The gate or fence did not grow there. … Some person had some reason for thinking it would be a good thing for somebody. And until we know what the reason was, we really cannot judge whether the reason was reasonable. It is extremely probable that we have overlooked some whole aspect of the question, if something set up by human beings like ourselves seems to be entirely meaningless and mysterious. There are reformers who get over this difficulty by assuming that all their fathers were fools; but if that be so, we can only say that folly appears to be a hereditary disease. … If he knows how it arose, and what purposes it was supposed to serve, he may really be able to say that they were bad purposes, that they have since become bad purposes, or that they are purposes which are no longer served.

Complex for whom?

From Out of the Tar Pit:

… the type of complexity we are discussing in this paper is that which makes large systems hard to understand. It is this that causes us to expend huge resources in creating and maintaining such systems. This type of complexity has nothing to do with complexity theory — the branch of computer science which studies the resources consumed by a machine executing a program. The two are completely unrelated — it is a straightforward matter to write a small program in a few lines which is incredibly simple (in our sense) and yet is of the highest complexity class (in the complexity theory sense).
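A concrete instance of the paper’s point, my own example rather than one from the paper: the brute-force satisfiability check below is a few lines anyone can read at a glance, yet it attacks an NP-complete problem by trying all 2**n assignments, so its running time is exponential in the complexity-theory sense.

    from itertools import product

    def satisfiable(formula, num_vars):
        # Try every assignment of True/False to num_vars variables.
        # Trivial to understand; exponential time in num_vars.
        return any(formula(bits)
                   for bits in product([False, True], repeat=num_vars))

    # (x or y) and not (x and y): satisfied when exactly one variable is true.
    print(satisfiable(lambda b: (b[0] or b[1]) and not (b[0] and b[1]), 2))  # True

The converse holds too: a polynomial-time system built from a million tangled lines can be far harder to understand than this exponential one.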
