The difference between machines and tools

From “The Inheritance of Tools” by Scott Russell Sanders:

I had botched a great many pieces of wood before I mastered the right angle with a saw, botched even more before I learned to miter a joint. The knowledge of these things resides in my hands and eyes and the webwork of muscles, not in the tools. There are machines for sale—powered miter boxes and radial arm saws, for instance—that will enable any casual soul to cut proper angles in boards. The skill is invested in the gadget instead of the person who uses it, and this is what distinguishes a machine from a tool.

Related post: Software exoskeletons

Tragedies and messes

Dorothy Parker said “It’s not the tragedies that kill us; it’s the messes.”

Sometimes that’s how I feel about computing. I think of messes such as having to remember that arc tangent is atan in R and Python, but arctan in NumPy and a in bc. Or that C, Python, and Perl use else if, elif, and elsif respectively. Or did I switch those last two?
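A quick illustration of the naming mess, using only Python’s standard library (the other names are noted in comments):

```python
import math

# One function, several names:
#   R and Python's math module call it atan
#   NumPy calls it arctan
#   bc calls it a
assert math.isclose(math.atan(1.0), math.pi / 4)  # arctan(1) is pi/4, i.e. 45 degrees
```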

These trivial but innumerable messes keep us from devoting our full energy to bigger problems.

One way to reduce these messes is to use fewer tools. Then there is less to be confused about. If you only use Python, for example, then elif is just how it is. Knowing more tools is worth the added mess, up to a point; past that point, new tools add more mental burden than utility. You have to find the optimal combination of tools for yourself, and that combination will change over time.

To use fewer tools, you may need to use more complex tools. Maybe you can replace a list of moderately complex but inconsistent tools with one tool that is more complex but internally consistent.

Maybe you’re doing more than you need to

Suppose a = 2485144657 and b = 7751389993.

  1. What is the last digit of a*b?
  2. What is the median digit of a*b?

In both questions, computing a*b is conceptually necessary but not logically necessary. Each question is about a*b, so the product enters conceptually. But there is no logical necessity to actually compute a*b in order to answer a question about it.

In the first question, there’s an obvious shortcut: multiply the last digits of a and b. Since 7 × 3 = 21, the last digit of the product must be 1.

In the second question, it is conceivable that there is some way to find the median digit with less work than computing a*b first, though I don’t see how. Conceptually, you need to find the digits that make up a*b, sort them, and select the one in the middle. But perhaps, for example, there is some way to find the digits of a*b that is less work than finding them in the right order, i.e. computing a*b itself.
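To make the contrast concrete, here is a short Python sketch of both questions. The first uses only the last digits; the second, lacking any shortcut I can see, computes the full product:

```python
a = 2485144657
b = 7751389993

# Question 1: the last digit of a*b depends only on the last digits of a and b.
# Here 7 * 3 = 21, so the last digit is 1 -- no full multiplication needed.
assert (a % 10) * (b % 10) % 10 == (a * b) % 10 == 1

# Question 2: no apparent shortcut. Compute the product, sort its digits,
# and pick the one in the middle.
digits = sorted(str(a * b))
median_digit = digits[len(digits) // 2]
```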

I brought up the example above to use it as a metaphor.

In your work, how can you tell whether a problem is more like the first question or the second? Are you presuming you have to do something that you don’t? Are you assuming something is logically necessary when it is only conceptually necessary?

When I’m stuck on a problem, I often ask myself whether I really need to do what I assume I need to do. Sometimes that helps me get unstuck.

Related post: Maybe you only need it because you have it

Slabs of time

From Some Remarks: Essays and Other Writing by Neal Stephenson:

Writing novels is hard, and requires vast, unbroken slabs of time. Four quiet hours is a resource I can put to good use. Two slabs of time, each two hours long, might add up to the same four hours, but are not nearly as productive as an unbroken four. … Likewise, several consecutive days with four-hour time-slabs in them give me a stretch of time in which I can write a decent book chapter, but the same number of hours spread out across a few weeks, with interruptions in between them, are nearly useless.

I haven’t written a novel, and probably never will, but Stephenson’s remarks describe my experience doing math and especially developing software. I can do simple, routine work in short blocks of time, but I need larger blocks of time to work on complex projects or to be more creative.

Related post: Four hours of concentration

Not complex enough

One time a professor asked me about a problem and I suggested a simple solution. He shot down my idea because it wasn’t complex enough. He said my idea would work, but it wasn’t something he could write a paper about in a prestigious journal.

I imagine that sort of thing happens in the real world, though I can’t recall an example. On the contrary, I can think of examples where people were thrilled by trivial solutions such as a two-line Perl script or a pencil-and-paper calculation that eliminated the need for a program.

The difference is whether the goal is to solve a problem or to produce an impressive solution.

Pure possibility

Peter Lawler wrote a blog post yesterday commenting on a quote from Walker Percy’s novel The Last Gentleman:

For until this moment he had lived in a state of pure possibility, not knowing what sort of man he was or what he must do, and supposing therefore that he must be all men and do everything. But after this morning’s incident his life took a turn in a particular direction. Thereafter he came to see that he was not destined to do everything but only one or two things. Lucky is the man who does not secretly believe that every possibility is open to him.

As Lawler summarizes,

Without some such closure — without knowing somehow that you’re “not destined to do everything but only one or two things” — you never get around to living.

It’s taken me a long time to understand that deliberately closing off some options can open more interesting options.

Nobody’s going to steal your idea

When I was working on my dissertation, I thought someone might scoop my research and I’d have to start over. Looking back, that was ridiculous. For one thing, my research was too arcane for many others to care about. And even if someone had proven one of my theorems, there would still be something original in my work.

Since then I’ve signed NDAs (non-disclosure agreements) for numerous companies afraid that someone might steal their ideas. Maybe they’re doing the right thing to be cautious, but I doubt it’s necessary.

I think Howard Aiken got it right:

Don’t worry about people stealing your ideas. If your ideas are any good, you’ll have to ram them down people’s throats.

One thing I’ve learned from developing software is that it’s very difficult to transfer ideas. A lot of software projects never completely transition from the original author because no one else really understands what’s going on.

It’s more likely that someone will come up with your idea independently than that someone would steal it. If the time is ripe for an idea, and all the pieces are there waiting for someone to put them together, it may be discovered multiple times. But unless someone is close to making the discovery for himself, he won’t get it even if you explain it to him.

And when other people do have your idea, they still have to implement it. That’s the hard part. We all have more ideas than we can carry out. The chance that someone else will have your idea and have the determination to execute it is tiny.

Maybe you don’t need to

One life-lesson from math is that sometimes you can solve a problem without doing what the problem at first seems to require. I’ll give an elementary example and a more advanced example.

The first example is finding remainders. What is the remainder when 5,000,070,004 is divided by 9? At first it may seem that you need to divide 5,000,070,004 by 9, but you don’t. You weren’t asked for the quotient, only the remainder, and the remainder you can find directly: casting out nines (the nonzero digits sum to 5 + 7 + 4 = 16, and 16 leaves 7 when divided by 9) quickly shows the remainder is 7.
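The trick works because 10 ≡ 1 (mod 9), so a number leaves the same remainder as its digit sum. A quick check in Python (my illustration):

```python
n = 5_000_070_004

# The nonzero digits are 5, 7, and 4, which sum to 16, and 16 = 9 + 7,
# so the remainder is 7 -- the same answer as dividing n by 9 directly.
digit_sum = sum(int(d) for d in str(n))
assert digit_sum % 9 == n % 9 == 7
```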

The second example is definite integrals. The usual procedure for computing definite integrals is to first find an indefinite integral (i.e. anti-derivative) and take the difference of its values at the two end points. But sometimes it’s possible to find the definite integral directly, even when you couldn’t first find the indefinite integral. Maybe you can evaluate the definite integral by symmetry, or a probability argument, or by contour integration, or some other trick.
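As one illustration (my example, not from the original post): the integrand x^3 cos(x) is odd, so its integral over [-1, 1] is zero by symmetry, even though finding its antiderivative would take some work. A crude midpoint sum in Python confirms this:

```python
import math

def midpoint_integral(f, a, b, n=10_000):
    """Approximate the integral of f over [a, b] with a midpoint Riemann sum."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# x^3 * cos(x) is odd, so the integral over the symmetric interval [-1, 1]
# vanishes; the numerical value differs from 0 only by rounding error.
approx = midpoint_integral(lambda x: x**3 * math.cos(x), -1.0, 1.0)
assert abs(approx) < 1e-9
```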

Contour integration is an interesting example because you don’t do what you might think you need to — i.e. find an indefinite integral — but you do have to do something you might never imagine doing before you’ve seen the trick, i.e. convert an integral over the real line to an integral in the complex plane to make it simpler!

What are some more examples, mathematical or not, of solving a problem without doing something that at first seems necessary?

Being useful

Chuck Bearden posted this quote from Steve Holmes on his blog the other day:

Usefulness comes not from pursuing it, but from patiently gathering enough of a reservoir of material so that one has the quirky bit of knowledge … that turns out to be the key to unlocking the problem which someone offers.

Holmes was speaking specifically of theology. I edited out some of the particulars of his quote to emphasize that his idea applies more generally.

Obviously usefulness can come from pursuing it. But there’s a special pleasure in applying some “quirky bit of knowledge” that you acquired for its own sake. It can feel like simply walking up to a gate and unlocking it after unsuccessful attempts to storm the gate by force.