Technologies never die

There are incentives to use the latest technology, just because it’s the latest, even if it’s no better than its predecessor. Being up-to-date makes it easier to

  • Find a job
  • Work on new projects
  • Demonstrate enthusiasm for your profession

In addition, there are advantages to staying with the mainstream. If most people think something new is better but you disagree, you might do well to acquiesce. When you’re in the mainstream, it’s easier to find parts, documentation, people to answer questions, etc.

That said, here are some bad reasons to adopt the latest thing:

  • Believing marketing hype
  • Not considering your particular circumstances
  • Underestimating learning time
  • Fearing a technology will die

Not every new release of every product is an improvement. If a new product truly is an improvement for most people, that doesn’t mean it’s necessarily better for your particular needs. And if you are sure the new thing will make you more productive, you also have to ask whether you will use it long enough to repay the time you invest in learning it.

Many programmers live in inordinate fear that a technology will die. But technologies seldom disappear. They may become less fashionable, less visible, less common, or less lucrative, but hardly anything ever goes away. Programmers may suffer more pain from technology that won’t die than from technology that does.

Technologies don’t drop out of use nearly as quickly as they drop out of fashion or out of sight.

Update: As an example, this podcast claims that 72% of financial transactions are still processed in COBOL.


Place, privacy, and dignity

Richard Weaver argues in Visions of Order that our privacy and dignity depend on our being rooted in space. He predicted that as people become less attached to a geographical place, privacy and dignity erode.

There is something protective about “place”; it means isolation, privacy, and finally identity. … we must again become sensitive enough to realize that “place” means privacy and dignity …

When Weaver wrote those words in 1964, he was concerned about physical mobility. Imagine what he would have thought of online life.

I find it interesting that Weaver links privacy and dignity. There is a great deal of talk about loss of privacy online, but not much about loss of dignity. The loss of dignity is just as real, and more under our control. We may lose privacy through a third party mishandling data, but our loss of dignity we often bring on ourselves.


Why AT&T licensed UNIX to universities

Here are a couple of details of UNIX history I ran across this week.

Why AT&T first licensed UNIX to universities:

At this time [1974], AT&T held a government-sanctioned monopoly on the US telephone system. The terms of AT&T’s agreement with the US government prevented it from selling software, which meant that it could not sell UNIX as a product. Instead … AT&T licensed UNIX for use in universities for a nominal distribution fee.

And why later they turned it into a commercial product:

… US antitrust legislation forced the breakup of AT&T (… the break-up became effective in 1982) with the consequence that, since it no longer held a monopoly on the telephone system, the company was permitted to market UNIX.

Source: The Linux Programming Interface


Perpendicular and relatively prime

Donald Knuth recommends using the symbol ⊥ between two numbers to indicate that they are relatively prime. For example:

m ⊥ n

The symbol is denoted \perp in TeX because it is used in geometry to denote perpendicular lines. It corresponds to Unicode character U+27C2.
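
For example, here is how the notation might appear in a TeX source file (a minimal sketch; \perp and \gcd are standard commands, so no extra packages are needed):

    % Knuth's proposed notation for "m and n are relatively prime"
    If $\gcd(m, n) = 1$, we write $m \perp n$ and say that $m$ is prime to $n$.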

I mentioned this on TeXtip yesterday and someone asked for the reason for the symbol. I’ll quote Knuth’s original announcement and explain why I believe he chose that symbol.

When gcd(m, n) = 1, the integers m and n have no prime factors in common and we say that they’re relatively prime.

This concept is so important in practice, we ought to have a special notation for it; but alas, number theorists haven’t agreed on a very good one yet. Therefore we cry: Hear us, O Mathematicians of the World! Let us not wait any longer! We can make many formulas clearer by adopting a new notation now! Let us agree to write ‘m ⊥ n’, and to say “m is prime to n,” if m and n are relatively prime.

This comes from Concrete Mathematics. In the second edition, the text is on page 115. The book explains why some notation is needed, but it does not explain why this particular symbol was chosen.

[Correction: The book does explain the motivation for the symbol. The justification is in a marginal note and I simply overlooked it. The note says “Like perpendicular lines don’t have a common direction, perpendicular numbers don’t have common factors.”]

Here’s what I believe is the reason for the symbol.

Suppose m and n are two positive integers with no factors in common. Pick a number at random between 1 and mn. The probability that it is divisible by both m and n is 1/mn, which is the product of the probability of being divisible by m and the probability of being divisible by n. This says that being divisible by m and being divisible by n are independent events. And independent events are analogous to perpendicular lines. The analogy is made precise in this post where I show the connection between correlation and the law of cosines.
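
As a quick sanity check, here is a short Python sketch of that claim: count how many numbers from 1 through mn are divisible by m, by n, and by both, and compare. (The function name is just for illustration.)

    # For a number drawn uniformly from 1..m*n, check whether the events
    # "divisible by m" and "divisible by n" are independent.
    def check_independence(m, n):
        N = m * n
        nums = range(1, N + 1)
        div_m  = sum(1 for k in nums if k % m == 0)                 # count divisible by m
        div_n  = sum(1 for k in nums if k % n == 0)                 # count divisible by n
        div_mn = sum(1 for k in nums if k % m == 0 and k % n == 0)  # count divisible by both
        # Independence means P(both) = P(m) * P(n), i.e. div_mn/N = (div_m/N) * (div_n/N).
        # Comparing with integers avoids floating point issues.
        return div_mn * N == div_m * div_n

    print(check_independence(4, 9))   # True:  gcd(4, 9) = 1, so the events are independent
    print(check_independence(4, 6))   # False: gcd(4, 6) = 2, so they are not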

So in summary, the ideas of being relatively prime, independent, and perpendicular are all related, and so it makes sense to use a common symbol to denote each.


The middle size of the universe

From Kevin Kelly’s book What Technology Wants:

Our body size is, weirdly, almost exactly in the middle of the size of the universe. The smallest things we know about are approximately 30 orders of magnitude smaller than we are, and the largest structures in the universe are about 30 orders of magnitude bigger.


Why the horse in Magician’s Nephew is named Fledge

In C. S. Lewis’ book The Magician’s Nephew, the horse Strawberry becomes Fledge, the father of winged horses. It didn’t occur to me until today why Lewis chose that name. I just thought it was an odd, arbitrary choice.

This morning I saw something that referred to a bird as unfledged, which made me suspect the root “fledge” had something to do with flight, which it does. I knew the word fledgling—a bird just beginning to fly—but I had not made the connection between fledglings and Fledge.

If you’d like to read another etymology post, see Cats, Calendars, and Connections.

The trouble with wizards

It’s usually a compliment to call someone a “wizard.” For example, the stereotypical Unix wizard is a man with a long gray beard who can solve any problem in minutes by typing furiously at a command prompt.

Here’s a different take on wizards. Think about wizards, say, in the Harry Potter novels. Wizards learn to say certain spells in certain situations. There’s never any explanation of why these spells work. They just do. Unless, of course, they don’t. Wizards are powerful, but they can be incompetent.

You might use “wizard” to describe someone who lacks curiosity about what they’re doing. They don’t know why their actions work, or sometimes even whether they work. They’ve learned a Pavlovian response to problems: when you see this, do this.

Wizards can be valuable. Sometimes you just need a problem solved and you don’t care why the solution works. Someone who doesn’t understand what they’re doing but can fix your problem quickly may be better than someone who knows what they’re doing but works too slowly. But if your problem doesn’t quite fit the intended situation for a spell, the wizard is either powerless or harmful.

Wizards can’t learn a better way of doing anything because “better” makes no sense. When you see problem A, carry out procedure B. That’s just what you do. How can you address problem A better when “solving A” means “do B”? Professional development for a wizard consists of learning more spells for more situations, not learning a better spell or learning why spells work.

Wizards may be able to solve your problem for you, but they can’t teach you how to solve your own problems.


The grand unified theory of 19th century math

The heart of 19th century math was the study of special functions arising from mathematical physics.

It is well known that the central problem of the whole of modern mathematics is the study of the transcendental functions defined by differential equations.

The above quote was the judgment of Felix Klein (of Klein bottle fame) in 1893. The differential equations he had in mind were the second order differential equations of mathematical physics.

Special functions were the core of 19th century math, and hypergeometric series were the grand unifying theory of special functions. (Not every special function is hypergeometric, but quite a few are.) And yet they’re hardly taught any more. I never heard of hypergeometric series in college, even though I studied differential equations and applied math. I eventually encountered hypergeometric functions, first in combinatorics and only later in differential equations.
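
For readers who, like me, never saw the definition in school, the prototypical example is the Gauss hypergeometric series (in LaTeX notation):

    {}_2F_1(a, b; c; z) = \sum_{n=0}^{\infty} \frac{(a)_n \, (b)_n}{(c)_n} \, \frac{z^n}{n!},
    \qquad (a)_n = a(a+1)\cdots(a+n-1).

Many classical special functions, such as the Legendre and Chebyshev polynomials, turn out to be special cases or limits of this one series.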

It’s odd that what was “the central problem of the whole of modern mathematics” could become almost a lost art a century later. How could this be? I believe part of the explanation is that special functions, and hypergeometric functions in particular, fall between two stools: too advanced for undergraduate programs, but not a hot enough research area for graduate programs.

Related: Consulting in differential equations

Accelerated learning

Derek Sivers tells how a mentor was able to teach him a semester’s worth of music theory in three hours. His mentor also prepared him to place out of four more classes in four sessions. He gives the details in his blog post There’s no speed limit. It’s an inspiring story.

However, Sivers didn’t go through his entire education this way. He finished his degree in 2.5 years, but at the rate he started he could have finished in under a semester. Obviously he wasn’t able to blow through everything as fast as music theory.

Some classes compress better than others; theoretical material condenses especially well. A highly motivated student could learn a semester of music theory or physics in a short amount of time. But it would take longer to learn a semester of French or biology no matter how motivated you are, because these courses can’t be summarized by a small number of general principles. And while Sivers learned basic music theory in three hours, he says it took him 15 years to learn how to sing.

Did Sivers’ mentor expose him to everything students taking music theory classes are exposed to? Probably not. But apparently Sivers did learn the most important material, both in the opinion of his mentor and in the opinion of the people who created the placement exams. His mentor not only taught him a lot of ideas in a short amount of time, he also told him when it was time to move on to something else.

It’s hard to say when you’ve learned something. Any subject can be explored in infinite detail. But there comes a point when you’ve learned a subject well enough. Maybe you’ve learned it to your personal satisfaction or you’ve learned it well enough for an exam. Maybe you’ve reached diminishing returns on your efforts or you’ve learned as much as you need to for now.

One way to greatly speed up learning is to realize when you’ve learned enough. A mentor can say something like “You don’t know everything, but you’ve learned about as much as you’re going to until you get more experience.”

Occasionally I’ll go from feeling I don’t understand something to feeling I do understand it in a moment, and not because I’ve learned anything new. I just realize that maybe I do understand it after all. It’s a feeling like eating a meal quickly and stopping before you feel full. A few minutes later you feel full, not because you’ve eaten any more, but only because your body realizes you’re full.
