Curious, exciting, and slightly disturbing

This weekend I’ve been wrapping up unfinished projects. One of those projects was reading Kraken: The Curious, Exciting, and Slightly Disturbing Science of Squid (ISBN 0810984652).

The book is exactly what you might expect from the title: a quirky little book about squid. I didn’t particularly enjoy it, but that’s my fault. I just wasn’t as interested in reading a quirky little book about squid as I thought I would be when the publisher offered me a copy. Squid are bizarre creatures, and some other time I might enjoy reading more about them.

The title is terrific. I probably wouldn’t have given the book a second thought if it had been entitled, for example, Teuthology. And the title isn’t just sensational; squid really are curious, possibly exciting, and at least slightly disturbing.

Scripting and the last mile problem

From Bruce Payette’s book Windows PowerShell in Action:

Why do we care about command-line management and automation? Because it helps to solve the Information Technology professional’s version of the last mile problem. The last mile problem is a classical problem that comes from the telecommunications industry. It goes like this: the telecom industry can effectively amortize its infrastructure costs across all its customers until it gets to the last mile where the service is finally run to an individual location. … In the Information Technology (IT) industry, the last mile problem is figuring out how to manage each IT installation effectively and economically.

To manage this infrastructure you need a toolkit. Why not use the same toolkit for operations that was used for development?

This toolkit cannot merely be the same tools used to develop the overall infrastructure as the level of detail required is too great. Instead, you need a set of tools with a higher level of abstraction.

Where OO has succeeded most

Eric Raymond makes an interesting observation on where object oriented programming has been most successful.

The OO design concept initially proved valuable in the design of graphics systems, graphical user interfaces, and certain kinds of simulation. To the surprise and gradual disillusionment of many, it has proven difficult to demonstrate significant benefits of OO outside those areas. It’s worth trying to understand why. …

One reason that OO has succeeded most where it has (GUIs, simulation, graphics) may be because it’s relatively difficult to get the ontology of types wrong in those domains. In GUIs and graphics, for example, there is generally a rather natural mapping between manipulable visual objects and classes.

I believe he overstates his case when he says it has been difficult to show benefits of OO outside graphics and simulation. But I agree that it’s harder to create good object oriented software when there isn’t a clear physical model to guide the organization of the code. A programmer may partition functionality into classes according to a mental model that other programmers do not share.

When you’re developing graphics or simulation software, it’s more likely that multiple people will share the same mental model. But even in simulation the boundaries are not so clear. I’ve seen a lot of software to simulate clinical trials. There the objects are fairly obvious: patients, treatments, etc. But there are many ways to decide, for example, what functionality should be part of a software object representing a patient. What seems natural to one developer may seem completely arbitrary to another.

Numerical exceptions

Someone sent me a question yesterday that boiled down to the difference between kinds of numeric exceptions. I’ll give my response below, but first a little background.

Numeric exceptions occur when a computer does some operation on a number that produces an error. The abstraction that lets us think of computer numbers as mathematical numbers has sprung a leak. On Windows you may see output that looks like -1.#IND or 1.#INF. On Linux you may see nan or inf. These values do not correspond to mathematical real numbers but instead are codes saying what went wrong.

If you’ve heard of NaNs (NaN stands for “not a number”) you might call every numerical exception a NaN. That’s reasonable since indeed an exception is “not a number”, or at least not an ordinary number. The problem is that NaN has a more restricted technical meaning that excludes some kinds of exceptions.

An infinite value is an exception but not a NaN. The difference is important. Some mathematical operations still make sense on an infinite value, but no operations make sense on a NaN. For example, if two floating point values are infinite and have the same sign, they are equal. But a NaN cannot equal anything, not even itself. So in C, if x is a double then the test x == x will return true if x is infinite but not if x is a NaN.
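
The difference is easy to see in a few lines of C. The following is only a sketch; it assumes a C99 compiler, which provides the INFINITY and NAN macros in <math.h>.

    #include <math.h>   /* INFINITY and NAN macros (C99) */
    #include <stdio.h>

    int main(void)
    {
        double x = INFINITY;  /* positive infinity */
        double y = NAN;       /* quiet NaN */

        /* Infinities of the same sign compare equal. */
        printf("x == x: %d\n", x == x);  /* prints 1 */

        /* A NaN is not equal to anything, not even itself. */
        printf("y == y: %d\n", y == y);  /* prints 0 */

        return 0;
    }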

The question that motivated this post had assumed that an infinite value was a NaN.

No, an infinity is not a NaN.

It all makes sense when you think about it. A NaN is a computer’s way of saying “I don’t know what else to do.” An infinity is the computer saying “It’s bigger than I can handle, but I’ll preserve the sign.”

For example, let x be the largest finite number a computer can represent. What is 2*x? Too big to represent, but it’s positive, so it’s +infinity. What’s -2*x? It’s -infinity.

But what is sqrt(-1)? It’s not big, so it’s not infinity. It’s just complex. Nothing else makes sense, so the computer returns NaN.
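
Here is a small C sketch of both cases, assuming IEEE 754 arithmetic with the default rounding mode: doubling the largest finite double overflows to infinity, while taking the square root of -1 produces a NaN.

    #include <float.h>  /* DBL_MAX, the largest finite double */
    #include <math.h>   /* sqrt */
    #include <stdio.h>

    int main(void)
    {
        double big = DBL_MAX;

        double pos = 2.0 * big;    /* too big to represent: +infinity */
        double neg = -2.0 * big;   /* too big and negative: -infinity */
        double s   = sqrt(-1.0);   /* no real answer: NaN */

        /* The infinities print as inf and -inf on Linux; the NaN prints
           as nan or -nan depending on the platform and compiler. */
        printf("%g %g %g\n", pos, neg, s);
        return 0;
    }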

Windows displays infinite results as 1.#INF or -1.#INF depending on the sign. Linux displays inf or -inf. Windows displays NaNs as -1.#IND (“ind” for “indeterminate”) and Linux displays nan.

For more details see these notes: IEEE floating point exceptions in C++

The myth of the Lisp genius

I’m fascinated by the myth of the Lisp genius, the eccentric programmer who accomplishes super-human feats writing Lisp. I’m not saying that such geniuses don’t exist; they do. Here I’m using “myth” in the sense of a story with archetypical characters that fuels the imagination.  I’m thinking myth in the sense of Joseph Campbell, not Mythbusters.

Richard Stallman is a good example of the Lisp genius. He’s a very strange man, amazingly talented, and a sort of tragic hero. Plus he has the hair and beard to fit the wizard archetype.

Let’s assume that Lisp geniuses are rare enough to inspire awe but not so rare that we can’t talk about them collectively. Maybe in the one-in-a-million range. What lessons can we draw from Lisp geniuses?

One conclusion would be that if you write Lisp, you too will have super-human programming ability. Or maybe if Lisp won’t take you from mediocrity to genius level, it will still make you much more productive.

Another possibility is that super-programmers are attracted to Lisp. That’s the position taken in The Bipolar Lisp Programmer. In that case, lesser programmers turning to Lisp in hopes of becoming super productive may be engaging in a bit of cargo cult thinking.

I find the latter more plausible, that exceptional programmers are often attracted to Lisp. It may be that Lisp helps very talented programmers accomplish more. Lisp imposes almost no structure, and that could be attractive to highly creative people. More typical programmers might benefit from languages that provide more structure.

I’m skeptical when I hear someone say that he was able to program circles around his colleagues and it’s all because he writes Lisp. Assuming such a person accurately assesses his productivity relative to his peers, it’s hard to attribute such a vast difference to Lisp (or any other programming language).

Programming languages do make a difference in productivity for particular tasks. There are reasons why different tasks are commonly done in different kinds of languages. But I believe talent makes even more of a difference, especially in the extremes. If one person does a job in half the time of another, maybe it can be attributed to their choice of programming languages. If one does it in 1% of the time of another, it’s probably a matter of talent.

There are genius programmers who write Lisp, and Lisp may suit them well. But these same folks would also be able to accomplish amazing things in other languages. I think of Donald Knuth writing TeX in Pascal, and a very conservative least-common-denominator subset of Pascal at that. He may have been able to develop TeX faster using a more powerful language, but perhaps not much faster.

How much time do scientists spend chasing grants?

Computer scientist Matt Welsh said that one reason he left Harvard for Google was that he was spending 40% of his time chasing grants. At Google, he devotes all his time to doing computer science. Here’s how he describes it in his blog post The Secret Lives of Professors:

The biggest surprise is how much time I have to spend getting funding for my research. Although it varies a lot, I guess that I spent about 40% of my time chasing after funding, either directly (writing grant proposals) or indirectly (visiting companies, giving talks, building relationships). It is a huge investment of time that does not always contribute directly to your research agenda—just something you have to do to keep the wheels turning.

According to this Scientific American editorial, 40% is typical.

Most scientists finance their laboratories (and often even their own salaries) by applying to government agencies and private foundations for grants. The process has become a major time sink. In 2007 a U.S. government study found that university faculty members spend about 40 percent of their research time navigating the bureaucratic labyrinth, and the situation is no better in Europe.

Not only do scientists on average spend a large amount of time pursuing grants, they tend to spend more time on grants as their career advances. (This has an element of tautology: you advance your career in part by obtaining grants, so the most successful are the ones who have secured the most grant money.)

By the time scientists are famous, they may no longer spend much time actually doing science. They may spend nearly all their research time chasing grants either directly or, as Matt Welsh describes, indirectly by traveling, speaking, and schmoozing.

History is strange

From historian Patrick Allitt of Emory University:

History is strange, it’s alien, and it won’t give us what we would like to have. If you hear a historical story and at the end you feel thoroughly satisfied by it and find that it perfectly coincides with your political inclinations, it probably means that you’re actually listening to ideology or mythology. History won’t oblige us, and much of its challenge and interest comes from its immovable differentness from us and our own world.

Teaching Bayesian stats backward

Most presentations of Bayesian statistics I’ve seen start with elementary examples of Bayes’ Theorem. And most of these use the canonical example of testing for rare diseases. But the connection between these examples and Bayesian statistics is not obvious at first. Maybe this isn’t the best approach.

What if we begin with the end in mind? Bayesian calculations produce posterior probability distributions on parameters. An effective way to teach Bayesian statistics might be to start there. Suppose we had probability distributions on our parameters. Never mind where they came from. Never mind classical objections that say you can’t do this. What if you could? If you had such distributions, what could you do with them?

For starters, point estimation and interval estimation become trivial. You could, for example, use the distribution mean as a point estimate and the area between two quantiles as an interval estimate. The distributions tell you far more than point estimates or interval estimates could; these estimates are simply summaries of the information contained in the distributions.
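
As a small illustration in C (C99), suppose we already have draws from a posterior distribution, however they were obtained; the sample values below are made up purely for illustration. A point estimate and a 95% interval estimate are then just a mean and two empirical quantiles.

    #include <stdio.h>
    #include <stdlib.h>

    /* Comparison function for qsort on doubles. */
    static int cmp_double(const void *a, const void *b)
    {
        double x = *(const double *)a, y = *(const double *)b;
        return (x > y) - (x < y);
    }

    int main(void)
    {
        /* Pretend these are draws from the posterior distribution of a
           parameter. In practice they might come from an MCMC run. */
        double draws[] = {0.42, 0.57, 0.48, 0.51, 0.39,
                          0.60, 0.45, 0.53, 0.50, 0.47};
        size_t n = sizeof draws / sizeof draws[0];

        /* Point estimate: the posterior mean. */
        double sum = 0.0;
        for (size_t i = 0; i < n; i++)
            sum += draws[i];
        double mean = sum / n;

        /* Interval estimate: crude empirical 2.5% and 97.5% quantiles. */
        qsort(draws, n, sizeof draws[0], cmp_double);
        double lo = draws[(size_t)(0.025 * (n - 1))];
        double hi = draws[(size_t)(0.975 * (n - 1))];

        printf("point estimate %.3f, 95%% interval (%.3f, %.3f)\n",
               mean, lo, hi);
        return 0;
    }

With only ten draws the quantiles are crude, but with thousands of posterior draws the same two lines give a usable interval estimate.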

It makes logical sense to start with Bayes’ Theorem since that’s the tool used to construct posterior distributions. But I think it makes pedagogical sense to start with the posterior distribution and work backward to how one would come up with such a thing.

Bayesian statistics is so named because Bayes’ Theorem is essential to its calculations. But that’s a little like calling classical statistics “Central Limitist” statistics because it relies heavily on the Central Limit Theorem.

The key idea of Bayesian statistics is to represent all uncertainty by probability distributions. That idea can be obscured by an early emphasis on calculations.

The first FORTRAN program

The first FORTRAN compiler shipped this week in 1957. Herbert Bright gives his account of running his first FORTRAN program with the new compiler here.

(Bright gives the date as Friday, April 20, 1957, but April 20 fell on a Saturday that year. It seems more plausible that he correctly remembered the day of the week—he says it was late on a Friday afternoon—than that he remembered the day of the month, so it was probably Friday, April 19, 1957.)

For more history, see Twenty Five Years of FORTRAN by J. A. N. Lee written in 1982.

Thanks to On This Day in Math for the story.