Curious, exciting, and slightly disturbing

This weekend I’ve been wrapping up unfinished projects. One of those projects was reading Kraken: The Curious, Exciting, and Slightly Disturbing Science of Squid (ISBN 0810984652).

The book is exactly what you might expect from the title: a quirky little book about squid. I didn’t particularly enjoy it, but that’s my fault. I just wasn’t as interested in reading a quirky little book about squid as I thought I would be when the publisher offered me a copy. Squid are bizarre creatures, and some other time I might enjoy reading more about them.

The title is terrific. I probably wouldn’t have given the book a second thought if it had been entitled, for example, Teuthology. And the title isn’t just sensational; squid really are curious, possibly exciting, and at least slightly disturbing.

Scripting and the last mile problem

From Bruce Payette’s book Windows PowerShell in Action:

Why do we care about command-line management and automation? Because it helps to solve the Information Technology professional’s version of the last mile problem. The last mile problem is a classical problem that comes from the telecommunications industry. It goes like this: the telecom industry can effectively amortize its infrastructure costs across all its customers until it gets to the last mile where the service is finally run to an individual location. … In the Information Technology (IT) industry, the last mile problem is figuring out how to manage each IT installation effectively and economically.

To manage this infrastructure you need a toolkit. Why not use the same toolkit for operations that was used for development?

This toolkit cannot merely be the same tools used to develop the overall infrastructure, because the level of detail required is too great. Instead, you need a set of tools with a higher level of abstraction.


Where OO has succeeded most

Eric Raymond makes an interesting observation on where object oriented programming has been most successful.

The OO design concept initially proved valuable in the design of graphics systems, graphical user interfaces, and certain kinds of simulation. To the surprise and gradual disillusionment of many, it has proven difficult to demonstrate significant benefits of OO outside those areas. It’s worth trying to understand why. …

One reason that OO has succeeded most where it has (GUIs, simulation, graphics) may be because it’s relatively difficult to get the ontology of types wrong in those domains. In GUIs and graphics, for example, there is generally a rather natural mapping between manipulable visual objects and classes.

I believe he overstates his case when he says it has been difficult to show benefits of OO outside graphics and simulation. But I agree that it’s harder to create good object oriented software when there isn’t a clear physical model to guide the organization of the code. A programmer may partition functionality into classes according to a mental model that other programmers do not share.

When you’re developing graphics or simulation software, it’s more likely that multiple people will share the same mental model. But even in simulation the boundaries are not so clear. I’ve seen a lot of software to simulate clinical trials. There the objects are fairly obvious: patients, treatments, etc. But there are many ways to decide, for example, what functionality should be part of a software object representing a patient. What seems natural to one developer may seem completely arbitrary to another.
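
To make that concrete, here’s a small hypothetical sketch in Python. The class names and the formula are invented for illustration, not taken from any real trial simulation. Both designs compute the same thing; they just carve up the ontology differently:

    # Hypothetical sketch: the same simulation step can live on either object.
    class Treatment:
        def __init__(self, efficacy):
            self.efficacy = efficacy

        def apply_to(self, patient):            # option 1: the treatment owns the logic
            return self.efficacy * (1 - patient.risk_score)

    class Patient:
        def __init__(self, risk_score):
            self.risk_score = risk_score

        def respond_to(self, treatment):        # option 2: the patient owns the logic
            return treatment.efficacy * (1 - self.risk_score)

    t, p = Treatment(efficacy=0.5), Patient(risk_score=0.25)
    print(t.apply_to(p))      # 0.375
    print(p.respond_to(t))    # 0.375 -- same answer, different ontology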


Numerical exceptions

Someone sent me a question yesterday that boiled down to the difference between kinds of numeric exceptions. I’ll give my response below, but first a little background.

Numeric exceptions occur when a computer does some operation on a number that produces an error. The abstraction that lets us think of computer numbers as mathematical numbers has sprung a leak. On Windows you may see output that looks like -1.#IND or 1.#INF. On Linux you may see nan or inf. These values do not correspond to mathematical real numbers but instead are codes saying what went wrong.

If you’ve heard of NaNs (NaN stands for “not a number”) you might call every numerical exception a NaN. That’s reasonable since indeed an exception is “not a number”, or at least not an ordinary number. The problem is that NaN has a more restricted technical meaning that excludes some kinds of exceptions.

An infinite value is an exception but not a NaN. The difference is important. Some mathematical operations still make sense on an infinite value, but no operations make sense on a NaN. For example, if two floating point values are infinite and have the same sign, they are equal. But a NaN cannot equal anything, not even itself. So in C, if x is a double then the test x == x will return true if x is infinite but not if x is a NaN.
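
The same comparison is easy to try interactively. Here’s a quick check in Python, whose floats follow the same IEEE 754 comparison rules as a C double:

    inf = float("inf")
    nan = float("nan")

    print(inf == inf)    # True: infinities with the same sign compare equal
    print(nan == nan)    # False: a NaN is not equal to anything, not even itself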

The question that motivated this post had assumed that an infinite value was a NaN.

No, an infinity is not a NaN.

It all makes sense when you think about it. A NaN is a computer’s way of saying “I don’t know what else to do.” An infinity is the computer saying “It’s bigger than I can handle, but I’ll preserve the sign.”

For example, let x be the largest finite number a computer can represent. What is 2*x? Too big to represent, but it’s positive, so it’s +infinity. What’s -2*x? It’s -infinity.

But what is sqrt(-1)? It’s not big, so it’s not infinity. It’s just complex. Nothing else makes sense, so the computer returns NaN.
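
Here’s a short Python example that produces both kinds of exceptional value. One wrinkle: Python’s math.sqrt raises an exception on negative input rather than returning a NaN, so the NaN below comes from a different indeterminate operation, infinity minus infinity.

    import sys

    x = sys.float_info.max      # largest finite double
    print(2 * x)                # inf: too big to represent, but the sign is preserved
    print(-2 * x)               # -inf

    nan = float("inf") - float("inf")   # an operation with no sensible answer
    print(nan)                  # nan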

Windows displays infinite results as 1.#INF or -1.#INF depending on the sign. Linux displays inf or -inf. Windows displays NaNs as -1.#IND (“ind” for “indeterminate”) and Linux displays nan.

For more details see these notes: IEEE floating point exceptions in C++


The myth of the Lisp genius

I’m fascinated by the myth of the Lisp genius, the eccentric programmer who accomplishes super-human feats writing Lisp. I’m not saying that such geniuses don’t exist; they do. Here I’m using “myth” in the sense of a story with archetypal characters that fuels the imagination. I’m thinking of myth in the sense of Joseph Campbell, not Mythbusters.

Richard Stallman is a good example of the Lisp genius. He’s a very strange man, amazingly talented, and a sort of tragic hero. Plus he has the hair and beard to fit the wizard archetype.

Let’s assume that Lisp geniuses are rare enough to inspire awe but not so rare that we can’t talk about them collectively. Maybe in the one-in-a-million range. What lessons can we draw from Lisp geniuses?

One conclusion would be that if you write Lisp, you too will have super-human programming ability. Or maybe, even if Lisp won’t take you from mediocrity to genius level, it will still make you much more productive.

Another possibility is that super-programmers are attracted to Lisp. That’s the position taken in The Bipolar Lisp Programmer. In that case, lesser programmers turning to Lisp in hopes of becoming super productive may be engaging in a bit of cargo cult thinking.

I find the latter more plausible, that exceptional programmers are often attracted to Lisp. It may be that Lisp helps very talented programmers accomplish more. Lisp imposes almost no structure, and that could be attractive to highly creative people. More typical programmers might benefit from languages that provide more structure.

I’m skeptical when I hear someone say that he was able to program circles around his colleagues and it’s all because he writes Lisp. Assuming such a person accurately assesses his productivity relative to his peers, it’s hard to attribute such a vast difference to Lisp (or any other programming language).

Programming languages do make a difference in productivity for particular tasks. There are reasons why different tasks are commonly done in different kinds of languages. But I believe talent makes even more of a difference, especially in the extremes. If one person does a job in half the time of another, maybe it can be attributed to their choice of programming languages. If one does it in 1% of the time of another, it’s probably a matter of talent.

There are genius programmers who write Lisp, and Lisp may suit them well. But these same folks would also be able to accomplish amazing things in other languages. I think of Donald Knuth writing TeX in Pascal, and a very conservative least-common-denominator subset of Pascal at that. He may have been able to develop TeX faster using a more powerful language, but perhaps not much faster.


How much time do scientists spend chasing grants?

Computer scientist Matt Welsh said that one reason he left Harvard for Google was that he was spending 40% of his time chasing grants. At Google, he devotes all his time to doing computer science. Here’s how he describes it in his blog post The Secret Lives of Professors:

The biggest surprise is how much time I have to spend getting funding for my research. Although it varies a lot, I guess that I spent about 40% of my time chasing after funding, either directly (writing grant proposals) or indirectly (visiting companies, giving talks, building relationships). It is a huge investment of time that does not always contribute directly to your research agenda — just something you have to do to keep the wheels turning.

According to this Scientific American editorial, 40% is typical.

Most scientists finance their laboratories (and often even their own salaries) by applying to government agencies and private foundations for grants. The process has become a major time sink. In 2007 a U.S. government study found that university faculty members spend about 40 percent of their research time navigating the bureaucratic labyrinth, and the situation is no better in Europe.

Not only do scientists on average spend a large amount of time pursuing grants, they tend to spend more time on grants as their career advances. (This has an element of tautology: you advance your career in part by obtaining grants, so the most successful are the ones who have secured the most grant money.)

By the time scientists are famous, they may no longer spend much time actually doing science. They may spend nearly all their research time chasing grants either directly or, as Matt Welsh describes, indirectly by traveling, speaking, and schmoozing.

History is strange

From historian Patrick Allitt of Emory University:

History is strange, it’s alien, and it won’t give us what we would like to have. If you hear a historical story and at the end you feel thoroughly satisfied by it and find that it perfectly coincides with your political inclinations, it probably means that you’re actually listening to ideology or mythology. History won’t oblige us, and much of its challenge and interest comes from its immovable differentness from us and our own world.

Teaching Bayesian stats backward

Most presentations of Bayesian statistics I’ve seen start with elementary examples of Bayes’ Theorem. And most of these use the canonical example of testing for rare diseases. But the connection between these examples and Bayesian statistics is not obvious at first. Maybe this isn’t the best approach.

What if we begin with the end in mind? Bayesian calculations produce posterior probability distributions on parameters. An effective way to teach Bayesian statistics might be to start there. Suppose we had probability distributions on our parameters. Never mind where they came from. Never mind classical objections that say you can’t do this. What if you could? If you had such distributions, what could you do with them?

For starters, point estimation and interval estimation become trivial. You could, for example, use the distribution mean as a point estimate and the area between two quantiles as an interval estimate. The distributions tell you far more than point estimates or interval estimates could; these estimates are simply summaries of the information contained in the distributions.
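
For example, here’s a minimal sketch in Python using SciPy. The Beta(8, 4) posterior for a success probability is invented purely for illustration; pretend it came from somewhere.

    from scipy.stats import beta

    posterior = beta(8, 4)                      # hypothetical posterior for a proportion

    point_estimate = posterior.mean()           # posterior mean, 2/3 here
    interval = posterior.ppf([0.025, 0.975])    # central 95% credible interval

    print(point_estimate)
    print(interval)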

It makes logical sense to start with Bayes’ Theorem since that’s the tool used to construct posterior distributions. But I think it makes pedagogical sense to start with the posterior distribution and work backward to how one would come up with such a thing.

Bayesian statistics is so named because Bayes’ Theorem is essential to its calculations. But that’s a little like calling classical statistics “Central Limitist” statistics because it relies heavily on the Central Limit Theorem.

The key idea of Bayesian statistics is to represent all uncertainty by probability distributions. That idea can be obscured by an early emphasis on calculations.


The first FORTRAN program

The first FORTRAN compiler shipped this week in 1957. Herbert Bright gives his account of running his first FORTRAN program with the new compiler here.

(Bright gives the date as Friday, April 20, 1957, but April 20 fell on a Saturday that year. It seems more plausible that he correctly remembered the day of the week — he says it was late on a Friday afternoon — than that he remembered the day of the month, so it was probably Friday, April 19, 1957.)
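
If you’d like to check the day-of-week claim yourself, a couple of lines of Python will do it:

    from datetime import date

    print(date(1957, 4, 20).strftime("%A"))   # Saturday
    print(date(1957, 4, 19).strftime("%A"))   # Friday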

For more history, see Twenty Five Years of FORTRAN by J. A. N. Lee written in 1982.

Thanks to On This Day in Math for the story.

Learn one sed command

You may have seen sed programs even if you didn’t know that’s what they were. In online discussions it’s common to hear someone say

s/foo/bar/

as a shorthand to mean “replace foo with bar.” The line s/foo/bar/ is a complete sed program to do such a replacement.

sed comes with every Unix-like operating system and is available for Windows here. It has a range of features for editing files, but sed is worth using even if you only know how to do one thing with it:

sed "s/pattern1/pattern2/g" file.txt > newfile.txt

This will replace every instance of pattern1 with pattern2 in the file file.txt and will write the result to newfile.txt. The original file file.txt is unchanged.

I used to think there was no reason to use sed when other languages like Python will do everything sed does and much more. Suppose you agree with that. Now suppose you find you often have to make global search-and-replace operations and so you write a script to do this, say a Python script. You’ve got to call your script something, remember what you called it, and put it in your path. How about calling it sed? Or better, don’t write your script, but pretend that you did. If you’re on Linux, it’s already in your path. One advantage of the real sed over your script named sed is that the former can do a lot more, should you ever need it to.

Now for a few details regarding the sed command above. The “s” on the front stands for “substitute” and the “g” on the end stands for “global.” Without the “g” on the end, sed would only replace the first instance of the pattern on each line. If that’s what you want, then remove the “g.”

The patterns inside a sed command are regular expressions, so it’s best to get in the habit of always quoting sed commands. This isn’t necessary for simple string substitutions, but regular expressions often contain characters that you’ll need to prevent the shell from interpreting.

You may find the default regular expression support in sed odd or restrictive. If you’re used to regular expressions in Perl, Python, JavaScript, etc. and you’re using a GNU implementation of sed, you can add the -r option for more familiar regular expression syntax.

I got the idea for this post from Greg Grouthaus’ post Why you should learn just a little Awk. He makes a good case that you can benefit from learning just a few commands of a language like Awk with no intention to learn more of the language.


Third-system effect

The third-system effect describes a simple system rising like a phoenix out of the ashes of a system that collapsed under its own complexity.

A notorious ‘second-system effect’ often afflicts the successors of small experimental prototypes. The urge to add everything that was left out the first time around all too frequently leads to huge and overcomplicated design. Less well known, because less common, is the ‘third-system effect’: sometimes, after the second system has collapsed of its own weight, there is a chance to go back to simplicity and get it right.

From The Art of Unix Programming by Eric S. Raymond. Available online here.

Raymond says that Unix was such a third system. What are other examples of the third-system effect?


Personal organization software

I’ve tried various strategies and pieces of software for personal organization and haven’t been happy with most of them. I’ll briefly describe my criteria and what I’ve found.

My needs are fairly simple. I don’t need or want something that could scale to running a multinational corporation.

I’d like something with a portable, transparent data format. I don’t want the data stored in a hidden file or in a proprietary format. I’d like to be able to read the data without the software that was used to write it.

I’d like to be as structured or unstructured as I choose and not have to conform to a rigid database schema. I’d like to be able to do ad hoc queries as well as strongly typed queries.

I’d like something that exports to paper easily.

Here’s what I found: org-mode. It’s an Emacs mode for editing text files. It provides sophisticated functionality, but all the sophistication is in the software, not the data format. It’s more convenient to work with org-mode files in Emacs, but the raw file format is just lightweight markup, easy for a person or a computer to parse.
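
For example, a plain org file might look something like this (the content here is made up):

    * Projects
    ** Client work
       Free-form notes on the current engagement.
    ** Writing
       - [ ] draft post on floating point exceptions
       - [X] reply to reader email
    * Reference
      Quotes, links, and anything else worth keeping.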

When I went back to using Emacs a year ago after a 15-year hiatus, I heard good things about org-mode but didn’t understand what people liked about it. I heard it described as a to-do list manager and was not impressed. I’m not interested in the features I was first introduced to: tracking the status of to-do items and making agendas. I still don’t use those features. It took me a while to realize that org-mode was what I had been looking for. It was similar in spirit to something I’d thought about writing.

Emacs is an acquired taste. But someone who doesn’t use Emacs could get some good ideas from looking at org-mode. I imagine some people have borrowed its ideas and implemented them for other editors. If not, someone should.

The org-mode site has links to numerous introductions and tutorials. I like the FLOSS Weekly interview with org-mode’s creator Carsten Dominik. In it he explains his motivation for writing org-mode and gives a high-level overview of its features.


Significance testing and Congress

The US Supreme Court’s criticism of significance testing has been in the news lately. Here’s a criticism of significance testing involving the US Congress. Consider the following syllogism.

  1. If a person is an American, he is not a member of Congress.
  2. This person is a member of Congress.
  3. Therefore he is not American.

The initial premise is false, but the reasoning is correct if we assume the initial premise is true.

The premise that Americans are never members of Congress is clearly false. But it’s almost true! The probability of an American being a member of Congress is quite small, about 535/309,000,000. So what happens if we try to salvage the syllogism above by inserting “probably” in the initial premise and conclusion?

  1. If a person is an American, he is probably not a member of Congress.
  2. This person is a member of Congress.
  3. Therefore he is probably not American.

The conclusion is absurd: anyone in the US Congress is certainly American. What went wrong? The probability is backward. We want to know the probability that someone is American given that he is a member of Congress, not the probability that he is a member of Congress given that he is American.

Statistical significance testing routinely uses reasoning analogous to the flawed example above. We start with a “null hypothesis,” a hypothesis we seek to disprove. If our data are highly unlikely assuming this hypothesis, we reject that hypothesis.

  1. If the null hypothesis is correct, then these data are highly unlikely.
  2. These data have occurred.
  3. Therefore, the null hypothesis is highly unlikely.

Again the probability is backward. We want to know the probability of the hypothesis given the data, not the probability of the data given the hypothesis.

We can’t reject a null hypothesis just because we’ve seen data that are rare under this hypothesis. Maybe our data are even more rare under the alternative. It is rare for an American to be in Congress, but it is even more rare for someone who is not American to be in the US Congress!
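
Here’s a rough back-of-the-envelope version of that comparison in Python, using round, made-up-but-plausible numbers: about 535 members of Congress, about 309 million Americans, a world population of roughly 6.9 billion, and the assumption that essentially no member of the US Congress is a non-American.

    americans = 309_000_000
    world     = 6_900_000_000
    congress  = 535

    p_american = americans / world
    p_congress_given_american     = congress / americans   # tiny: about 1.7e-6
    p_congress_given_not_american = 0.0                     # smaller still: essentially zero

    # P(American | member of Congress) by Bayes' theorem
    numerator   = p_congress_given_american * p_american
    denominator = numerator + p_congress_given_not_american * (1 - p_american)
    print(numerator / denominator)    # 1.0 -- rare data can still favor the hypothesis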

I found this illustration in The Earth is Round (p < 0.05) by Jacob Cohen (1994). Cohen in turn credits Pollard and Richardson (1987) in his references.

 


A magic king’s tour

After posting about a magic square made from a knight’s tour, I wondered whether there are magic squares made from a king’s tour. (A king can move one square in any direction. A tour is a sequence of moves that lands on each square of a chess board exactly once.) I found George Jelliss’ site via the comments to that post and learned that there are indeed magic king’s tours. Here’s one published in 1917.

Here’s the path a king would take in the square above:

The knight’s tour magic square had rows and columns that summed to 260, though the diagonals did not. In fact, someone has proved that a knight’s tour on an 8×8 board cannot be diagonally magic. (Thanks John V.)

In the king’s tour above, however, the rows, columns, and diagonals all sum to 260. George Jelliss has posted notes that classify all such magic squares that have biaxial symmetry. See his site for much more information.
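
The 1917 square isn’t reproduced as text here, but verifying such a square is easy to automate. Here’s a sketch of a Python function that checks all three properties: rows, columns, and both diagonals summing to 260, and consecutive numbers lying a king’s move apart.

    def is_magic_kings_tour(board):
        """board: 8x8 list of lists containing the numbers 1 through 64."""
        n, magic = 8, 260
        rows_ok = all(sum(row) == magic for row in board)
        cols_ok = all(sum(board[r][c] for r in range(n)) == magic for c in range(n))
        diag_ok = (sum(board[i][i] for i in range(n)) == magic and
                   sum(board[i][n - 1 - i] for i in range(n)) == magic)

        # consecutive numbers must be adjacent horizontally, vertically, or diagonally
        pos = {board[r][c]: (r, c) for r in range(n) for c in range(n)}
        king_ok = all(max(abs(pos[k][0] - pos[k + 1][0]),
                          abs(pos[k][1] - pos[k + 1][1])) == 1
                      for k in range(1, n * n))

        return rows_ok and cols_ok and diag_ok and king_ok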