Programmers without computers

When I started my first job as a programmer, I was surprised how much time my colleagues spent at their computers. Of course a computer programmer needs to spend a fair amount of time sitting at a computer, but why did people spend nearly 100% of their time in front of a monitor? This seemed strange to me since I hadn’t worked this way before. I had always alternated thinking away from a computer and sitting down at a computer.

I was even more puzzled when the network went down, which it often did. Half of us worked on Windows PCs and half worked on Unix workstations. When the network was down, the PC folks kept working because they had self-contained local work environments.

But the Unix folks would stand in the halls until the network came back up or go home if it looked like the network wasn’t going to come up soon.  They had computers on their desks, but these were primarily used as terminals to connect to servers. So without a network, the Unix folks essentially had no computers. Everyone agreed that meant they couldn’t get any work done. That seemed bizarre to me.

At that time, I knew how to program, but I knew almost nothing about professional software development. Many of my ideas were naive. But looking back, I think I was right about one thing: programmers need to stand up and think more. Too often, that’s the last thing we do.

Related links:

Programmer problem solving sequence
Step Away from the Computer by Rich Hickey
Why programmers are not paid in proportion to their productivity
Chinese translation of this post

Weekend miscellany

Typography and design

11 important digital fonts
The pilcrow
Vintage posters

Music

Five seconds of every #1 hit song
Hymnopedia

Economics

The Great Stagnation
Make everyone hurt

Computer humor

An update is available for your computer
How a programmer reads your résumé

Theology

Bono interview: Grace over Karma

Open source software

Producing Open Source Software
German foreign office moves back to Windows from Linux

Absence of evidence

Here’s a little saying that irritates me:

Absence of evidence is not evidence of absence.

It’s the kind of thing a Sherlock Holmes-like character might say in a detective novel. The idea is that we can’t be sure something doesn’t exist just because we haven’t seen it yet.

What bothers me is that the statement misuses the word “evidence.” The statement would be correct if we substituted “proof” for “evidence.” We can’t conclude with absolute certainty that something doesn’t exist just because we haven’t yet proved that it does. But evidence is not the same as proof.

Why do we believe that dodo birds are extinct? Because no one has seen one in three centuries. That is, there is an absence of evidence that they exist. That is tantamount to evidence that they do not exist. It’s logically possible that a dodo bird is alive and well somewhere, but there is overwhelming evidence to suggest this is not the case.

Evidence can lead to the wrong conclusion. Why did scientists believe that the coelacanth was extinct? Because no one had seen one except in fossils. The species was believed to have gone extinct 65 million years ago. But in 1938 a fisherman caught one. Absence of evidence is not proof of absence.

coelacanth, a fish once thought to be extinct

Though it is not proof, absence of evidence is unusually strong evidence, due to a subtle statistical result. Compare the following two scenarios.

Scenario 1: You’ve sequenced the DNA of a large number of prostate tumors and found that not one had a particular genetic mutation. How confident can you be that prostate tumors never have this mutation?

Scenario 2: You’ve found that 40% of prostate tumors in your sample have a particular mutation. How confident can you be that 40% of all prostate tumors have this mutation?

It turns out you can have more confidence in the first scenario than the second. If you’ve tested N subjects and not found the mutation, the length of your confidence interval around zero is proportional to 1/N. But if you’ve tested N subjects and found the mutation in 40% of them, the length of your confidence interval around 0.40 is proportional to 1/√N. So, for example, if N = 10,000 then the former interval has length on the order of 1/10,000 while the latter interval has length on the order of 1/100. This is known as the rule of three. You can find both a frequentist and a Bayesian justification of the rule here.
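The two interval widths are easy to compare numerically. Here is a minimal sketch in Python: the rule of three gives the 3/N upper bound for the zero-event case, and for the 40% case I use the standard Wald (normal-approximation) interval as a stand-in for whatever exact method one might prefer. The function names are just illustrative.

```python
import math

def rule_of_three_upper(n):
    """Approximate 95% upper bound on p after n trials with zero events.

    The rule of three: solve (1 - p)^n = 0.05 for p, and use
    log(0.05) ~ -3 to get the bound p < 3/n.
    """
    return 3.0 / n

def wald_half_width(p_hat, n, z=1.96):
    """Half-width of the 95% Wald (normal-approximation) interval around p_hat."""
    return z * math.sqrt(p_hat * (1.0 - p_hat) / n)

n = 10_000
zero_case = rule_of_three_upper(n)        # interval [0, 3/n]: width on the order of 1/n
forty_case = wald_half_width(0.40, n)     # width on the order of 1/sqrt(n)

print(f"Zero events in {n} samples: p < {zero_case:.5f}")
print(f"40% observed in {n} samples: 0.40 +/- {forty_case:.5f}")
```

With N = 10,000 the zero-event bound is 0.0003 while the interval around 0.40 is roughly ±0.0096, illustrating the 1/N versus 1/√N scaling.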

Absence of evidence is unusually strong evidence that something is at least rare, though it’s not proof. Sometimes you catch a coelacanth.

Related posts:

Estimating the chances of something that hasn’t happened
Complementary validation

Weekend miscellany

Photography

Egypt 1920s in color

Productivity

The 3/2 rule of employee productivity

Computing

NumPy array introduction
IPv4 X-day arrived and no one died

Music

Against background music
Adult culture

Math

Algebraic surfaces
Ramanujan formulas for computing pi

Statistics

The fourth quadrant
Statistical data mining tutorials

History

How the term “scientist” came to be, with updates from Will Fitzgerald and Pat Ballew.

How the term "scientist" came to be

For most of history, scientists have been called natural philosophers. You might expect that scientist gradually and imperceptibly replaced natural philosopher over time. Surprisingly, it’s possible to pinpoint exactly when and where the term scientist was born.

It was June 24, 1833 at a meeting of the British Association for the Advancement of Science. Romantic poet Samuel Taylor Coleridge was in attendance. (He had previously written about the scientific method.) Coleridge declared that although he was a true philosopher, the term philosopher should not be applied to the association’s members. William Whewell responded by coining the word scientist on the spot. He suggested

by analogy with artist, we may form scientist.

Since those who practice art are called artists, those who practice science should be called scientists.

This story comes from the prologue of Laura Snyder’s new book The Philosophical Breakfast Club. The subtitle is “Four Remarkable Friends Who Transformed Science and Changed the World.” William Whewell was one of these four friends. The others were John Herschel, Richard Jones, and Charles Babbage.

Update 1: Will Fitzgerald created the following Google Books ngram that suggests that scientist was used occasionally before 1833 and would take another 30 years to start being widely used in books. Click on the image to visit the original ngram.

So it is with many innovations: the person credited with the innovation may not have been entirely original or immediately successful. Still, perhaps Whewell’s public confrontation with Coleridge gave scientist a push on the road to acceptance.

Update 2: Pat Ballew fills in more of the story on his blog including editorial opposition to the term scientist. Pat brings more famous people into the story, including H. L. Mencken, Michael Faraday, and William Cullen Bryant.

Update 3: Here’s an excerpt from The Philosophical Breakfast Club.

More 19th century science:

Why Mr. Scott is Scottish
Victorian method for predicting height
Grand unified theory of 19th century math

The end of hard-edged science?

Bradley Efron says that science is moving away from things like predicting sunrise times and toward predicting things like the weather. The trend is away from studying precisely predictable systems, what Efron calls “hard-edged science,” and toward studying systems “where predictability is tempered by a heavy dose of randomness.”

Hard-edged science still dominates public perceptions, but the attention of modern scientists has swung heavily toward rainfall-like subjects, the kind where random behavior plays a major role. … Deterministic Newtonian science is majestic, and the basis of modern science too, but a few hundred years of it pretty much exhausted nature’s storehouse of precisely predictable events. Subjects like biology, medicine, and economics require a more flexible scientific world view, the kind we statisticians are trained to understand.

Certainly there is increased interest in systems containing “a heavy dose of randomness,” but can we really say that we have “pretty much exhausted nature’s storehouse of precisely predictable events”?

Source: Modern Science and the Bayesian-Frequentist Controversy

Related posts:

Scientific results fading over time
Occam’s razor and Bayes’ theorem
The law of medium numbers

Final velocity

My daughter and I were going over science homework this evening. A ball falls for 10 seconds. What is its final velocity?

JC: So how fast is the ball going when it hits the ground?

RC: Zero. It stops before it bounces back up.

JC: Well, how fast is it going just before it hits the ground?

RC: They didn’t ask the almost final velocity. They asked for the final velocity.

Weekend miscellany

Science

Glass melts near absolute zero
Bioengineered blood vessels

Neal Stephenson essays

What the strange persistence of rockets can teach us about innovation
In the beginning was the command line

Math

History of non-Euclidean geometry
Nineteen dubious ways to compute the exponential of a matrix

Statistics

Could Fisher, Jeffreys and Neyman have agreed on testing?
Top 500 data blogs ranked by influence

Music

Electronically enhanced acoustica

Why Food for the Hungry runs Ubuntu

Rick Richter is CIO of Food for the Hungry. In this interview Rick explains why his organization is moving all of its computers to Ubuntu.

Ethiopian farmer Ato Admasu. Photo credit Food for the Hungry.

John: Tell me a little about Food for the Hungry and what you do there.

Rick: Food for the Hungry is a Christian relief and development organization. We go in to relief situations — maybe there has been a natural disaster or war — and provide life-sustaining needs: food, shelter, whatever the need may be.  For example, the recent earthquake in Haiti. But the other part of what we do is the sustained, long-term development on the community level. The idea is to work with leaders and churches to better take care of themselves rather than relying on outside organizations for support.

I’m the CIO. I’m in charge of the information and technology for the organization. We’re in 25 countries. I have staff all over the world, about 25 people. There are about 12 who work directly for global IT, mostly in Phoenix, and the rest in various countries.  There are also people who work directly for local offices, for example in Kenya, that coordinate with global IT. We’re responsible for about 900 computers.

John: You and I were talking the other day about your organization’s project to move all its computers over to Ubuntu.

Rick: We started an informal process to convert to Ubuntu two and a half years ago. It started when my son went to Bangladesh. He spent the summer there and converted some of their computers to Ubuntu. At first we didn’t have full management support for the process. They don’t really understand it and it scares them.

There were individual country directors interested in the project and I talked it up. There’s some independence in the organization to make those kinds of decisions. Now, for the first time, we have full support of management for the conversion on a wide scale. I’m going to Cambodia next week. Right now they’re all running Windows but before I leave they’ll be running Ubuntu. In Asia we probably have about 80% of our computers on Ubuntu. We don’t have big offices in Asia. Our bigger offices are in Africa and they’re a little slower to adopt. Until now, a lot of it depended on whether the local country director was ready to change.

We found it was important for a number of reasons. One is security. Linux is not as vulnerable to viruses. We have so many places where entire computer systems have been totally crippled because of viruses. A lot of networks are very primitive, so the network is basically a thumb drive between offices in a country. A thumb drive is the best way to transmit viruses you can find.

We’ve also found in the last few years anti-virus software has become less and less effective. Three or four years ago, if you had up-to-date anti-virus software you wouldn’t get a virus. These days, you still get them. Some of our staff have other jobs within FH besides their IT responsibilities and may not have a lot of IT experience. As a result, staff often do not have the time to pro-actively manage IT.

Another issue is maintainability. Windows computers don’t run as well over time. With Ubuntu, when we come back to a computer two years later it’s in as good a shape as we left it.

Linux requires much less hardware to run than Windows. We have eight- or nine-year-old computers at a lot of our sites that will no longer run or barely run Windows.

John: So saving money on software licenses is a benefit, but not the main consideration.

Rick: Saving money on licenses is important, but it’s not the driving force. We’re a non-profit and we have a contract with Microsoft where we get pretty good prices.

Another reason for moving to Ubuntu is that in some countries it is very difficult to legally obtain licenses. Sometimes it’s next to impossible. You can’t buy legal Microsoft licenses in some places, or if you can, the price is outrageous. So many legalities and so many weird hoops you have to jump through.

As a Christian organization we need to set a good example and make sure all our licenses are legal. We want to be clear and up-front about our software. Ubuntu eliminates that problem.

John: What experience have you had retraining your IT people to support Linux?

Rick: We have IT professionals and we have people who are much less skilled. Most of the IT people who do the support have really bought into it. They’re excited about it and they’re pushing it. Those who do support in the field who have had less exposure, some of them have bought into it, some have not as much. It requires time. It requires dedication. It also required commitment from their management.

Related posts:

Robust, scalable, and the keyboard works
Free Ubuntu Linux book
Geek fatigue
New spin on The Cathedral and the Bazaar