Richard St. John gave a three-minute presentation at TED on the secrets of success, summarizing his interviews of 500 successful people. His outline:

- Passion
- Work
- Good
- Focus
- Push
- Serve
- Ideas
- Persist

**Related posts**:


The latest episode of the Science and the Sea podcast explains how a protein that gives a certain species of jellyfish a faint glow is useful in research into cancer and other diseases.

**Related posts**:

I’ve started a new page to list CDs for music mentioned here.

The following is a direct quote from Anthony O’Hagan’s book Bayesian Inference. I’ve edited the quote only to enumerate the points.

Why should one use Bayesian inference, as opposed to classical inference? There are various answers. Broadly speaking, some of the arguments in favour of the Bayesian approach are that it is

- fundamentally sound,
- very flexible,
- produces clear and direct inferences,
- makes use of all available information.

I’ll elaborate briefly on each of O’Hagan’s points.

Bayesian inference has a solid philosophical foundation. It is consistent with certain axioms of rational inference. Non-Bayesian systems of inference, such as fuzzy logic, must violate one or more of these axioms; their conclusions are rationally satisfying to the extent that they approximate Bayesian inference.

Bayesian inference is at the same time rigid and flexible. It is rigid in the sense that all inference follows the same form: set up a likelihood and a prior, then calculate the posterior by conditioning on observed data via Bayes’ theorem. But this rigidity channels creativity into useful directions. It provides a template for setting up complex models when necessary.
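As a minimal sketch of this likelihood-prior-posterior template (my own illustration, not from O’Hagan), here is a conjugate beta-binomial update, where the posterior is available in closed form:

```python
# A hypothetical beta-binomial illustration.
# Prior: theta ~ Beta(a, b). Likelihood: s successes in n binomial trials.
# Conjugacy gives the posterior in closed form: Beta(a + s, b + n - s).

def posterior(a, b, s, n):
    """Update a Beta(a, b) prior after seeing s successes in n trials."""
    return a + s, b + n - s

# With a flat Beta(1, 1) prior and 7 successes in 10 trials:
a_post, b_post = posterior(1, 1, 7, 10)       # Beta(8, 4)
posterior_mean = a_post / (a_post + b_post)   # 2/3
```

Note how the posterior mean, 2/3, pulls the raw proportion 7/10 slightly toward the prior mean 1/2.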

Frequentist inferences are awkward to explain. For example, confidence intervals and p-values are tedious to define rigorously. Most consumers of confidence intervals and p-values do not know what they mean and implicitly assume Bayesian interpretations. The difference is not simply pedantic. Particularly with regard to p-values, the common understanding can be grossly inaccurate. By contrast, Bayesian counterparts are simple to define and interpret. Bayesian credible intervals are exactly what most people think confidence intervals are. And a Bayesian hypothesis test simply compares the probability of each hypothesis via Bayes factors.
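To make the last point concrete, here is a hypothetical sketch (my numbers, purely illustrative) of a Bayes factor comparing two simple hypotheses about a coin:

```python
from math import comb

# Hypothetical example: compare H1: P(heads) = 0.7 against H2: P(heads) = 0.5,
# having observed 7 heads in 10 flips. For simple hypotheses the Bayes factor
# is just the ratio of the likelihoods.

def binom_likelihood(p, s, n):
    """Probability of s heads in n flips if each flip lands heads with probability p."""
    return comb(n, s) * p**s * (1 - p)**(n - s)

bayes_factor = binom_likelihood(0.7, 7, 10) / binom_likelihood(0.5, 7, 10)
# bayes_factor is about 2.28: the data favor H1, but only mildly.
```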

Sometimes the *necessity* of specifying prior distributions is seen as a drawback to Bayesian inference. On the other hand, the *ability* to specify prior distributions means that more information can be incorporated in an inference. See Musicians, drunks, and Oliver Cromwell for a colorful illustration from Jim Berger on the need to incorporate prior information.

**Related posts**:

The previous post was an answer to a reader question. I would like to write more posts answering questions you have. Please send me your questions or suggestions for blog posts. You might ask me something I don’t know or something I don’t have the time to work on, so I’ll have to be selective with what questions I answer, but I’d like to answer a few questions now and then.

Someone sent me email regarding my online calculator for computing the distance between two locations given their longitude and latitude values. He wants to do sort of the opposite. Starting with the longitude and latitude of one location, he wants to find the longitude and latitude of locations moving north/south or east/west of that location. I like to answer reader questions when I can, so here goes. I’ll give a theoretical derivation followed by some Python code.

Longitude and latitude are usually measured in degrees, but theoretical calculations are cleaner in radians. Someone using the Python code below could think in terms of degrees; radians will only be used inside function implementations. We’ll use the fact that on a circle of radius *r*, an arc of angle θ radians has length *r*θ. We’ll assume the earth is a perfect sphere. See this post for a discussion of how close the earth is to being a sphere.

I’ll start with moving north/south since that’s simpler. Let R be the radius of the earth. An arc of angle φ radians on the surface of the earth has length *M* = *R*φ, so an arc *M* miles long corresponds to an angle of φ = *M*/*R* radians. Moving due north or due south does not change longitude.

Moving east/west is a little more complicated. At the equator, the calculation is just like the calculation above, except that longitude changes rather than latitude. But the distance corresponding to one degree of longitude changes with latitude. For example, one degree of longitude along the Arctic Circle doesn’t take you nearly as far as it does at the equator.

Suppose you’re at latitude φ north of the equator. The circumference of a circle at constant latitude φ, a circle parallel to the equator, is cos φ times the circumference of the equator. So at latitude φ an angle of θ radians describes an arc of length *M* = *R* θ cos φ, and a distance of *M* miles east or west corresponds to a change in longitude of θ = *M*/(*R* cos φ) radians. Moving due east or due west does not change latitude.

The derivation above works with angles in radians. Python’s cosine function also works in radians. But longitude and latitude are usually expressed in degrees, so function inputs and outputs are in degrees.

```python
import math

# Distances are measured in miles.
# Longitudes and latitudes are measured in degrees.
# Earth is assumed to be perfectly spherical.

earth_radius = 3960.0
degrees_to_radians = math.pi/180.0
radians_to_degrees = 180.0/math.pi

def change_in_latitude(miles):
    "Given a distance north, return the change in latitude."
    return (miles/earth_radius)*radians_to_degrees

def change_in_longitude(latitude, miles):
    "Given a latitude and a distance west, return the change in longitude."
    # Find the radius of a circle around the earth at given latitude.
    r = earth_radius*math.cos(latitude*degrees_to_radians)
    return (miles/r)*radians_to_degrees
```
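Here is a quick sketch of how these functions might be used (the functions are restated so the snippet runs on its own):

```python
import math

# Restated from above so this example is self-contained.
earth_radius = 3960.0  # miles

def change_in_latitude(miles):
    "Given a distance north, return the change in latitude in degrees."
    return math.degrees(miles / earth_radius)

def change_in_longitude(latitude, miles):
    "Given a latitude and a distance west, return the change in longitude in degrees."
    r = earth_radius * math.cos(math.radians(latitude))
    return math.degrees(miles / r)

print(change_in_latitude(100))       # about 1.45 degrees for 100 miles north
print(change_in_longitude(30, 100))  # about 1.67 degrees west at latitude 30
```

Note that the same 100 miles east or west corresponds to a larger change in longitude at higher latitudes, just as the derivation above predicts.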

Here are three of my favorite podcast intro themes.

.NET Rocks by Carl Franklin and Richard Campbell.

Carl Franklin composed the intro theme, Toy Boy, and recorded the song with his brother Jay. The tune is catchy, the words are clever, and Carl’s a great musician. Richard and Carl talk over the intro, but you can hear these odd phrases poking out, such as “got a transmitter banned by the FCC.” After listening to the podcast for a while, I decided I had to find the theme song and listen to Toy Boy without the voice overs. Here’s more music by Carl Franklin.

Hanselminutes by Scott Hanselman.

The theme song is just a short loop, but it’s fun music. I wrote Scott a note asking him about the intro. I was hoping the loop was taken from a longer song I could buy somewhere and thought I’d like to find more music by the same composer. Scott said that his theme song was written for his podcast by Carl Franklin. I was surprised that Carl came up again, but this isn’t totally unexpected since Carl’s company Pwop Productions produces Hanselminutes.

Accidental Creative by Todd Henry.

The theme song is My City In Healing from A Slave Left Dreaming by Joshua Seurkamp. The song is a blend of Eastern and Western music, appropriate for a podcast that emphasizes creatively combining ideas.

I also wanted to mention the theme from the Science Magazine podcast. It’s not music I particularly enjoy listening to, but it is written in 5/4 time, something that has come up for discussion on this blog.

**Related post**: Interview with Carl Franklin

The long-awaited 51st Carnival of Mathematics is up at squareCircleZ.

Everybody thinks Dilbert is about their job. But this cartoon really *is* about my job. It does a remarkably good job of summarizing what it’s like to work in cancer research.

Related posts on cancer research

One strategy for increasing job security is to make yourself indispensable by never documenting anything. *Deliberately* following such a strategy is unethical. *Passively* falling into such a situation is more understandable, and more common, but it’s not very smart either.

If you’re indispensable, you can hold on to your job — maybe. But the flip side is that you can’t let go of your job either. You can never wash your hands of a project, never hand it over to someone else. You cannot be promoted. You’ll need to take your laptop with you on vacation, if you’re able to take vacation.

I’ve seen this play out in software projects that are never quite finished. The project minimally works, but only with the developer’s intervention. The developer isn’t trying to be indispensable. Quite the opposite: the developer desperately wants to get away from the project. But the software isn’t stable. Bugs are discovered every time a new part of the code is exercised. These may be fixed quickly, but only by the original developer. Or maybe the code is stable, but only the original developer can reproduce the build. Or some part of the code ought to be configurable, but instead the developer has to constantly tweak the source code. For whatever reason, the project isn’t wrapped up and the developer cannot extricate himself from it.

The solution is to plan to make yourself dispensable from the beginning. Ask yourself throughout the project, “How am I going to be able to hand this over to someone else?” Or more graphically, “What if I get hit by a bus?”

Make yourself valuable for what you’re expected to accomplish in the future, not for what you’ve accomplished in the past.

**Related posts**:

Nicholas Carr has an interesting post entitled simply Clutter. The post begins by discussing Tim Bray’s vision of a sort of high-tech monastic cell and moves into an explanation of why electronic books are fundamentally different from paper books.

Tim Bray has gotten rid of his CD cases and is now talking about getting rid of his books. From Nicholas Carr’s blog post:

He [Tim Bray] has a sense that removing the “clutter” of his books, along with his other media artifacts, will turn his home into a secular version of a “monastic cell”: “I dream of a mostly-empty room, brilliantly lit, the outside visible from inside. The chief furnishings would be a few well-loved faces and voices because it’s about people not things.” He is quick to add, though, that it will be a monastic cell outfitted with the latest data-processing technologies. Networked computers will “bring the universe of words and sounds and pictures to hand on demand. But not get dusty or pile up in corners.”

(Tim Bray’s ideal of a secular monastery made me think of musician John Michael Talbot, a real monk living in a real monastery. I heard someone describe Talbot’s living quarters as a sparse cell with a fantastic sound system.)

Carr is dubious that Bray can achieve his goal by digitizing his books. Paper books are more conducive to the serenity Bray desires.

The irony in Bray’s vision of a bookless monastic cell is that it was the printed book itself that brought the ethic of the monastery — the ethic of deep attentiveness, of contemplativeness, of singlemindedness — to the general public.

I find Tim Bray’s ideal attractive, but I would selectively digitize books. For example, I would be fine converting my copy of The Python Cookbook to digital form. But I cannot imagine reading Will Durant’s Story of Civilization from a screen.

**Related posts**:

Judging from the comments on previous posts, it seems a good number of Dave Brubeck fans read this blog. Everyone familiar with Dave Brubeck knows about Take Five from his album Time Out.

But I wonder how many know about his album “To Hope! A Celebration.”

The album is a Roman Catholic mass containing a beautiful mixture of classical and jazz music. It features the Cathedral Choral Society Chorus & Orchestra as well as the Dave Brubeck Quartet. It was recorded live at Washington National Cathedral on June 12, 1995.

According to Wikipedia, Brubeck was not a Catholic when the mass was commissioned but joined the Catholic church shortly after the piece was finished.

**Related posts**:

The latest .NET Rocks podcast interviews Pat Hynds on why projects fail. Toward the end of his interview he mentions a simple template for status reports.

1. What did you work on?
2. What did you get done?
3. What did you do that you didn’t anticipate having to do?
4. What did you plan to do that you didn’t get done?
5. What do you plan to do?
6. What do you need from others?

When I started managing a group of programmers, I’d focus on #1 and #2. But in some ways #3 is the most important question. That question can alert you to a major time sink that’s not included in your project estimates. That question can let you know of problems beyond an individual developer’s ability to resolve. That question can tell you it’s time to buy something you were planning on building yourself.

I’m teaching an introduction to Bayesian statistics. My first thought was to start with Bayes’ theorem, as many introductions do. But this isn’t the right starting point. Bayes’ theorem is an indispensable tool for Bayesian statistics, but it is not the foundational principle. The foundational principle of Bayesian statistics is the decision to represent uncertainty by probabilities. Unknown parameters have probability distributions that represent the uncertainty in our knowledge of their values.

Once you decide to use probabilities to express parameter uncertainty, you inevitably run into the need for Bayes’ theorem to work with these probabilities. Bayes’ theorem is applied constantly in Bayesian statistics, and that is why the field takes its name from the theorem’s author, Reverend Thomas Bayes (1702-1761). But “Bayesian” doesn’t describe Bayesian statistics quite the same way that “frequentist” describes frequentist statistics. The term “frequentist” gets to the heart of how frequentist statistics interprets probability. But “Bayesian” refers to Bayes’ theorem, a **computational tool** for carrying out probability calculations in Bayesian statistics. If frequentist statistics were analogously named, it might be called “Bernoullian statistics” after Jacob Bernoulli’s law of large numbers.

The term “Bayesian statistics” might imply that frequentist statisticians dispute Bayes’ theorem. That is not the case. Bayes’ theorem is a simple mathematical result. What people dispute is the interpretation of the probabilities that Bayesians want to stick into Bayes’ theorem.
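To underline how uncontroversial the theorem itself is, here is a small numerical illustration (the numbers are hypothetical, chosen only for the sketch):

```python
# Hypothetical diagnostic-test numbers, purely illustrative.
prior = 0.01           # P(disease)
sensitivity = 0.95     # P(positive test | disease)
false_positive = 0.05  # P(positive test | no disease)

# Bayes' theorem: P(disease | positive) = P(pos | disease) P(disease) / P(pos)
evidence = prior * sensitivity + (1 - prior) * false_positive
posterior = prior * sensitivity / evidence
# posterior is about 0.16: even after a positive test, disease remains unlikely.
```

The arithmetic is not in question; what Bayesians and frequentists disagree about is whether a quantity like `prior` is a legitimate probability in the first place.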

I don’t have a better name for Bayesian statistics. Even if I did, the name “Bayesian” is firmly established. It’s certainly easier to say “Bayesian statistics” than to say “that school of statistics that represents uncertainty in unknown parameters by probabilities,” even though the latter is accurate.

**Related posts**: