Go anywhere in the universe in two years

Here’s a totally impractical but fun back-of-the-envelope calculation from Bob Martin.

Suppose you have a space ship that could accelerate at 1 g for as long as you like. Inside the ship you would feel the same gravity as on earth. You could travel wherever you like by accelerating at 1 g for the first half of the flight then reversing acceleration for the second half of the flight. This approach could take you to Mars in three days.

If you could accelerate at 1 g for a year you could reach the speed of light, and travel half a light year. So you could reverse your acceleration and reach a destination a light year away in two years. But this ignores relativity. Once you’re traveling at near the speed of light, time practically stops for you, so you could keep going as far as you like without taking any more time from your perspective. So you could travel anywhere in the universe in two years!

Of course there are a few problems. We have no way to sustain such acceleration, or to build a ship that could survive an impact with a speck of dust when traveling at relativistic speed. And the calculation ignores relativity until throwing it in at the end. Still, it’s fun to think about.

Update: Dan Piponi gives a calculation on G+ that addresses the last of the problems I mentioned above, sticking relativity on to the end of a classical calculation. He does a proper relativistic calculation from the beginning.

If you take the radius of the observable universe to be 45 billion light years, then I think you need about 12.5 g to get anywhere in it in 2 years. (Both those quantities as measured in the frame of reference of the traveler.)

If you travel at constant acceleration a for time t then the distance covered is (c^2/a)(cosh(a t/c) − 1). (Note that this gives the usual a t^2/2 for small t.)
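
Piponi’s formula is easy to sanity-check numerically. Here is a sketch in Python, working in units of light years and years so that c = 1; the conversion 1 g ≈ 1.03 light years per year squared is my assumption, not from the post:

```python
import math

C = 1.0      # speed of light: 1 light year per year
G = 1.03     # 1 g expressed in light years per year^2 (approximate)

def distance(a, t):
    """Rest-frame distance covered after accelerating at constant
    proper acceleration a for proper time t."""
    return (C**2 / a) * (math.cosh(a * t / C) - 1.0)

# The small-t limit recovers the Newtonian a t^2 / 2.
print(distance(G, 0.01), G * 0.01**2 / 2)

# A year at 1 g covers roughly half a light year of rest-frame distance,
# matching the rough classical estimate above.
print(distance(G, 1.0))
```

The hyperbolic cosine grows exponentially in proper time, which is why modest proper times can cover cosmological distances.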

Read More

Timid medical research

Cancer research is sometimes criticized for being timid. Drug companies run enormous trials looking for small improvements. Critics say they should run smaller trials and more of them.

Which side is correct depends on what’s out there waiting to be discovered, which of course we don’t know. We can only guess. Timid research is rational if you believe there are only marginal improvements that are likely to be discovered.

Sample size increases quickly as the size of the effect you’re trying to find decreases, roughly as the inverse square of the effect size. To establish small differences in effect, you need very large trials.
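
To make that concrete, here is a rough sketch using the standard normal-approximation sample-size formula for comparing two response rates. The particular rates, significance level, and 80% power are my illustration, not from the post:

```python
from math import sqrt
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Approximate patients per arm needed to detect response rate p1
    versus p2 with a two-sided z-test (normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    pbar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * pbar * (1 - pbar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return num / (p1 - p2) ** 2

print(round(n_per_arm(0.10, 0.20)))   # a 10-point improvement: roughly 200
print(round(n_per_arm(0.10, 0.11)))   # a 1-point improvement: roughly 15,000
```

Shrinking the detectable difference from ten percentage points to one inflates the required trial by a factor of nearly a hundred.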

If you think there are only small improvements on the status quo available to explore, you’ll explore each of the possibilities very carefully. On the other hand, if you think there’s a miracle drug in the pipeline waiting to be discovered, you’ll be willing to risk falsely rejecting small improvements along the way in order to get to the big improvement.

Suppose there are 500 drugs waiting to be tested. All of these are only 10% effective except for one that is 100% effective. You could quickly find the winner by giving each candidate to one patient, keeping only the drugs whose patient responded, and repeating until one drug is left. One strike and you’re out. You’re likely to find the winner in three rounds, treating fewer than 600 patients. But if all the drugs are 10% effective except one that’s 11% effective, you’d need hundreds of trials with thousands of patients each.
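
The one-strike elimination scheme above is easy to simulate. In this sketch, drug 0 plays the role of the 100%-effective winner:

```python
import random

def find_winner(n_drugs=500, p_dud=0.10, seed=1):
    """Each round, give every surviving drug to one patient and keep
    only the drugs whose patient responded. Drug 0 always works;
    the duds work with probability p_dud."""
    rng = random.Random(seed)
    survivors = list(range(n_drugs))
    patients = rounds = 0
    while len(survivors) > 1:
        patients += len(survivors)
        survivors = [d for d in survivors
                     if d == 0 or rng.random() < p_dud]
        rounds += 1
    return survivors[0], rounds, patients

winner, rounds, patients = find_winner()
print(winner, rounds, patients)
```

Across random seeds the winner is typically isolated in three or four rounds, treating fewer than 600 patients, matching the estimate above.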

The best research strategy depends on what you believe is out there to be found. People who know nothing about cancer often believe we could find a cure soon if we just spend a little more money on research. Experts are less sanguine, except when they’re asking for money.

Read More

Some fields produce more false results than others

John Ioannidis stirred up a healthy debate when he published Why Most Published Research Findings Are False. Unfortunately, most of the discussion has been over whether the word “most” is correct, i.e. whether the proportion of false results is more or less than 50 percent. At least there is more awareness that some published results are false and that it would be good to have some estimate of the proportion.

However, a more fundamental point has been lost. At the core of Ioannidis’ paper is the assertion that the proportion of true hypotheses under investigation matters. In terms of Bayes’ theorem, the posterior probability of a result being correct depends on the prior probability of the result being correct. This prior probability is vitally important, and it varies from field to field.

In a field where it is hard to come up with good hypotheses to investigate, most researchers will be testing false hypotheses, and most of their positive results will be coincidences. In another field where people have a good idea what ought to be true before doing an experiment, most researchers will be testing true hypotheses and most positive results will be correct.
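
This can be made concrete with a two-line Bayes calculation. If a field tests hypotheses that are true with a given prior probability, and studies run at significance level α with a given power, the probability that a positive result is true is essentially the positive predictive value in Ioannidis’ paper (stated here with probabilities rather than odds, and ignoring bias; the example priors are mine):

```python
def ppv(prior, power=0.80, alpha=0.05):
    """Probability that a statistically significant finding is true,
    given the prior probability that tested hypotheses are true."""
    true_positives = power * prior
    false_positives = alpha * (1 - prior)
    return true_positives / (true_positives + false_positives)

print(ppv(0.01))   # long-shot field: ~0.14, so most positives are false
print(ppv(0.50))   # theory-guided field: ~0.94, so most positives are true
```

The same significance threshold and the same power yield wildly different proportions of true findings depending only on the prior.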

For example, it’s very difficult to come up with a better cancer treatment. Drugs that kill cancer in a petri dish or in animal models usually don’t work in humans. One reason is that these drugs may cause too much collateral damage to healthy tissue. Another reason is that treating human tumors is more complex than treating artificially induced tumors in lab animals. Of all cancer treatments that appear to be an improvement in early trials, very few end up receiving regulatory approval and changing clinical practice.

A greater proportion of physics hypotheses are correct because physics has powerful theories to guide the selection of experiments. Experimental physics often succeeds because it has good support from theoretical physics. Cancer research is more empirical because there is little reliable predictive theory. This means that a published result in physics is more likely to be true than a published result in oncology.

Whether “most” published results are false depends on context. The proportion of false results varies across fields. It is high in some areas and low in others.

Read More

Techniques, discoveries, and ideas

“Progress in science depends on new techniques, new discoveries, and new ideas, probably in that order.” — Sidney Brenner

I’m not sure whether I agree with Brenner’s quote, but I find it interesting. You could argue that techniques are most important because they have the most leverage. A new technique may lead to many new discoveries and new ideas.

Read More

Academic freedom

This tweet from Luis Pedro Coelho says so much in 140 characters:

“Oh, the intellectual freedom of academia” he thought while filling out a time sheet which checks that he does not work on non-grant science.

Read More

NYT Book of Physics and Astronomy

I’ve enjoyed reading The New York Times Book of Physics and Astronomy, a collection of 129 articles written between 1888 and 2012. It’s been much more interesting than its mathematical predecessor. I’m not objective — I have more to learn from a book on physics and astronomy than a book on math — but I think other readers might also find this new book more interesting.

I was surprised by the articles on the bombing of Hiroshima and Nagasaki. New York Times reporter William Lawrence was allowed to go on the mission over Nagasaki. He was not on the plane that dropped the bomb, but was in one of the other B-29 Superfortresses that were part of the mission. Lawrence’s story was published September 9, 1945, exactly one month later. Lawrence was also allowed to tour the ruins of Hiroshima. His article on the experience was published September 5, 1945. I was surprised how candid these articles were and how quickly they were published. Apparently military secrecy evaporated rapidly once WWII was over.

Another thing that surprised me was that some stories were newsworthy more recently than I would have thought. I suppose I underestimated how long it took to work out the consequences of a major discovery. I think we’re also biased to think that whatever we learned as children must have been known for generations, even though the dust may have only settled shortly before we were born.

Read More

Continuous quantum

David Tong argues that quantum mechanics is ultimately continuous, not discrete.

In other words, integers are not inputs of the theory, as Bohr thought. They are outputs. The integers are an example of what physicists call an emergent quantity. In this view, the term “quantum mechanics” is a misnomer. Deep down, the theory is not quantum. In systems such as the hydrogen atom, the processes described by the theory mold discreteness from underlying continuity. … The building blocks of our theories are not particles but fields: continuous, fluidlike objects spread throughout space. … The objects we call fundamental particles are not fundamental. Instead they are ripples of continuous fields.

Source: The Unquantum Quantum, Scientific American, December 2012.

Read More

Pure math and physics

From Paul Dirac, 1938:

Pure mathematics and physics are becoming ever more closely connected, though their methods remain different. One may describe the situation by saying that the mathematician plays a game in which he himself invents the rules while the physicist plays a game in which the rules are provided by Nature, but as time goes on it becomes increasingly evident that the rules which the mathematician finds interesting are the same as those which Nature has chosen.

Read More

Playful and purposeful, pure and applied

From Edwin Land, inventor of the Polaroid camera:

… applied science, purposeful and determined, and pure science, playful and freely curious, continuously support and stimulate each other. The great nation of the future will be the one which protects the freedom of pure science as much as it encourages applied science.

Read More

How to double science research

Scientists spend 40% of their time chasing grants, according to some estimates. Suppose they spend another 20% of their time on other duties, such as teaching. That leaves no more than 40% of their time for research.

If universities simply paid their faculty a salary rather than giving them a hunting license for grants, the faculty could spend 80% of their time on research rather than 40%. Of course the numbers wouldn’t actually work out so simply. But it is safe to say that if you remove something that takes 40% of their time, researchers could spend more time doing research. (Researchers working in the private sector are often paid by grants too, so to some extent this applies to them as well.)

Universities depend on grant money to pay faculty. But if the money allocated for research were given to universities instead of individuals, universities could afford to pay their faculty.

Not only that, universities could reduce the enormous bureaucracies created to manage grants. This isn’t purely hypothetical. When Hillsdale College decided to refuse all federal grant money, they found that the loss wasn’t nearly as large as it seemed because so much of the grant money had been going to administering grants.

Read More

How mathematicians see physics

From the preface to Physics for Mathematicians:

In addition to presenting the advanced physics, which mathematicians find so easy, I also want to explore the workings of elementary physics, and mysterious maneuvers — which physicists seem to find so natural — by which one reduces a complicated physical problem to a simple mathematical question, which I have always found so hard to fathom.

That’s exactly how I feel about physics. I’m comfortable with differential equations and manifolds. It’s blocks and pulleys that kick my butt.

Read More

History of weather prediction

I’ve just started reading Invisible in the Storm: The Role of Mathematics in Understanding Weather.

The subtitle may be a little misleading. There is a fair amount of math in the book, but the ratio of history to math is pretty high. You might say the book is more about the role of mathematicians than the role of mathematics. As Roger Penrose says on the back cover, the book has “illuminating descriptions and minimal technicality.”

Someone interested in weather prediction but without a strong math background would enjoy reading the book, though someone who knows more math will recognize some familiar names and theorems and will better appreciate how they fit into the narrative.

Related posts:


Evaluating weather forecast accuracy: an interview with Eric Floehr

Accuracy versus perceived accuracy

Read More

Are tweets more accurate than science papers?

John Myles White brings up an interesting question on Twitter:

Ioannidis thinks most published biological research findings are false. Do you think >50% of tweets are false?

I’m inclined to think tweets may be more accurate than research papers, mostly because people tweet about mundane things that they understand. If someone says that there’s a long line at the Apple store, I believe them. When someone says that a food increases or decreases your risk of some malady, I’m more skeptical. I’ll wait to see such a result replicated before I put much faith in it. A lot of tweets are jokes or opinions, but those that are factual statements are often true.

Tweets are not subject to publication pressure; few people risk losing their job if they don’t tweet. There’s also no positive publication bias: people can tweet positive or negative conclusions. There is a bias toward tweeting what makes you look good, but that’s not limited to Twitter.

Errors are corrected quickly on Twitter. When I make factual errors on Twitter, I usually hear about it within minutes. As the saga of Anil Potti illustrates, errors or fraud in scientific papers can take years to retract.

(My experience with Twitter may be atypical. I follow people with a relatively high signal to noise ratio, and among those I have a shorter list that I keep up with.)

Related:

My Twitter accounts
Popular research areas produce more false results

Read More

Sun, milk, red meat, and least-squares

I thought this tweet from @WoodyOsher was pretty funny.

Everything our parents said was good is bad. Sun, milk, red meat … the least-squares method.

I wouldn’t say these things are bad, but they are now viewed more critically than they were a generation ago.

Sun exposure may be an apt example since it has alternately been seen as good or bad throughout history. The latest I’ve heard is that moderate sun exposure may lower your risk of cancer, even skin cancer, presumably because of vitamin D production. And sunlight appears to reduce your risk of multiple sclerosis since MS is more prevalent at higher latitudes. But like milk, red meat, or the least squares method, you can overdo it.

More on least squares: When it works, it works really well

Read More

Personalized medicine

When I hear someone say “personalized medicine” I want to ask “as opposed to what?”

All medicine is personalized. If you are in an emergency room with a broken leg and the person next to you is lapsing into a diabetic coma, the two of you will be treated differently.

The aim of personalized medicine is to increase the degree of personalization, not to introduce personalization. In particular, there is the popular notion that it will become routine to sequence your DNA any time you receive medical attention, and that this sequence data will enable treatment uniquely customized for you. All we have to do is collect a lot of data and let computers sift through it. There are numerous reasons why this is incredibly naive. Here are three to start with.

  • Maybe the information relevant to treating your malady is in how DNA is expressed, not in the DNA per se, in which case a sequence of your genome would be useless. Or maybe the most important information is not genetic at all. The data may not contain the answer.
  • Maybe the information a doctor needs is not in one gene but in the interaction of 50 genes or 100 genes. Unless a small number of genes are involved, there is no way to explore the combinations by brute force. For example, the number of ways to select 5 genes out of 20,000 is 26,653,335,666,500,004,000. The number of ways to select 32 genes is over a googol, and there isn’t a googol of anything in the universe. Moore’s law will not get us around this impasse.
  • Most clinical trials use no biomarker information at all. It is exceptional to incorporate information from one biomarker. Investigating a handful of biomarkers in a single trial is statistically dubious. Blindly exploring tens of thousands of biomarkers is out of the question, at least with current approaches.
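
The combinatorial claims above are easy to verify with exact integer arithmetic in Python (a quick check, not part of the original post):

```python
from math import comb, log10

n_genes = 20_000

# Ways to choose 5 genes out of 20,000:
print(comb(n_genes, 5))          # 26,653,335,666,500,004,000

# Ways to choose 32 genes: the base-10 logarithm is about 102,
# comfortably more than a googol (10^100).
print(log10(comb(n_genes, 32)))
```

`math.comb` computes binomial coefficients exactly, so there is no floating-point rounding in the first count.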

Genetic technology has the potential to incrementally increase the degree of personalization in medicine. But these discoveries will require new insight, and not simply more data and more computing power.

Related posts:

Predicting height from genes
Why microarray studies are often wrong

Read More