Why drugs often list headache as a side-effect

In an interview on Biotech Nation, Gary Cupit made an offhand remark about why so many drugs list headache as a possible side-effect: clinical trial participants are often asked to abstain from coffee during the trial, and caffeine withdrawal causes headaches. That also explains why those who receive a placebo often complain of headache as well.

Cupit’s company Somnus Therapeutics makes a sleep medication for people who have no trouble going to sleep but do have trouble staying asleep. The medication has a timed release so that it is active only in the middle of the night, when it is needed. One of the criteria by which the drug is evaluated is whether there is a lingering effect the next morning. Obviously researchers would like to eliminate coffee consumption as a confounding variable. But this contributes to the litany of side-effects that announcers must mumble in television commercials.


Subtle variation on gaining weight to become taller

Back in March I wrote a blog post asking whether gaining weight makes you taller. Weight and height are clearly associated, and from that data alone one might speculate that gaining weight could make you taller. Of course causation is in the other direction: becoming taller generally makes you gain weight.

In the 1980s, cardiologists discovered that patients with irregular heartbeats in the first 12 days following a heart attack were much more likely to die. Antiarrhythmic drugs became standard therapy. But in the next decade cardiologists discovered this was a bad idea. According to Philip Devereaux, “The trial didn’t just show that the drugs weren’t saving lives, it showed they were actually killing people.”

David Freedman relates the story above in his book Wrong. Freedman says

In fact, notes Devereaux, the drugs killed more Americans than the Vietnam War did—roughly an average of forty thousand a year died from the drugs in the United States alone.

Cardiologists had good reason to suspect that antiarrhythmic drugs would save lives. In retrospect, it may be that heart-attack patients with a poor prognosis tend to have arrhythmias, rather than the arrhythmias causing the poor prognosis. Or the association may be more complicated than either explanation.


I promise I’m not trying to learn anything

Medical experiments come under greater scrutiny than ordinary medical practice. There are good reasons for such precautions, but this leads to a sort of paradox. As Frederick Mosteller observed

We have a strange double standard now. As long as a physician treats a patient intending to cure, the treatment is admissible. When the object is to find out whether the treatment has value, the physician is immediately subject to many constraints.

If a physician has two treatment options, A and B, he can assign either one as long as he believes the treatment he chooses is best. But if he admits that he doesn’t know which is better and says he wants to treat some patients each way in order to get a better idea of how they compare, then he has to propose a study and go through a long review process.

I agree with Mosteller that we have a strange double standard, that a doctor is free to do what he wants as long as he doesn’t try to learn anything. On the other hand, review boards reduce the chances that patients will be asked to participate in ill-conceived experiments by looking for possible conflicts of interest, weaknesses in statistical design, etc. And such precautions are more necessary in experimental medicine than in more routine medicine. Still, there is more uncertainty in medicine than we may like to admit, and the line between “experimental” and “routine” can be fuzzy.


Something like a random sequence but …

When people ask for a random sequence, they’re often disappointed with what they get.

Random sequences clump more than most folks expect. For graphical applications, quasi-random sequences may be more appropriate. These sequences are “more random than random” in the sense that they behave more like what many people expect from randomness: they jitter around like a random sequence, but they don’t clump as much.
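As an illustration, here is a minimal sketch of one common quasi-random construction, the base-2 van der Corput sequence. It is just one member of the low-discrepancy family alluded to above, not the only choice, and the comparison with pseudorandom numbers is only meant to show the difference in clumping.

```python
# Quasi-random vs. pseudorandom points on [0, 1).
# The van der Corput sequence reverses the binary digits of n about the
# "decimal" point, which spreads the points out evenly instead of clumping.
import random

def van_der_corput(n, base=2):
    """Return the nth term of the van der Corput sequence in the given base."""
    x, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, base)
        denom *= base
        x += digit / denom
    return x

quasi = [van_der_corput(i) for i in range(1, 17)]
pseudo = [random.random() for _ in range(16)]
print(sorted(quasi))    # evenly spread over [0, 1)
print(sorted(pseudo))   # typically shows gaps and clumps
```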

Researchers conducting clinical trials are dismayed when a randomized trial puts several patients in a row on the same treatment. They want to assign patients one at a time to one of two treatments with equal probability, but they also want the allocation to work out evenly. This is like saying you want to flip a coin 100 times, and you also want to get exactly 50 heads and 50 tails. You can’t guarantee both, but there are effective compromises.

One approach is to randomize in blocks. For example, you could randomize in blocks of 10 patients by taking a sequence of 5 A’s and 5 B’s and randomly permuting the 10 letters. This guarantees that the allocations will be balanced, but some outcomes will be predictable. At a minimum, the last assignment in each block is always predictable: you assign whatever is left. Assignments could be even more predictable: if you give n A’s in a row in a block of 2n, you know the last n assignments will be all B’s.
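Here is a minimal Python sketch of permuted-block randomization as described above; the block size of 10 is just the example from the text.

```python
# Permuted-block randomization: each block contains equal numbers of A's
# and B's in a random order, so the running totals stay close.
import random

def permuted_block(block_size=10):
    """Generate one randomly permuted block with equal A and B assignments."""
    block = ["A"] * (block_size // 2) + ["B"] * (block_size // 2)
    random.shuffle(block)
    return block

assignments = [arm for _ in range(3) for arm in permuted_block()]  # 30 patients
print(assignments)
print(assignments.count("A"), assignments.count("B"))  # always balanced: 15 and 15
```

Within any block the imbalance can never exceed 5, and at the end of each block it returns to zero.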

Another approach is to “encourage” balance rather than enforce it. When you’ve given more A’s than B’s, you could increase the probability of assigning a B. The greater the imbalance, the more heavily you bias the randomization probability in favor of the treatment that has been assigned less often. This is a sort of compromise between equal randomization and block randomization. All assignments are random, though some assignments may be more predictable than others. Large imbalances are less likely than with equal randomization, but more likely than with block randomization. You can tune how aggressively the method responds to imbalances in order to make the method more like equal randomization or more like block randomization.
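Here is a minimal sketch of one adaptive rule in this spirit. The particular formula (an urn-style biased coin) and the tuning constant c are illustrative choices rather than a prescription from any one method: a large c behaves almost like equal randomization, a small c pushes hard toward balance.

```python
# An adaptive "biased coin": the arm that is behind gets a higher probability.
import random

def next_assignment(n_a, n_b, c=2.0):
    """Assign the next patient, biasing toward the under-represented arm."""
    p_a = (n_b + c) / (n_a + n_b + 2 * c)  # more B's so far => higher chance of A
    return "A" if random.random() < p_a else "B"

n_a = n_b = 0
for _ in range(20):
    if next_assignment(n_a, n_b) == "A":
        n_a += 1
    else:
        n_b += 1
print(n_a, n_b)  # usually close to 10 and 10, but equality is not guaranteed
```

When the arms are balanced the probability is exactly 1/2, so every assignment remains genuinely random.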

No approach to randomization will satisfy everyone because there are conflicting requirements. Randomization is a dilemma to be managed rather than a problem to be solved.


Malaria on the prairie

My family loves the Little House on the Prairie books. We read them aloud to our three oldest children and we’re in the process of reading them with our fourth child. We just read the chapter describing when the entire Ingalls family came down with malaria, or “fever ‘n’ ague” as they called it.

The family had settled near a creek that was infested with mosquitoes. All the settlers around the creek bottoms came down with malaria, though at the time (circa 1870) they did not know the disease was transmitted by mosquitoes. One of the settlers, Mrs. Scott, believed that malaria was caused by eating the watermelons that grew in the creek bottoms. She had empirical evidence: everyone who had eaten the melons contracted malaria. Charles Ingalls thought that was ridiculous. After he recovered from his attack of malaria, he went down to the creek and brought back a huge watermelon and ate it. His reasoning was that “Everybody knows that fever ‘n’ ague comes from breathing the night air.”

It’s easy to laugh at Mrs. Scott and Mr. Ingalls. What ignorant, superstitious people. But they were no more ignorant than their contemporaries, and both had good reasons for their beliefs. Mrs. Scott had observational data on her side. Ingalls was relying on the accepted wisdom of his day. (After all, “malaria” means “bad air.”)

People used to believe all kinds of things that are absurd now, particularly in regard to medicine. But they were also right about many things that are hard to enumerate now because we take them for granted. Stories of conventional wisdom being correct are not interesting, unless there was some challenge to that wisdom. The easiest examples of folk wisdom to recall may be the instances in which science initially contradicted folk wisdom but later confirmed it. For example, we have come back to believing that breast milk is best for babies and that a moderate amount of sunshine is good for you.


Managing biological data

Jon Udell’s latest Interviews with Innovators podcast features Randall Julian of Indigo BioSystems. I found this episode particularly interesting because it deals with issues I have some experience with.

The problems in managing biological data begin with how to store the raw experiment data. As Julian says

… without buying into all the hype around semantic web and so on, you would argue that a flexible schema makes more sense in a knowledge gathering or knowledge generation context than a fixed schema does.

So you need something less rigid than a relational database and something with more structure than a set of Excel spreadsheets. That’s not easy, and I don’t know whether anyone has come up with an optimal solution yet. Julian said that he has seen many attempts to put vast amounts of biological data into a rigid relational database schema but hasn’t seen this approach succeed yet. My experience has been similar.

Representing raw experimental data isn’t enough. In fact, that’s the easy part. As Jon Udell comments during the interview

It’s easy to represent data. It’s hard to represent the experiment.

That is, the data must come with enough context to make sense of it. Julian comments that without this context, the data may as well be a list of zip codes. And not only must you capture experimental context, you must also describe the analysis done to the data. (See, for example, this post about researchers making up their own rules of probability.)

Julian comments on how electronic data management is not nearly as common as someone unfamiliar with medical informatics might expect.

So right now maybe 50% of the clinical trials in the world are done using electronic data capture technology. … that’s the thing that maybe people don’t understand about health care and the life sciences in general is that there is still a huge amount of paper out there.

Part of the reason for so much paper goes back to the belief that one must choose between highly normalized relational data stores and unstructured files. Given a choice between inflexible bureaucracy and chaos, many people choose chaos. It may work about as well, and it’s much cheaper to implement. I’ve seen both extremes. I’ve also been part of a project using a flexible but structured approach that worked quite well.
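One way to picture a “flexible but structured” middle ground, purely as an illustration and not a description of Indigo BioSystems’ design: a small fixed relational core plus a free-form key/value table for experiment-specific context. The table and column names below are hypothetical.

```python
# A hybrid schema sketch: rigid where the data is predictable, flexible where
# each experiment needs its own context. Names and values are made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE experiment (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        run_date TEXT NOT NULL
    );
    CREATE TABLE experiment_attribute (  -- schema-free context per experiment
        experiment_id INTEGER REFERENCES experiment(id),
        key TEXT NOT NULL,
        value TEXT NOT NULL
    );
""")
conn.execute("INSERT INTO experiment VALUES (1, 'plasma assay 17', '2010-06-01')")
conn.executemany(
    "INSERT INTO experiment_attribute VALUES (1, ?, ?)",
    [("instrument", "LC-MS/MS"), ("analyst", "JRJ"), ("column_temp_C", "40")],
)
for row in conn.execute(
    "SELECT key, value FROM experiment_attribute WHERE experiment_id = 1"
):
    print(row)
```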


Bayesian clinical trials in one zip code

I recently ran across this quote from Mithat Gönen of Memorial Sloan-Kettering Cancer Center:

While there are certainly some at other centers, the bulk of applied Bayesian clinical trial design in this country is largely confined to a single zip code.

from “Bayesian clinical trials: no more excuses,” Clinical Trials 2009; 6; 203.

The zip code Gönen alludes to is 77030, the zip code of M. D. Anderson Cancer Center. I can’t say how much activity there is elsewhere, but certainly we design and conduct a lot of Bayesian clinical trials at MDACC.

Update: After over a decade working at MDACC, I left to start my own consulting business. If you’d like help with adaptive clinical trials, please let me know.


Off to Puerto Rico

I’m leaving today for San Juan. I’m giving a couple talks at a conference on clinical trials.

Puerto Rico is beautiful. (I want to say a “lovely island,” but then the song America from West Side Story gets stuck in my head.) Here are a couple photos from my last visit.

Science versus medicine

Before I started working for a cancer center, I was not aware of the tension between science and medicine. Popular perception is that the two go together hand in glove, but that’s not always true.

Physicians are trained to use their subjective judgment and to be decisive. And for good reason: making a fairly good decision quickly is often better than making the best decision eventually. But scientists must be tentative, withhold judgment, and follow protocols.

Sometimes physician-scientists can reconcile their two roles, but sometimes they have to choose to wear one hat or the other at different times.

The physician-scientist tension is just one facet of the constant tension between treating each patient effectively and learning how to treat future patients more effectively. Sometimes the interests of current patients and future patients coincide completely, but not always.

This ethical tension is part of what makes biostatistics a separate field of statistics. In manufacturing, for example, you don’t need to balance the interests of current light bulbs and future light bulbs. If you need to destroy 1,000 light bulbs to find out how to make better bulbs in the future, no big deal. But different rules apply when experimenting on people. Clinical trials will often use statistical designs that sacrifice some statistical power in order to protect the people participating in the trial. Ethical constraints make biostatistics interesting.

Probability that a study result is true

Suppose a new study comes out saying a drug or a food or a habit lowers your risk of some disease. What is the probability that the study’s result is correct? Obviously this is a very important question, but one that is not raised often enough.

I’ve referred to a paper by John Ioannidis (*) several times before, but I haven’t gone over the model he uses to support his claim that most study results are false. This post will look at some equations he derives for estimating the probability that a claimed positive result is correct.

First of all, let R be the ratio of true hypotheses to false hypotheses under investigation in a particular area. Of course we never know exactly what R is, but let’s pretend that somehow we knew that out of 1000 hypotheses being investigated in some area, 200 are correct. Then R would be 200/800 = 0.25. The value of R varies quite a bit, being relatively large in some fields of study and quite small in others. Imagine researchers pulling hypotheses to investigate from a hat. The probability of selecting a hypothesis that really is true would be R/(R+1), and the probability of selecting a false hypothesis would be 1/(R+1).

Let α be the probability of incorrectly declaring a false hypothesis to be true. Studies are often designed with the goal that α be 0.05. Let β be the probability that a study incorrectly concludes that a true hypothesis is false. In practice, β is far more variable than α. You might find study designs with β anywhere from 0.5 down to 0.01. The design choice β = 0.20 is common in some contexts.

There are two ways to publish a study claiming a new result: you could have selected a true hypothesis and correctly concluded that it was true, or you could have selected a false hypothesis but incorrectly concluded that it was true. The former has probability (1-β)R/(R+1) and the latter has probability α/(R+1). The total probability of concluding a hypothesis is true, correctly or incorrectly, is the sum of these two, i.e. ((1-β)R + α)/(R+1). The probability that a hypothesis is true given that the study concluded it was true, known as the positive predictive value or PPV, is the ratio of (1-β)R/(R+1) to ((1-β)R + α)/(R+1). In summary, under the assumptions above, the probability of a claimed result being true is (1-β)R/((1-β)R + α).
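A quick numerical check of this formula, with illustrative values of R, α, and β (the specific numbers are my own choices, not values from the paper):

```python
# PPV of a claimed positive result: (1 - beta) * R / ((1 - beta) * R + alpha)

def ppv(R, alpha, beta):
    """Probability that a claimed positive result is actually true."""
    return (1 - beta) * R / ((1 - beta) * R + alpha)

# R = 0.25 (200 true hypotheses out of 1000), alpha = 0.05, beta = 0.20
print(ppv(0.25, 0.05, 0.20))   # 0.8

# A harder field: 1 true hypothesis in 100, underpowered studies
print(ppv(0.01, 0.05, 0.50))   # about 0.09
```

In the second case (1 − β)R = 0.005 is less than α = 0.05, so the model predicts that most such claims are false, which is the situation discussed next.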

If (1 − β)R < α then the model says that a claim is more likely to be false than true. This can happen if R is small, i.e. only a small proportion of the hypotheses under investigation are true, or if β is large, i.e. studies are underpowered. If R is smaller than α, most claims will be false no matter how small you make β, i.e. no matter how large the study. This says that in a challenging area, where few of the ideas being investigated lead to progress, a large proportion of published results will be false, even if the individual researchers are honest and careful.

Ioannidis develops two other models refining the model above. Suppose that because of bias, some proportion of results that would otherwise have been reported as negative are reported as positive. Call this proportion u. The derivation of the positive predictive value is similar to that in the previous model, but messier. The final result is R(1-β + uβ)/(R(1-β + uβ) + α + u – αu). If 1 – β > α, which is nearly always the case, then the probability of a reported result being correct decreases as bias increases.

The final model considers the impact of multiple investigators testing the same hypothesis. If more people try to prove the same thing, it’s more likely that someone will get lucky and “prove” it, whether or not the thing to be proven is true. Leaving aside bias, if n investigators are testing each hypothesis, the probability that a positive claim is true is given by R(1 − β^n)/(R + 1 − (1 − α)^n − Rβ^n). As n increases, the probability of a positive claim being true decreases.
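Both refinements are straightforward to compute. Here is a minimal sketch using the formulas as stated above, again with illustrative parameter values:

```python
# Bias-adjusted and multiple-team versions of the PPV, per the formulas above.
# Parameter values are illustrative, not taken from the paper.

def ppv_bias(R, alpha, beta, u):
    """PPV when a proportion u of would-be negative results are reported as positive."""
    num = R * (1 - beta + u * beta)
    return num / (num + alpha + u - alpha * u)

def ppv_teams(R, alpha, beta, n):
    """PPV when n teams independently test the same hypothesis."""
    return R * (1 - beta**n) / (R + 1 - (1 - alpha)**n - R * beta**n)

print(ppv_bias(0.25, 0.05, 0.20, 0.10))  # about 0.59: bias pulls the PPV down from 0.8
print(ppv_teams(0.25, 0.05, 0.20, 5))    # about 0.53: more teams, lower PPV
```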

The probability of a result being true is often much lower than is commonly believed. One reason is that hypothesis testing focuses on the probability of the data given a hypothesis rather than the probability of a hypothesis given the data. Calculating the probability of a hypothesis given data relies on prior probabilities, such as the factors R/(R+1) and 1/(R+1) above. These prior probabilities are elusive and controversial, but they are critical in evaluating how likely it is that claimed results are true.

(*) John P. A. Ioannidis, Why most published research findings are false. CHANCE volume 18, number 4, 2005.

Sometimes it's right under your nose

Neptune was discovered in 1846. But Galileo’s notebooks describe a “star” he saw on 28 December 1612 and 2 January 1613 that we now know was Neptune. Galileo even noticed that his star was in a slightly different location for his two observations, but he chalked the difference up to observational error.

The men who discovered Neptune were not the first to see it; they were the first to realize what they were looking at.

Voyager 2 photo of Neptune via Wikipedia