The goal of a clinical trial is to determine what treatment will be most effective in a given population. What if the population changes while you’re conducting your trial? Say you’re treating patients with Drug X and Drug Y, and initially more patients were responding to X, but later more responded to Y. Maybe you’re just seeing random fluctuation, but maybe things really are changing and the rug is being pulled out from under your feet.
Advances in disease detection could cause a trial to enroll more patients with early-stage disease as the trial proceeds. Changes in the standard of care could also make a difference. Patients often enroll in a clinical trial because standard treatments have been ineffective. If the standard of care changes during a trial, the early patients might be resistant to one therapy while later patients are resistant to another therapy. Often population drift is slow compared to the duration of a trial and doesn't affect your conclusions, but that is not always the case.
My interest in population drift comes from adaptive randomization. In an adaptive randomized trial, the probability of assigning patients to a treatment goes up as evidence accumulates in favor of that treatment. The goal of such a trial design is to assign more patients to the more effective treatments. But what if patient response changes over time? Could your efforts to assign the better treatments more often backfire? A trial could assign more patients to what was the better treatment rather than what is now the better treatment.
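The post doesn't specify which adaptive rule is used, but one common way to make assignment probability track accumulating evidence is Thompson sampling with Beta posteriors. The sketch below (the function name is mine) assigns each patient by drawing a success probability for each arm from its posterior and picking the arm with the larger draw, so arms that have responded well are assigned more often.

```python
import random

def adaptive_assign(successes, failures):
    """Thompson-sampling assignment: draw a success probability for
    each arm from its Beta(successes + 1, failures + 1) posterior and
    assign the next patient to the arm with the larger draw. Arms with
    better observed outcomes are drawn more often, so the assignment
    probabilities adapt as evidence accumulates."""
    draws = [random.betavariate(s + 1, f + 1)
             for s, f in zip(successes, failures)]
    return max(range(len(draws)), key=lambda i: draws[i])
```

With lopsided evidence, say 50 successes and no failures on arm 0 versus the reverse on arm 1, this rule assigns arm 0 almost every time, which is exactly the behavior that could backfire if the population drifts.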
On average, adaptively randomized trials do treat more patients effectively than do equally randomized trials. The report "Power and bias in adaptive randomized clinical trials" shows this is the case in a wide variety of circumstances, but it assumes constant response rates, i.e., it does not address population drift.
I did some simulations to see whether adaptive randomization could do more harm than good. I looked at more extreme population drift than one is likely to see in practice in order to exaggerate any negative effect. I looked at gradual changes and sudden changes. In all my simulations, the adaptive randomization design treated more patients effectively on average than the comparable equal randomization design. I wrote up my results in "The Effect of Population Drift on Adaptively Randomized Trials."
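The kind of experiment described above can be sketched in a few lines. This toy simulation (not the write-up's actual design; all names and rates are mine) runs a two-arm trial in which the response rates swap abruptly halfway through, an exaggerated sudden drift, and counts how many patients respond under adaptive (Thompson-sampling) versus equal randomization.

```python
import random

def simulate(n_patients=200, adaptive=True, seed=0):
    """Toy two-arm trial with abrupt population drift: arm 0 responds
    at 0.7 and arm 1 at 0.3 for the first half of the trial, then the
    rates swap. Returns the number of patients who responded. The
    drift here is deliberately more extreme than one would expect in
    practice, to exaggerate any negative effect."""
    rng = random.Random(seed)
    successes = [0, 0]
    failures = [0, 0]
    responders = 0
    for t in range(n_patients):
        rates = (0.7, 0.3) if t < n_patients // 2 else (0.3, 0.7)
        if adaptive:
            # Thompson sampling from Beta posteriors.
            draws = [rng.betavariate(s + 1, f + 1)
                     for s, f in zip(successes, failures)]
            arm = max(range(2), key=lambda i: draws[i])
        else:
            # Equal randomization: a fair coin flip.
            arm = rng.randrange(2)
        if rng.random() < rates[arm]:
            successes[arm] += 1
            responders += 1
        else:
            failures[arm] += 1
    return responders
```

Averaging `simulate(adaptive=True, seed=s)` and `simulate(adaptive=False, seed=s)` over many seeds lets you compare the two designs; equal randomization responds at the average of the two rates (about 100 of 200 patients here) regardless of drift.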
Related: Adaptive clinical trial design
My mind immediately went to the challenge: How do you adjust the ongoing analysis in an adaptive trial to monitor for the presence of population drift? You lose some power by adding another variable, but it might be worth it.
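A crude first version of the monitoring the comment describes is to compare response rates among early and late enrollees. The sketch below (names mine) does this with a two-proportion z-test; a proper adjustment would instead model enrollment time as a covariate in the ongoing analysis, at the cost in power the comment mentions.

```python
import math

def drift_check(early_successes, early_n, late_successes, late_n):
    """Crude drift monitor: two-proportion z-test comparing the
    response rate among early enrollees with the rate among later
    enrollees. A small p-value suggests the response rate has shifted
    during the trial. A toy check, not a substitute for modeling
    enrollment time in the analysis."""
    p1 = early_successes / early_n
    p2 = late_successes / late_n
    pooled = (early_successes + late_successes) / (early_n + late_n)
    se = math.sqrt(pooled * (1 - pooled) * (1 / early_n + 1 / late_n))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value
```

For example, 30/50 responders early versus 10/50 late gives a p-value well under 0.01, while identical early and late rates give z = 0.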
In the end, though, I (as a clinician) am acutely aware that we are discussing subtleties when the vast majority of daily clinical actions are taken on the basis of virtually no evidence whatsoever. Either the doc is not aware of the evidence, or the research simply hasn’t been done. The possibility (probability?) that population drift undermines the believability of what evidence we have – well, my mind shuts down and refuses to contemplate the implications seriously.
Should we spend research dollars trying to pursue new hypotheses, or continuously retest the ones we’ve done before?
Come to think of it, this might be an argument for the trend toward drug-comparison trials rather than placebo-controlled trials; at least that way we have ongoing evaluation of old therapies as we test new ones.