In a dose-finding clinical trial, you have a small number of doses to test, and you hope to find the one with the best response. Here “best” may mean most effective, least toxic, closest to a target toxicity, some combination of criteria, etc.
Since your goal is to find the best dose, it seems natural to compare dose-finding methods by how often they find the best dose. This is what is most often done in the clinical trial literature. But this seemingly natural criterion is actually artificial.
Suppose a trial is testing doses of 100, 200, 300, and 400 milligrams of some new drug. Suppose further that on some scale of goodness, these doses rank 0.1, 0.2, 0.5, and 0.51. (Of course these goodness scores are unknown; the point of the trial is to estimate them. But you might make up some values for simulation, pretending with half your brain that these are the true values and pretending with the other half that you don’t know what they are.)
Now suppose you’re evaluating two clinical trial designs, running simulations to see how each performs. The first design picks the 400 mg dose, the best dose, 20% of the time and picks the 300 mg dose, the second best dose, 50% of the time. The second design picks each dose with equal probability. The latter design picks the best dose more often, but it picks a good dose less often.
In this scenario, the two largest doses are essentially equally good; it hardly matters how often a method distinguishes between them. The first method picks one of the two good doses 70% of the time while the second method picks one of the two good doses only 50% of the time.
This example was exaggerated to make a point: obviously it doesn’t matter how often a method can pick the better of two very similar doses, not when it very often picks a bad dose. But there are less obvious situations that are quantitatively different but qualitatively the same.
The goal is actually to find a good dose. Finding the absolute best dose is impossible. The most you could hope for is that a method finds with high probability the best of the four arbitrarily chosen doses under consideration. Maybe the best dose is 350 mg, 843 mg, or some other dose not under consideration.
A simple way to make evaluating dose-finding methods less arbitrary would be to estimate the benefit to patients. Finding the best dose is only a matter of curiosity in itself unless you consider how that information is used. Knowing the best dose is important because you want to treat future patients as effectively as you can. (And patients in the trial itself as well, if it is an adaptive trial.)
Suppose the measure of goodness in the scenario above is probability of successful treatment and that 1,000 patients will be treated at the dose level picked by the trial. Under the first design, there’s a 20% chance that 51% of the future patients will be treated successfully, and a 50% chance that 50% will be. The expected number of successful treatments from the two best doses is 352. Under the second design, the corresponding number is 252.5.
(To simplify the example above, I didn’t say how often the first design picks each of the two lowest doses. But the first design will result in at least 382 expected successes and the second design 327.5.)
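The arithmetic above is easy to check. Here is a short sketch that computes the expected number of successful treatments under each design, treating the goodness scores as success probabilities and assuming, as in the example, 1,000 future patients. The split of the first design’s remaining 30% between the two lowest doses is not given in the post, so the code puts it all on the worst dose to get the lower bound quoted above.

```python
# Goodness scores from the example, read as per-dose success probabilities
p_success = [0.1, 0.2, 0.5, 0.51]   # doses 100, 200, 300, 400 mg
horizon = 1000                       # assumed number of future patients

def expected_successes(p_select):
    """Expected successful treatments given dose-selection probabilities."""
    return horizon * sum(sel * suc for sel, suc in zip(p_select, p_success))

# Design 1: picks 300 mg 50% and 400 mg 20% of the time; the remaining
# 30% is unspecified, so place it on the worst dose for a lower bound.
design1_lower = expected_successes([0.3, 0.0, 0.5, 0.2])

# Design 2: picks each dose with equal probability.
design2 = expected_successes([0.25, 0.25, 0.25, 0.25])

print(round(design1_lower, 1))  # 382.0
print(round(design2, 1))        # 327.5
```

Restricting the sums to the two best doses gives the 352 and 252.5 figures quoted earlier; the full sums give the 382 and 327.5 lower bounds.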
You never know how many future patients will be treated according to the outcome of a clinical trial, but there must be some implicit estimate. If this estimate is zero, the trial is not worth conducting. In the example given here, the estimate of 1,000 future patients is irrelevant: the future patient horizon cancels out in a comparison of the two methods. The patient horizon matters when you want to include the benefit to patients in the trial itself. The patient horizon serves as a way to weigh the interests of current versus future patients, an ethically difficult comparison usually left implicit.
Related: Adaptive clinical trial design
4 thoughts on “Finding the best dose”
Nice summary of the essential philosophy of dose finding.
My comments start with the term itself: finding the dose. Doesn’t finding suggest that we once had it, lost it, and are now trying to find it again? With a mathematics background, why would one not call the procedure dose optimization or dose determination?
Determining – finding if you want – the optimal dose has of course some hard constraints: no matter how effective the dose, certain toxicities are not acceptable, period. And those constraints might differ even for the same compound in different disease indications.
Finally, it seems established that it is not the dose but the exposure to the drug that drives the effect (the concentration or the area under the curve). The classical example is that patients with lower body weight mostly experience a stronger effect at the same dose than patients with higher body weight (because they see higher concentrations due to lower volumes of distribution). This is the reason why – until entry into man – doses are specified in mg/kg units.
Why is it that it is still the dose that is optimized in dose finding trials and not the exposure even though PK/PD models are developed for every drug nowadays? This seems the obvious step towards therapy individualization.
Dose is often, but not always, in units per body mass. I used mg rather than mg/kg just to make a simpler example.
If PK/PD models were accurate, we wouldn’t need clinical trials. But at least in oncology, researchers have only the crudest guess at what toxicity might be before conducting a trial, and even less of an idea what efficacy will be.
I imagine PK/PD models are more useful in diseases less complex than cancer, but they’re not much use in oncology. (They may be useful in predicting specific biological effects, but as far as predicting an overall outcome they’re useless.) PK/PD models may be used to suggest a range of doses to investigate, but ultimately the researcher makes up a list of doses.
Oncology is different in many respects. Exposure in cancer should denote the exposure of the tumor to the drug, not the systemic exposure. Many concomitant medications and varying degrees of compliance add further complications for proper evaluation of drug effects. On the other hand, if PK/PD models were of very little use, there would be no reason for their existence.
We should differentiate between predicting dose-response prior to a dose-finding study and evaluating the dose-finding study itself. Having a continuous measure such as exposure on the x-axis seems preferable to discrete x-values, in particular if only a few doses were tested.
Here is a case study by an FDA scientist, suggesting that lack of exposure correlates with lack of efficacy for Herceptin:
Continuous doses and continuous responses make sense, but they’re rare. By far the most commonly used dose-finding method is the crude 3+3 rule: discrete doses, one binary outcome, no probability model, no patient covariates.
At the other extreme are complex, one-of-a-kind methods designed for publication in statistics journals. There’s not much in the middle.
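The 3+3 rule mentioned above is simple enough to state in a few lines. Here is a minimal simulation sketch of a common simplified version: treat three patients at a dose; escalate on 0/3 dose-limiting toxicities (DLTs), expand to six patients on 1/3, and stop on 2 or more, declaring the previous dose the maximum tolerated dose (MTD). The toxicity probabilities are made-up values for illustration, not data from any trial.

```python
import random

def three_plus_three(p_tox, seed=0):
    """Simulate one (simplified) 3+3 trial.

    p_tox[i] is the true DLT probability at dose level i.
    Returns the index of the declared MTD, or -1 if even the
    lowest dose is judged too toxic.
    """
    rng = random.Random(seed)
    for i, p in enumerate(p_tox):
        dlts = sum(rng.random() < p for _ in range(3))       # cohort of 3
        if dlts == 1:                                        # ambiguous:
            dlts += sum(rng.random() < p for _ in range(3))  # expand to 6
        if dlts >= 2:
            return i - 1   # too toxic here; MTD is the previous dose
    return len(p_tox) - 1  # escalated through every dose level

# Hypothetical DLT probabilities for the four doses in the post
p_tox = [0.05, 0.10, 0.25, 0.40]
mtds = [three_plus_three(p_tox, seed=s) for s in range(10_000)]
```

Tallying `mtds` gives the distribution of which dose the rule declares the MTD, i.e. exactly the “how often does the method pick each dose” summary that the post argues is an incomplete criterion on its own. Note how the rule uses no probability model: it is a pure stopping rule on binary toxicity counts.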