When I hear someone say “personalized medicine” I want to ask “as opposed to what?”
All medicine is personalized. If you are in an emergency room with a broken leg and the person next to you is lapsing into a diabetic coma, the two of you will be treated differently.
The aim of personalized medicine is to increase the degree of personalization, not to introduce personalization. In particular, there is the popular notion that it will become routine to sequence your DNA any time you receive medical attention, and that this sequence data will enable treatment uniquely customized for you. All we have to do is collect a lot of data and let computers sift through it. There are numerous reasons why this is incredibly naive. Here are three to start with.
- Maybe the information relevant to treating your malady is in how DNA is expressed, not in the DNA per se, in which case a sequence of your genome would be useless. Or maybe the most important information is not genetic at all. The data may not contain the answer.
- Maybe the information a doctor needs is not in one gene but in the interaction of 50 genes or 100 genes. Unless a small number of genes are involved, there is no way to explore the combinations by brute force. For example, the number of ways to select 5 genes out of 20,000 is 26,653,335,666,500,004,000. The number of ways to select 32 genes is over a googol, and there isn’t a googol of anything in the universe. Moore’s law will not get us around this impasse.
- Most clinical trials use no biomarker information at all. It is exceptional to incorporate information from one biomarker. Investigating a handful of biomarkers in a single trial is statistically dubious. Blindly exploring tens of thousands of biomarkers is out of the question, at least with current approaches.
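The combinatorial claim above is easy to verify exactly; a quick sketch in Python, using the standard library's `math.comb` for exact binomial coefficients:

```python
from math import comb

# Number of ways to select 5 genes out of 20,000
print(comb(20_000, 5))  # 26653335666500004000, about 2.7 x 10^19

# Selecting 32 genes out of 20,000 exceeds a googol (10^100)
n = comb(20_000, 32)
print(n > 10**100)       # True
print(len(str(n)))       # number of digits, just over 100
```

Exhaustively scoring even one candidate set per nanosecond, the 5-gene case alone would take on the order of 800 years.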
Genetic technology has the potential to incrementally increase the degree of personalization in medicine. But these discoveries will require new insight, and not simply more data and more computing power.
10 thoughts on “Personalized medicine”
I am a little surprised to see this, especially since you have worked at MD Anderson. There are now genetic tests used in the treatment of nearly every cancer; treatment decisions are made based on the specific DNA mutations, or the pathways activated or deactivated, in the particular tumor. The person next door to me sequences tumors and uses the sequence information to create a vaccine specific for that person and that person only. There have been some amazing results, although the success rate is low (perhaps because only those with weeks to live are allowed to be treated). There was just an article about a researcher at Washington U. who had his colleagues develop a treatment just for him. Just before this I read http://genomeeee.blogspot.com/2012/10/what-could-our-genomes-actually-tell.html ; look at the number of drug target genes that have known SNPs. Soon this will be used to make treatment decisions. It might be a while before all this hits the market (largely due to regulation; drug lag kills thousands every year), but personalized medicine has huge potential.

I’m skeptical of personalized medicine precisely because of my experience at M. D. Anderson. Without inside knowledge of the difficulties involved, I think I’d be more optimistic.
There certainly have been some successes in using genetic data to treat patients more effectively. Sometimes there is a clear signal that jumps out of the noise. But usually it’s far from that simple.
When choosing among several treatment alternatives, doctors must use some set of criteria to rank those choices in the context of an individual patient.
What is the likelihood that changing a treatment decision based on genetic sequencing would result in a worse patient outcome (because the correct treatment was withheld) or a better one (because the chosen treatment is more effective for this patient's genotype)? Perhaps for a given diagnosis there is so little expectation of a change in treatment decision either way that even performing the sequencing is not worth it. (Is there really any genetic factor that would affect how you treat a broken leg?) For other diagnoses, as mdb alludes to above, adding the new information into the analysis could make a dramatic difference in outcomes.
This is similar in concept to evaluating the usefulness of a diagnostic test by considering the false positive and false negative rates of the test.
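To make the analogy concrete, here is a minimal sketch of that evaluation with invented numbers (the prevalence, sensitivity, and specificity below are hypothetical, chosen only to illustrate the arithmetic):

```python
# Hypothetical test characteristics:
prevalence  = 0.01   # 1% of patients have the condition
sensitivity = 0.95   # 1 - false negative rate
specificity = 0.95   # 1 - false positive rate

# P(disease | positive test) by Bayes' rule
true_pos  = prevalence * sensitivity
false_pos = (1 - prevalence) * (1 - specificity)
ppv = true_pos / (true_pos + false_pos)
print(round(ppv, 3))  # 0.161: most positives are false when prevalence is low
```

The same logic applies to a genetic marker: if the condition it flags is rare, even an accurate marker produces mostly false alarms, and acting on it may do net harm.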
Truly personalized medicine would be about more than genetics. How a broken leg should be treated can depend on personal factors: how the break occurred (a broken leg might be a symptom of other disorders, possibly psychological or social), how the person's lifestyle will be affected by the treatment (reduced mobility might have significant psychological or economic effects), and vice versa (a person's diet or unwillingness to rest might affect the treatment).
Ideally, computers would facilitate more personal care by finding and filtering information and by allowing more time to be spent on a patient at the same cost.
Note: I am not suggesting that physicians are responsible for treatment of all psychological and social ills they might conceivably recognize. However, more personalized medicine might have social benefits as well as medical benefits.
Sadly, physicians may be experiencing pressures similar to those on public school teachers, with simple generic metrics of performance and impersonal regulation encouraging a declining mediocrity. (I do not have an answer for accountability in any profession, but the recent Ars Technica article about tech support has increased my concern that simple metrics are too easily made counter-productive.)
Variability is the “killer” in all of medical research, particularly as it relates to patient outcomes. Even when we find “statistically significant” associations, they usually account for only a small portion of the variability. Furthermore, we do not always know whether an association is causal. Even when one merely uses an association for prediction, we get into a mess. If one is lucky, there is only narrow error about the mean, but “personalized medicine” is not about the mean; it is about the single individual. The error around any prediction for an individual is massive (usually uselessly so), small p-values be damned.
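A simulated illustration of that last point (the effect size, sample size, and noise level below are all invented): an association can be highly “significant” while the uncertainty for any one individual stays enormous, because the confidence interval for the mean shrinks with sample size but the prediction interval for an individual does not.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(size=n)
# A real but tiny effect: slope 0.1 against unit noise (R^2 around 0.01),
# easily "significant" at this sample size
y = 0.1 * x + rng.normal(size=n)

slope, intercept = np.polyfit(x, y, 1)
resid_sd = np.std(y - (slope * x + intercept))

# 95% interval for the MEAN response at x = 0 shrinks like 1/sqrt(n)...
ci_halfwidth = 1.96 * resid_sd / np.sqrt(n)
# ...but the 95% PREDICTION interval for one individual does not shrink at all
pi_halfwidth = 1.96 * resid_sd
print(round(ci_halfwidth, 3), round(pi_halfwidth, 3))
```

Here the individual-level interval is about 100 times wider than the interval for the mean, which is the gap between a publishable association and a clinically useful prediction.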
I understand your criticism and agree with the problems you point out. Calling it “personalized medicine” when the treatment is, as you correctly note, selected from a list of options based not only on the symptoms you present (diabetes vs. a broken leg) but also on your genetic background, is merely another phrase invented to make the public more interested in funding the research and more convinced that it is important.
However, I think what is actually going on “under the hood” of personalized medicine research is going to revolutionize medicine as we know it, and I think that is a great thing. The breakthrough is that diseases no longer have to be classified by what a surgeon or GP is able to see (with the naked eye or with a series of relatively primitive tests); diseases that involve something going wrong at the molecular level can now be classified at that molecular level. This means that what we now call colon cancer and breast cancer will no longer be treated as two diseases defined by the organ they occur in; instead, cases will be classified and treated based on which pathways have been disrupted, and hence on which therapy is more likely to work. And these diagnostics are not limited to cancer (where, as the poster above points out, they have already been implemented); they can also be applied to complex metabolic and mental disorders, enabling better treatment options to be made available to patients.
The other hope I have for “personalized medicine” is the potential for early diagnosis and, in a perfect world, the possibility of prevention. Most disorders today are diagnosed quite late, when there is significant organ impact, pain, or some other physical manifestation of the disease. The developing technologies will allow us both to see which diseases a particular person is predisposed to (and hence monitor more thoroughly) and to detect earlier that something has gone wrong. Mike Snyder's seminal diabetes prevention trick makes me optimistic that this is not just a faraway dream.
But I also think that it will never work if all of the analysis is done by a computer and a recommendation spit out to the clinician, who doesn't really have the training to deal with it. I think in the future there will be “genetic services” labs, perhaps built on the base of modern pathology labs, where teams of “medical bioinformaticians” will plow through boatloads of data and make sense of the results, with the end point of generating a report for the clinician with ideas and recommendations on diagnosis and treatment. The clinician will then integrate that information with personal experience with the patient to determine the best course of treatment.
DNA, while important, is only one layer. The field of metabolomics, often a new term even for people in molecular biology, is in my opinion the key to disease, as the metabolome is the final product of DNA + environment + time. Research shows that metabolic networks adapt spatially, volumetrically, and temporally to keep flux through particular pathways consistent for the organism (ultimately to maintain homeostasis). Compensating for “poor genes” is just another day at the office for our metabolome. This answers a question obvious to many molecular biologists: given all these genetic differences, how can our internal state be almost constant, and how are we even alive at all? It's easy: we compensate by adjusting non-genetic parts of the system. Note that this information is crucially NOT encoded in our DNA.
These pathways have been analysed. And as far as I am aware, metabolic flux analysis is still an NP-hard problem.
My guess is that there will be low-hanging fruit. SNP studies or rapid sequencing technologies will find some genes whose effects the metabolome cannot compensate for. But most disease will have far more complicated effects in the metabolic system, which is where it really counts. Since we all have a somewhat unique metabolome, someone will have to crack these high-dimensional, currently intractable problems and devise new methods for real-time, non-invasive metabolic analysis before we can make any progress at all on most disease.
I tend to think that HT sequencing et al will go the way of the microarray. More data is not necessarily more information if it’s only part of the picture.
The computational limits you mention with respect to Moore's law may well be broken within our professional lifetimes by quantum computing. This is no longer a very bold statement. For example, D-Wave's baby steps toward solving protein folding: http://www.nature.com/srep/2012/120813/srep00571/full/srep00571.html.
If we are able to explore astronomically large combinations of genes some day, the problem of multiple testing and false positives will explode as well. That is if we use this compute power as a blunt instrument.
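The scale of that multiple-testing problem is easy to quantify. Even restricting a brute-force scan to gene *pairs*, at a conventional significance threshold the expected number of false positives under the global null is in the millions (a sketch; the 0.05 threshold is the conventional choice, not anything from the post):

```python
from math import comb

# Testing every 2-gene pair among 20,000 genes at alpha = 0.05
n_tests = comb(20_000, 2)             # 199,990,000 hypothesis tests
alpha = 0.05
expected_false_pos = alpha * n_tests  # expected false positives under the null
print(f"{n_tests:,} tests, ~{expected_false_pos:,.0f} false positives expected")

# A Bonferroni correction keeps the family-wise error rate at 0.05,
# but then each test must clear p < 2.5e-10
print(alpha / n_tests)
```

And that is only pairs; for the 5-gene and 32-gene subsets discussed in the post, no correction threshold is small enough to be meaningful.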
If we combine insight with computing, we may accomplish amazing things. What I object to most is the idea that all we need biologists for is supplying raw data, that biology research can be reduced to a brute-force computational problem.
D-Wave really hasn’t even done the baby step yet. In keeping with your analogy, at best they are showing signs of trying to turn themselves over and crawl.
As Scott Aaronson points out over at Shtetl-Optimized, after 6 years of funding:
“(1) D-Wave still hasn’t demonstrated 2-qubit entanglement, which I see as one of the non-negotiable ‘sanity checks’ for scalable quantum computing.”