Researchers today unveiled the two largest-ever databases of information about cancer cells, which they say will help them and others test new chemotherapy treatments and speed them to the clinic.
Before today’s publication of these databases in Nature, the most sophisticated cancer cell-line library was the US National Cancer Institute’s NCI-60 panel, which is composed of 60 cancer cell lines that have been tested against thousands of potential drugs over the last twenty years. The new databases, compiled at the Dana-Farber Cancer Institute and Massachusetts General Hospital, both in Boston, boast 479 and 507 cancer cell lines, respectively, and contain information about a broader range of DNA and RNA mutations and drug responses than the NCI-60.
“You want as many cell types as possible in order to establish a spectrum of [cancer cell] behavior in response to a potential drug,” says John Weinstein, a computational biologist at MD Anderson Cancer Center in Houston, Texas. Weinstein, who was not involved with either project, says the breadth of the new collections will make it more likely that scientists will identify cell lines with mutations that match those found in tumors from specific patients. Having access to information about how those cell lines respond to different chemotherapy treatments will help physicians suggest the best drug options to their patients.
“This is going to enable us to realize the vision we have for personalized medicine,” says Levi Garraway, a molecular pharmacologist at the Dana-Farber Cancer Institute and senior author of the first paper. And, he says, such extensive cancer cell-line data will speed up the preclinical process because in the future it will allow drug companies to predict the effectiveness of experimental compounds, decreasing their reliance on slow and costly animal studies. “If you have robust genetic or molecular indicators, that can be sufficient,” he says. “You don’t necessarily need mouse models.”
But other scientists say cell-line data cannot replace testing in animal models. “Validation models are always necessary, whether it’s testing potential drugs in freshly isolated tumor cells, or using several genetically engineered mouse models to recapitulate the heterogeneity of human tumors,” says Lee Ellis of MD Anderson Cancer Center. In a commentary also published today in Nature, Ellis and cancer researcher Glenn Begley, formerly of the drug company Amgen in Thousand Oaks, California, emphasize that cell-line data are only the first step in bringing new treatments into the clinic. They point out that even large numbers of cell lines cannot predict how immune mechanisms and the tumor microenvironment will factor into a drug’s effectiveness.
They are not alone in their concerns about lax preclinical testing. An analysis published last September in Nature Reviews Drug Discovery reported that findings on new drug targets fail to be reproduced in follow-up studies 65% of the time, meaning that many drugs are making it to the clinic without being adequately validated.
“Preclinical validation is expensive,” Ellis says, “but going to clinic is not only expensive, it puts patients at risk.”
Image courtesy of Dimarion via Shutterstock