The worst thing that can happen to a person participating in a clinical trial is what’s known as a ‘serious adverse event’, which can describe anything from permanent kidney damage or liver failure to hospitalization or even death. Federal law in the US mandates that researchers conducting trials of drugs or other products regulated by the country’s Food and Drug Administration (FDA) report adverse events on ClinicalTrials.gov, a data repository open to the public. But a new study shows that many of these serious adverse events don’t appear in medical journals, making some interventions seem more favorable than they may actually be.
Reporting online today in the Archives of Internal Medicine, a group of researchers led by Daniel Hartung, a drug safety and policy analyst at Oregon Health & Science University in Portland, looked at how the data reported on ClinicalTrials.gov stack up against the results published in the medical literature. The team limited its focus to phase 3 or 4 trials with results reported on ClinicalTrials.gov and completed before 2009, allowing sufficient time for the trials’ results to appear in medical journals. Hartung’s group then randomly selected 10% of those trials that had matching publications, yielding a total of 110 trials.
Hartung’s team found that 33 of the trials reported a greater number of serious adverse events on ClinicalTrials.gov than in the medical literature. For example, a 13,608-person study comparing the blood-thinning drugs Effient (prasugrel) and Plavix (clopidogrel) reported a total of 3,406 serious adverse events among all participants in the online database, but only 3,082 in a related publication. (The patients in the trial were at high risk of heart attack and were undergoing angioplasty, so these adverse events were not necessarily linked to the drugs.)
Of the 84 trials that reported serious adverse events in the public database, 16 had matching publications that either failed to mention them or incorrectly stated that none had occurred. (Notably, 5 trials actually reported more serious adverse events in the related medical papers than in the public database.)
Hartung’s team points to a number of potential causes for the discrepancies: deliberate misrepresentation of the data, unintentional errors, the influence of journal editors or reviewers, or changes in researchers’ analyses over time. “If serious adverse events are relatively rare and unlikely related to trial procedures, authors or journal editors may elect to omit their reporting,” Hartung says. In any event, the underreporting of serious adverse events in the literature could distort the public’s perceptions of the balance between the benefits and risks of a particular intervention, he says. “This adds to the general sense that perhaps what’s submitted to publications is spun in a number of ways to give the intervention the best lighting.”
Adding it up
The study “demonstrates just how problematic selective outcome reporting really is,” says Joseph Ross, who studies health policy at Yale University. “If you see serious events that are only being reported in ClinicalTrials.gov and not in the paper, that’s a real challenge, because it’s potentially distorting the medical evidence base,” he says. That’s because many clinicians and researchers who want to learn more about the results of a clinical trial tend to search the primary literature, rather than ClinicalTrials.gov. “They’re collecting the evidence from trials that have been published and they’re meta-analyzing them and summarizing them, and that’s what goes toward clinical practice guidelines and expert recommendations and the like.”
For now, it’s still unclear which source for clinical trials data—publications versus ClinicalTrials.gov—may be more accurate. But ClinicalTrials.gov seems to have the most comprehensive information on serious adverse events, which are one of the legally mandated reporting elements, Hartung says. That might help to boost the website’s image, which has come under fire recently over revelations of missing data in the repository.
Whatever the cause of the discrepancies, the findings of the new study should serve as a “wake-up call”, particularly for journal editors and peer reviewers, Ross says. He suggests that editors and reviewers might be able to improve the accuracy of published papers by comparing the submitted results with the information posted at ClinicalTrials.gov. “That’s the way to diminish the funny business that may be happening,” he says. “Some of this might just be typographical errors and the like, but we want to make sure that it’s not due to the selective publication of endpoints and outcomes that make the intervention look more favorable. We have to make sure that the system functions well so that when it comes time to pool together all the evidence from all the available trials that this evidence is in good shape.”