While the current New Yorker piece on the unreliability of science is causing a bit of a buzz, the notion has been around for a while. Take the work of John Ioannidis, who is identified in the piece as a Stanford researcher but also holds adjunct appointments at Tufts and Harvard.
Five years ago, he published an analysis in PLoS Medicine arguing that most published research findings are false, in large part because most scientific research is, basically, statistically underpowered.
Research findings are more likely true in scientific fields with large effects, such as the impact of smoking on cancer, than in scientific fields where postulated effects are small, such as genetic risk factors for diseases where many different genes are involved in causation. If the effect sizes are very small in a particular field, says Ioannidis, it is “likely to be plagued by almost ubiquitous false positive claims.”
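The logic behind that claim can be made concrete with the positive predictive value (PPV) formula from Ioannidis's 2005 paper: given prior odds R that a tested relationship is real, statistical power 1 − β, and significance threshold α, the share of "significant" findings that are actually true is PPV = (1 − β)R / ((1 − β)R + α). The sketch below plugs in illustrative numbers of my own choosing, not figures from the paper:

```python
def ppv(prior_odds: float, power: float, alpha: float = 0.05) -> float:
    """Share of statistically significant findings that reflect true effects.

    prior_odds: R, the pre-study odds that a tested relationship is real.
    power: 1 - beta, the chance a real effect reaches significance.
    alpha: the significance threshold (rate of false positives among nulls).
    """
    true_positives = power * prior_odds
    false_positives = alpha
    return true_positives / (true_positives + false_positives)

# A field studying large effects: well-powered studies, plausible hypotheses.
strong_field = ppv(prior_odds=0.5, power=0.8)

# A field hunting tiny effects: underpowered studies, long-shot hypotheses.
weak_field = ppv(prior_odds=0.01, power=0.2)

print(f"large-effect field PPV: {strong_field:.2f}")  # about 0.89
print(f"small-effect field PPV: {weak_field:.2f}")    # about 0.04
```

Under these assumed inputs, nearly nine in ten positive findings in the first field are true, while in the second almost all positives are false alarms, which is the "almost ubiquitous false positive claims" scenario Ioannidis describes.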
Financial and other interests and prejudices can also lead to untrue results. And “the hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true,” which may explain why we sometimes see “major excitement followed rapidly by severe disappointments in fields that draw wide attention.”
Ioannidis was also the subject of a piece in the November issue of The Atlantic (which probably ruined the scooped New Yorker writer’s day).
He’s what’s known as a meta-researcher, and he’s become one of the world’s foremost experts on the credibility of medical research. He and his team have shown, again and again, and in many different ways, that much of what biomedical researchers conclude in published studies—conclusions that doctors keep in mind when they prescribe antibiotics or blood-pressure medication, or when they advise us to consume more fiber or less meat, or when they recommend surgery for heart disease or back pain—is misleading, exaggerated, and often flat-out wrong.
Ioannidis isn’t always the bearer of troubling news. Working with other researchers at Tufts’ Institute for Clinical Research and Health Policy Studies, he recently co-authored a study of genome-wide association studies in mice.
Our project highlights the wealth of available information from mouse models for human GWAS, catalogues extensive information on plausible physiologic implications for many genes, provides hypothesis-generating findings for additional GWAS analyses and documents that the concordance between human and mouse genetic association is larger than expected by chance and can be informative.
So, for those who fear that we’ve set the bar too low for scientific evidence, remember: Not everything you know is wrong.