When I was an undergraduate, my tutor, a laboratory-based PhD-trained scientist, often complained that the medical students he tutored were incapable of thinking scientifically. Clinicians, he would regularly say, can only learn by rote; they cannot think logically. It was only when I started working in medical publishing that I came to realise how misinformed my tutor was and how rigorous clinical research can be.
I am currently attending the American Heart Association’s annual scientific meeting and the quality of some of the science is simply breathtaking. Every day the results of massive randomised controlled trials are presented that are run by large collaborations of clinical researchers based all over the world.
I’ve already blogged about JUPITER, which may well turn out to be one of the most significant pieces of medical research published this year. Yesterday, another big trial was presented: I-PRESERVE, which enrolled 4218 patients from medical centres in 25 countries. The trial was negative: it showed that irbesartan does not improve the outcomes of patients with heart failure who have an ejection fraction of at least 45%. So it won’t get as much airtime as JUPITER in the popular press, but it is still an impressive piece of work all the same (see paper in the NEJM).
As is usual in clinical trials, the primary and secondary endpoints in I-PRESERVE had been prespecified in the trial protocol, preventing the researchers from simply mining the data until they found a statistically significant result. The large sample size means that the results are likely to be reproducible; indeed, the negative findings of I-PRESERVE agree with two other clinical trials that have already been published.
I doubt very much that the vast majority of bench science is done to anywhere near such high standards. How many lab-based scientists do a power calculation in advance and prespecify exactly which endpoints they’re investigating in an experimental protocol that’s published in advance?
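For readers who haven’t run one, here is a minimal sketch of the kind of prospective power calculation a trialist does before enrolling a single patient, using the standard normal-approximation formula for comparing two group means. The effect size and error rates are illustrative assumptions on my part, not figures from I-PRESERVE or any real trial.

```python
# A minimal sketch of a prospective power calculation for a two-arm study,
# using the normal-approximation formula for a two-sample comparison of means.
# The numbers below are illustrative assumptions, not taken from any trial.
from math import ceil
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """Participants needed per arm to detect a standardised effect size
    (Cohen's d) with a two-sided test at significance level `alpha`."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Detecting a moderate effect (d = 0.5) at 80% power:
print(sample_size_per_group(0.5))  # 63 participants per arm
# Halving the expected effect roughly quadruples the required sample:
print(sample_size_per_group(0.25))  # 252 participants per arm
```

The point of doing this in advance, and writing it into the protocol, is that the sample size is justified before the data exist, so a "significant" result can’t be manufactured by stopping whenever the p-value dips below 0.05.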
In addition, many medical journals insist that clinical trials be reported in a standardised fashion, according to the CONSORT guidelines, so that readers have all the information they need to determine whether the trial methodology is adequate and the analysis appropriate. Sure, life scientists also have publication guidelines—MIAME instantly springs to mind—but in my opinion basic scientists, like my old undergraduate tutor, could learn a lot from their clinical colleagues about how to design scientifically meaningful experimental protocols and how to publish the resulting papers in as rigorous a fashion as possible.
James Butcher is publisher of Nature’s eight Clinical Practice review journals.