There is much comment in Nature and elsewhere this week about the two fraudulent stem-cell papers published in and retracted from the journal Science. Science's editors commissioned an external committee to report on the journal's handling of the papers, in the light of which the journal is likely to apply extra scrutiny to "high risk" papers. For more details, see Nature's news story in the current issue (vol 444, pp 658-659; 7 December 2006), and this special feature.
Other opinions about the panel’s report can be seen in a statement by Don Kennedy, Editor-in-Chief of Science, at the journal’s website; at the weblog Nobel Intent; and in The Scientist’s online news service. The report itself can be seen here.
Inevitably, much of the discussion centres on the combined ability of journals and peer-reviewers to detect fabricated results. Everyone would agree that published papers have to be entirely above suspicion. But what of the authors' perspective: how much data, methodology or calculation is necessary to make a convincing case for a conclusion? Especially in fast-moving fields such as stem-cell research, how much time and effort should be needed to accumulate such evidence and submit it to a journal, one that may have to decline 90 per cent of submitted papers?
The Nature journals' policies on data availability can be seen here. As we scrutinize these policies in the light of current events, we welcome suggestions from authors, past, present or future, as to what you believe it is reasonable for a journal to demand in order to ensure that conclusions are solid. Please make your suggestions in the comments to this posting.