News blog

The new dilemma of online peer review: too many places to post?

As online comments on newly published research become widespread, a new dilemma faces scientists wanting to enter the electronic fray: where to comment, and in what format for maximum impact?

That question faced Kenneth Lee, a researcher in regenerative medicine at the Chinese University of Hong Kong, when he wanted to post up his critique of controversial stem-cell research. Lee’s research group has, like many other scientists, tried and failed to replicate the work, published in Nature at the end of January. The studies are now under investigation, with some of their authors calling for retraction.

Lee had his pick of online fora. He could have posted on a closely watched stem-cell blog by researcher Paul Knoepfler at the University of California, Davis, which has been collecting tales of failed replications. He could have posted on PubPeer.com, a website where people can make anonymous comments about published papers, which has also seen large amounts of traffic discussing problems with images used in the studies. He could have posted on PubMed Commons, an initiative launched last October that allows scientists to comment on published abstracts on the PubMed website. He might have chosen any number of other venues — such as the news articles reporting on the controversy — or even his own website.

Instead, Lee picked ResearchGate, a social network that boasts more than 4 million signed-up researchers. And instead of just adding his comment linked to the publication’s page on the site, Lee posted up a structured mini-review, with sections for ‘methodology’, ‘analyses’, ‘references’, ‘findings’ and ‘conclusions’, and including his own images.

This did not happen by accident. ResearchGate’s managers had noticed that Lee was chattering about his replications on their network, and an employee invited him to be the first to try out their new post-publication review format. “I was very reluctant at first, but she said I keep the copyrights, so I reluctantly agreed,” Lee says. “This is how everything came together. I think it is just fate.”

ResearchGate is calling its structured feedback format Open Review, and the co-founder of the site, Ijad Madisch, says that it is a feature he has long wanted to introduce.

“It looks interesting, and I am a supporter of innovative approaches to facilitate discussions among scientists in real time,” says microbiologist Ferric Fang of the University of Washington in Seattle. “A nice thing about the more structured format is that it encourages reviewers to be more systematic and to support their critiques. Short comments are OK but it is easier to make reckless statements in the absence of structure.” Fang adds that in this particular case, “I don’t expect the open review to have much impact on the paper since questions about its validity have already been raised”.

Asked why researchers should post their reviews on ResearchGate — as opposed to any other website — Madisch points out that his site has a community of verified scientists. “The content is free — anyone can read that from outside — but to contribute, you need to be affiliated with an institution that does research, so the quality is high,” he says. “I think Kenneth decided to publish on ResearchGate because he is part of an engaged community there. He wanted to get his replication out fast in order to warn others, and to get feedback on his work — rather than, say, write a letter to the editor, which can come six months after an article is published, and may be completely detached from the study itself. If there is one central place where people go, post-publication peer review becomes more efficient for everyone,” he says.

Will a few hubs such as ResearchGate or PubPeer.com dominate post-publication peer review? Or will online comments look more like a scattered hodgepodge of reviews, comments and discussions across websites unlinked to original publications? And if so, can search functions tie the thicket together? To these questions, Madisch has a simple answer: “I don’t know where this will end, but I do know it will be really big.”

Lee says he would still like to publish his results in a journal, so that his students get the credit they deserve for their efforts. He says he doesn’t know whether his work posted on ResearchGate could be considered a citable object in itself. “But it has already been cited on the Wall Street Journal, BBC and Boston Globe, so the impact is really far reaching,” he notes.  “The most important thing is that the finding is fairly and accurately reported so that other researchers can decide whether to use their valuable resources to continue pursuing the study.”

Online post-publication peer review, in the fuller sense that Lee has performed it, is unlikely to be common, says Fang. “Given the amount of time it takes to read and carefully review a paper, I suspect that the papers selected for discussion are going to be limited to very high-profile work about which readers have concerns. After all, there are something like a million new papers published each year and the average scientist reads only about 20–25 papers each month,” he says.

Elizabeth Iorns, chief executive of Science Exchange, and an advocate for efforts to reproduce published scientific research, agrees. She points out a subtlety in the way scientists have rushed to replicate the findings. Rather than, like Lee, acting as post-publication reviewers seeking to check the paper, she says, researchers are instead trying to adopt the method for their own laboratories, and so often are not performing exact replications of the original work.

“What we have learned is that researchers don’t generally want to perform confirmatory replication studies of other researchers’ findings,” she says.

Comments

  1. james tres said:

    Interesting. I like what Madisch said: “I don’t know where this will end, but I do know it will be really big.”

  2. Andrew Preston said:

    Great article, thanks! We (Publons.com) really enjoyed reading it, and it caused some interesting conversation around the office.

    As one of the companies involved in the online peer review Cambrian Explosion, this is of course something about which we think a lot.

    Our approach is to worry less about where reviews are posted and to focus more on giving reviewers the appropriate credit for their contributions to science and highlighting their impact. We provide publisher-independent credit for all forms of peer review (and discussion!) and we are able to aggregate from across the web. We’re looking forward to indexing this review too.

    Great work Kenneth!

    Andrew (CEO/co-founder of Publons)


  3. William Gunn said:

    So I think the frame of “too many choices” is kinda silly here. The main criticism post-publication commentary gets is that no one does it, so just having the evidence that there are many many people all independently taking an approach to solving this is helpful to push back against that criticism.

    I think the idea of structured reviews is good, but that said, I also think it’s a bit naive to structure based on the headings of a published academic paper. Initial work in this space indicates that individual figures are what draws commentary, and like any other kind of engagement online, you’re going to get 90% of the feedback in the form of implicit, low-barrier engagement such as tagging figures, linking to methods, etc., and far fewer actual long-form narrative comments, so this structure isn’t the best way to go about capturing all this interaction. I don’t want to rain on anyone’s parade, though. If RG makes post-publication review into a thing, I’ll be happy for them, but I personally think PubMed Commons is far more likely to achieve quality commentary and to do so in a way that allows the scholarly community to collectively benefit from the results, long after RG has passed. I tend to think that private companies are better at getting stuff done and focusing on a good user experience than the government, but PubMed Commons is different and I recognize the value of what they have created.

    (on a personal note, as a member of the online tools for scientists crowd since well before my employer, Mendeley, was around, it irks me to see sites like RG and Academia happily accepting credit for their derivative ideas without crediting the sources they were inspired by (JournalLab, Open Science Framework, Reproducibility Initiative, etc). It’s common in the business world to take whatever advantage you can get, but for services that propose to serve scholars, it’s fair to expect that they credit their sources, even when they can get away with it, because a reporter doesn’t always have time to fully do their homework)

  4. Harsha Radhakrishnan said:

    Most (or all) of us have issues with many a paper that’s published in our respective fields. We all want to comment on things that are wrong in a paper. But who verifies whether we are right? If the comment on a paper takes on a life of its own and becomes big, what does that say about the journal that accepted the original paper? What does that actually say about the peer-review process? At some point, fingers need to be pointed towards editors and reviewers (full disclosure: I am a reviewer) who deem the manuscript worthy of publication. Do we need a modification of the entire publication process? Should uploaded data, videos of methods, and the code/software used for analysis be submitted along with the manuscript? Does the burden then again lie on the reviewer to ensure all is working well?
    It’s easy to gripe about a publication. But the problem is far bigger. We need to address that first.

  5. Sergio Stagnaro said:

    Too much information (garbage) kills information (real advances). For instance, who has been informed of the Five Stages of T2DM, recognized at the bedside with a stethoscope and removed by Quantum Therapy? Almost all physicians were kept in the dark regarding CAD Inherited Real Risk, not to speak of Oncological Terrain-Dependent Inherited Real Risk, on which cancer onset is based.

Comments are closed.