Soapbox Science

Research 2.0.3: The future of research communication

Elizabeth Iorns is a breast cancer scientist and the Co-Founder & CEO of Science Exchange, an online marketplace for science experiments.

This week Elizabeth is hosting a three-part series all about the research cycle. Do share your thoughts in the comment thread.

In the first post of this series, I described the changes that are coming for the way scientific research is funded in the digital age.  In the second part, I explored the ways in which the process of research itself is destined to undergo dramatic changes. In the final part I look at the future of research communication, exploring the impact of digital tools on scientific reputation.

Nowhere in science has the disruptive power of digital technologies been more apparent than in research communication. Traditional journals with subscription-based models and lengthy, anonymous pre-publication peer review have been challenged by pre-print servers, post-publication evaluation and open access. Scientific news outlets now co-exist with blogs, often written by practicing scientists. The threat of gatekeepers stultifying the scientific process has been replaced by the need for new ways to extract knowledge amid information overload.

The continued move towards open access

The Open Access movement to free published research from subscription barriers has led to major research funding agencies in the U.S., U.K. and the EU mandating that the research they fund be made openly available on-line within 12 months of publication.  This in turn has led to a shift in publishing business models away from subscriptions and towards publication fees, as evidenced by the prominent rise of new OA journals such as PeerJ and eLife to join the more established BioMed Central, PLOS, and Frontiers journals.

Open Access has led directly to an increase in usage of platforms that make it easy for researchers to comply with this mandate by depositing open access versions of their papers. Examples of companies in this space are Academia.edu, ResearchGate.net and Mendeley. Open Access also means that anyone can contribute to the post-publication evaluation of research articles.

The evolution of peer review

Traditional pre-publication peer review typically involves a journal's editors selecting two to three anonymous experts to provide feedback on the quality and potential impact of a submitted paper. Because it relies on increasingly burdened volunteers, this process can be excruciatingly slow, requiring months of mediated dialog to reach a satisfactory conclusion. If a paper is rejected by one journal, the process must begin all over again unless it is passed along to another journal within a publisher's stable or other consortium. In addition, the quality of peer review is often poor because, due to the increased specialization of research, many scientists are not properly trained to assess an entire research project, which can cover more than 20 different experiment types.

There are a number of initiatives focused on improving the process of peer review. Post-publication peer review, in which journals publish papers after minimal vetting and then encourage commentary from the scientific community, has been explored by several publishers, but has run into difficulties incentivizing sufficient numbers of experts to participate.  Initiatives like Faculty of 1000 have tried to overcome this by corralling experts as part of post-publication review boards.  And sometimes, as in the case of arsenic-based life, the blogosphere has taken peer review into its own hands.

The emergence of open science and the immediate communication of results

One new part of the open science movement is the practice of making the entire primary record of a research project publicly available online as it is conducted. This involves placing the researcher's lab notebook online along with all raw and processed data as the material is generated. This is commonly referred to as "Open Notebook Science". My company, Science Exchange, for example, allows for integration with open notebook platforms (e.g. figshare) so results from projects conducted on the Science Exchange platform can be published directly to the notebook. In the best case, these notebooks become part of the scientific record, seamlessly integrating experiment and publication.

As traditional editorial filters and units of publication give way to new mechanisms for aggregation and evaluation of scientific findings, the question remains what impact this evolution will have on the incentive structures within which scientists work.

The future of reputation in the research world

Traditionally, the number of first and senior author publications, and the journal(s) in which those publications appear, have been the key criteria for assessing the quality of a researcher's work. Funding agencies use these criteria to decide whether to award grants for future work, and academic research institutions use them to inform hiring and career progression decisions. However, this is actually a very poor measure of a researcher's true impact: a) it captures only a fraction of a researcher's contribution, and b) since more than 70% of published research cannot be reproduced, the publication-based system rewards researchers for the wrong thing (the publication of novel research, rather than the production of robust research).

New ways for researchers to build reputation

The h-index was one of the first alternatives proposed as a measure of scientific research impact. It and its variants rely on citation statistics, which are a good start but include a delay that can be quite long, depending on how rapidly papers are published in a particular field. A number of startups are attempting to improve the way a researcher's reputation is measured. One is ImpactStory, which is attempting to aggregate metrics from a researcher's articles, datasets, blog posts, and more. Another is ResearchGate.net, which has developed its own RG Score.
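To make the metric concrete: a researcher's h-index is the largest number h such that h of their papers have each been cited at least h times. A minimal sketch of that computation (the example citation counts are illustrative, not from any real researcher):

```python
def h_index(citations):
    """Return the largest h such that the researcher has h papers
    each cited at least h times."""
    ranked = sorted(citations, reverse=True)  # most-cited paper first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the threshold
        else:
            break  # every later paper has fewer citations
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with >= 4 citations each
print(h_index([25, 8, 5, 3, 3]))  # 3: a single highly cited paper cannot raise h alone
```

The second example illustrates the delay problem noted above: no matter how influential one paper becomes, the index only grows as a body of work accumulates citations over time.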

Science Exchange is central to three key areas of evaluation that will gain importance over the next few years.

  • The first is the ability to identify and reward researchers who are producing the highest quality research, via badging of independently reproducible papers (see Reproducibility Initiative). Publications and citations are a poor measure of research quality since more than 70% of research studies conducted cannot be independently reproduced. We believe we can help introduce an alternative mechanism for measuring high quality research.
  • The second is a way to measure the effectiveness of research expenditure. Currently the amount of money actually spent to conduct a research project is not well assessed. Science Exchange gives funding agencies better oversight of how much it would cost to conduct experiments. We see this becoming an important metric for evaluating how well researchers spent their research dollars (particularly of interest to cost-conscious funding agencies like disease foundations).
  • Finally, for fee-for-service providers, Science Exchange offers a means to establish a reputation that is directly tied to their ability to conduct and communicate scientific experiments.

The role of social networks

Hard metrics aside, a scientist's reputation is affected by his or her participation in scientific society. Many a tenure case that looked worthy on paper has been sidelined for lack of community feeling, particularly in smaller institutions. And many an invitation to an exclusive scientific gathering has not been extended because "who is that person anyway? Has anyone heard him or her speak?" Social networking has afforded new opportunities for scientists to add a personal dimension to their scientific portfolios without constantly being "on tour". Surprisingly, even 140 characters allows scientists to engage in substantive debate, ranging from the value of Obama's human brain initiative to what ENCODE really tells us.

From reputation, the circle begins again

New filters capture new data. Which set of reputational signifiers rise to the top will shape the future of science itself.  They may be impossible to predict, but as scientists, we need to pay attention.  Research 2.0 is here.

Part 1: Research 2.0.1: The future of research funding

Part 2: Research 2.0.2: How research is conducted 

Part 3: The future of research communication

Comments

  1. Konrad Hinsen said:

    What I consider most important among the proposals in this post is a way to build reputation based on the quality of research rather than on a combination of quantity and popularity, which is what all citation-statistics based schemes come down to. The idea of quality-based badges is good, what remains to be defined is how such badges are awarded. Reproducibility is one important aspect, but not the only one.

  2. William Gunn said:

    This is a concise summary of innovations in scholarly communication, and Mendeley appreciates the mention, but I would like to make one important correction. Posting of manuscripts on Mendeley or another site may not put you in compliance with a funder mandate. For example, NIH funding requires deposit in PubMed Central, and other mandates allow institutional repositories to be used, but anywhere else on the web is less often permitted for compliance with the mandate. Of the three, Academia.edu, Researchgate, and Mendeley, only Mendeley has built tools to make it easier to comply with funder mandates by facilitating deposit directly from the workflow tool to an acceptable repository.

    However, I feel certain that John Norman would welcome the participation of Academia.edu and Researchgate in the DURA project: https://www.jisc.ac.uk/whatwedo/programmes/inf11/jiscdepo/dura.aspx

Comments are closed.