Nature’s news team is no longer updating this newsblog: all articles below are archived and extend up to the end of 2014.
Please go to www.nature.com/news for the latest breaking news from Nature.
The Bill and Melinda Gates Foundation has announced the world’s strongest policy in support of open research and open data. If strictly enforced, it would prevent Gates-funded researchers from publishing in well-known journals such as Nature and Science.
On 20 November, the medical charity, based in Seattle, Washington, announced that from January 2015, researchers it funds must make their resulting papers and underlying data sets open immediately upon publication — and must make that research available for commercial reuse. “We believe that published research resulting from our funding should be promptly and broadly disseminated,” the foundation states. It says it will pay the necessary publication fees (which often amount to thousands of dollars per article).
The foundation is allowing two years’ grace: until 2017, researchers may apply a 12-month delay before their articles and data are made free. At first glance, this suggests that authors may still — for now — publish in journals that do not offer immediate open-access (OA) publishing, such as Science and Nature. These journals permit researchers to archive their peer-reviewed manuscripts elsewhere online, usually 6–12 months after publication.
Allowing a year’s delay makes the charity’s OA policy similar to those of other medical funders, such as the UK Wellcome Trust or the US National Institutes of Health (NIH). But the charity’s intention to close off this option by 2017 might put pressure on paywalled journals to create an OA publishing route.
However, the Gates Foundation’s policy has a second, more onerous twist that appears to put it directly in conflict with many non-OA journals now, rather than in 2017. Once made open, papers must be published under a licence that legally allows unrestricted reuse — including for commercial purposes. This might include ‘mining’ the text with computer software to draw conclusions and mix it with other work, distributing translations of the text or selling republished versions. In the parlance of Creative Commons, a non-profit organization based in Mountain View, California, this is the CC-BY licence (where ‘BY’ indicates that credit must be given to the author of the original work).
This demand goes further than any other funding agency has dared. The Wellcome Trust, for example, demands a CC-BY licence when it is paying for a paper’s publication — but does not require it for the archived version of a manuscript published in a paywalled journal. Indeed, many researchers dislike the thought of allowing such liberal reuse of their work, surveys have suggested. But Gates Foundation spokeswoman Amy Enright says that “author-archived articles (even those made available after a 12-month delay) will need to be available after the 12-month period on terms and conditions equivalent to those in a CC-BY licence.”
Most non-OA publishers do not permit authors to apply a CC-BY licence to their archived open manuscripts. Nature, for example, states that openly archived manuscripts may not be reused for commercial purposes; the American Association for the Advancement of Science, Elsevier, Wiley and many other publishers impose the same restriction on their non-OA journals.
“It’s a major change. It would be major if publishers that didn’t previously use CC-BY start to use it, even for the subset of authors funded by the Gates Foundation. It would be major if publishers that didn’t previously allow immediate or unembargoed OA start to allow it, again even for that subset of authors. And of course it would be major if some publishers refused to publish Gates-funded authors,” says Peter Suber, director of the Office for Scholarly Communication at Harvard University in Cambridge, Massachusetts.
“You could say that Gates-funded authors can’t publish in journals that refuse to use CC-BY. Or you could say that those journals can’t publish Gates-funded authors. It may look like a standoff but I think it’s the start of a negotiation,” Suber adds — noting that when the NIH’s policy was announced in 2008, many publishers did not want to accommodate all its terms, but now all do.
That said, the Gates Foundation does not leave as large a footprint in the research literature as the NIH. It funded only 2,802 research articles in 2012 and 2013, Enright notes; 30% of these were published in OA journals. (Much of the charity’s funding goes to development projects, rather than to research that will be published in journals.)
The Gates Foundation is also not clear on how it will enforce its mandate; many researchers are still resistant to the idea of open data, for instance. (And most OA mandates are not in fact strictly enforced; only recently have the NIH and the Wellcome Trust begun to crack down.) But Enright says that the charity will be tracking what happens and will write to non-compliant researchers if need be. “We believe that the foundation’s Open Access Policy is in alignment with current practice and trends in research funded in the public interest. Hence, we expect that the policy will be readily understood, adopted and complied with by the researchers we fund,” she says.
More than half of all peer-reviewed research articles published from 2007 to 2012 are now free to download somewhere on the Internet, according to a report produced for the European Commission, published today. That is a step up from the situation last year, when only one year – 2011 – reached the 50% free mark. But the report also underlines how availability dips in the most recent year, because many papers are only made free after a delay.
“A substantial part of the material openly available is relatively old, or as some would say, outdated,” writes Science-Metrix, a consultancy in Montreal, Canada, which conducted the study — one of a series of reports on open access policies and open data.
The study (which has not been formally peer-reviewed) forms part of the European Commission’s efforts to track the evolution of open access. Science-Metrix uses automated software to search online for hundreds of thousands of papers from the Scopus database.
The company finds that the proportion of new papers published directly in open-access journals reached almost 13% in 2012. The bulk of the Internet’s free papers are available through other means – made open by publishers after a delay, or by authors archiving their manuscripts online. But their proportion of the total seems to have stuck at around 40% for the past few years. That apparent lack of impetus is partly because of a ‘backfilling’ effect, whereby the past is made to look more open as authors upload versions of older paywalled papers into online repositories, the report says. In the past year alone, for instance, close to 14,000 papers originally published in 1996 were made freely available.
“The fundamental problem highlighted by the Science-Metrix findings is timing,” writes Stevan Harnad, an open-access advocate and cognitive scientist at the University of Quebec in Montreal, Canada. “Over 50% of all articles published between 2007 and 2012 are freely available today. But the trouble is that their percentage in the most critical years, namely, the 1-2 years following publication, is far lower than that. This is partly because of publisher open access embargoes, partly because of author fears and sluggishness, but mostly because not enough strong, effective open access mandates have as yet been adopted by institutions and funders.”
The report’s conclusions are only estimates, as the automated software does not pick up every free paper, and this incompleteness must be adjusted for in the figures (typically adding around 5–6% to the total, a margin calculated by testing the software on a smaller, hand-checked sample of papers). And many of the articles, although free to read, do not meet formal definitions of open access – for example, they do not include details on whether readers can freely reuse the material. Éric Archambault, the founder and president of Science-Metrix, says it is still hard to track different kinds of open manuscripts, and when they became free to read.
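The correction works like a recall adjustment: measure, on a hand-checked sample where the true status of each paper is known, how often the software misses a genuinely free paper, then scale up the raw figure. Below is a minimal sketch with invented numbers; Science-Metrix’s actual procedure may differ in detail.

```python
# Minimal sketch of a recall correction of the kind described above
# (an assumed procedure for illustration; Science-Metrix's actual
# method may differ in detail).

def corrected_free_share(raw_free_share, sample_free_true, sample_free_found):
    """Scale a raw 'free papers' share by the harvester's measured recall.

    raw_free_share    -- fraction of papers the software flags as free
    sample_free_true  -- papers in a hand-checked sample known to be free
    sample_free_found -- how many of those the software actually found
    """
    recall = sample_free_found / sample_free_true
    return raw_free_share / recall

# Toy numbers: the software flags 45% of papers as free, but on a
# hand-checked sample it found only 450 of 500 genuinely free papers
# (90% recall), so the corrected share is 0.45 / 0.9 = 0.50 -- roughly
# the 5-6 percentage-point uplift the report describes.
print(corrected_free_share(0.45, 500, 450))  # -> 0.5
```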
The proportion of free papers also differs by country and by subject. Biomedical research (71% estimated free between 2011 and 2013) is far more open than chemistry (39%), for example. The study suggests that from 2008 to 2013 the world’s average was 54%, with Brazil (76%) and the Netherlands (74%) particularly high. The United Kingdom, where the nation’s main public funder, Research Councils UK, has set a 45% target for 2013–14, had already reached 64% in earlier years, the report suggests.
The study comes during Open Access Week, with events around the world promoting the ideas of open access to research. Yesterday saw the launch of the ‘Open Access Button’ in London – a website and app that helps users to find free copies of research papers. If no free copy is available, the app promises to email authors asking them to upload a free version of their paper – with an explanation direct from the user who needs the manuscript. “We are trying to make open access personal – setting up a conversation between the author and the person who wants access,” says Joe McArthur, who co-founded the project and works at the Right to Research Coalition, an advocacy group in London.
The 2014 Nobel Prize in Physics has been awarded to Isamu Akasaki, Hiroshi Amano, and Shuji Nakamura.
The three researchers won the award for their invention of diodes that emit blue light, “which has enabled bright and energy-saving white light sources,” the prize-awarding committee announced in Stockholm today (see press release).
Combining blue, green and red diodes creates a long-lasting, efficient white light. But despite earnest industry efforts to work out how to get gallium nitride-based semiconductors to shoot out blue beams, it took until the 1990s before Akasaki and Amano – working together at Nagoya University in Japan – and Nakamura, working at a company in Tokushima called Nichia Chemicals, made the breakthrough.
Nakamura, like the other winners, was born in Japan. But in 2000, he left the country to take up an academic position at the University of California, Santa Barbara, and is now a US citizen. At the time, he said that the United States offered better working conditions: “Japanese industrial research and development may be on its way to becoming obsolete.” He later sued Nichia Chemicals over the compensation he received for inventing the blue LED technology, eventually settling in January 2005 for ¥840 million ($7.6 million at the time).
Update 2.25pm
Scientific American has a profile of Nakamura, written in 2000, which reveals how hard he had to push to develop the technology at Nichia Chemicals:
In January 1988 [Nakamura] bypassed his boss and marched into the office of Nichia’s CEO, Nobuo Ogawa, with a list of demands. He wanted about $3.3 million in research funding to work on blue-light devices and also a year off to study metallorganic chemical vapor deposition, or MOCVD, at the University of Florida. MOCVD was then emerging as the technology of choice for producing exotic semiconductors, such as the ones capable of emitting blue light.
Nakamura’s move would probably raise a few eyebrows at most in a small American start-up company, but it was absolutely outrageous in the feudal, seniority-based Japanese system. “I was very mad,” he explains, when asked what prompted his ultimatum. “I wanted to quit Nichia. I didn’t care about anything. It was OK for them to fire me. I was not afraid of anything.”
Much to his amazement, Ogawa simply agreed to all his demands.
Nature’s full news story on the prize is here.
The world’s first commercial coal-fired power plant that can capture its carbon dioxide emissions officially launched today in Canada — marking a milestone for ‘clean coal’ technology.
The Boundary Dam project, in Saskatchewan, aims to capture and sell around 1 million tonnes of carbon dioxide a year — up to 90% of the emissions of one of its refitted power units — to oil company Cenovus Energy, which will pipe the compressed gas deep underground to flush out stubborn oil reserves. Unsold gas will be hived off to the Aquistore research project. (If you’re curious to know what a carbon capture facility looks like, SaskPower provides a virtual tour of the power station.)
As noted in a Nature article about the scheme in April, carbon-dioxide capture and storage (CCS) technology doesn’t come cheap. The Boundary Dam refit will cost Can$1.3 billion (US$1.2 billion) and has depended on $240 million in government subsidies; SaskPower — the sole electricity supplier in the province — hopes that regulators will grant it a 15.5% increase in electricity prices over the next three years. But the hope is that engineers can learn from the experience how to install the technology at lower cost.
The Canadian project is just the first of what will need to be thousands of clean coal plants by 2050 to put a significant dent in emissions. (Coal-burning alone produced 15 billion tonnes of CO2 worldwide in 2012, 43% of the world’s total.) On current timetables, the world is nowhere close to achieving this: the technology is just too expensive, and so far there’s been no political will to tax fossil fuels on the basis of their emissions, which would be an incentive for clean coal.
In 2009, the International Energy Agency (IEA) published a road map calling for 100 large CCS projects by 2020, but in July 2013, with projects failing to materialize, it downgraded that target to just 30. And even that is ambitious.
Still, one has to start somewhere. Around a dozen projects are already storing carbon dioxide at the million-tonne scale, mostly extracted from natural-gas processing plants, and the Saskatchewan ribbon-cutting today marks the first time that a commercial, grid-connected coal plant has adopted the technology. A newly built advanced coal plant in Kemper County, Mississippi, designed to store 3.5 million tonnes of carbon dioxide annually, was to open this year but has been delayed to 2015.
The Japanese research centre where one researcher was found guilty of scientific misconduct and another died in an apparent suicide this year will be renamed and reduced in size, the institute announced today.
The RIKEN Center for Developmental Biology (CDB) in Kobe is renowned as a world-leading organization for studying stem cells. But its reputation has been severely damaged by this year’s scandal: CDB biochemist Haruko Obokata was found guilty of scientific misconduct in work that claimed an easy way to make embryonic-like stem cells, but which no-one has been able to replicate. In July, her two Nature papers published on the technique, called stimulus-triggered acquisition of pluripotency, or STAP, were retracted. In August, Yoshiki Sasai, a senior co-author of the papers and a pioneering researcher at the CDB, was found dead: he left a suicide note that blamed the storm of media attention around the retraction of the two papers.
An independent RIKEN reform committee had recommended in June that the CDB be entirely dismantled. But that call led to a groundswell of support for the centre from stem-cell researchers around the world. They argued that one case of research misconduct did not mean an entire institute should be closed, even if a new centre replaced it. The committee’s proposals for the CDB “may even be more damaging than the incident itself”, noted Maria Leptin, a molecular biologist and director of the European Molecular Biology Organization in Heidelberg, Germany.
On 27 August, RIKEN said that the centre would be renamed, and its number of laboratories cut. It was not clear how many of its 540 staff would lose their jobs, if any. Masatoshi Takeichi, who has led the CDB since it was founded 14 years ago, will step down.
RIKEN also revealed in an interim report that its attempt to replicate the stem-cell findings has so far been unsuccessful. Hitoshi Niwa, who is leading the replication effort and was a co-author on the original STAP papers, said he hadn’t yet managed to generate embryonic-like stem cells after treating mouse spleen cells with acid. The final report is expected by March. Obokata is also working on a replication attempt.
The US Department of Energy (DOE) has revealed today how papers from research it funds will become free to read, making it the first federal agency to respond to new standards for open access and data-sharing ordered by the White House 18 months ago.
The plans mean the DOE will be releasing up to 30,000 papers annually from behind paywalls, although the directive from the White House’s Office of Science and Technology Policy (OSTP) says that papers need not be made free until a year after publication. The plans come a week after the DOE’s announcement that its researchers should openly share the data from papers they publish.
Open-access advocates have welcomed the plan but say that it is vague and disappointing on some key points. For example, it seems that ‘free’ manuscripts may not be legally open to bulk downloading, re-distribution and re-use for creative purposes such as text mining, even though the OSTP directive had hinted otherwise.
“The DOE’s plan takes steps towards achieving the goals of the directive, but falls short in some key areas,” says Heather Joseph, executive director of the Scholarly Publishing and Academic Resources Coalition (SPARC) in Washington DC. “We don’t want to end up in a ‘read-only’ world of US science articles,” she adds.
Germany’s science funding may look healthy to outsiders, but its research ministry seems to have stretched its cash too thinly. Last week, it decided that helping to fund the world’s biggest radio telescope — to be built in South Africa and Australia by 2024 at a cost of more than €1.5 billion (US$2 billion) — was one international mega-project too many. On 5 June, it said it would pull out of the Square Kilometre Array (SKA), to the dismay of German astronomers, who say that they were not consulted and are hoping to reverse the move.
“It looks like Germany is in danger of derailing one of Africa’s first really big science projects,” says Michael Kramer, director of the Max Planck Institute for Radio Astronomy in Bonn. From the SKA’s point of view, however, a loss of German support (which might have contributed tens of millions of euros towards an estimated €650-million first construction phase) would be “disappointing, but not catastrophic”, says Philip Diamond, director-general of the SKA Organization, headquartered in Manchester, UK, which coordinates the efforts of ten supporting nations. Nonetheless, says Diamond, “I and my German colleagues are working hard to do what we can to overturn this decision”.
Five years after it launched, Microsoft’s free scholarly search engine has fallen into shabby disrepair, indexing only a tiny fraction of the papers published since 2011. But the team behind the product says that it is shifting its focus to a yet-to-be-released, next-generation version of the service.
A few years ago, Microsoft Academic Search (MAS) was vying with Google Scholar to be the web’s pre-eminent free scholarly search engine. Both products indexed tens of millions of scholarly documents, tracked their citations, and made profile pages for academics. MAS, which appeared to be envisaged as a research project as well as a free tool, seemed to have the edge on some features — visualizing connections between research fields, for instance. The stage was set for bibliometric battle.
But the competition never happened. A team of Spanish researchers who study science communication at the University of Granada, led by Emilio Delgado López-Cózar, decided to compare Google Scholar and MAS. They discovered — to their surprise — that Microsoft’s product had been failing to efficiently index scholarly documents since around 2011. (Last year, it captured only 8,000-odd documents.) “Is Microsoft Academic Search dead?” they asked in a working paper published on the arXiv preprint server on 28 April.
Others had noticed the issue too, judging from complaints left on the service’s message board last year, to which the only answer given was that the company was “actively working on indexing additional content”.
A phoenix may be rising from the ashes. Asked about the collapse, a spokesperson for Microsoft Research declined to address the problem directly, writing in an e-mail:
“Microsoft Academic Search (MAS) continues as a research project within Microsoft Research. Over the years, we have used the service as a mechanism to explore various challenges related to searching scholarly works, including author disambiguation, relative influence of publications, and graphs of related authors.”
But, he added:
“In parallel, Microsoft Research began an initiative on a next-generation version of MAS, which focuses on enhancing the user experience and evolving it from a research project to an integrated offering within Microsoft’s services portfolio. During this transition, Microsoft has maintained the features, functionality, and the ability for third parties to enter new and updated content into the existing search engine, but the majority of our focus has now shifted to this new initiative.”
He later clarified that the new version, yet to be released, would remain free. At one stage, the company had wondered whether to “evolve the service through third-party collaborators”, he said, but in the end decided to keep the product within Microsoft. The Spanish team notes that the lack of fuss about MAS’s sudden decline suggests not many people were actually using it.
Indeed, Google Scholar has far outstripped MAS by now. It can find about 99.3 million, or 87%, of an estimated 114 million English-language scholarly documents on the web, according to an estimate published last week by Lee Giles and Madian Khabsa at Pennsylvania State University at University Park (PLOS ONE 9, e93949; 2014). ‘Documents’ include books, technical reports and other grey literature, and the computer scientists estimated the number by combining results from Google Scholar and MAS.
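That 114-million total comes from the overlap between the two indexes. One standard way to combine two overlapping samples into a population estimate is capture–recapture (the Lincoln–Petersen method); the sketch below uses invented numbers, and the study’s exact estimator and inputs may differ.

```python
# Capture-recapture (Lincoln-Petersen) sketch: estimating the size of a
# population from two overlapping samples. Invented numbers for
# illustration; the PLOS ONE study's actual inputs and estimator may differ.

def lincoln_petersen(n_first, n_second, n_both):
    """Estimate total population size from two overlapping samples.

    n_first  -- documents found by the first index (e.g. Google Scholar)
    n_second -- documents found by the second index (e.g. MAS)
    n_both   -- documents found by both indexes
    """
    return n_first * n_second / n_both

# Toy example: if one index holds 99 million documents, another holds
# 48 million, and 42 million appear in both, the implied total is
# 99e6 * 48e6 / 42e6, i.e. roughly 113 million documents.
print(f"{lincoln_petersen(99e6, 48e6, 42e6):,.0f}")  # -> 113,142,857
```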
At least 24% of the documents are freely available, they added. In a score of well-known journals (those classified as ‘multidisciplinary’ under MAS, a group that includes not only Nature, Science, Proceedings of the National Academy of Sciences and PLOS ONE but also Nano Letters, Journal of Applied Meteorology, Journal of the Royal Society Interface and others), 43% are free, give or take an estimated error of 10%.
Even Google Scholar has its weaknesses, however, the team notes. One is that it doesn’t provide an automated way for computer programs to make searches in the tool through an application programming interface (API), so searches must be made by hand. It was only by using MAS’s API that the team could download and randomly sample documents for their survey. And of course, quantity is not necessarily quality: Google Scholar indexes more documents than do subscription products such as Thomson Reuters’ Web of Science or Elsevier’s Scopus databases — but it may not yet match their reliability.
It’s a common complaint among academics: today’s researchers are publishing too much, too fast. But just how fast is the mass of scientific output actually growing?
Many would throw up their hands and declare the question impossible. It’s clearly wrong to cite the growth of academic databases, such as Thomson Reuters Web of Science, which has increased its coverage by around 3% per year (barring occasions when the database incorporates a flood of new journals). That dramatically undercounts the true expansion: no database captures everything.
Bibliometric analysts Lutz Bornmann, at the Max Planck Society in Munich, Germany, and Ruediger Mutz, at the Swiss Federal Institute of Technology in Zurich, think they have a better answer. It is impossible to know for sure, but the real rate is closer to 8–9% each year, they argue. That equates to a doubling of global scientific output roughly every nine years.
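That doubling time follows from simple compound growth. A quick back-of-the-envelope check (ours, not the authors’):

```latex
% A quantity growing at a constant annual rate r doubles after t_d years:
%   (1 + r)^{t_d} = 2 \quad\Longrightarrow\quad t_d = \frac{\ln 2}{\ln(1 + r)}
% r = 0.08 gives t_d = 0.693 / 0.0770, about 9.0 years
% r = 0.09 gives t_d = 0.693 / 0.0862, about 8.0 years
t_d = \frac{\ln 2}{\ln(1 + r)}
```

So an 8–9% annual growth rate does indeed imply a doubling every eight to nine years.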
In a study to be published in the Journal of the Association for Information Science and Technology, and posted on the arXiv preprint server, Bornmann and Mutz find that global scientific output has probably kept up this dizzying rate of increase since the end of World War II. Other researchers say the study seems sound, although it is hedged with caveats.
“We identified three growth phases in the development of science, which each led to growth rates tripling in comparison with the previous phase: from less than 1% up to the middle of the 18th century, to 2 to 3% up to the period between the two world wars and 8 to 9% to 2012,” they write.