This weekly Nautilus column highlights some of the online discussion at Nature Network in the preceding week that is of relevance to scientists as authors.
The Nature Network week column is archived here.
The Citation in Science group’s discussion is continuing after the meeting at the British Library last Tuesday (27 May), which was attended by a range of scientists, publishers, funders and others. Please join this group to provide your views on the use and misuse of citations in assessing research output. Forum topics include Metrics: the art of counting, a useful summary by Ian Mulvany of how metrics such as the Impact Factor and the H-index are calculated. Ian concludes: “David Colquhoun’s admonition that we should just read the papers to determine the quality of the work is a great ideal, and it is certainly the only way that scientists can determine the value of the contributions of their peers in the published literature. However, there remain many of us who are interested in what is happening in science who are not conversant with the details of particular fields, and we depend on derivative indicators. Beyond the published literature there are many growing areas of contribution that at present are almost totally ignored.” (See also Turning web traffic into citations, a post by Noah Gray at Action Potential, the Nature Neuroscience blog.)
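As an aside for readers unfamiliar with the metric under discussion, the H-index has a simple standard definition: it is the largest number h such that an author has h papers each cited at least h times. This minimal Python sketch (not from Ian's post; the function name and example citation counts are illustrative only) shows the calculation:

```python
def h_index(citations):
    """Return the largest h such that h papers have >= h citations each."""
    cites = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, count in enumerate(cites, start=1):
        if count >= rank:  # the paper at this rank still has enough citations
            h = rank
        else:
            break
    return h

# An author with papers cited 10, 8, 5, 4 and 3 times has an H-index of 4:
# four papers each have at least four citations, but not five papers with five.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

As the forum discussion notes, such indices are easy to compute but say nothing about why a paper was cited, which is part of the objection to using them for assessment.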
In another discussion on the forum, David Colquhoun asks whether publication metrics are appropriate for assessing people and/or institutions: “There are three separate problems that need to be kept distinct.
(1) Are any sort of publications metrics suitable for assessing people?
(2) Are any sort of publications metrics suitable for assessing institutions?
(3) How accurately can each sort of metric be measured?
There is little point in discussing (3) unless the answer to (1) or (2) is yes. It is very easy to see that the answer to (1) is no, simply by applying the proposed measure to someone who commands universal respect in your own field. The answer to (2) is perhaps more difficult. The argument against using methods like that is partly their undemonstrated worth, but also the distortion of science that their imposition will undoubtedly produce. The pressure to produce cheap headline-grabbing work will be enormous. The long-term reputation of science will surely be damaged by this sort of bean-counting approach.”
In brief, some other Nature Network news:
From Richard Grant’s recently renamed Lo Scienziato blog, a conversation on the nature of networking and the ways in which scientists communicate in order to collaborate (and other, unsummarizable topics).
William Burns asks “Are presubmission enquiries useful?” (https://network.nature.com/forums/goodpaper/1655). I have replied on behalf of the Nature journal editors, the short answer from our perspective being “not really”: the editors prefer to read a full paper at initial submission, rather than just an abstract, for the reasons provided in my reply.
In a forum post, James Millington highlights an article, A Young Scientist’s Guide To Gainful Employment, containing “wise words for any junior researcher starting out on their academic career. It’s written with ecologists and biologists in mind but much of the advice is likely to apply to other fields.”
Bob O’Hara draws attention to a paper on “turning tables into graphs” and asks readers whether they find it helpful to present data as figures rather than as tables in their papers.