The J-factor

(UPDATE: apparently Thomson ISI have already thought of this – you can calculate an h-index for any search you run on Web of Science – journal, author, keywords, etc…)

I’ve been mulling this one over for a while. It may mean nothing, it may mean something, I’ll let you decide.

You may have heard of the h-index: it is the largest number h such that h of a scientist’s papers have each received at least h citations, i.e., if 20 of your publications each have at least 20 citations, your h-index is 20. To get up to an h-index of 21, not only does your 21st paper need to receive 21 citations, but any of those first 20 papers still sitting on exactly 20 citations need to reach 21 as well. So, as you can see, increasing your h-index by just one point is not necessarily an easy thing to do, and the bigger the number, the harder it is to increase (at least that’s how I see it).
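
For illustration, here’s a minimal Python sketch of that calculation – the function name and the citation counts are mine, not any official implementation:

```python
# A minimal sketch of the h-index calculation described above.
def h_index(citations):
    """Return the largest h such that h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i  # the i most-cited papers all have at least i citations
        else:
            break
    return h

# 20 papers with at least 20 citations each -> h-index of 20
print(h_index([25, 24, 22, 21] + [20] * 16 + [3, 1]))  # 20
```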

All in all, it seems to be a measure of consistency. Rather than just counting the number of papers someone has published – which may include many mediocre ones that are rarely cited – the h-index takes into account how much impact a body of work has had on the community, based on citations.

It got me thinking: how about doing this for journals? Yes, we have the impact factor and its associated numerical wizardry, but does that measure consistency (it may well do, I’m just asking…)? As you probably know, the impact factor of a journal for any given year – let’s say 2004 – is calculated by taking the total number of citations received in 2004 by all papers published in that journal over the preceding two years (i.e., 2002 and 2003) and dividing it by the total number of papers published in that period. So, if The Journal of Marvellous Research published a total of 100 papers in the period 2002-2003, and in 2004 those papers are cited a total of 1500 times, its impact factor is 15.
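
Again, a minimal sketch of that arithmetic, using the made-up journal from above:

```python
# A sketch of the impact-factor arithmetic described above; the function
# name and numbers are illustrative, not any official implementation.
def impact_factor(citations_this_year, papers_previous_two_years):
    """Citations received this year by the previous two years' papers,
    divided by the number of papers published in those two years."""
    return citations_this_year / papers_previous_two_years

# The Journal of Marvellous Research: 1500 citations in 2004 to the
# 100 papers it published in 2002-2003 gives an impact factor of 15.
print(impact_factor(1500, 100))  # 15.0
```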

Here’s my problem with that – what if most of those 1500 citations come from two or three review articles? Or five or six really good primary research papers? In other words, perhaps The Journal of Marvellous Research has published a few gems, but the rest is not quite up to scratch? So, how about an h-index for journals – the J-factor?

So, I thought I’d do a little bit of number crunching. Let’s compare three journals: JACS from the ACS, Angewandte from Wiley and ChemComm from the RSC.

Let’s take the five-year period from 2000-2004 and calculate a J-factor, i.e., the largest number J such that the journal published J articles during that time which have each received at least J citations.
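
In code terms, the J-factor is the same calculation as before, just run over a journal’s articles rather than one author’s papers; a toy sketch reusing the h_index function from above (the citation counts are invented):

```python
# The J-factor is just the h-index applied at the journal level: feed the
# h_index() sketch from above the citation counts of every article the
# journal published in 2000-2004 (toy numbers, purely illustrative).
journal_citations = [300, 150, 90, 80, 75, 60, 40, 12, 5, 2]
print(h_index(journal_citations))  # 8: eight articles with >= 8 citations each
```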

Here are the results:

JACS 133 (papers published 13,606)

Angewandte 114 (papers published 5,423)

ChemComm 75 (papers published 6,655)

Here are the impact factors for 2005 by comparison:

JACS 7.419

Angewandte 9.596

ChemComm 4.426

So, JACS has a lower impact factor than Angewandte, but a higher J-factor… does this mean anything? I’m not sure. Obviously JACS published far more papers, but sheer volume alone can’t explain it – after all, ChemComm published more papers than Angewandte in that five-year span, yet has a significantly lower J-factor.

Just to be cheeky, here are the 2000-2004 J-factors for two other journals:

Nature 301 (papers published 13,679)

Science 287 (papers published 13,433)

…and compare this to the 2005 impact factors:

Nature 29.273

Science 30.927

So, I’m not really sure what this all means, if anything. The J-factor is a constantly shifting metric: the 2000-2004 J-factors for all of these journals may be different tomorrow, will likely be different next week, and will certainly be different next year. Will they change in proportion to one another? That’s something else to watch.

And remember, you can prove anything with statistics…

Stuart

Stuart Cantrill (Associate Editor, Nature Nanotechnology)


Research Roundup: This week’s papers from Boston labs

Surveying lava flows on the ocean floor and surveying conflicts of interest in clinical research

Pat McCaffrey

Present at the creation: Researchers capture spreading of sea floor

An underwater volcanic eruption that wiped out most of an array of seismometers placed on the Pacific Ocean floor turned into a boon for geologists and biologists studying the formation of the earth’s crust.

Researchers from the Woods Hole Oceanographic Institution (WHOI) in Falmouth, MA, were part of a group that got the first look at a new patch of ocean floor after the eruption in January 2006. The event gave scientists their first measurements of seismic activity immediately preceding a major lava flow. Their results, published online in Science, will help them predict future underwater eruptions and study the cycle of events responsible for creating most of the earth’s surface.

Formation of new crust in the deep sea occurs along ridges where two tectonic plates are moving apart, leaving gaps for molten lava to escape. In 2003, Maya Tolstoy from Columbia University led a team to install a dozen ocean bottom seismometers directly on a ridge 400 miles south of Acapulco, Mexico. The team had been watching the area ever since an eruption in 1991 and, knowing that lava bursts occur there roughly every 10 years, placed the instruments in hopes of capturing seismic data leading up to the next event.

When the researchers visited the site in 2005 to service the instruments, they found that almost all the seismometers were dead or stuck in the ocean floor. They immediately guessed that the instruments were victims of a recent lava flow. Water samples showed high turbidity and methane content, symptoms of an eruption.

WHOI scientist Adam Soule and colleagues then participated in two “rapid response” expeditions. In the spring of this year, the scientists towed cameras over the ocean floor, confirming a new lava flow more than 11 miles long and nearly two miles wide. Seismic data from the two surviving instruments showed the eruption occurred over a six-hour period on January 22.

[Image: An ocean bottom seismometer stuck in the hardened lava flow. Source: Ridge 2000 research program]

Those expeditions gave biologists a unique opportunity to study newly formed hydrothermal vents and complex vent ecosystems. The researchers are now back on site. This time, they’re using a manned submersible vehicle to get a closer look at the new piece of ocean real estate.


IRBs packed with potential conflicts of interest

More than one-third of the members of institutional review boards (IRBs), the committees at research institutions that discuss and approve experiments involving human subjects and new treatments, have financial ties to the drug or medical device industries, according to a new nationwide survey from Eric Campbell and colleagues at Massachusetts General Hospital’s Institute for Health Policy.

The poll, reported yesterday in the New England Journal of Medicine, found that while most IRB members do not believe that a relationship with industry influences their judgment, some reported participating in IRB discussions and votes despite possible conflicts of interest.

The anonymous survey of 574 IRB members at medical schools and hospitals found that 36 percent of respondents held positions as consultants, officers, directors, scientific advisory board members, or paid speakers, or received royalties or research support from industry. Fifteen percent said their IRBs had handled issues where they might have had a conflict of interest, and half of those said they had freely engaged in those discussions and votes.

Rather than being concerned about industry ties, a third of the survey participants reported that having colleagues with firsthand knowledge of the drug industry was a large benefit when discussing company-sponsored experiments. Many were also unfamiliar with their own IRB’s policies. Half of the respondents did not know if their IRB had a formal definition of conflict of interest, and only 67 percent knew of the process for disclosing industry ties.
