Discrepancies in serious adverse event reporting may distort the medical evidence base


The worst thing that can happen to a person participating in a clinical trial is what’s known as a ‘serious adverse event’, which can describe anything from permanent kidney damage or liver failure to hospitalization or even death. Federal law in the US mandates that researchers conducting trials of drugs or other products regulated by the country’s Food and Drug Administration (FDA) report adverse events on ClinicalTrials.gov, a data repository open to the public. But a new study shows that many of these serious adverse events don’t appear in medical journals, making some interventions seem more favorable than they may actually be.

Reporting online today in the Archives of Internal Medicine, a group of researchers led by Daniel Hartung, a drug safety and policy analyst at Oregon Health & Science University in Portland, looked at how the data reported on ClinicalTrials.gov stack up against the results published in the medical literature. The team limited their focus to phase 3 or 4 trials with results reported on ClinicalTrials.gov and completed prior to 2009, to allow sufficient time for the trials’ results to be published in medical journals. Hartung’s group then randomly selected 10% of those trials that had matching publications, yielding a total of 110 trials.

Hartung’s team found that 33 of the trials reported a greater number of serious adverse events on ClinicalTrials.gov than in the medical literature. For example, a 13,608-person study comparing the blood-thinning drugs Effient (prasugrel) and Plavix (clopidogrel) reported in the online database a total of 3,406 serious adverse events among all participants in the trial, and 3,082 in a related publication. (The patients in the trial were at high risk of heart attack, and were undergoing angioplasty, so it’s important to note that these adverse events were not necessarily linked to the drugs.)

Of the 84 trials that reported the occurrence of serious adverse events in the public database, 16 of the matching publications either failed to mention them or incorrectly reported that they did not occur. (Notably, 5 trials actually reported more serious adverse events in related medical papers than they did in the public database.)


On the road at #SfN13 – Tackling the terabyte: how should research adapt to the era of big data?

If you’re attending the Society for Neuroscience meeting this year (#SfN13), join us for our panel discussion: ‘Tackling the terabyte: how should research adapt to the era of big data?’

When: Monday, November 11, 6:30 – 8:30 p.m.

Where: Hilton San Diego Bayfront, 1 Park Blvd, San Diego, CA 92101  

Room: Sapphire 400


Researchers less willing to share study details, according to journal’s survey

Researchers are increasingly reluctant to share the background details of their studies with other scientists, according to new results from a survey of authors who published papers in the Annals of Internal Medicine in the last five years. This downward trend in researchers’ willingness to disclose such information is, unfortunately, at odds with the current surge in efforts to facilitate access to the types of study specifics that are vital to reproducing results.

Increasing transparency in research—by sharing the nitty-gritty details of studies that don’t make it into the published accounts, such as preliminary qualifying test results for clinical trial participants—is a hot topic. A report released on 29 March by the Institute of Medicine (IOM) based in Washington, DC, entitled “Sharing Clinical Research Data: A Workshop,” concluded that giving other scientists access to information from studies was increasingly important for the research community. But the report did acknowledge that researchers sometimes have fears that the data they share, for example clinical results, might be misused or misinterpreted if not enough attention is given to how the data were originally collected.

“The biomedical industry lags behind the rest of the world in how we share information,” says Sharon Terry, chair of the IOM workshop committee and president of the Genetic Alliance, a Washington, DC-based health advocacy group that focuses on issues related to gene testing. “We need to catch up with the other industries that have figured out ways to share data and still protect it.”

In the new survey, a majority of researchers said that they would be willing to share study materials with their colleagues, according to the results presented by Christine Laine, the editor-in-chief of the Annals of Internal Medicine, at the International Congress on Peer Review and Biomedical Publication held in Chicago today.

The findings came from theoretical questions answered by 389 respondents who published papers in the Annals of Internal Medicine between 2008 and 2012. During that period, 71% said they would share their study protocols beyond what was in the methods, and 72% were willing to share the full statistical methods used to analyze the data, including the computer algorithms employed. However, only 54% were willing to share all the data collected during the study, including information that didn’t end up in the final report. Most of the researchers who answered the survey questions also added extra provisos under which they would share these types of information; for example, some would only do so in response to a personal request from an interested party (rather than depositing the information in, say, an online data bank).

Downward trend

What concerns the surveyors most is that over the five-year survey period the responses showed a noticeable decline in scientists’ willingness to share details about their study protocols. Based on the replies to the theoretical questions, around 80% of the survey respondents in 2008 said they would share additional details about their study protocols, beyond what is described in the methods section of the paper, but by 2012 only 60% were willing to provide colleagues with that information, a significant difference. There was a similar—but not statistically significant—slip in researchers’ willingness to share data. When the survey started, about 60% of researchers said they’d share raw data, but five years later that number had dropped to 45%.


Immunologist effort aims to improve hyperlinking of research papers to raw data

A study published today in the New England Journal of Medicine reports that people suffering from ANCA-associated vasculitis, a disease in which the body attacks its own defense system, can now be effectively treated with one month of weekly infusions of rituximab, instead of the standard 18-month regimen with daily pills of cyclophosphamide, which has strong side effects. But that is not the only thing that makes the report noteworthy. According to its authors, the study is the first to contain hyperlinked charts or graphs that redirect users to an information-sharing system called TrialShare, where they can instantly access data amassed during this clinical trial and others.

The interactive platform was developed by the Immune Tolerance Network (ITN), a consortium of clinical researchers who study everything from allergies to organ transplants. The ITN is headquartered in Seattle, Washington, and funded by the US National Institute of Allergy and Infectious Disease, in Bethesda, Maryland.

Even without an online version of the article, anyone can sign on to the website, at www.itnTrialShare.org. However, the site is geared toward researchers who want access to information acquired during a study—for example, the twice daily blood pressure readings of patients that a researcher might have reported as an average instead of logging every measurement—which may not have made it into the published paper or supplementary material. Creators of the website emphasize that the information about the patients is anonymized.

This isn’t the first effort to put more information about clinical trials in the hands—and minds—of more people, with the hope of improving the design of future studies and avoiding redundant work. Some groups, such as the Biomarkers Consortium, based in Bethesda, Maryland, and founded by a combination of private and public agencies that include the US National Institutes of Health (NIH), focus on specific collaborations that share research data only among their partnership members. TrialShare, by comparison, is open-access.

Other databases, such as the Gene Expression Omnibus (GEO) run by the NIH’s National Center for Biotechnology Information, act as repositories for data, but they lack the clinical data seen in the TrialShare site.


UPDATED: GSK inquiry reports signs of possible data fabrication in multiple sclerosis paper

An inquiry by the British pharmaceutical company GlaxoSmithKline (GSK) into allegations of possible data fabrication in a 2010 Nature Medicine paper regarding the role of specialized T cells in autoimmune disease has found what it sees as evidence of misconduct. Concerns regarding the paper surfaced last week, when news sources reported that the company had begun investigating the research conducted for the study at a GSK lab in Shanghai.

The paper, led by Jingwu Zhang at the GlaxoSmithKline Research and Development Center’s department of neuroimmunology in Shanghai, originally claimed to have found data suggesting that the signaling molecule interleukin-7 caused a subset of T cells known as T helper 17 (TH17) cells taken from people with multiple sclerosis to multiply. The finding complemented other research in the field suggesting that genetic differences in the cell receptor for interleukin-7 might put some individuals at risk for developing multiple sclerosis—an autoimmune disease thought to involve helper T cells.


Yale immunologist wins new €4 million award

Ruslan Medzhitov

Image credit: Brian Ach/HHMI

Most scientists will say that they go to the lab every day out of a pure love of science, not to make buckets of money. But for researchers at the pinnacle of their fields, science can be a lucrative trade. Win a Nobel Prize, and you could take home more than $1.2 million. Bag a Templeton Prize, and you could be depositing a $1.7 million check. Net a Breakthrough Prize in Life Sciences, first awarded earlier this year, and you’d walk away with a cool $3 million.

But that’s nothing compared to the €4 million ($5.1 million) purse attached to the Else Kröner-Fresenius Award, a new prize handed out today by the German non-profit Else Kröner-Fresenius-Stiftung (EKFS). Although €3.5 million of the prize money is intended for future research (leaving only €500,000 for the recipient to use as he or she pleases), the total value of the new award makes it the most valuable single accolade in all of science, monetarily at least.

That accolade was given to immunologist Ruslan Medzhitov, a Russian-born scientist at Yale University in New Haven, Connecticut, who co-discovered and characterized mammalian Toll-like receptors (TLRs) in the 1990s. These pattern recognition molecules are now recognized as integral parts of the innate immune system that fight off microbial infections and detect associated damage. Many drug companies are actively targeting these receptors in the hopes of treating cancer, sepsis and inflammatory disease.

Two years ago, Medzhitov (pictured) was controversially overlooked for the 2011 Nobel Prize in Physiology or Medicine, which went to the discoverer of dendritic cells (Ralph Steinman) and two other immunologists who elucidated key aspects of innate immunity (Bruce Beutler and Jules Hoffmann, with whom Medzhitov shared the 2011 Shaw Prize in Life Science and Medicine, the $1 million ‘Nobel Prize of the East’). At the time, 24 scientists wrote an open letter in Nature arguing that Medzhitov and his mentor Charles Janeway, who died in 2003, should have been recognized by the Nobel Committee for their seminal contribution of cloning a human TLR and showing that it activated signaling pathways that induce adaptive immunity.

However, according to Stefan Kaufmann, director at the Max Planck Institute for Infection Biology in Berlin, the Nobel snub had no effect on Medzhitov’s selection for the new award. Medzhitov “was clearly one of more innovative researchers,” says Kaufmann, who, as president of the International Union of Immunological Societies, served as chair of the award’s executive committee. Plus, he notes, the Else Kröner-Fresenius Award recognizes both past achievements and ongoing research activity, and Medzhitov has an active research program that could aid in the development of new vaccines and anti-inflammatory medicines. (See this commentary that Kaufmann cowrote last year in Nature Immunology for more background on the award.)

The inaugural immunology-themed award was timed to commemorate the 25th anniversary of the death of EKFS founder Else Kröner. Going forward, the foundation expects to grant the award every four years to a different discipline of medical research.

Reviewing gender

Original image courtesy of Stuart Miles / FreeDigitalPhotos.net

We’re back! Apologies for the long radio silence – day job, what can I say.

Last week Nature published a leader reflecting upon our performance as editors and journalists in the gender balance of our referees, commissioned authors, and journalistic profiles. The verdict? Plenty of room for improvement – in 2011, only 14% of Nature’s 5,514 manuscript referees were women. Those numbers are for all areas, both physical and life sciences. I don’t have the exact number for just neuroscientists, but a quick partial analysis suggests it is in the same ballpark. How good/bad is 14%? According to a 2007 survey of North American neuroscience programs, 36% of neuroscience assistant professors, 28% of associate professors, and 21% of full professors are women. I don’t know what those percentages would be if you included neuroscientists from the rest of the world (I’m guessing they would be lower), but I am fairly confident in saying we haven’t been grossly overrepresenting women in our referee picks.

So how do we choose our referees?

The harder they fall

Pretty busy week over at the JAMA offices. First came the report that one of its editors had called a whistleblower a “nobody and a nothing” (blogs.wsj.com/health/2009/03/13/jama-editor-calls-critic-a-nobody-and-a-nothing/), a report that was accompanied by a pretty long series of comments from outraged readers.

Then came the journal’s decision to modify its policy on conflicts of interest. Crucially, the new policy states that “The person bringing the allegation will be specifically informed that he/she should not reveal this information to third parties or the media while the investigation is under way, will be informed about progress of the investigation, upon request, as appropriate, and will be notified when the investigation is completed.”

Ha! I’m sure that those New York Times and Wall Street Journal reporters will be delighted to hold on to their stories before breaking the news that a fresh conflict-of-interest case has come to light. I’m also sure that next time you discover an unreported conflict, you will first inform the journal and wait as long as needed for it to take remedial action, instead of bringing the conflict to the attention of the author’s institution or funding body — what authority do these other people have, anyway?

Not surprisingly, several media outlets have already put their own spin on the way they are reporting this policy change, and they don’t seem impressed by it.

No-one would deny that JAMA has been a leader in raising awareness about conflicts of interest, discussing them perhaps to the point of eliciting a certain desensitization — is anyone surprised when the journal expresses, yet again, the view that conflicts of interest should not be tolerated? Alas, despite its track record, the events of the past week undermine the credibility of the journal’s position on this front.

To my mind, the way in which this whole controversy has escalated is related, in no small measure, to the overzealous way in which JAMA has always decried conflicts of interest. In other words, the tough line that JAMA has taken against conflicts of interest makes the journal much more susceptible to embarrassment when one emerges. Or as the saying goes, the higher they climb, the harder they fall. The latest policy change would seem to be saying that it’s always possible to climb a little higher.

Photo by SparkyLeigh via Flickr

We want your paper!

This story in The New York Times got me thinking about how similar high-end restaurants and scientific journals have come to be of late.


Photo Illustration by Tony Cenicola/The New York Times

The article reports that expensive restaurants are no longer playing hard to get and have decided to offer great deals in order to attract customers. I seem to recall that I read a similar story about British restaurants, but cannot find the link. In any case, the reason why I say that this looks a lot like what’s happening with scientific journals is that publications seem to be doing everything they can to attract potential authors. For example, according to this blog entry at The Scientist, the Journal of Biology gives authors the option of asking the journal to publish their revised paper without asking the original reviewers to comment on the suitability of the revisions made in response to their critiques.

It seems that the editors of the journal will “carefully scrutinize revised manuscripts,” and if the authors addressed “substantive issues,” the journal will publish the article with an accompanying editorial in which any problems with the paper will be flagged. Sure, authors may be happy with this arrangement, but what about the reviewers? I don’t know about you but, if I were asked to review a paper for this journal, I’m not sure I would be very keen on lending a hand if I won’t have a chance to engage in a dialogue with the authors.

In another example, one of my colleagues at NPG was telling me that a relatively visible cell biology journal has this fast-track system in which members of the editorial board internally referee a paper in less than 2 weeks, only asking for essential controls. Not surprisingly, people in a hurry love this ‘rapid communication’ system. After all, why bother with further experiments to bolster an author’s conclusions?

Then there’s the journal that’s redefining what it means to publish an article — PLoS One. In this case, the only thing that matters is that the paper be technically sound to merit publication. It doesn’t matter if it’s an incremental advance or something not particularly new. As long as the experiments were properly done, the paper will be published. This is actually a very clever model, and I strongly suspect that it will turn up the heat on a lot of specialized journals that publish very thin slices of the scientific salami.

Think about it: if you’re a neuroscientist and your paper didn’t make it into Nature, Science, Nature Neuroscience or Neuron, how much further down the pecking order will you go before you stop caring? The Journal of Neuroscience is a very decent journal, and many of us would still be OK with a paper there. Some of us may go one notch below but, really, very quickly you will want to see the back of that study and just have it published anywhere. PLoS One is therefore an excellent option if your paper didn’t make it into one of the vanity journals, as it will be very visible and freely accessible. My prediction is that very soon this journal will start taking a lot of business from the more specialized journals in every discipline.

There is a problem for the vanity journals, though. If people can publish their work in a decent place like PLoS One, the reputation of which is steadily growing, they will be less inclined to do the hard experiment that will get them a high-profile paper in a vanity journal. This is, of course, bad news for my journal and other highly visible titles. But more worryingly, it might be a bit of a problem for the advancement of science in general, as it isn’t hard to imagine that many scientists may shift into a “complacent mode” in which they cease to ask their staff and themselves to go that extra mile that will turn their study into something really satisfactory. In other words, I can imagine them thinking “why should I do all those experiments that the Nature Medicine referees asked for when I could immediately go to PLoS One and have this part of the story out?”

Don’t get me wrong, though. I don’t mean to insult PLoS One, which strikes me as a legitimate option to disseminate your work. Here I’m trying to make a broader point about the effect that shifting publication standards can have on science at large. In this regard, it may be illustrative to recall the example of PNAS, a journal that, in its heyday, was regarded as a very high-profile publication. I’ve heard many people (including some members of the PNAS editorial board) complain about the fact that members of the National Academy of Sciences get to publish their work very quickly, after a not-so-stringent peer-review process. I think it’s fair to say that PNAS no longer carries the weight that it used to, and it’s also true that its club-style approach to accepting papers hasn’t been beneficial for the publishing community or for science in general.

The push for attracting papers seems to be so hard that it’s also beginning to affect the vanity journals. Cell, for example, just published this editorial in which Emilie Marcus states that “While some may think the work of an editor is mainly to reject papers, we have found that to achieve our vision for the journal the most important task for an editor is to be an enthusiastic advocate for science and to actively define what is interesting and important to publish—in essence to accept papers.” So, in other words, if you send your paper to Cell you will find an advocate of your science who will try to work with you in order to get the paper where it needs to get.

Emilie is right in that those papers that are potentially interesting but somewhat premature are to be nurtured, and this is something that editors must always try to do — at Nature Medicine we certainly do so. What she fails to mention is that those potentially great papers are so infrequent that, alas, the vanity journals will continue churning out many more rejection letters than letters of encouragement. Be that as it may, as a strategy to get people to submit to their journal, I’m sure the Cell editorial will be very effective.

Even our firm is beginning to experiment with new ways to make a rejection letter from a Nature-branded journal less painful. I don’t think I’m at liberty to discuss the plan in detail, but it is consistent with this global strategy of working in favor of the author, as opposed to asking them to do the hard experiment.

It’s difficult to predict where this whole trend is going to end but, just in case, I’m asking our art editor to print a couple of poster boards like those that top chefs Mario Batali, Sirio Maccioni and Jean-Georges Vongerichten are wearing in the picture above. My plan is to carry the boards with me at every scientific meeting I go to, hoping to attract one or two submissions per trip. You won’t believe our deals — I guarantee it!

Mine is larger than yours

A dear friend of mine sent me a link to this page, which shows the “h indices” of what the author of the page refers to as the “best Spanish scientists”. The page is a bit difficult to navigate if you don’t know Spanish, but it doesn’t matter; I’m sure that if you have the time and inclination, you will find a similar page in your language and for the nationality of your choice.

The reason for bringing it up has to do with the raison d’être of the h index — to quantify an individual’s scientific research output. The h index was originally introduced by J. E. Hirsch, from UCSD, in this paper and, briefly, his proposal was that a scientist has an index of h if h of his/her papers have at least h citations each, and the rest of his/her papers have no more than h citations each. In his paper, Hirsch argues why this measure is preferable to other criteria, and ends up suggesting that “this index may provide a useful yardstick to compare different individuals competing for the same resource when an important evaluation criterion is scientific achievement, in an unbiased way”.
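Hirsch’s definition lends itself to a very short computation. Here’s a minimal Python sketch (the function name and the example citation counts are my own, purely illustrative): sort the citation counts in descending order and find the largest rank h at which the h-th paper still has at least h citations.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have >= rank citations
        else:
            break
    return h

# A scientist with papers cited 10, 8, 5, 4 and 3 times has h = 4:
# four papers have at least 4 citations each, but not five with at least 5.
print(h_index([10, 8, 5, 4, 3]))  # → 4
```

Note how the metric rewards a sustained body of well-cited work rather than one blockbuster: a single paper with 1,000 citations still yields h = 1.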

I don’t know how many people have bought into this index but, needless to say, like any such metric, it has limitations. For example, if you’re the technician of a lab that has a bunch of highly cited papers and you’re always included in the middle of a long list of authors, does your massive h index turn you into one of those “best scientists”?

In any case, its limitations notwithstanding, I thought I would share it in order to stimulate our insatiable appetite for ways to measure the quality of what we publish. Ready to go check if yours is larger than your neighbor’s?

Image by Brett L.