They’re calling it the Brown University “Center for Evidence-Based Medicine.” But the team of researchers migrating from Tufts University in the Boston area to Providence promises to apply its research methods to “other complex scientific questions.” Team leader Dr. Thomas Trikalinos offered details to Nature Boston.
NB: What scientific disciplines outside medicine lend themselves to meta-analysis and evidence-based practices?
Almost any scientific discipline has data from different studies that might be combined. One area that Dr. Lau and Dr. Schmid have worked in is ecology, where they are co-authors of a forthcoming book on meta-analysis in ecology.
Questions of interest in that field include estimating fish stocks by combining information taken at different locations and times, and evaluating the effect of different management strategies on the recovery of damaged ecosystems.
Other areas include: criminal justice (effect of different policing strategies on crime rates), education (effect of different methods for instructing special-needs children), psychology (memory aids for seniors), physics (reconciling different estimates of physical constants), economics (effects of microinvestment interventions), marketing (effect of advertising strategies), etc.
NB: Can you talk about how the common language of methodology crosses disciplines?
As in all interdisciplinary activities, each discipline has its own terminology because many of the methods have grown up separately in different areas. We are members of an interdisciplinary group of leaders in the field called the Society for Research Synthesis Methodology, whose aim is to promote the exchange of ideas across disciplines. Dr. Schmid is Co-Editor of its journal, Research Synthesis Methods, which publishes methods articles focused on tools that can be used across disciplines.
We recognize that many of the same tools simply go by different names and are used in slightly different ways. Each discipline crafts its tools for its own type of data, but in many cases those tools can also be applied in other disciplines where they may not yet be known.
NB: What challenges do you face by moving to a university without a medical center?
Our main work has concentrated in medicine and health care, although we have also worked in nutrition, ecology and economics. One of our challenges will be the loss of the very close connection to the clinical staff we have at Tufts. On the other hand, the Brown administration is very keen on our establishing close relationships with their teaching hospitals in Providence and we have had exceptional feedback from the clinical department chairs we have reached out to.
NB: Will the move make it easier to explore disciplines outside medicine?
Definitely. One of the major reasons for our move is the chance to work in a university environment with its emphasis on education and research and its broad array of scientific interests. We have already had positive discussions with the computer science department and look forward to talking with other departments. We hope eventually to establish an academic program in methods for research synthesis at Brown that would enable us to encompass all scientific disciplines.
NB: How has the path from meta-analysis to practice guidelines changed over the years?
Most clinical practice guidelines do not excel in using all the evidence that is pertinent to the questions they address. Empirical research shows that many guidelines fail to cite major relevant randomized trials, or any trials at all.
This trend is probably changing, and recent guidelines are increasingly “evidence-based”, in that they use a systematic approach to prioritize the questions they address, and systematic reviews and meta-analyses to summarize the pertinent evidence. In fact, the Institute of Medicine published standards for systematic reviews and clinical practice guidelines last year in order to promote better methods in both. Dr. Lau was on the guidelines committee and Dr. Schmid was on the standards committee.
Note that clinical practice guidelines aim to provide actionable and realistic recommendations for managing the care of individual patients. Evidence on the effectiveness and safety of interventions or tests is only part of the information a guideline panel considers; availability of and access to care, costs, patient and physician preferences, and other considerations should also be taken into account. That said, it is important that the evidence be summarized in a systematic way.
NB: What did the public’s response to the USPSTF recommendations on screening mammography suggest about evidence v. the public perceptions of disease?
The USPSTF’s recommendations are typically very well thought through. Their recommendations on screening mammography have not been an exception. Arguably, the USPSTF could have handled the communication of the recommendations to the public in a different way given the public’s sensitivity on the topic.
Most people do not realize that screening can cause harms, including stress associated with testing and with getting a positive result, and costs and harms associated with follow-up tests, procedures and treatments for those with a positive screening test. Exposing the general population to such harms should not be taken lightly, as the vast majority of those who are being screened are healthy. Further, because the conditions we screen for have very low prevalence, the false positives far outnumber the true positives. This is true even for established screening programs such as Pap testing for cervical cancer. This is why we have so few screening tests; they are the notable exceptions, where the benefits of screening outweigh its harms.
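The arithmetic behind this point is worth making concrete. The following sketch uses invented numbers (a 1% prevalence and a test with 90% sensitivity and 95% specificity, not figures from any real screening program) to show how false positives come to outnumber true positives when prevalence is low:

```python
# Illustrative: why false positives outnumber true positives when
# prevalence is low, even with a fairly accurate screening test.

def screening_counts(population, prevalence, sensitivity, specificity):
    """Return (true_positives, false_positives) for a screened population."""
    diseased = population * prevalence
    healthy = population - diseased
    true_positives = diseased * sensitivity        # sick and test positive
    false_positives = healthy * (1 - specificity)  # healthy but test positive
    return true_positives, false_positives

# Hypothetical numbers: 1% prevalence, 90% sensitivity, 95% specificity.
tp, fp = screening_counts(100_000, 0.01, 0.90, 0.95)
print(f"true positives:  {tp:.0f}")   # 900
print(f"false positives: {fp:.0f}")   # 4950
print(f"PPV: {tp / (tp + fp):.1%}")   # about 15%
```

Even with a test this accurate, roughly five of every six positive results in this hypothetical population are false alarms, which is exactly the asymmetry the answer above describes.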
NB: Can you talk about your group’s role in the study analyzing the effects of vitamin D and calcium?
Our group produced an evidence report on this topic through the Agency for Healthcare Research and Quality’s Evidence-based Practice Center program. The report was used by the IOM Dietary Reference Intakes (DRI) committee charged with reviewing the recommendations for vitamin D and calcium intakes. Our report summarized the evidence on the relationship between intake of vitamin D and calcium and 19 health outcomes, including heart disease, mortality, preeclampsia, and falls and fractures in the elderly. The IOM panel took into account our summary of the evidence and our analysis of the methodological rigor of existing studies in its deliberations for setting the DRI values.
NB: Can you give examples of how your work helped explain why similar studies sometimes find notably different results?
The catch to this question is how one defines “similar”.
Exactly identical studies should give the same answer. As researchers in this field conduct empirical research to evaluate the conduct and reporting of research, we often find that seemingly similar studies are not so similar after all.
For example, the inclusion criteria of two studies may be identical, yet the studies may end up enrolling two very different populations. The differences may stem from the countries in which the studies were conducted, from how investigators recruit patients, or even from the seasons in which the studies were run. Studies are also conducted with different rigor by different investigators. An intervention that appears to be the same could actually be many things. For example, “omega-3 fatty acids” have been promoted for various health benefits by various companies and the media. Only the long-chain omega-3 fatty acids found in fish or fish oil (EPA, DHA) have scientific evidence to support these claims; the short-chain omega-3 fatty acid found in plants (ALA) does not. This confusion may be a deliberate marketing tactic. Furthermore, some researchers may selectively report only positive results.
Some of the early work that we did focused on the effect that differences in underlying baseline risk had on the effect of treatments. We published a paper examining the differences between meta-analyses and large clinical trials. Although the two approaches generally agreed, we found that a considerable proportion of the disagreements arose when the large trial was conducted either after many of the smaller trials that made up the meta-analysis or in more inclusive populations. As a result, the rate of bad events was often lower in both the treatment and control groups of the big trial compared with the smaller trials. Consequently, the treatment effect was reduced in the big trial because there was less disease to cure. In other situations, the control itself changed as the standard of care improved; the control became a more effective intervention and the treatment effect was reduced.
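As a generic illustration of the pooling machinery these comparisons rest on (not the group’s own software, and with invented trial numbers), here is a minimal fixed-effect, inverse-variance meta-analysis of log odds ratios:

```python
import math

def pool_fixed_effect(estimates, variances):
    """Inverse-variance weighted fixed-effect pooled estimate and variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_variance = 1.0 / sum(weights)
    return pooled, pooled_variance

# Invented log odds ratios and their variances for three small trials.
log_ors = [-0.60, -0.45, -0.70]
variances = [0.10, 0.08, 0.12]

pooled, pv = pool_fixed_effect(log_ors, variances)
half_width = 1.96 * math.sqrt(pv)
print(f"pooled OR: {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(pooled - half_width):.2f}"
      f"-{math.exp(pooled + half_width):.2f})")
```

Because each study is weighted by the inverse of its variance, a single large trial can dominate the pooled estimate; when that trial was run later, or in a lower-risk population, the pooled result shifts toward it, which is one mechanism behind the disagreements described above.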
Another example: p53 is a genetic variation whose association with several cancers, such as lung, breast, and ovarian cancer, has been examined in many studies by comparing the frequency of the genetic marker in cancer patients versus non-cancer controls. Studies assessing p53 in non-cancerous tissue for both patients and controls find no association with cancer. By contrast, studies that measured p53 in cancerous tissue find a huge association. But the latter are flawed, as measurements of p53 in the tumor are known to disagree with measurements in non-cancerous tissue.
NB: What kind of negotiating did it take to move an entire team from one research center to another? Was it an amicable divorce? Two weeks’ notice? A mass desertion?
We had had discussions at Tufts about establishing an academic program in quantitative synthesis, but it was apparent that its establishment was not imminent and that many competing interests across campus would have to be satisfied before such an endeavor could receive clearance. We explored opportunities at other local universities, and the response from Brown was immediate and extremely positive. The Public Health leadership understood the importance of evidence-based medicine and the impact it could have on advancing their strategic goals. They identified evidence-based health care as an area of desired growth in Public Health, both for additional research expertise and for educational program development, particularly as the program is poised to become a School of Public Health during the upcoming year. They see evidence-based medicine as a distinguishing characteristic for the new School. When we spoke with the Provost, he immediately grasped the application of our methods to other disciplines and could envision the impact we could have on Brown’s academic environment.
From a negotiating point of view, there were very few hurdles, as we and Brown agreed on the broad outlines and the details very quickly. The whole offer came together within about a month, facilitated by the enthusiasm of Brown’s leadership.
We did not negotiate at Tufts, but informed them that we would be leaving. Although all at Tufts were disappointed in our departures, they understood the allure of a more academic environment and there have been no issues. Both sides are working to make the transition as smooth as possible. We have also committed to finishing our work at Tufts. Dr. Schmid will continue to advise four graduate students and collaborate with two Tufts research groups he has worked with for many years; Dr. Lau will continue as EPC head until the end of the current contract in September. All of us hope to continue collaborations with Tufts in the future.
NB: How will the ongoing health reform effort – including the move toward global payments and accountable care organizations – make it easier for providers to implement evidence-based practices?
Built into the healthcare reform act is the establishment of the Patient Centered Outcome Research Institute (PCORI). It says on its home page: “PCORI is an independent organization created to help people make informed health care decisions and improve health care delivery. PCORI will commission research that is guided by patients, caregivers and the broader health care community and will produce high integrity, evidence-based information.”
We are optimistic that evidence-based practice is here to stay.