Can technology make science look better than it is?

Over the weekend I read an interesting article in last week’s issue of the New Yorker (unfortunately, the article isn’t online, so I’ll do my best to summarize it) about the use of functional MRI (fMRI) to detect whether someone is lying…basically, a more ‘high-tech’ version of the polygraph lie detector.

Overall, I thought the article presented brain scans for lie detection in a suitably skeptical tone. Only a handful of studies have been done; they have been quite small, conducted under very artificial conditions, and the results haven’t exactly been convincing.

But that hasn’t stopped a couple of startup companies, including one outside Boston called Cephos, from developing the technology, licensed from major US universities. Two local neuroscientists, MIT’s Nancy Kanwisher and Harvard’s Steve Hyman, are cited as critics of this rush to commercialize an unproven use of fMRI technology.

One provocative point the article made was that this premature enthusiasm for the technology in lie detection stems partly from the fact that MRI has made some scientists, and the public, overconfident in science’s ability to ‘read the mind’ and to understand and measure the underpinnings of human thought and emotion. Brain scans are high-tech, so they can make studies look like “hard-core” science even when the studies are not very rigorous and rest on some rather dubious hypotheses.

This reminded me of a story (https://network.nature.com/boston/news/articles/2007/03/22/parrot-talks-counts-and-helps-researcher-raise-money-too) we ran a few months ago, in which a leading cognitive psychologist was quoted as saying that there’s a real bias against research on the brain that doesn’t include MRI scans…that it’s very difficult to get published or funded unless you include some nice-looking brain scans in your work.

But isn’t this pro-technology bias pervasive across many fields of science and biology? Are there other examples where the use of the latest ‘cool’ technology (be it high-throughput screening, computer modeling, informatics tools, sequencing, imaging, or whatever other buzzword you can think of) has helped attract more and bigger grants (because the technology is so expensive, of course) by making the science look more advanced, rigorous, objective, and powerful than classical techniques? Is that justified? No doubt, technology has enabled new discoveries. But is this happening at the expense of more traditional, low-tech research that could still yield results?
