There is just something about neural decoding that captures the imagination. Scientists “reading out brain activity” to infer what someone was seeing or doing sounds like the stuff of science fiction. But in practice, with the right dataset and the right algorithm, it can be done – provided the question you are asking of the brain is simple enough. Yet no matter how simple the question, every paper brings an orgy of stories in the mainstream press about how scientists can eavesdrop on your thoughts or even engage in electronic telepathy, infuriating scientists and science journalists in droves and sometimes detracting from some very cool work.
Today I’m going back a few years to a paper that typifies this effect, a study from Jack Gallant’s lab about a model for decoding natural images from fMRI activity in early visual cortex.
At the time, a handful of published studies had successfully estimated what subjects had seen by analyzing fMRI activity. Basically, all involved collecting fMRI data while subjects viewed different images, and then training a computer model to classify the patterns of activity evoked by those images. These studies had used only relatively simple stimuli, such as oriented gratings, or had classified images from only a small set of possibilities. Gallant and his colleagues extended the feat in two ways: first, their model was able to decode a much greater range of complex stimuli (natural images); second, and more importantly, it worked on stimuli it hadn’t been trained on. It stopped short, however, of full reconstruction – it could tell you which one of a large set of images a subject had seen, but it could not recreate a stimulus from scratch. (Other studies both before and after this one have made some progress, but we are still far from a true “mind reader” that can reconstruct all features of a stimulus and incorporate modulation by attention, memory and contextual effects.)
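To make the identification idea concrete, here is a toy sketch – not the paper’s actual model. The premise: an encoding model, fit on training data, predicts a voxel response pattern for each candidate image; the decoder then simply picks the candidate whose predicted pattern best correlates with the observed pattern. Everything here (array sizes, the noise level, the function name `identify`) is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_images = 50, 200

# Hypothetical predicted voxel responses for each candidate image,
# as a fitted encoding model might produce (rows = images).
predicted = rng.standard_normal((n_images, n_voxels))

# Simulate an observed fMRI pattern: the response to image 17 plus noise.
true_image = 17
observed = predicted[true_image] + 0.3 * rng.standard_normal(n_voxels)

def identify(observed, predicted):
    """Return the index of the candidate image whose predicted
    voxel pattern correlates best with the observed pattern."""
    z_obs = (observed - observed.mean()) / observed.std()
    z_pred = (predicted - predicted.mean(axis=1, keepdims=True)) / \
             predicted.std(axis=1, keepdims=True)
    correlations = z_pred @ z_obs / len(observed)  # Pearson r per image
    return int(np.argmax(correlations))

print(identify(observed, predicted))  # recovers 17 at this noise level
```

Note that this is identification, not reconstruction: the decoder can only choose among images it already has predictions for, which is exactly the limitation described above.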
Brain decoding is a news-making topic, and everyone who saw this paper during the editorial process knew it was going to be a news-making paper. But that’s not why we published it. Nor did we publish it for any fundamentally new understanding of how the human visual system works. We published it for the technical advance – the model performed far better than previous ones. It also demonstrated that a relatively simple model, based on just a few basic properties of visual cortex neurons, can predict the fMRI response to a wide range of stimuli – a nice demonstration of what can be extracted despite the limited temporal and spatial resolution of the BOLD signal. So despite our fear of overhyped press coverage, we felt this was an important step.
Speaking of overhyping, why are decoding studies always the ones described as ‘mind reading’? If mind reading is the inference of mental state by another individual, then isn’t any measurement of brain activity mind reading? Not just evoked and voluntary fMRI and ERP activity in humans, but also receptive field mapping in macaques and recordings during escape behavior in flies – it’s all inference of mental state. Of course, “mind reading” is a convenient and accessible shorthand for describing the analysis of brain activity – I’ll confess, I’ve used it, though not about decoding. But after seeing how it’s been extrapolated, I doubt I will again.
