In one of the more astonishing demonstrations heard at Saturday’s music and language session, Northwestern University neuroscientist Nina Kraus started by playing the sound of someone saying the syllable ‘da’. Then she showed an electronic analysis of the sound as recorded by a sensitive microphone: a short burst of oscillating waves that suddenly rose in amplitude, then quickly faded to nothing. Next, she showed the electrical signals her lab had recorded in a subject’s brain stem as he listened to the sound. The trace looked almost identical. Finally, she played the brain stem recording through the speakers: it clearly said ‘da’, only slightly distorted.
Kraus repeated the demo with ‘da’ at a higher pitch. Same story—the brain signals reproduced a slightly distorted ‘da’ at the same higher pitch.
Finally, Kraus played a few bars of rock music—and the brain signals parroted it right back. A bit crude, but recognizable.
This wasn’t just for fun, Kraus emphasized. By making the same kinds of recordings in subjects with and without musical training, she and her colleagues have been able to show that training enhances listeners’ ability to recognize patterns in a noisy acoustic environment and to attend to sounds that are especially important—for example, the emotional content of a baby’s cry.
Still, the demo was very, very cool to hear.