
In my head, I’m singing


For the first time, MIT neuroscientists have identified a population of neurons in the human brain that lights up when we hear singing, but not other types of music.

These neurons, found in the auditory cortex, appear to respond to a specific combination of voice and music, but not to ordinary speech or instrumental music. The researchers say it is not yet clear exactly what these neurons are doing, and that finding out will take more work.

“The work provides evidence for relatively fine-grained functional segregation within the auditory cortex, in a way that aligns with an intuitive distinction within music,” says Sam Norman-Haignere, an assistant professor of neuroscience at the University of Rochester Medical Center and a former MIT postdoc.

The work builds on a 2015 study in which the team used functional magnetic resonance imaging (fMRI) to identify a population of neurons in the brain’s auditory cortex that responds selectively to music. In the new study, the researchers used recordings of electrical activity taken at the surface of the brain, which gave them much more precise data than fMRI.

“There is one population of neurons that responds to singing, and another population that responds to a wide range of music. They’re so close together in fMRI that you can’t tell them apart, but intracranial recordings give us more resolution, and that’s what we think enabled us to separate them,” says Norman-Haignere.

Norman-Haignere is the lead author of the study, which was published today in the journal Current Biology. The study’s senior authors are Josh McDermott, an associate professor of brain and cognitive sciences at MIT, and Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience at MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines (CBMM).

Recordings of the nervous system

In the 2015 study, the researchers used fMRI to scan the brains of volunteers as they listened to a collection of 165 sounds, including different types of speech and music as well as everyday sounds such as finger tapping and a dog barking. For that study, the team devised a novel method of analyzing the fMRI data, which allowed them to identify six neural populations with distinct response patterns, including one that responds preferentially to music and another that responds selectively to speech.
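The article does not spell out the decomposition algorithm, but the basic idea — explaining many voxels' responses to the 165 sounds as mixtures of a handful of shared components — can be illustrated with a standard matrix factorization. The sketch below uses non-negative matrix factorization on synthetic data as a stand-in for the study's own method; all dimensions, variable names, and data here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 165 sounds, 500 voxels, 6 underlying components.
n_sounds, n_voxels, n_components = 165, 500, 6

# Synthetic stand-in for fMRI data: a non-negative low-rank matrix plus noise.
true_profiles = rng.random((n_sounds, n_components))  # each component's response per sound
true_weights = rng.random((n_components, n_voxels))   # each component's expression per voxel
data = true_profiles @ true_weights + 0.01 * rng.random((n_sounds, n_voxels))

# Multiplicative-update NMF: find non-negative factors with data ≈ profiles @ weights.
profiles = rng.random((n_sounds, n_components))
weights = rng.random((n_components, n_voxels))
for _ in range(300):
    weights *= (profiles.T @ data) / (profiles.T @ profiles @ weights + 1e-9)
    profiles *= (data @ weights.T) / (profiles @ weights @ weights.T + 1e-9)

rel_error = np.linalg.norm(data - profiles @ weights) / np.linalg.norm(data)
print(f"relative reconstruction error: {rel_error:.3f}")
```

Each column of `profiles` is one component's response to all 165 sounds — a component whose profile is high for music clips and low elsewhere would correspond to the music-selective population described above.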


The goal of the current study was to gather higher-resolution data using electrocorticography (ECoG), which records electrical activity via electrodes placed inside the skull. Compared with fMRI, which measures blood flow in the brain as a proxy for neuronal activity, this offers a much more precise picture of the brain’s electrical activity.

“You can’t visualize the neuronal representations with most tools in human cognitive neuroscience,” Kanwisher explains. “The majority of the data we can gather can tell us that there’s a part of the brain that accomplishes something, but it’s a really restricted set of information. We’re curious as to what’s depicted in there.”

Because electrocorticography is an invasive procedure, it is rarely performed in humans. It is most often used to monitor patients with epilepsy who are about to undergo surgery to treat their seizures: the patients are monitored over several days so that doctors can pinpoint where their seizures originate before operating. During that time, patients who consent may take part in studies that involve measuring their brain activity while performing certain tasks. For this study, the MIT team was able to gather data from 15 participants over several years.

For those participants, the researchers played the same set of 165 sounds used in the earlier fMRI study. Because each patient’s electrode locations were determined by their surgeons, some electrodes picked up responses to auditory input while others did not. Using a novel statistical analysis that they developed, the researchers were able to infer the types of neural populations that produced the data recorded by each electrode.

“When we applied this method to this data set, we found a neural response pattern that reacted only to singing,” Norman-Haignere explains. “Because this was a surprising result, it validates the whole aim of the technique, which is to uncover potentially novel things you might not think to look for.”


That song-specific population of neurons showed only very weak responses to both speech and instrumental music, distinguishing it from the music- and speech-selective populations identified in the 2015 study.

In the mind’s ear

In the second part of the study, the researchers devised a statistical method to combine the data from the intracranial recordings with the fMRI data from their 2015 study. Because fMRI covers a much larger portion of the brain, this allowed them to pinpoint more precisely where the singing-responsive neural populations are located.

“This strategy of merging ECoG with fMRI is a substantial methodological advance,” McDermott says. “ECoG has been used by many researchers over the last 10 or 15 years, but it has always been limited by the sparsity of the recordings. Sam was the first to figure out how to combine the higher resolution of the electrode recordings with fMRI data to improve localization of the overall responses.”
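The article does not describe the merging procedure itself, but one simple way to see how sparse ECoG recordings could localize responses across the whole brain: take the component response profiles inferred from the electrodes, then regress every fMRI voxel's responses onto those profiles to get a spatial weight map per component. The sketch below does this with ordinary least squares on synthetic data; the setup, names, and numbers are all hypothetical, not the study's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: response profiles over 165 sounds for 3 components
# (say speech-, music-, and song-selective), as inferred from ECoG.
n_sounds, n_components, n_voxels = 165, 3, 1000
ecog_profiles = rng.random((n_sounds, n_components))

# Synthetic fMRI data: each voxel's response is a mix of the component
# profiles plus measurement noise.
true_maps = rng.random((n_components, n_voxels))
fmri = ecog_profiles @ true_maps + 0.05 * rng.standard_normal((n_sounds, n_voxels))

# Regress every voxel's responses onto the ECoG-derived profiles;
# each row of `maps` is then a whole-brain weight map for one component.
maps, *_ = np.linalg.lstsq(ecog_profiles, fmri, rcond=None)

# Voxels with the largest weight on the song component form its "hotspot".
song_hotspot = np.argsort(maps[2])[-5:]
print("strongest song-component voxels:", song_hotspot)
```

The key point is that the electrodes supply the high-resolution response profiles, while the fMRI data supply coverage: the regression extends the sparse electrode findings to every voxel that fMRI can see.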

They found a song-specific hotspot at the top of the temporal lobe, near regions that are selective for language and music. That location suggests the song-specific population may be responding to features such as perceived pitch, or the interaction between words and perceived pitch, before passing information to other parts of the brain for further processing, the researchers say.

The researchers now hope to learn more about which aspects of singing drive the responses of these neurons. They are also working with the lab of MIT Professor Rebecca Saxe to test whether infants have music-selective areas, in order to learn more about when and how these brain regions develop.

The research was funded by the National Institutes of Health, the U.S. Army Research Office, the National Science Foundation, the NSF Science and Technology Center for Brains, Minds, and Machines, the Fondazione Neurone, and the Howard Hughes Medical Institute.
