Brain activity of listeners used to recreate classic Pink Floyd number


New Delhi, August 16

Brain activity of people listening to Pink Floyd’s “Another Brick in the Wall” was used to reconstruct the classic rock number, scientists report in a new study.

The reconstruction shows that brain waves can be recorded and processed to recover not only the syllables of the speech a person hears but also its musical elements, such as rhythm, stress, accent and intonation, the study from the University of California (UC), Berkeley, US, said.

These musical elements are known to convey meaning in ways that spoken words alone do not.

The phrase “All in all it was just a brick in the wall” could be recognised in the reproduced song, with its rhythms intact and the words muddy yet decipherable, the researchers report in the study published in the journal PLoS Biology.

The listeners were 29 patients undergoing epilepsy surgery at Albany Medical Center, New York, US, over a decade ago.

Neuroscientists at the centre recorded electrical activity through electrodes placed on the patients’ brains as they heard an approximately 3-minute segment of the classic Pink Floyd song from the 1979 album The Wall.

“As this whole field of brain machine interfaces progresses, this gives you a way to add musicality to future brain implants for people who need it, someone who’s got some disabling neurological or developmental disorder compromising speech output.

“It gives you an ability to decode not only the linguistic content, but some of the prosodic content of speech, some of the (emotional) affect,” said Robert Knight, a neurologist and UC Berkeley professor of psychology who conducted the study with postdoctoral fellow Ludovic Bellier.

The brain-machine interfaces used today to help people communicate have a robotic quality, similar to how the late Stephen Hawking sounded when he used a speech-generating device, the researchers said.

Previous studies have used brain activity to reconstruct the words a person was hearing. They have also recorded signals from the brain’s motor area linked to jaw, lip and tongue movements to produce the speech intended by a paralysed patient. The words would display on a computer screen.

The new study suggests that recording from the brain's auditory regions, where all aspects of sound are processed, can capture elements of speech important in human communication beyond the words themselves.

“Decoding from the auditory cortices, which are closer to the acoustics of the sounds, as opposed to the motor cortex, which is closer to the movements that are done to generate the acoustics of speech, is super promising,” said Bellier. “It will give a little color to what’s decoded.”

For the study, Bellier reanalysed brain recordings obtained in 2012 and 2013 and used artificial intelligence (specifically, nonlinear regression models) to decode brain activity and then encode a reproduction.
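The article does not describe the researchers' exact pipeline. As a rough, illustrative sketch only, the snippet below shows how a nonlinear regression model of the general kind mentioned above might be trained to map short windows of recorded brain activity to the bins of an audio spectrogram, which a separate inversion step could then turn back into sound. The data, array shapes, window length and model settings are all assumptions made for illustration, not details from the study.

```python
# Illustrative sketch only: decode an audio spectrogram from brain activity
# with a nonlinear regression model. Shapes and settings are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

n_samples, n_electrodes, n_freq_bins, window = 5000, 128, 32, 10

# Hypothetical data: per-electrode brain activity over time, and the
# spectrogram of the song the listeners were hearing at the same moments.
rng = np.random.default_rng(0)
brain = rng.standard_normal((n_samples, n_electrodes))
spectrogram = rng.standard_normal((n_samples, n_freq_bins))

# Use a short window of preceding brain activity as the features for each
# time step of the spectrogram.
X = np.stack([brain[t - window:t].ravel() for t in range(window, n_samples)])
y = spectrogram[window:]

X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False)

# Nonlinear regression: a small multilayer perceptron predicts all
# spectrogram bins for a time step from the brain-activity window.
decoder = MLPRegressor(hidden_layer_sizes=(64,), max_iter=200)
decoder.fit(X_train, y_train)

# The predicted spectrogram could then be converted back into audio with a
# spectrogram-inversion step (not shown here).
predicted_spectrogram = decoder.predict(X_test)
print(predicted_spectrogram.shape)
```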

He and his team also pinpointed new brain regions involved in detecting rhythm, such as a thrumming guitar, and discovered that some portions (superior temporal gyrus) of the auditory cortex responded to the onset of a voice or a synthesizer, while others responded to sustained vocals.

The researchers also confirmed that the right side of the brain is more attuned to music than the left side. “Language is more left brain. Music is more distributed, with a bias toward right,” Knight said.
