Scientists reproduce a Pink Floyd song by extracting it from the brain waves of listeners – and it sounds eerie

Mind reading seemed impossible a few decades ago. But with the advent of “neural decoding,” neuroscientists have learned to decipher what is going on in the human brain simply by monitoring its electrical activity. While previous studies had reconstructed images and words from brain waves, this was the first research to reconstruct music from the brain. In August 2023, researchers at the University of California, Berkeley, reconstructed a 1979 Pink Floyd song by decoding the electrical signals recorded from listeners’ brains. The study was published in the journal PLOS Biology.

Representative image source: Pink Floyd perform on “The Wall” tour in London, England, on August 7, 1980. (Photo by Pete Still/Redferns)

For the study, lead researchers Robert Knight and Ludovic Bellier analyzed the electrical activity of 29 epilepsy patients undergoing brain surgery at Albany Medical Center in New York. While the Pink Floyd single “Another Brick in the Wall, Part 1” played in the operating room, electrodes placed directly on the surface of the patients’ brains recorded their neural activity as they listened to the song. Later, using artificial intelligence models, Bellier reconstructed the song from this electrical activity. The resulting piece of music was both eerie and fascinating. “It sounds a bit like they’re talking underwater, but this is our first attempt,” Knight told The Guardian.
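The broad recipe the article describes, recording brain activity while the song plays and then training models to map that activity back to audio, can be made concrete with a short sketch. The following Python is a minimal illustration under loose assumptions, not the study’s actual code: the synthetic arrays, the scikit-learn `Ridge` decoder standing in for the paper’s “artificial intelligence models,” and the Griffin-Lim phase reconstruction from `librosa` are all hypothetical stand-ins.

```python
# Minimal illustrative sketch, NOT the authors' actual pipeline: decode a
# song spectrogram from (synthetic) intracranial features with a regularized
# linear model, then invert the prediction back to a waveform.
import numpy as np
from sklearn.linear_model import Ridge
import librosa

rng = np.random.default_rng(0)
n_time, n_electrodes, n_freq_bins = 5000, 128, 129

# Stand-in data. In a real experiment, X would hold time-aligned neural
# features (e.g., high-frequency activity) from electrodes on the brain's
# surface, and Y the magnitude spectrogram of the song the patients heard.
X = rng.standard_normal((n_time, n_electrodes))
Y = np.abs(rng.standard_normal((n_time, n_freq_bins)))

# Fit one regularized linear decoder mapping brain activity to every
# spectrogram frequency bin, training on the first 80% of the recording.
split = int(0.8 * n_time)
decoder = Ridge(alpha=1.0).fit(X[:split], Y[:split])

# Predict the held-out spectrogram from brain activity alone; clip to keep
# magnitudes non-negative.
Y_hat = np.clip(decoder.predict(X[split:]), 0.0, None)

# Turn the predicted magnitude spectrogram back into audio. Griffin-Lim has
# to guess the phase it never observed, which is one reason such
# reconstructions sound muffled and "underwater."
audio = librosa.griffinlim(Y_hat.T, n_iter=32)
```

The final spectrogram-to-waveform step is inherently lossy, which fits Knight’s description of the result sounding like speech heard underwater.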

This experiment provided several insights into the connection between music and the mind. According to the university’s press release, the reconstruction showed that it is possible to record and translate brain waves to capture the musical elements of speech as well as its syllables. In humans, these musical elements, collectively called prosody (rhythm, stress, accent and intonation), carry meaning that words alone do not convey. And because these intracranial electroencephalography (iEEG) recordings can only be made from the surface of the brain, the research captured signals about as close to the auditory centers as one can get.

Representative image source: Unsplash | Pawel Czerwinski

This could prove to be a wonderful thing for people who have difficulty speaking, such as those who have suffered a stroke or muscle paralysis. “It’s a wonderful result,” Knight said, according to the press release. “It gives you the ability to decode not only the linguistic content, but also some of the prosodic content of speech, some of the affect. I think that’s where we’ve really started to crack the code.”

When asked why they chose music rather than speech for their research, Knight told Fortune that it was because “music is universal.” He added, “I think it existed before language developed, and it’s cross-cultural. When I go to other countries, I don’t know what they’re saying to me in their language, but I can appreciate their music.” And more importantly, “music allows us to add semantics, abstraction, prosody, emotion and rhythm to language.”

Representative image source: Pexels | Pixabay

“Right now, the technology is more like a keyboard for the mind,” Bellier told Fortune. “You can’t read your thoughts off a keyboard; you have to press the keys. And it produces a kind of robotic voice; there’s definitely less of what I call freedom of expression.”

Beyond pointing toward a way to synthesize speech, the study identified new brain areas involved in detecting rhythm, such as the thrum of a guitar. The researchers also confirmed that the right hemisphere of the brain is more attuned to music than the left. “Speech is more left-brained. Music is more distributed, with a bias toward the right,” Knight said, according to the press release. “It wasn’t clear it would be the same for musical stimuli,” Bellier added. “So here we confirm that this is not just a language-specific thing, but that it is more fundamental to the auditory system and the way it processes both speech and music.”