Breaking the silence: giving the silent a voice through thoughts

Summary: Researchers have enabled a mute person to speak using thoughts alone. Depth electrodes in the participant’s brain transmitted electrical signals to a computer, which then pronounced the imagined syllables.

This technology offers paralyzed people hope of regaining their speech. The study represents a significant step toward brain-computer interfaces for voluntary communication.

Important facts:

  1. Technology: Depth electrodes transmit brain signals to a computer, which produces the intended speech sounds.
  2. Participant: An epileptic patient with implanted electrodes took part in the experiment.
  3. Future impacts: Could enable paralyzed people to communicate through thoughts.

Source: Tel Aviv University

A scientific breakthrough by researchers at Tel Aviv University and Tel Aviv Sourasky Medical Center (Ichilov Hospital) has shown that a mute person can speak through the power of his thoughts alone.

In one experiment, a silent participant imagined saying one of two syllables. Depth electrodes implanted in his brain transmitted the electrical signals to a computer, which then pronounced the syllables.

The study was led by Dr. Ariel Tankus of Tel Aviv University’s School of Medical and Health Sciences and Tel Aviv Sourasky Medical Center (Ichilov Hospital), together with Dr. Ido Strauss, also of the School of Medical and Health Sciences, who directs the Department of Functional Neurosurgery at Ichilov Hospital.

The results of this study were published in the journal Neurosurgery.

These findings offer hope that people who are completely paralyzed due to diseases such as ALS, brain stem infarction or brain injury can regain the ability to speak voluntarily.

“The patient in the study is an epilepsy patient who was admitted to the hospital to undergo resection of the epileptic focus in his brain,” explains Dr. Tankus. “Of course, to do this, you have to locate the focus, the source of the ‘short circuit’ that sends powerful electrical waves through the brain.”

“This applies to a small subgroup of epilepsy patients who do not respond well to medication and require neurosurgical intervention, and to an even smaller subgroup in whom the suspected focus lies deep within the brain rather than on the surface of the cerebral cortex.”

“To pinpoint the exact location, electrodes must be implanted into deep brain structures. The patient is then admitted to hospital and waits for the next seizure.

“When a seizure occurs, the electrodes show the neurologists and neurosurgeons where it originates, allowing them to operate precisely. From a scientific point of view, this offers a rare opportunity to look into the depths of a living human brain.

“Fortunately, the epilepsy patient treated at Ichilov Hospital agreed to take part in the experiment, which could ultimately help completely paralyzed people express themselves again through artificial speech.”

In the first phase of the experiment, with the depth electrodes already implanted in his brain, researchers at Tel Aviv University asked the patient to say two syllables out loud: /a/ and /e/.

They recorded his brain activity as he articulated these sounds. Using machine learning, including deep-learning models, the researchers trained artificial-intelligence models to identify the specific brain cells whose electrical activity signaled the intention to say /a/ or /e/.

After the computer identified the pattern of electrical activity associated with these two syllables in the patient’s brain, the patient was asked to simply imagine saying /a/ and /e/. The computer then translated the electrical signals and played the pre-recorded sounds of /a/ or /e/ accordingly.
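The article does not include the team’s code, but the two-stage procedure it describes (train a model on brain activity recorded during overt speech, then use that model to decode imagined speech and play the matching pre-recorded sound) can be illustrated with a minimal sketch. Everything below is a hypothetical stand-in: synthetic feature arrays replace the real recordings, a simple scikit-learn classifier replaces the study’s deep-learning models, and play_sound is an assumed placeholder for audio playback.

```python
# Minimal sketch of the two-stage decoding procedure described above.
# Synthetic data and a simple classifier stand in for the study's
# actual recordings and deep-learning models.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stage 1: features derived from brain activity recorded while the
# participant said /a/ and /e/ out loud (simulated here).
n_trials, n_features = 100, 32
overt_a = rng.normal(0.5, 1.0, (n_trials, n_features))
overt_e = rng.normal(-0.5, 1.0, (n_trials, n_features))
X = np.vstack([overt_a, overt_e])
y = np.array([0] * n_trials + [1] * n_trials)  # 0 = /a/, 1 = /e/

clf = LogisticRegression()
print("overt-speech CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
clf.fit(X, y)

# Stage 2: the model trained on overt speech decodes an imagined trial,
# and the computer plays the matching pre-recorded sound.
def play_sound(syllable: str) -> None:
    """Hypothetical placeholder for playing a pre-recorded syllable."""
    print(f"playing pre-recorded /{syllable}/")

imagined = rng.normal(0.5, 1.0, (1, n_features))  # simulated imagined /a/
play_sound("a" if clf.predict(imagined)[0] == 0 else "e")
```

Even this binary setup mirrors the yes/no signaling Dr. Tankus describes below: two reliably distinguishable classes are enough for basic communication.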

“My research area concerns the encoding and decoding of speech, that is, how individual brain cells participate in the speech process – producing speech, hearing speech, and imagining speech, or ‘silent speech’,” says Dr. Tankus.

“In this experiment, for the first time in history, we were able to link types of speech sounds with the activity of individual cells in the brain regions we recorded from.

“This allowed us to distinguish between the electrical signals that represent the sounds /a/ and /e/. At the moment, our research is focusing on two building blocks of language, two syllables.

“Of course, our goal is full control of speech production, but even two distinguishable syllables can enable a completely paralyzed person to signal ‘yes’ and ‘no’. In the future, for example, it may be possible to teach an ALS patient in the early stages of the disease to use such a computer while he or she can still speak.

“The computer would learn to recognize the electrical signals in the patient’s brain, so that it could interpret them even once the patient can no longer move his or her muscles. And that is just one example.

“Our study is an important step toward developing a brain-computer interface that can replace the brain’s control pathways for speech production and enable completely paralyzed people to communicate voluntarily with their environment once again.”
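The paper’s title refers to decoding high-frequency activity. The article does not describe the study’s preprocessing, but a common generic way to obtain such a feature from an intracranial signal is to band-pass it in a high-gamma range and take the analytic envelope. The sketch below assumes a 70-150 Hz band, a 1 kHz sampling rate, and a synthetic signal; none of these values come from the study.

```python
# Sketch: deriving a high-frequency (high-gamma) amplitude feature from
# a raw signal. The band limits, sampling rate, and the synthetic input
# are assumptions; the study's actual preprocessing is not described here.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000                                    # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)                # two seconds of signal
raw = np.random.default_rng(1).normal(size=t.size)  # stand-in recording

# Band-pass filter to the assumed high-gamma band (70-150 Hz).
b, a = butter(4, [70, 150], btype="bandpass", fs=fs)
high_gamma = filtfilt(b, a, raw)

# Analytic envelope gives instantaneous high-frequency amplitude;
# averaging over 100 ms windows yields one feature value per window.
envelope = np.abs(hilbert(high_gamma))
win = int(0.1 * fs)
features = envelope[: envelope.size // win * win].reshape(-1, win).mean(axis=1)
print("per-window high-gamma amplitude:", np.round(features[:5], 3))
```

Per-window amplitudes of this kind could serve as the input features for a classifier like the one sketched earlier.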

About this BCI and Neurotech Research News

Author: Ariel Tankus
Source: Tel Aviv University
Contact: Ariel Tankus – Tel Aviv University
Image: The image is from Neuroscience News.

Original research: Closed access.
“A speech neuroprosthesis in the frontal lobe and hippocampus: decoding high-frequency activity into phonemes” by Ariel Tankus et al. Neurosurgery


Abstract

A speech neuroprosthesis in the frontal lobe and hippocampus: decoding high-frequency activity into phonemes

BACKGROUND AND OBJECTIVES:

The loss of speech due to injury or disease is devastating. Here, we report a novel speech neuroprosthesis that artificially articulates building blocks of speech based on high-frequency activity in brain areas never before used for a neuroprosthesis: the anterior cingulate and orbitofrontal cortices and the hippocampus.

METHODS:

A 37-year-old male neurosurgical epilepsy patient with intact speech, who had depth electrodes implanted solely for clinical reasons, controlled the neuroprosthesis almost immediately and in a natural way, silently and voluntarily producing two vowel sounds.

RESULTS:

During the first series of experiments, the participant made the neuroprosthesis artificially produce the two vowel sounds with 85% accuracy. In the following experiments, performance improved continuously, which can be attributed to neuroplasticity. We show that a neuroprosthesis trained on overt speech data can be controlled silently.

CONCLUSION:

This may pave the way for a novel strategy of implanting neuroprostheses at earlier stages of a disease (e.g., amyotrophic lateral sclerosis), while speech is still intact, enabling extended training so that silent control remains possible even at later stages. The results demonstrate the clinical feasibility of directly decoding high-frequency activity, which includes spiking activity, in these areas for the silent production of phonemes, which can serve as part of a neuroprosthesis to replace lost speech-control pathways.