A brain implant using artificial intelligence (AI) was able to turn a paralysed woman’s thoughts into speech almost simultaneously, US researchers said on April 1, 2025.
Although still at the experimental stage, the latest achievement using an implant linking brains and computers raised hopes that these devices could allow people who have lost the ability to communicate to regain their voice.
The California-based team of researchers had previously used a brain-computer interface (BCI) to decode the thoughts of Ann, a 47-year-old with quadriplegia, and translate them into speech.
However, there was an eight-second delay between her thoughts and the speech being read aloud by a computer.
This meant a flowing conversation was still out of reach for Ann, a former high (secondary) school math teacher who has not been able to speak since suffering a stroke 18 years ago.
But the team’s new model, revealed in the journal Nature Neuroscience, turned Ann’s thoughts into a version of her old speaking voice in 80-millisecond increments.
“Our new streaming approach converts her brain signals to her customised voice in real time, within a second of her intent to speak,” said senior study author Assistant Professor Dr Gopala Anumanchipalli of the University of California, Berkeley.
Ann’s eventual goal is to become a university counsellor, he added.
“While we are still far from enabling that for Ann, this milestone takes us closer to drastically improving the quality of life of individuals with vocal paralysis.”
For the research, Ann was shown sentences on a screen – such as “You love me then” – which she would say to herself in her mind.
Then her thoughts would be converted into her voice, which the researchers built up from recordings of her speaking before she was injured.
Ann was “very excited to hear her voice, and reported a sense of embodiment,” Asst Prof Anumanchipalli said.
The BCI intercepts brain signals “after we’ve decided what to say, after we’ve decided what words to use and how to move our vocal tract muscles,” study co-author and PhD candidate Cho Cheol Jun explained in a statement.
The model uses an AI method called deep learning, and was trained on brain signals recorded while Ann silently attempted to speak thousands of sentences.
The model was not always accurate, and its vocabulary is still limited to 1,024 words.
Britain’s Newcastle University neuroprosthetics professor Dr Patrick Degenaar, who was not involved in the study, said that this is “very early proof-of-principle” research.
But it is still “very cool,” he added.
Prof Degenaar pointed out that this system uses an array of electrodes that do not penetrate the brain, unlike the BCI used by American billionaire Elon Musk’s Neuralink firm.
The surgery for installing these arrays is relatively common in hospitals for diagnosing epilepsy, which means this technology would be easier to roll out en masse, he added.
With proper funding, Asst Prof Anumanchipalli estimated the technology could be helping people communicate in five to 10 years. – By Daniel Lawler/AFP