New brain scanner can read thoughts – or at least some of them


Previous brain-computer interfaces (like this one developed in Bochum, Germany) have raised hopes of empowering people who use wheelchairs. Now, a newly developed brain scanner can, at least partially, read a person’s thoughts. It is questionable, however, whether this technology will be of any practical use soon. — dpa

WASHINGTON: US researchers have used brain scanners and AI to decode, at least roughly, certain types of thoughts in willing test subjects.

A decoder they developed was able to roughly reproduce what was going through the participants' minds in certain experimental situations with the help of so-called fMRI images, the researchers write in the journal Nature Neuroscience.

The team hopes that this brain-computer interface, which does not require surgery, could one day help people who have lost their ability to speak, for example as a result of a stroke. However, experts in the field remain sceptical.

The study authors from the University of Texas stress that their technology could not be used to secretly read people’s thoughts.

Brain-computer interfaces (BCIs) are based on the principle of reading human brain signals with electronic circuits, processing them and translating them into movements or speech. Paralysed people, for example, could control an exoskeleton by thought, or people with locked-in syndrome (also known as pseudocoma) could communicate with the outside world. However, many of the systems currently being researched require the surgical implantation of electrodes.

In the new approach, a computer forms words and sentences based on brain activity. The researchers trained this language decoder by having three volunteers listen to stories for 16 hours while they lay in a functional magnetic resonance imaging (fMRI) scanner. fMRI makes changes in blood flow in brain areas visible, which in turn indicate the activity of neurones.
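In broad terms, this training step can be pictured as fitting a statistical model that predicts brain activity from numerical features of the text being heard. The following Python sketch is purely illustrative: the random data, feature shapes and generic ridge regression are assumptions made for the example, not the study’s actual pipeline.

```python
# Illustrative sketch only: an "encoding model" that learns to predict fMRI
# responses from features of the story text. The data here are random stand-ins.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical training data: one row per fMRI time point.
# story_features: numerical representation of the words heard at each time point.
# bold_responses: measured blood-flow signal in each brain voxel.
n_timepoints, n_features, n_voxels = 3000, 256, 1000
story_features = rng.normal(size=(n_timepoints, n_features))
bold_responses = rng.normal(size=(n_timepoints, n_voxels))

# Regularised linear regression maps text features to predicted brain activity.
encoding_model = Ridge(alpha=1.0).fit(story_features, bold_responses)

# Candidate phrases can later be scored by how closely their predicted
# brain response matches what the scanner actually measured.
predicted = encoding_model.predict(story_features[:10])
print(predicted.shape)  # (10, 1000)
```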

In the next step, the subjects listened to new stories while their brains were again examined in the fMRI tube. The previously trained language decoder was now able to create word sequences from the fMRI data that, according to the researchers, largely reproduced the content of what was heard.

The system did not translate the information recorded in the fMRI into individual words, however. Instead, it used the correlations learned during training, together with artificial intelligence (AI), to assign the measured brain activity to the most probable phrases for the new stories.

Explaining the approach in an independent comment, Rainer Goebel, head of the Department of Cognitive Neuroscience at Maastricht University in the Netherlands, said that “a central idea of the work was to use an AI language model to greatly reduce the number of possible phrases consistent with a brain activity pattern”.
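Conceptually, the decoding step can be thought of as scoring candidate phrases: a language model proposes plausible word sequences, and the phrase whose predicted brain response best matches the measured fMRI data is kept. The short Python sketch below is a simplified illustration under that assumption; the placeholder functions and example phrases are hypothetical, not the authors’ code.

```python
# Illustrative sketch only: combining a language-model score with how well a
# phrase's predicted brain response fits the measured fMRI data.
import numpy as np

def language_model_score(phrase: str) -> float:
    """Placeholder for a real language model's log-probability of the phrase."""
    return -float(len(phrase.split()))  # stand-in heuristic, not a real model

def predicted_brain_response(phrase: str, n_voxels: int = 5) -> np.ndarray:
    """Placeholder for an encoding model's predicted fMRI response to the phrase."""
    seed = sum(map(ord, phrase)) % (2**32)  # deterministic toy seed per phrase
    return np.random.default_rng(seed).normal(size=n_voxels)

def decode(measured: np.ndarray, candidates: list[str]) -> str:
    """Pick the candidate whose predicted response best matches the measurement,
    weighted by how plausible the language model finds the phrase."""
    def score(phrase: str) -> float:
        fit = -float(np.sum((predicted_brain_response(phrase) - measured) ** 2))
        return fit + language_model_score(phrase)
    return max(candidates, key=score)

measured = np.zeros(5)  # stand-in for one fMRI measurement
candidates = [
    "I don't have my driver's license yet",
    "She has not even started to learn to drive yet",
    "The weather was nice that day",
]
print(decode(measured, candidates))
```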

In a press briefing on the study, co-author Jerry Tang illustrated the result of the tests. He said the decoder rendered the phrase “I don’t have my driver’s license yet” as “She has not even started to learn to drive yet”. According to Tang, the example illustrates that “the model is very bad with pronouns – but we don’t know yet what the reason is”.

Overall, the decoder was successful in that many selected phrases in new stories included words from the original text or at least had a similar meaning, according to Goebel. “But there were also quite a lot of errors, which is very bad for a full-fledged brain-computer interface,” he said, “since for critical applications – for example, communication with locked-in patients – it is most important not to generate false statements.”

Even more errors occurred when subjects were asked to imagine a story themselves, or to watch a short animated silent film, and the decoder had to reproduce the events in it.

For Goebel, the results of the presented system are altogether too poor to be useful as a trustworthy interface. “I would venture the prediction that fMRI-based BCIs will unfortunately probably remain limited to research work with a few subjects in the future – as in this study,” he said.

Christoph Reichert from the Leibniz Institute of Neurobiology is also sceptical. “If you look at the examples shown of the presented and reconstructed text, it quickly becomes clear that this technique is still far from being able to reliably generate a ‘thought’ text from brain data.” Nevertheless, he believes the study hints at what could be possible if measurement techniques improve.

There are also some ethical concerns. The authors themselves write that, depending on future developments, measures to protect mental privacy might be necessary. However, experiments with the decoder showed that both training and subsequent use required the cooperation of the subjects. “If they counted in their heads, named animals or thought of another story while decoding, the process was sabotaged,” said Tang.

Similarly, the decoder performed poorly if the model had been trained on a different person. – dpa
