A mind-reading brain implant using artificial intelligence


Combining a brain implant with artificial intelligence to allow paralyzed people to express themselves… This is the challenge that a group of scientists from Stanford University has taken up. Their device can turn simple thoughts into text and intelligible speech in record time.

160 words per minute… that’s the average speed at which we speak. Real talkers! Brain devices meant to “give speech back” to people with paralysis have existed for decades; broadly speaking, they aim to translate a person’s intended words. To do this, they use small arrays of electrodes inserted into the brain, which allow the activity of neurons to be measured and interpreted. Reaching anything close to our natural speech rate, however, is a real technological challenge: translation remains very slow compared to normal speech, which can be deeply frustrating for the people trying to communicate.

The main advantage of the research conducted by the Stanford group is therefore speed. The team took an innovative approach while building on existing systems: they combined a brain implant with artificial intelligence. Their results have been published on bioRxiv, which, it should be emphasized, means they have not yet been peer-reviewed.


Experiments have been carried out on only one person so far. The researchers readily admit it: their demonstration is a “proof of concept” suggesting that decoding attempted speech from intracortical recordings is a promising approach, but not yet a complete, clinically viable system.

Even once all these precautions are taken, the research remains admirable. The person studied was a 67-year-old woman with amyotrophic lateral sclerosis (ALS), a condition that gradually destroys the ability to activate one’s muscles, leading to paralysis.

In this case, the subject, called “T12” in the study, was still able to make sounds when trying to speak, but her speech was completely unintelligible. Thanks to the implant fitted by the researchers, she can now communicate at a rate of 62 words per minute. That is still not as fast as a natural speaker, but it is faster than any similar device: according to the scientists, it is three times faster than the previous record.

T12’s intended words are written on the screen and spoken in a synthetic voice at the same time. Beyond speed, it is the size of the usable vocabulary that makes an impression: the device draws from a library of 125,000 words, a genuinely extensive vocabulary base.

How does this work?

To achieve this result, the team combined two advanced technologies: a neural implant and artificial intelligence. On the hardware side, the scientists placed four microelectrode arrays at strategic locations in the cerebral cortex, the outer layer of the brain. Two areas were specifically targeted: the first controls the movements of the facial muscles surrounding the mouth; the other is known as the brain’s “language center”, also called Broca’s area.

The idea is to detect in the brain the movements a paralyzed person would make if they could still move their muscles to speak. The scientists therefore wanted to capture both what a person intends to say and how they would articulate it with muscle movements. A bold proposition, as SingularityHub highlights in an article: “We don’t yet know whether speech is confined to a small area of the brain that controls the muscles of the mouth and face, or whether language is encoded more globally across the brain,” the outlet points out. The scientists, however, did not rely solely on the information provided by the implants to reconstruct sentences.

They added a touch of artificial intelligence: “We trained a recurrent neural network (RNN) decoder to infer, at each 80-ms time step, the probability that each phoneme was being spoken at that time. These probabilities were then processed by a language model to extract the most likely underlying word sequence, taking into account both the phoneme probabilities and the statistics of the English language,” the scientists describe.
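To make that quoted pipeline more concrete, here is a minimal sketch of this kind of decoder in Python/PyTorch. It is not the authors’ model: the feature count, the phoneme inventory size, and the architecture are assumptions chosen for illustration, and a real system would be trained on recorded neural data (for example with a CTC-style objective) rather than used untrained as here.

```python
import torch
import torch.nn as nn

N_FEATURES = 256  # assumed: neural features extracted per 80-ms bin
N_PHONEMES = 40   # assumed: 39 English phonemes + 1 silence class

class PhonemeDecoder(nn.Module):
    """GRU mapping a sequence of 80-ms neural feature bins to
    per-bin phoneme probabilities, as in the quoted description."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.rnn = nn.GRU(N_FEATURES, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, N_PHONEMES)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, N_FEATURES) -> (batch, time, N_PHONEMES)
        h, _ = self.rnn(x)
        return self.head(h).softmax(dim=-1)

# Toy usage: 5 seconds of activity ≈ 62 bins of 80 ms (random stand-in data).
neural = torch.randn(1, 62, N_FEATURES)
probs = PhonemeDecoder()(neural)
# Greedy per-bin phoneme guess; the real system instead feeds the full
# probability sequence to a language model to pick the best word sequence.
print(probs.argmax(dim=-1).shape)  # torch.Size([1, 62])
```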

Indeed, the scientists noted that existing RNNs were able to distinguish, with 92% accuracy, the types of facial movements associated with pronounced phonemes (frowns, puckering of the lips, movements of the tongue…), and this based on neural signals alone. A phoneme is the smallest distinct unit of sound that can be isolated in a language: French, for example, has 36.
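To illustrate the phoneme-to-word step (a toy example, not the study’s code), English words can be decomposed into phoneme sequences, written below in the ARPAbet notation used by the CMU Pronouncing Dictionary; a decoder that recovers the phoneme stream can then map it to words through such a lexicon, with the language model arbitrating between ambiguous candidates:

```python
# Tiny illustrative lexicon mapping phoneme sequences to English words.
LEXICON = {
    ("HH", "AH", "L", "OW"): "hello",
    ("W", "ER", "L", "D"): "world",
}

# Pretend the neural decoder recovered these two phoneme sequences.
decoded_phonemes = [("HH", "AH", "L", "OW"), ("W", "ER", "L", "D")]
print(" ".join(LEXICON[seq] for seq in decoded_phonemes))  # hello world
```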

Thus, they observed that even when a person has been paralyzed for a long time, the brain retains this entire articulatory repertoire and still sends the corresponding nerve signals. In summary, the AI captures the attempted phonemes through the implants and extracts the logical word sequence from them. Even though it has so far been tested in only one person, this research could benefit many patients. It still needs improvement, however: for the moment, the system shows an error rate of about 10% on a 50-word vocabulary, and almost 24% on the full 125,000-word vocabulary.
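Those percentages are word error rates, the standard speech-recognition metric: the number of word substitutions, insertions, and deletions needed to turn the decoded sentence into the intended one, divided by the number of intended words. A short self-contained computation (the example sentences are invented):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Edit (Levenshtein) distance between word sequences,
    normalized by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(substitution, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("i want a glass of water",
                      "i want a class of water"))  # 1/6 ≈ 0.17
```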

Source: bioRxiv
