A New Brain-Computer Interface Translates Brain Signals Into Speech
Posted by Okachinepa on 08/15/2024
Courtesy of SynEvol
Credit: UC Regents
This groundbreaking system marks a major advance in neuroprosthetics: it translates brain signals into speech with up to 97% accuracy. Casey Harrell, an ALS patient enrolled in a clinical trial, used the device to regain his ability to communicate. The technology, which relies on microelectrode arrays implanted in the brain, has shown remarkable results in decoding speech in real time for people who have lost the ability to speak.
UC Davis Health has developed the most accurate brain-computer interface (BCI) device to date, translating brain signals into speech with up to 97% accuracy.
Researchers implanted sensors in the brain of a man with amyotrophic lateral sclerosis (ALS) whose speech was severely impaired. Once the system was calibrated, he was able to communicate what he intended to say within minutes.
ALS, also known as Lou Gehrig's disease, destroys the nerve cells that control movement throughout the body. The disease causes a progressive loss of the ability to walk, stand, and use one's hands. It can also rob a person of control over the muscles used for speech, making it impossible to speak clearly.
The new technology is being developed to restore communication for people who cannot speak because of paralysis or neurological conditions such as ALS. When the user tries to speak, it interprets their brain signals and turns them into text that a computer then reads aloud.
"Our BCI technology helped a man with paralysis to communicate with friends, family, and caregivers," said UC Davis neurosurgeon David Brandman. "Our paper demonstrates the most accurate speech neuroprosthesis (device) ever reported."
Brandman is the co-principal investigator and co-senior author of the study. He is an assistant professor in the UC Davis Department of Neurological Surgery and co-director of the UC Davis Neuroprosthetics Lab.
The new BCI technology translates a person's brain activity into letters on a computer screen when the person tries to speak. The computer can then read the text aloud.
To develop the technology, the team enrolled 45-year-old Casey Harrell, who has ALS, in the BrainGate clinical trial. At the time of enrollment, Harrell had tetraparesis (weakness in his arms and legs), and his speech was extremely difficult to understand because of dysarthria, so others had to help interpret for him.
Brandman implanted the investigational BCI device in July 2023, placing four microelectrode arrays into the left precentral gyrus, a brain region that coordinates speech. Together, the arrays' 256 cortical electrodes record brain activity.
"We're really detecting their attempt to move their muscles and talk," said neuroscientist Sergey Stavisky, an assistant professor in the Department of Neurological Surgery, co-director of the UC Davis Neuroprosthetics Lab, and the study's co-principal investigator. "We are recording from the area of the brain that's trying to send these commands to the muscles. We're basically listening in, translating those patterns of brain activity into phonemes, which are like syllables or units of speech, and then into the words they're trying to say."
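The pipeline Stavisky describes can be sketched in miniature. Everything below (the feature vectors, the "classifier," and the tiny phoneme lexicon) is invented for illustration only; the actual BrainGate decoder uses trained neural networks and a language model over a large vocabulary, not a lookup table.

```python
# Toy sketch of the decoding pipeline: windows of neural activity -> phoneme
# labels -> a word. Phoneme names follow the ARPAbet convention; the lexicon
# and "neural features" here are hypothetical stand-ins.

PHONEMES = ["HH", "EH", "L", "OW"]  # toy phoneme inventory

def decode_phonemes(neural_windows):
    """Map each window of neural features to its most likely phoneme.
    Here the 'classifier' is just an argmax over a feature vector with one
    score per phoneme; a real decoder would be a trained model."""
    return [PHONEMES[max(range(len(w)), key=lambda i: w[i])]
            for w in neural_windows]

# Hypothetical lexicon: a real system searches a 125,000-word vocabulary
# with a language model instead of exact sequence matching.
LEXICON = {("HH", "EH", "L", "OW"): "hello"}

def phonemes_to_word(seq):
    return LEXICON.get(tuple(seq), "<unknown>")

# Simulated neural features: each window scores one phoneme highest.
windows = [[1.0 if i == j else 0.0 for i in range(4)] for j in range(4)]
print(phonemes_to_word(decode_phonemes(windows)))  # hello
```

The key design point mirrored here is the two-stage structure: decoding operates on sub-word speech units (phonemes) rather than whole words, which is what lets the vocabulary scale without retraining the neural decoder from scratch.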
Although BCI technology has advanced rapidly in recent years, earlier attempts at enabling communication have been slow and error-prone, because the machine-learning algorithms that decipher brain signals required large amounts of time and data to train.
"Previous speech BCI systems had frequent word errors. This made it difficult for the user to be understood consistently and was a barrier to communication," Brandman said. "Our goal was to develop a system that empowered someone to be heard whenever they wanted to speak."
Harrell used the system in both planned and spontaneous conversations. In every case, speech was decoded in real time, and the system was updated regularly to keep it accurate.
The decoded words were displayed on a screen and, remarkably, read aloud in a voice that sounded like Harrell's before ALS. The voice was synthesized by software trained on existing audio recordings of his pre-ALS speech.
In the first speech-data training session, the system took 30 minutes to reach 99.6% word accuracy with a 50-word vocabulary.
"The first time we tried the system, he cried with joy as the words he was trying to say appeared correctly on the screen. We all did," Stavisky said.
In a second session, the potential vocabulary was expanded to 125,000 words. With this vastly larger vocabulary, the BCI achieved 90.2% word accuracy after only 1.4 additional hours of training data. With continued data collection, the BCI has sustained 97.5% accuracy.
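Word-accuracy figures like these are conventionally reported as 1 minus the word error rate (WER): the word-level edit distance between the decoded and intended sentences, divided by the intended sentence's length. Whether the study uses exactly this formulation is an assumption here; the sketch below shows the standard metric.

```python
def word_error_rate(reference, hypothesis):
    """Levenshtein (edit) distance over words, normalized by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# One wrong word out of five intended words -> 20% WER, 80% accuracy.
wer = word_error_rate("i want to speak now", "i want to sleep now")
print(f"accuracy = {(1 - wer) * 100:.1f}%")  # accuracy = 80.0%
```

By this measure, 97.5% accuracy means roughly one word in forty is decoded incorrectly, which is why the article compares it favorably to commercial smartphone dictation.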
"At this stage, we can accurately decode what Casey is trying to say approximately 97% of the time, which is better than many commercially available smartphone applications that try to interpret a person's voice," Brandman said. "This technology is transformative because it offers hope to people who want to talk but cannot. I am hopeful that speech neuroprosthesis technology will enable future patients to communicate with their loved ones."
The study reports on 84 sessions over 32 weeks of data collection. Harrell used the speech BCI in self-paced conversations, both in person and over video chat, for more than 248 hours.
"It is really frustrating and discouraging to be unable to speak. You feel as though you're stuck," Harrell said. "This kind of technology will help people reintegrate into society and life."
The study's lead author, Nicholas Card, said, "It has been incredibly satisfying to watch Casey regain his ability to speak with his family and friends through this technology." Card is a postdoctoral scholar in the UC Davis Department of Neurological Surgery.
"Casey and our other BrainGate participants are incredible people. They deserve tremendous credit for joining these early clinical trials. They do this not to gain any personal benefit, but to help us develop a system that will restore communication and mobility for other people with paralysis," said Leigh Hochberg, a co-author of the study and sponsor-investigator of the BrainGate trial. Hochberg is a neuroscientist at Massachusetts General Hospital, Brown University, and the VA Providence Healthcare System.