Written by Shubham Sharma
AI has wide applications, but a group of scientists is using the technology for a noble cause: to give voice to those who can't speak.
They have developed a neural network-backed system that can track brain activity and use it to generate realistic synthesized speech to help people convey their thoughts.
Here's how the system works.
The speech-producing system, developed by neurologists from the University of California, San Francisco, tracks brain activity with a stamp-like array of electrodes and feeds this information into a machine-learning system.
Once the information is fed, the system translates the activity to synthetic speech, producing entire spoken sentences that can be understood by any human listener.
Although producing sounds from brain activity may seem like converting thoughts directly into vocal speech, that is not the case.
Instead of translating brain signals directly, the researchers use a virtual vocal tract that mimics the motions the brain's speech-production regions trigger to produce speech.
The researchers asked five epilepsy patients to read sentences from children's stories. Linguistic experts then analyzed those recordings to decode the exact muscular movements involved in producing the spoken sentences; these movements were mapped to create the virtual vocal tract.
In their study, the researchers demonstrated that their virtual vocal tract, which included two neural network algorithms, produced natural-sounding speech.
One algorithm, the decoder, analyzed signals from the brain's speech center to determine which muscular movements the brain was signaling.
The second then converted these movements into a synthetic approximation of the voice.
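To make the two-stage idea concrete, here is a minimal sketch of such a pipeline in Python. This is purely illustrative, not the researchers' actual model: the real system used trained recurrent neural networks, whereas this toy uses untrained linear layers, and all the names and feature sizes (`N_CHANNELS`, `N_KINEMATIC`, `N_ACOUSTIC`) are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed, illustrative feature sizes -- not the study's real dimensions.
N_CHANNELS = 256   # electrode channels recorded from the brain's surface
N_KINEMATIC = 33   # vocal-tract movement features (jaw, lips, tongue, ...)
N_ACOUSTIC = 32    # acoustic features later rendered as audible speech


def relu(x):
    return np.maximum(x, 0.0)


# Stage 1, the "decoder": brain signals -> vocal-tract movements.
# A single random linear layer stands in for a trained neural network.
W_decode = rng.normal(scale=0.1, size=(N_CHANNELS, N_KINEMATIC))


def decode_kinematics(brain_signals):
    """brain_signals: (time_steps, N_CHANNELS) -> (time_steps, N_KINEMATIC)."""
    return relu(brain_signals @ W_decode)


# Stage 2, the "synthesizer": movements -> acoustic speech features.
W_synth = rng.normal(scale=0.1, size=(N_KINEMATIC, N_ACOUSTIC))


def synthesize_acoustics(kinematics):
    """kinematics: (time_steps, N_KINEMATIC) -> (time_steps, N_ACOUSTIC)."""
    return kinematics @ W_synth


# Fake recording: 100 time steps of electrode data.
brain_signals = rng.normal(size=(100, N_CHANNELS))
speech_features = synthesize_acoustics(decode_kinematics(brain_signals))
print(speech_features.shape)  # prints (100, 32)
```

The key design point the sketch preserves is the intermediate articulatory representation: rather than mapping brain activity straight to sound, the first stage predicts physical vocal-tract movements, and only the second stage turns those movements into acoustics.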
Though the system is still novel, the researchers believe it could be refined to give voice to those who have lost the ability to speak due to injury, paralysis, or neurodegenerative disease.
If that happens, this would be a major upgrade over current assistive technologies, which are not only slow, synthesizing about 10 words per minute, but also error-prone.
"We can generate entire spoken sentences based on an individual's brain activity," team member Edward Chang stated, adding that their latest work shows they should be able to employ existing technologies to "build a device that is clinically viable in patients with speech loss."