Scientist Stephen Hawking had to work really hard to speak. He chose letters and words from a screen controlled by movements of a muscle in his cheek. However, the slow system he used might soon be replaced. With a very different approach, doctors have found a way to get a person's speech directly from their brain.
The new system uses brain-reading software, though it works only for sentences it has been trained on. Doctors at the University of California, San Francisco (UCSF) set out to create a better tool, one that lets people who are paralyzed (瘫痪的) communicate more quickly than Hawking's device, which relied on muscle twitches (抽搐) to control a keyboard.
The Study of Brain Activity
The work was possible thanks to three patients with epilepsy (癫痫症), a condition of the nerves that causes seizures (痉挛), sudden movements of the body that the person cannot control. All three were about to have surgery (外科手术) for their condition.
Before their surgery, each patient had a small device placed on the brain for at least a week to map their seizures. The patients could still speak normally, and they agreed to take part in Doctor Chang's study. The doctor used the devices to record brain activity while each patient was asked nine questions. The patients were also asked to read from a list of 24 possible responses.
Using the recordings, Chang and his team built computer models that learned to match patterns of brain activity to the questions the patients heard and the answers they spoke. Once trained, the software could identify questions and responses almost instantly, using only brain signals. It correctly identified which question a patient heard 7 out of 10 times, and which response the patient gave 6 out of 10 times.
Improving Software to Read More Varied Speech
Even with this breakthrough, hurdles remain. One challenge is improving the software so it can translate brain signals into a wider variety of speech. That will require training on a huge amount of spoken language along with the matching brain signals, which differ from one person to another.
Another goal is to read "imagined speech", sentences spoken only in the mind. The system in the study detects brain signals that are sent to move the mouth, but for some patients these signals may not be enough. More advanced ways of reading sentences in the brain will be needed.