Voices carry a wealth of information. It turns out they can even help diagnose illness, and researchers are working on an app for that. The National Institutes of Health is funding a research project to collect voice data and develop an AI that could diagnose people based on their speech.
Everything about the way you speak, such as your breathing patterns, offers potential information about your health, says Dr. Yael Bensoussan, the director of the University of South Florida's Health Voice Center and a leader on the study. "We asked experts: Well, if you close your eyes when a patient comes in, just by listening to their voice, can you have an idea of the diagnosis they have?" says Bensoussan. "And that's where we got all our information." Someone who speaks low and slowly might have Parkinson's disease, for example. Even depression or cancer might be detectable.
The project is part of the NIH's Bridge to AI program, which was launched over a year ago with more than $100 million in funding from the government, with the goal of creating large-scale health care databases for precision medicine. "We were really lacking what we call open source databases," says Bensoussan. "Every institution has their own database. But to create these networks was really important to allow researchers from other generations to use this data."
The ultimate goal of the project is an app that could help bridge access to rural or underserved communities by helping general practitioners refer patients to specialists. To get there, researchers have to start by amassing data, since the AI can only be as good as the database it learns from. By the end of the four years, they hope to collect about 30,000 voices.
There are a few roadblocks, however. HIPAA, the law that regulates medical privacy, isn't really clear on whether researchers can share voice recordings. Every institution has different rules on what can be shared, and that raises all sorts of ethical and legal questions.