Wearable health monitors, ubiquitous sensors, and the ability to collect and store huge amounts of data are creating challenges for researchers hoping to use artificial intelligence to identify diseases. While the gathered data can hold important clinical answers, finding those answers means that the data must be categorized and labeled.
Now, researchers at MIT have developed a system that can autonomously identify signs of a disease from data gathered from a relatively small group of people, without any manually labeled training data.
The research, recently presented at the Machine Learning for Healthcare conference in Ann Arbor, Michigan, focused on learning the audio biomarkers of vocal cord disorders. Using data gathered over a week from an accelerometer attached to the necks of 100 people, the system automatically identified which sound characteristics were important for identifying whether a patient has vocal cord nodules.
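The paper itself is the authoritative source for the method; as a rough illustration of the general idea of unsupervised biomarker discovery, though, one can segment a long sensor signal into windows, summarize each window with spectral features, cluster the windows without labels, and then compare how subjects' signals distribute across the clusters. The sketch below (synthetic data, a toy k-means, and invented band-energy features, all assumptions not taken from the study) shows that pipeline shape:

```python
# Hedged sketch of unsupervised biomarker discovery: NOT the MIT system.
# Segment a 1-D "accelerometer" signal into windows, featurize each window
# by coarse spectral band energies, cluster without labels, then compare
# per-subject cluster occupancy. All names and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def spectral_features(window):
    """Summarize one window by mean energy in four coarse frequency bands."""
    spectrum = np.abs(np.fft.rfft(window))
    return np.array([band.mean() for band in np.array_split(spectrum, 4)])

def featurize(signal, win=64):
    """Chop a signal into fixed-size windows and featurize each one."""
    windows = signal[: len(signal) // win * win].reshape(-1, win)
    return np.stack([spectral_features(w) for w in windows])

def kmeans(X, k=2, iters=50):
    """Tiny Lloyd's-algorithm k-means, just enough for a demo."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.stack([
            X[labels == j].mean(0) if (labels == j).any() else centers[j]
            for j in range(k)
        ])
    return labels

# Two synthetic "subjects": one dominated by low-frequency vibration, one by
# high-frequency vibration (a stand-in for differing phonation patterns).
t = np.arange(4096)
subject_a = np.sin(2 * np.pi * 0.02 * t) + 0.1 * rng.standard_normal(t.size)
subject_b = np.sin(2 * np.pi * 0.25 * t) + 0.1 * rng.standard_normal(t.size)

X = np.vstack([featurize(subject_a), featurize(subject_b)])
labels = kmeans(X, k=2)

# Fraction of each subject's windows landing in each cluster.
n = len(labels) // 2
occupancy_a = np.bincount(labels[:n], minlength=2) / n
occupancy_b = np.bincount(labels[n:], minlength=2) / n
print("subject A cluster occupancy:", occupancy_a)
print("subject B cluster occupancy:", occupancy_b)
```

Because no labels are used, a clinician's knowledge enters only afterwards, when interpreting what the clusters correspond to, which mirrors the labor-saving motivation the researchers describe.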
“It’s becoming increasingly easy to collect long time-series datasets. But you have physicians that need to apply their knowledge to labeling the dataset,” said lead author Jose Javier Gonzalez Ortiz, a PhD student at MIT. “We want to remove that manual part for the experts and offload all feature engineering to a machine-learning model.”
While the system was applied here to a specific sound-related task, it can be adapted to analyze data for other diseases. The current study may help create tools to prevent vocal nodules and to study how the condition develops.