Keeping AIs and ears open: Artificial intelligence uses cough audio to detect COVID-19

12 Nov 2020 by Tristan Manalac
Some believe that AI can be a solution to the science overload

Through artificial intelligence (AI), coughs can be used as auditory biomarkers and help in the screening of asymptomatic patients with the novel coronavirus disease (COVID-19), according to a recent study.

“We have created an AI prescreening test that discriminates 98.5 percent COVID-19 positives from a forced-cough recording, including 100 percent of asymptomatics, at essentially no cost and with an accompanying saliency map for longitudinal explainability,” the researchers said.

In April 2020, the researchers started a global collection of cough audio, facilitated through their website recording engine. On average, three recordings were requested of each volunteer, and all entries were accompanied by 10 multiple-choice questions on demographics (age, sex, country, etc.) and COVID-19 status (tests taken, symptoms, doctor’s evaluation, etc.).

At present, the database contains recordings from 2,660 patients who had definitively tested positive for COVID-19, as well as audio from non-COVID-19 controls, who were enrolled at a 1:10 ratio. Samples for the present AI-based screening system were selected from this database; 80 percent of the eligible recordings were used for training and 20 percent for validation.
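The 80/20 partition described above can be sketched in a few lines. This is an illustrative example only; the function name, the random seed, and the placeholder recording IDs are assumptions, not details from the study.

```python
import random

def split_dataset(recordings, train_frac=0.8, seed=42):
    """Randomly partition recordings into training and validation sets.

    A minimal sketch of an 80/20 split like the one the article
    describes; the seed is fixed here only so the example is repeatable.
    """
    rng = random.Random(seed)
    shuffled = recordings[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

# Placeholder IDs standing in for cough recordings
train, val = split_dataset([f"rec_{i}" for i in range(100)])
print(len(train), len(val))  # 80 20
```

Shuffling before cutting matters: without it, a split taken in collection order could put all of one country's or one clinic's recordings in a single partition.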

The resulting model was able to accurately identify patients diagnosed with COVID-19 using an official test 97.1 percent of the time. [IEEE Open J Eng Med Biol 2020;doi:10.1109/OJEMB.2020.3026928]

This made the AI screening protocol 17.9 percentage points more accurate than self-diagnosis (79.2 percent accuracy) and 0.4 percentage points more accurate than assessment by a doctor (96.7 percent accuracy). The researchers also pointed out that the novel AI tool detected all asymptomatic cases, though with the trade-off of a 16.8 percent false positive rate.

The researchers also proposed a design for the COVID-19 discriminator. In brief, the device takes one or more cough audio recordings as input, splits them into 6-second snippets, and then processes them against four biomarkers: muscular degradation, changes in the vocal cords, changes in mood or sentiment, and changes in the lungs or respiratory tract.
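The first preprocessing step above, cutting each recording into 6-second snippets, can be sketched as follows. The 16 kHz sample rate is an assumption for illustration (the article does not specify one), and trailing audio shorter than a full snippet is simply dropped here.

```python
def split_into_snippets(samples, sample_rate=16000, snippet_seconds=6):
    """Split a recording (a flat list of samples) into fixed-length snippets.

    A sketch of the snippeting step described in the article; only
    complete 6-second windows are kept, for simplicity.
    """
    snippet_len = sample_rate * snippet_seconds
    return [samples[i:i + snippet_len]
            for i in range(0, len(samples) - snippet_len + 1, snippet_len)]

# A 15-second recording yields two full 6-second snippets
audio = [0.0] * (16000 * 15)
print(len(split_into_snippets(audio)))  # 2
```

Each snippet would then be fed to the per-biomarker models; fixed-length inputs keep the downstream feature extraction uniform regardless of how long the volunteer coughed.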

The system also outputs a saliency map, which provides physicians with information on the diagnostic accuracy of the test with increasing cough samples, a per-biomarker analysis of the audio sample, and a variety of composite factors. This could allow physicians to longitudinally monitor their patients, as well as help the research community develop new biomarkers and relevant metrics.

“A group outbreak detection tool could be derived from this model to prescreen whole-populations on a daily basis, while avoiding the cost of testing each inhabitant, especially important in low-incidence areas where the required post-test confinement is harder to justify,” the researchers said, noting that “pandemics could be a thing of the past if prescreening tools are always-on in the background and constantly improved.”

“Eventually we hope our research methods inspire others to develop similar and complementary approaches to disease management beyond dementia and COVID-19,” they added.