An international team led by researchers from the Chinese University of Hong Kong (CUHK) has developed the world’s first artificial intelligence (AI) model that can detect Alzheimer’s disease (AD) solely from retinal photographs.
“Although memory complaints are often considered a sign of AD, an accurate diagnosis of AD based on cognitive tests and structural brain imaging can be difficult, while amyloid-PET scans or testing of cerebrospinal fluid collected via lumbar puncture are invasive and less accessible,” said Dr Lisa Au of the Division of Neurology, CUHK.
“The retina is an extension of the brain in terms of embryology, anatomy and physiology. Hence, it has long been considered a window to study disorders of the central nervous system. Through noninvasive retinal photography, we can detect a range of changes in blood vessels and nerves of the retina that are associated with AD,” explained Professor Clement Tham of the Department of Ophthalmology and Visual Sciences, CUHK.
The researchers created, validated, and tested a deep learning algorithm to detect AD based on 5,598 retinal photographs from 648 AD patients and 7,351 retinal photographs from 3,240 individuals without the disease from 11 multicentre clinical studies across different countries. “To the best of our knowledge, our study included the largest sample size and the most comprehensive metadata of patients with AD for deep learning model development,” noted the researchers.
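The article does not describe the network architecture itself; purely as an illustration of how a bilateral model might combine a subject’s four retinal photographs (optic nerve head- and macula-centred images from both eyes) into a single AD-versus-control prediction, the PyTorch sketch below applies a hypothetical shared CNN encoder to each image and fuses the embeddings in a small classification head. The class name, layer sizes and fusion strategy are assumptions for illustration, not the authors’ implementation.

```python
# Illustrative sketch only: a hypothetical "bilateral" classifier that encodes
# four retinal photographs per subject with a shared CNN and pools the features
# into a single AD-vs-control probability. Not the published architecture.
import torch
import torch.nn as nn


class BilateralRetinaClassifier(nn.Module):
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        # Shared convolutional encoder applied to each of the four photographs.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # Head fuses the four per-image embeddings into one AD probability.
        self.head = nn.Sequential(nn.Linear(4 * feat_dim, 1), nn.Sigmoid())

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # images: (batch, 4, 3, H, W) -- ONH- and macula-centred views, both eyes
        b, n, c, h, w = images.shape
        feats = self.encoder(images.view(b * n, c, h, w)).view(b, -1)
        return self.head(feats).squeeze(1)  # probability of AD dementia


# Example: a batch of 2 subjects, each with 4 photographs of 224x224 pixels.
model = BilateralRetinaClassifier()
probs = model(torch.rand(2, 4, 3, 224, 224))
print(probs.shape)  # torch.Size([2])
```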
In the internal validation dataset, the bilateral model based on four retinal photographs (optic nerve head- and macula-centred images from both eyes) demonstrated 83.6 percent accuracy, 93.2 percent sensitivity, 82.0 percent specificity, and an area under the receiver operating characteristic curve (AUROC) of 0.93 for detection of AD dementia. [Lancet Digit Health 2022;4:e806-e815]
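For readers less familiar with these figures, the sketch below shows how the standard metrics reported here (accuracy, sensitivity, specificity and AUROC) are conventionally computed from a binary classifier’s output using scikit-learn; the labels and scores are synthetic placeholders, not study data.

```python
# Illustrative metric computation with synthetic placeholder data.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])   # 1 = AD dementia, 0 = control
y_score = np.array([0.91, 0.78, 0.66, 0.12, 0.40, 0.55, 0.83, 0.30])  # model probabilities
y_pred = (y_score >= 0.5).astype(int)          # threshold chosen for illustration

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)   # true-positive rate
specificity = tn / (tn + fp)   # true-negative rate
auroc = roc_auc_score(y_true, y_score)

print(f"accuracy={accuracy:.3f} sensitivity={sensitivity:.3f} "
      f"specificity={specificity:.3f} AUROC={auroc:.3f}")
```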
In addition to AD dementia detection, the model was tested against amyloid β biomarker data on multicentre datasets from different regions and countries (Hong Kong, China, Singapore, and the US). In these testing datasets, the bilateral deep learning model had accuracies ranging from 79.6 percent to 92.1 percent and AUROCs ranging from 0.73 to 0.91. In the datasets with PET data, the model was able to differentiate between participants who were amyloid β–positive and those who were amyloid β–negative, with accuracies of 80.6–89.3 percent and AUROCs of 0.68–0.86.
In subgroup analyses, the model’s accuracy was higher in patients with eye disease (89.6 percent) than in those without (71.7 percent), and in patients with diabetes (81.9 percent) than in those without (72.4 percent).
“Our AI model retained a robust ability to differentiate between subjects with and without AD, even in the presence of concomitant eye diseases such as macular degeneration and glaucoma, which are common in city-dwellers and the older population. This further supports our AI analysis of retinal photographs as an excellent tool for detecting memory-depriving AD,” commented Dr Carol Cheung of the Department of Ophthalmology and Visual Sciences, CUHK.
“In addition to its accessibility and noninvasiveness, the accuracy of the new AI model is comparable to imaging tests such as MRI. It shows potential to become not only a diagnostic test in clinics, but also a screening tool for AD in community settings, where hidden high-risk cases can be identified, allowing early initiation of preventive treatments such as anti-amyloid drugs to slow down cognitive decline and brain damage,” said Professor Vincent Mok, Director of the Therese Pei Fong Chow Research Centre for Prevention of Dementia at CUHK.