
Deep Learning Models Could Aid Assessment of Retinitis Pigmentosa


Findings from a recent study indicate a correlation between actual values of visual function and values estimated by a deep learning model using ultra-widefield FAF images.


A cross-sectional study of more than 1000 eyes suggests visual function estimation from deep learning models using ultra-widefield fundus autofluorescence (FAF) images might help objectively assess the progression of retinitis pigmentosa.1

The findings indicate correlations between the actual values of visual function and those estimated by deep learning models using ultra-widefield FAF images, according to investigators led by Yoshinori Mitamura, MD, PhD, of the Department of Ophthalmology, Institute of Biomedical Sciences, Tokushima University Graduate School.

“Because ultra-widefield FAF images can be obtained easily, quickly, and noninvasively without mydriasis, the ability to estimate the visual function in patients with RP from these images would be an additional benefit in routine clinical practice,” wrote the investigative team. “This might indicate that obtaining ultra-widefield FAF images would enable ophthalmologists to monitor retinitis pigmentosa progression during a follow-up period.”

Appropriate clinical evaluation and estimation of residual visual function in patients with retinitis pigmentosa are necessary, as no widely available treatment halts the progression of the disease. Accordingly, the investigative team examined whether deep learning models could estimate visual function in patients with retinitis pigmentosa using ultra-widefield FAF images obtained at the same visits.

In this retrospective, multicenter study, investigators obtained ultra-widefield pseudocolor and ultra-widefield FAF images of 695 consecutive patients (1274 eyes) with retinitis pigmentosa from 5 institutions in Japan between January 2012 and December 2018. They measured best-corrected visual acuity (BCVA), mean deviation (MD), and the mean sensitivity of the central 12 test points (CENT12) using the Humphrey field analyzer (HFA) in the all-eyes group.

Each of the 3 types of input images (ultra-widefield pseudocolor, ultra-widefield FAF, and both) was paired with 1 of 31 types of ensemble models constructed from 5 deep learning models. In the all-eyes group, investigators used 848, 212, and 214 images for the training, validation, and testing data, respectively. All data from a single institution were used only for the independent testing data, not for training or validation.
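
The article does not detail how the ensembles were built, but 31 plausibly corresponds to the 2⁵ − 1 non-empty combinations of the 5 base networks. Below is a minimal sketch of that construction, assuming each ensemble simply averages its member models' predictions; the model names and prediction values are hypothetical, not taken from the study:

```python
from itertools import combinations
import numpy as np

def ensemble_predictions(base_preds):
    """base_preds: dict mapping model name -> array of per-image predictions.
    Returns a dict mapping each non-empty combination of models to the mean
    of the member models' predictions (a simple averaging ensemble)."""
    names = sorted(base_preds)
    ensembles = {}
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            stacked = np.stack([base_preds[m] for m in combo])
            ensembles[combo] = stacked.mean(axis=0)
    return ensembles

# Hypothetical per-model estimates of mean deviation (dB) for 4 test images
preds = {
    "vgg16": np.array([-8.1, -12.4, -5.0, -20.3]),
    "resnet50": np.array([-7.6, -13.0, -4.2, -19.8]),
    "densenet121": np.array([-8.4, -11.9, -5.5, -21.0]),
    "inception_v3": np.array([-7.9, -12.6, -4.8, -20.1]),
    "xception": np.array([-8.0, -12.2, -5.1, -20.6]),
}
all_ensembles = ensemble_predictions(preds)
print(len(all_ensembles))  # 31 = 2**5 - 1 non-empty combinations
```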

Investigators noted the image type–ensemble model combination yielding the smallest mean absolute error was defined as the model with the best estimation accuracy. Correlations between the actual values in the testing data and the values estimated by the best-accuracy model were examined by calculating standardized regression coefficients and P values, they added.
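
A minimal sketch of this selection step follows, assuming the mean absolute error is computed on held-out test predictions and the standardized regression coefficient is the slope of actual on estimated values after z-scoring both (which, for simple linear regression, equals Pearson's r). All data and candidate names below are hypothetical:

```python
import numpy as np

def mean_absolute_error(y_true, y_pred):
    """Average absolute difference between actual and estimated values."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def standardized_regression_coefficient(y_true, y_pred):
    """Slope of actual ~ estimated after z-scoring both variables;
    equivalent to Pearson's r for simple linear regression."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    zt = (y_true - y_true.mean()) / y_true.std()
    zp = (y_pred - y_pred.mean()) / y_pred.std()
    return float(np.mean(zt * zp))

# Hypothetical actual MD values (dB) and estimates from two candidate models
actual = np.array([-6.2, -14.8, -3.9, -22.5, -10.1])
candidates = {
    "faf_only": np.array([-7.0, -13.9, -4.5, -21.2, -11.0]),
    "pseudocolor_only": np.array([-9.5, -11.0, -7.8, -17.9, -13.6]),
}

# Pick the candidate with the smallest MAE, then check the correlation
best = min(candidates, key=lambda k: mean_absolute_error(actual, candidates[k]))
print(best, standardized_regression_coefficient(actual, candidates[best]))
```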

In the study population, 310 patients were male (44.6%) and 385 were female (55.4%), with a mean age of 53.9 years. In the all-eyes group, the image type for which the model yielded the smallest mean absolute error in estimating MD, CENT12, and BCVA was the ultra-widefield FAF image alone, according to investigators.

Data showed standardized regression coefficients of 0.684 (95% confidence interval [CI], 0.567-0.802) for MD estimation, 0.697 (95% CI, 0.590-0.804) for CENT12 estimation, and 0.309 (95% CI, 0.187-0.430) for BCVA estimation (all P <.001).

Estimation accuracy for MD, CENT12, and BCVA improved when the model used the data set of eyes with autofluorescence (AF) rings, compared with the data set of eyes without AF rings, according to investigators.

Study data indicated the estimation accuracy of the deep learning model tended to be higher with the use of ultra-widefield FAF images alone, suggesting their benefit in estimating visual function.

“The deep learning model had higher estimation accuracy from ultra-widefield FAF images alone likely because this type of image had more information for the model,” they wrote. “Thus, the information on the retinal pigment epithelium function reflected in the FAF images could be highly beneficial in estimating visual functions.”

References

  1. Nagasato D, Sogawa T, Tanabe M, et al. Estimation of Visual Function Using Deep Learning From Ultra-Widefield Fundus Images of Eyes With Retinitis Pigmentosa. JAMA Ophthalmol. Published online February 23, 2023. doi:10.1001/jamaophthalmol.2022.6393