Abstract
Objectives
To demonstrate the feasibility of a deep learning-based vascular segmentation tool for UWFA and evaluate its ability to automatically identify quality-optimized phase-specific images.
Methods
Cumulative retinal vessel areas (RVA) were extracted from all available UWFA frames. Cubic splines were fitted for serial vascular assessment throughout the angiographic phases of eyes with diabetic retinopathy (DR), sickle cell retinopathy (SCR), or normal retinal vasculature. The image with maximum RVA was selected as the optimum early phase. A late phase frame was selected at a minimum of 4 min that most closely mirrored the RVA from the selected early image. Trained image analysts evaluated the selected pairs.
Results
A total of 13,980 UWFA frames from 462 sessions were used to evaluate performance, and 1578 UWFA frames from 66 sessions were used to create cubic splines. Maximum RVA was detected at a mean of 41 ± 15, 47 ± 27, and 38 ± 8 s for DR, SCR, and normal eyes, respectively. In 85.2% of the sessions, appropriate images for both phases were successfully identified. The individual success rate was 90.7% for early and 94.6% for late frames.
Conclusions
Retinal vascular characteristics are highly phase- and field-of-view-sensitive. Vascular parameters extracted by deep learning algorithms can be used for quality assessment of angiographic images and quality-optimized phase selection. Clinical application of a deep learning-based vascular segmentation and phase selection system might significantly improve the speed, consistency, and objectivity of UWFA evaluation.
Introduction
Retinal vasculature features provide crucial information for the diagnosis and severity assessment of various ophthalmic diseases. Visualization of retinal vessels enables the detection of manifestations of systemic conditions such as diabetes mellitus and hypertension [1, 2]. Early studies of machine learning (ML) applications in retinal photography identified vascular architecture as the primary dictating factor of computer-based diagnosis and risk factor predictions [3]. Progress in image analysis and deep learning-based algorithms has significantly advanced the accuracy of retinal vessel segmentation and feature extraction on color fundus photography [4,5,6]. Identification of vascular biomarkers is promising not only for expanding the understanding of pathophysiology but also for introducing new possibilities for personalized treatments by connecting specific pathologic features with optimal treatments.
Ultra-widefield fluorescein angiography (UWFA) can capture a 200° field of view (FOV) compared to conventional imaging with 30–60° FOV, enabling a more comprehensive disease evaluation [7, 8]. As such, UWFA has become an essential tool for posterior segment disorders due to its ability to identify near-panretinal abnormalities within the retinal vasculature. Fluorescein angiographic features such as retinal vascular non-perfusion, leakage, and microaneurysms are indicators of disease severity [9]. Therefore, quantitative analysis of UWFA images offers significant potential for both clinical and research applications. However, studies on ML vessel extraction from UWFA images are significantly limited in number compared to those of color fundus photography [10,11,12]. Large fluorescein angiography imaging datasets with detailed manual annotations of blood vessels are required to perform this analysis.
One major limitation of UWFA imaging for rapid image assessment is the large number of images that are obtained in a given UWFA session. Often, only a small number of key images are needed for clinician review or automated analysis. Ophthalmologists in clinics and image analysis laboratories manually review the entire set of images to choose the optimal image to assess the angiographic features of interest. Identifying the highest quality phase-specific (e.g., arteriovenous (early), late) images requires significant time and may be highly subjective. In addition, media opacities (e.g., vitreous debris, hemorrhage, and cataract), lid artifacts, optimal eye-camera distance, sufficient fluorescein dye infusion, injection-to-image time, and centration may impact the image quality. An automated arteriovenous and late phase pair selection tool could be the first step in improving efficiency and reducing variability in UWFA image selection.
In this report, we provide a feasibility assessment of an ML-based UWFA vascular segmentation platform and utilize this system to evaluate changes in vascular areas across the entire UWFA sequence in eyes with various underlying pathologies (e.g., normal, diabetic retinopathy (DR), and sickle cell retinopathy (SCR)). In addition, this tool was utilized as a basis for developing an automated quality-optimized phase selection tool for both arteriovenous (i.e., early) and late phase angiograms.
Methods
UWFA retinal vessel segmentation and vascular area extraction
A convolutional deep learning model was trained using manual annotations of the vasculature. The training set for vascular segmentation developed for the current platform consisted of arteriovenous phase UWFA frames with optimal visualization of vascular structures and various pathologies, including non-perfusion, leakage, and neovascularization. Annotators varied the contrast to compensate for variable background fluorescence. Sections with low segmentation confidence were omitted to maintain highly accurate ground truth [13, 14]. Early generation vessel masks were extracted from the training set images with conventional image processing algorithms (Fig. 1C) as previously described and were used as initial templates for manual segmentation to facilitate efficiency and prevent annotator fatigue [15]. A total of 7787 256 × 256-pixel patches were extracted from 17 angiograms and angiogram sections. A randomly selected 10% of the patches were withheld from training and reserved for performance assessment by F-score calculation and qualitative comparison. An NVIDIA GeForce GTX 1080 was used to train the model, which consists of 18 convolutional layers with concatenation to provide lower-level information to higher levels of the model. The architecture was based on the U-Net previously reported by Ronneberger et al. [16]. One key difference was the use of 12 × 12 convolutional kernels; this larger kernel size captures more contextual information to improve performance [17]. Grayscale images created by the deep learning model were post-processed to achieve binary vessel masks with minimal artifacts. Retinal vessel areas (RVA) were computed from the retinal vessel masks using a custom Python script.
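As a concrete illustration of the final step, RVA can be computed from a binary vessel mask by counting vessel pixels. The sketch below is a hypothetical stand-in for the custom Python script mentioned above (the function name and the uniform pixel-to-mm² scale factor are assumptions; the study instead applies a de-warping transformation for the pixel-to-mm² conversion):

```python
import numpy as np

def retinal_vessel_area(mask, mm2_per_pixel=1.0):
    """Compute retinal vessel area (RVA) from a binary vessel mask.

    Illustrative sketch only: RVA is taken as the count of vessel pixels,
    optionally scaled by an (assumed uniform) pixel-area factor. The paper's
    platform converts pixels to mm^2 via a de-warping transformation instead.
    """
    mask = np.asarray(mask, dtype=bool)
    return int(mask.sum()) * mm2_per_pixel

# Toy 4x4 post-processed mask with 5 vessel pixels
toy = np.array([[1, 0, 0, 0],
                [1, 1, 0, 0],
                [0, 1, 0, 0],
                [0, 1, 0, 0]])
area_px = retinal_vessel_area(toy)  # area in pixel units
```

In practice the per-pixel area varies across the ultra-widefield image, which is why a spatially varying correction such as the cited de-warping transformation is needed rather than a single scalar factor.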
UWFA image selection
This retrospective image analysis study received IRB approval at the Cleveland Clinic for retinal vascular assessment and at Johns Hopkins for the SCR image analysis portion of the study. Given the retrospective nature of these analyses, the informed consent requirement was waived. In addition, this study included images from the intravitreal aflibercept as indicated by real-time objective imaging to achieve DR improvement (PRIME) clinical trial (NCT03531294), which was IRB-approved by the Sterling IRB. As this was a prospective clinical trial, informed consent was obtained from all subjects. This analysis adhered to the tenets of the Declaration of Helsinki.
A total of 492 complete UWFA sessions were identified for eyes imaged on either the 200Tx or California UWFA imaging systems (Optos, Dunfermline, Scotland, United Kingdom). When both eyes were imaged, the eye with more available UWFA frames was included in the study. Intra-study changes in vascular area were examined in a total of 63 eyes: 33 with DR, 18 with SCR, and 12 with normal retinal vasculature. Arteriovenous and late pair selection tool performance was evaluated using UWFA images from 462 sessions of eyes with DR from the PRIME clinical trial.
Assessment of intra-study changes in the vascular area
RVA and timestamps (converted to seconds) were calculated for each image in UWFA sessions of eyes with DR, SCR, and normal retina. The detected areas were graphed over time after dye injection using R software (Vienna, Austria). Peak RVA and the corresponding timestamp were identified for each eye. Mean RVA was calculated for each cohort using the UWFA frames with maximum vasculature detection for each eye. The selected frames were corrected by a previously described de-warping transformation algorithm to convert the vessel area measurements from pixels to mm2 [18]. Mean time to optimal vasculature detection for each cohort was calculated using the timestamps corresponding to maximum vessel area detection in each eye. Cubic smoothing splines with 10 degrees of freedom were fit to each dataset to visualize the trend.
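The peak-detection step can be sketched as follows. This is a minimal illustration, not the study's code, and it reproduces only the maximum-RVA lookup; the study additionally fits cubic smoothing splines (10 degrees of freedom, in R) to the full RVA-timestamp series to visualize the trend:

```python
import numpy as np

def peak_rva(timestamps_s, rva_mm2):
    """Return (timestamp, area) of the frame with maximum detected RVA.

    Hypothetical sketch: given per-frame timestamps in seconds and the
    corresponding RVA measurements, locate the peak used to characterize
    each eye. np.argmax returns the first index in case of ties.
    """
    t = np.asarray(timestamps_s, dtype=float)
    a = np.asarray(rva_mm2, dtype=float)
    i = int(np.argmax(a))
    return t[i], a[i]

# Example: RVA rises during the arteriovenous phase and then declines
t_peak, a_peak = peak_rva([10, 30, 50, 70], [40.0, 90.0, 104.0, 80.0])
```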
Arteriovenous and late pair selection and performance evaluation
All UWFA frames were run through the retinal vessel extraction and area calculation algorithm without any manual input. The image corresponding to the mask with maximum RVA was identified as the optimal arteriovenous phase image. Among the frames with timestamps later than 4 min, the image whose retinal vessel area was closest to that of the arteriovenous phase image was selected as the optimal late image.
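The selection rule described above can be sketched as a short function. This is an illustrative reconstruction under stated assumptions: `frames` is a hypothetical list of (timestamp in seconds, RVA) tuples, whereas the real platform operates on the segmented vessel masks themselves:

```python
def select_phase_pair(frames, late_cutoff_s=240.0):
    """Select a quality-optimized early/late UWFA frame pair.

    Sketch of the paper's rule: the early (arteriovenous) frame is the one
    with maximum RVA; the late frame is the frame at >= 4 min whose RVA most
    closely mirrors the early frame's. Returns (early, late); late is None
    when the session has no frame past the cutoff.
    """
    early = max(frames, key=lambda f: f[1])
    late_candidates = [f for f in frames if f[0] >= late_cutoff_s]
    if not late_candidates:
        return early, None  # no usable late frame in this session
    late = min(late_candidates, key=lambda f: abs(f[1] - early[1]))
    return early, late

# Example session: (timestamp_s, rva) per frame
session = [(10, 60), (45, 104), (120, 95), (250, 90), (300, 101)]
early, late = select_phase_pair(session)
```

Matching the late frame's RVA to the early frame's acts as an implicit quality filter: late frames degraded by background fluorescence, artifacts, or FOV shifts deviate more from the early-phase vessel area.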
Two trained image analysts (DDS and MO) independently evaluated the automatically selected pairs for individual and combined success of arteriovenous and late phase image selection. Evaluation criteria included contrast, FOV, obscuring artifacts, centering, and appropriateness of phase for angiographic feature evaluation, such as non-perfusion in arteriovenous phase images and leakage in late phase images. If an automatically selected image was graded as non-optimal according to at least one evaluation criterion, readers reviewed the remaining UWFA sequences to determine whether a superior image existed. The automated selection was considered unsuccessful when a preferable image was identified by the readers. In cases of disagreement, an independent ophthalmologist (JPE) evaluated the images for the final decision. Inter-rater and inter-machine agreements were evaluated.
Results
RVA extraction performance
Seven hundred seventy-nine (10%) patches randomly selected from the training set were used to evaluate model performance, yielding an F-score of 0.77. Retinal vessel masks created by our deep learning model were highly accurate and superior in capturing details of small-diameter vessels compared with masks created by conventional algorithms or unsupervised methods (Fig. 1) [12, 14]. Accurate detection of blood vessels and manual blood vessel annotation in UWFA images were challenged by changes in observable vasculature as the dye perfuses the retina, variable contrast, the labor-intensive nature of manual segmentation, and image quality [10].
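For reference, a pixel-wise F-score between a predicted mask and its ground truth can be computed as below. This is a generic illustration of the metric, not the study's evaluation code:

```python
import numpy as np

def f_score(pred, truth, eps=1e-9):
    """Pixel-wise F1 score between predicted and ground-truth vessel masks.

    F1 is the harmonic mean of precision and recall over vessel pixels;
    eps guards against division by zero for empty masks.
    """
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.logical_and(pred, truth).sum()
    precision = tp / (pred.sum() + eps)
    recall = tp / (truth.sum() + eps)
    return 2 * precision * recall / (precision + recall + eps)

# Toy 2x2 example: one true positive, one false positive, one false negative
score = f_score([[1, 1], [0, 0]], [[1, 0], [1, 0]])
```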
Intra-study changes in the vascular area
A total of 1578 sequential UWFA images from 66 sessions of eyes with DR (33 eyes, 787 frames), SCR (18 eyes, 462 frames), and normal retina (12 eyes, 329 frames) were used to graph ML-detected RVA over time after dye injection. The maximum RVA was detected in the arteriovenous phase, with means of 104 ± 22, 96 ± 25, and 105 ± 13 mm2 in the DR, SCR, and normal cohorts, respectively. Cubic smoothing splines fitted to the UWFA sessions of each group are shown in Fig. 2. The mean timestamp for maximum retinal vessel detection was 41 ± 15, 47 ± 27, and 38 ± 8 s for the DR, SCR, and normal retina groups, respectively.
Arteriovenous and late pair selection and performance
The automated selection tool successfully identified appropriate images for both phases in 394 of 462 visits (85.2%). Readers (DDS and MO) agreed in 441 of 462 (95.5%) visits. Following adjudication, success rates for identifying early and late images individually were 90.7% (419/462) and 94.6% (437/462), respectively. Figure 3 demonstrates an example of successful pair selection. Of the 43 sessions considered unsuccessful for early phase image selection, the algorithm was unable to identify an early image in 6; in the remaining 37 visits, the selected images were not considered optimal because of image quality issues such as non-central FOV, poor contrast, or an inappropriate angiographic phase for non-perfusion evaluation. Of the sessions where the algorithm did not detect an optimal late image, the algorithm failed to identify a late image in 10; in the remaining 15, a superior late phase image was available based on image focus (Fig. 4) and FOV.
Discussion
This study demonstrates the change in deep learning-assisted retinal vessel detection by UWFA phase and its successful application as an automated arteriovenous and late pair selection tool. To our knowledge, this is the first proposed automated UWFA image selection method based on retinal vascular segmentation area assessments.
Accurately detecting changes in small blood vessels is critical when dealing with diseases that result in microvascular abnormalities, such as DR. RVA-timestamp graphs demonstrate a peak in retinal vessel detection within the first minute after injection. Approximately 2 min after dye injection, detected vessel areas change only minimally as background fluorescence increases with dye perfusion. Increased background brightness decreases contrast and interferes with the detection of small vessels. Figure 2 suggests that the optimal timeframe to measure vascular change is within the first minute after injection. Similarly, frames captured later than 2 min after injection may not be appropriate for non-perfusion analysis because of the increased background fluorescence (Fig. 3). RVA-timestamp graphs are also helpful in identifying images with artifacts or anomalous FOVs, since obstructions or changes in FOV result in significant changes in detected vascular area (Fig. 5). The arteriovenous and late pair selection algorithm using these principles successfully identified optimal arteriovenous and late phase pairs in 85.2% of sessions. As part of the imaging protocol in the PRIME clinical trial, peripheral sweeps were performed after 1 min with the UWFA device. The FOV changes due to peripheral sweeps introduced irregularities in detectable vessel area graphs and at times resulted in selection errors (Fig. 5).
There are several potential uses of this tool in the clinical setting. At the time of image acquisition, real-time assessment of vasculature could provide immediate feedback to photographers regarding image optimization. For the clinician, automated selection of the optimal early and late phase frames for review may increase efficiency by potentially eliminating the need to review the entire UWFA sequence. In addition, an automated image selection platform is essential for fully automated clinical deployment of quantitative UWFA analysis of clinically important features, including leakage, non-perfusion, and microaneurysms. Automated real-time quantification of these features in a clinical setting has the potential to serve as a clinician decision-support tool. The role of this tool in real-time patient care is currently being explored in ongoing studies.
There are several limitations to this study. In this methodology, vessel areas were calculated across the whole image without determining a region of interest. This resulted in the inclusion of artifacts such as eyelids and eyelashes in area measurements. The variability in FOV from subject to subject limited the comparison between groups. Another limitation for cross-sectional comparison is that the background brightness in well-perfused areas prevents visualization and detection of microvasculature compared to areas where perfusion is compromised. This phenomenon results in increased vessel area detection in the early stages of non-perfusion. RVA-timestamp graphs are limited by the small sample size and unequal distribution of available UWFA frames. Typically, more frames are captured in the arteriovenous phase compared to the late phase. An additional potential limitation of segmentation is leakage interference. The presence of large leakage foci in late phase images may result in loss of local vessel segmentation due to obscuration of the underlying vasculature. This measurement error does not appear to affect the performance of the late selection algorithm, as the leakage location is the same in all late frames of a given UWFA session. Future studies with more frequent frame capture from the first sign of dye through complete venous filling are required to better understand how the detectable RVA over time is affected by different pathologies. Another limitation of the current automated platform is that the stratification of the phase distribution is timestamp-dependent. Timestamp accuracy is dependent on the photographer and is therefore prone to error and may not be available in similar formats.
This study confirms the feasibility of an automated quality-optimized phase selection tool using retinal vessel detection by deep learning algorithms. This is demonstrated by sufficiently high accuracy, speed, and reliability when interrogating pathologic eyes in a clinical setting. In addition, an optimal window for retinal vessel analysis is demonstrated. Further studies are needed to create a timestamp-independent selection tool and to further explore the relationship between detectable vasculature and retinal pathologies. Automating the image selection process saves significant time in image analysis and eliminates subjectivity. It has the potential for multi-level improvements to clinical workflow and automated systems for image interpretation.
Summary
What was known before
- One major limitation of UWFA imaging for rapid image assessment is the large number of images that are obtained in a given UWFA session.
- Often, only a small number of key images are needed for clinician review or automated analysis.
- Identifying the highest quality phase-specific (e.g., arteriovenous (early), late) images requires significant time and may be highly subjective.
What this study adds
- This study provides an assessment of a machine learning-based UWFA vascular segmentation platform and utilizes this system to evaluate changes in vascular areas across the entire UWFA sequence in eyes with various underlying pathologies.
- In addition, this tool was utilized as a basis for developing an automated quality-optimized phase selection tool for both arteriovenous (i.e., early) and late phase angiograms.
References
Cabrera DeBuc D, Somfai GM, Koller A. Retinal microvascular network alterations: potential biomarkers of cerebrovascular and neural diseases. Am J Physiol Heart Circ Physiol. 2017;312:H201–H12.
MacGillivray TJ, Trucco E, Cameron JR, Dhillon B, Houston JG, van Beek EJ. Retinal imaging as a source of biomarkers for diagnosis, characterization and prognosis of chronic illness or long-term conditions. Br J Radiol. 2014;87:20130832.
Poplin R, Varadarajan AV, Blumer K, Liu Y, McConnell MV, Corrado GS, et al. Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. Nat Biomed Eng. 2018;2:158–64.
Liskowski P, Krawiec K. Segmenting retinal blood vessels with deep neural networks. IEEE Trans Med Imaging. 2016;35:2369–80.
Joonyoung S, Boreom L. Development of automatic retinal vessel segmentation method in fundus images via convolutional neural networks. Conf Proc IEEE Eng Med Biol Soc. 2017;2017:681–4.
Yin B, Li H, Sheng B, Hou X, Chen Y, Wu W, et al. Vessel extraction from non-fluorescein fundus images using orientation-aware detector. Med Image Anal. 2015;26:232–42.
Manivannan A, Plskova J, Farrow A, McKay S, Sharp PF, Forrester JV. Ultra-wide-field fluorescein angiography of the ocular fundus. Am J Ophthalmol. 2005;140:525–7.
Ghasemi Falavarjani K, Wang K, Khadamy J, Sadda SR. Ultra-wide-field imaging in diabetic retinopathy; an overview. J Curr Ophthalmol. 2016;28:57–60.
Ehlers JP, Jiang AC, Boss JD, Hu M, Figueiredo N, Babiuch A, et al. Quantitative ultra-widefield angiography and diabetic retinopathy severity: an assessment of panretinal leakage index, ischemic index and microaneurysm count. Ophthalmology 2019;126:1527–32.
Ding L, Kuriyan A, Ramchandran R, Sharma G. Quantification of longitudinal changes in retinal vasculature from wide-field fluorescein angiography via a novel registration and change detection approach. In: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 2018. p. 1070–4.
Ding L, Kuriyan A, Ramchandran R, Sharma G. Multi-scale morphological analysis for retinal vessel detection in wide-field fluorescein angiography. In: IEEE Western New York Image and Signal Processing Workshop (WNYISPW). 2017. p. 1–5.
Fan W, Uji A, Borrelli E, Singer M, Sagong M, van Hemert J, et al. Precise measurement of retinal vascular bed area and density on ultra-wide fluorescein angiography in normal subjects. Am J Ophthalmol. 2018;188:155–63.
Moosavi A, Figueiredo N, Prasanna P, Srivastava SK, Sharma K, Madabhushi A, et al. Imaging features of vessels and leakage patterns predict extended interval aflibercept dosing using ultra-widefield angiography in retinal vascular disease: findings from the PERMEATE study. IEEE Trans Biomed Eng. 2021;68:1777–86.
Jiang A, Srivastava S, Figueiredo N, Babiuch A, Hu M, Reese J, et al. Repeatability of automated leakage quantification and microaneurysm identification utilising an analysis platform for ultra-widefield fluorescein angiography. Br J Ophthalmol. 2020;104:500–3.
Ehlers JP, Wang K, Vasanji A, Hu M, Srivastava SK. Automated quantitative characterisation of retinal vascular leakage and microaneurysms in ultra-widefield fluorescein angiography. Br J Ophthalmol. 2017;101:696–9.
Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. 2015. https://arxiv.org/abs/1505.04597.
Peng C, Zhang X, Yu G, Luo G, Sun J. Large kernel matters—improve semantic segmentation by global convolutional network. 2017. https://arxiv.org/abs/1703.02719.
Croft DE, van Hemert J, Wykoff CC, Clifton D, Verhoek M, Fleming A. et al. Precise montaging and metric quantification of retinal surface area from ultra-widefield fundus photography and fluorescein angiography. Ophthalmic Surg Lasers Imaging Retina. 2014;45:312–7.
Funding
NIH/NEI K23-EY022947-01A1 (JPE), Betty J. Powers Retina Research Fellowship (DDS).
Author information
Authors and Affiliations
Contributions
DDS: performed data analysis, image analysis, and manuscript preparation/revisions; SKS: provided supervision, provided imaging data, manuscript revisions; CW: provided data acquisition and imaging data, manuscript revisions; AWS: provided data acquisition and imaging data, manuscript revisions; JH: performed image analysis, provided data organization, manuscript revisions; MO: performed image analysis, provided data organization, manuscript revisions; JW: provided segmentation expertise and analysis support, manuscript revisions; AV: provided segmentation expertise and analysis support, manuscript revisions; JLR: provided supervision, data resources, and manuscript revisions; JPE: provided project oversight, funding support, resource support, study planning, manuscript revisions.
Corresponding author
Ethics declarations
Competing interests
SKS receives funding from Gilead, Regeneron, and Allergan; receives compensation as a consultant from Bausch and Lomb and Santen; owns a patent with Leica. CW receives compensation as a consultant from Adverum, Allergan, Apellis, Clearside, EyePoint, Genentech/Roche, Neurotech, Novartis, Opthea, Regeneron, Regenxbio, Samsung, Santen, Alimera Sciences, Allegro, Alnylam, Bayer, Clearside, D.O.R.C., Kodiak, Notal Vision, ONL Therapeutics, PolyPhotonix, and RecensMedical. AWS receives compensation as a consultant from Allergan and Novartis. AV is an employee of ERT. JPE receives funding and compensation as a consultant from Aerpio, Adverum, Alcon, Thrombogenics/Oxurion, Regeneron, Stealth, Roche, Genentech, Novartis, and Allergan; receives compensation as a consultant from Roche, Leica, Zeiss, Allegro, Santen and has a patent with Leica.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Sevgi, D.D., Srivastava, S.K., Wykoff, C. et al. Deep learning-enabled ultra-widefield retinal vessel segmentation with an automated quality-optimized angiographic phase selection tool. Eye 36, 1783–1788 (2022). https://doi.org/10.1038/s41433-021-01661-4