Deep Learning for Quality Assessment of Optical Coherence Tomography Angiography Images

It has already been shown that image quality can greatly affect quantitative measurements of OCTA images25. Additionally, the presence of retinal pathology may increase the prevalence of image artifacts7,26. Indeed, consistent with previous studies, we found that increasing age and greater severity of retinal disease were significantly correlated with poorer image quality27. Thus, image quality assessment is paramount before any quantitative analysis of OCTA imaging. Most studies analyzing OCTA images use a machine-reported signal intensity threshold to exclude poor-quality images. Although signal strength has been shown to affect the quantification of OCTA parameters, high signal strength alone may not be sufficient to exclude images with artifacts2,3,28,29. Thus, a more robust method of image quality control is needed. To this end, we evaluated the performance of a supervised deep learning approach against machine-reported signal strength.

We developed more than one model to assess image quality because different scenarios in which OCTA is used may have different image quality requirements. For example, images should be of higher quality if the output of interest is continuous (e.g., vessel density or fractal dimension in a research or clinical trial setting) rather than binary (e.g., presence of choroidal or preretinal neovascularization in a clinical setting). The particular quantitative parameter of interest also matters: the foveal avascular zone will not be affected by a non-central media opacity, but vessel density will be. While our approach remains focused on general image quality, not tied to the requirements of any particular metric, and is intended as a direct replacement for machine-reported signal strength, we hope to provide users with a greater degree of control, such that they can choose the model corresponding to the maximum degree of image artifact considered acceptable for their particular metric of interest.

For both low-quality and high-quality scenarios, we demonstrate excellent performance of deep convolutional neural networks with skip connections in the quality assessment of 8 × 8 mm OCTA images of the superficial capillary plexus, with AUCs of 0.97 and 0.99 for the high- and low-quality models, respectively. We also show superior performance of our deep learning approach compared with using machine-reported signal strength alone. Skip connections allow neural networks to learn features at multiple levels of granularity, capturing fine-grained aspects of images such as contrast as well as global features such as image centering30,31. Because image artifacts that affect image quality are better identified across a wide range of scales, neural network architectures with skip connections can outperform those without for the task of determining image quality.
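The skip (residual) connection described above can be sketched in a few lines. This is a minimal NumPy illustration of the idea, not the paper's actual ResNet implementation; `conv_block` is a hypothetical stand-in for the convolution and activation layers of a real residual block.

```python
import numpy as np

def conv_block(x, weight):
    # Hypothetical stand-in for conv + activation layers:
    # a linear map followed by ReLU.
    return np.maximum(0.0, x @ weight)

def residual_block(x, weight):
    # Skip connection: the input bypasses the transformation and is
    # added back, letting features flow through at multiple scales.
    return conv_block(x, weight) + x

x = np.ones((1, 4))
w = np.zeros((4, 4))  # with zero weights the block reduces to the identity
y = residual_block(x, w)
print(np.allclose(y, x))  # True: the skip path preserves the input
```

Because the identity path always survives, gradients and low-level image features (e.g., contrast) reach deep layers intact, which is the intuition behind the multi-scale robustness discussed above.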

By testing our models on 6 × 6 mm OCTA images, a size different from that on which the models were trained, we noted a reduction in classification performance for both the high-quality and low-quality models (Fig. 2). This drop was greater for the AlexNet models than for the ResNet models. The relatively better performance of ResNet may be due to the ability of the residual connections to carry information at multiple scales, making the models more robust when classifying images taken at multiple scales and/or magnifications.

Some differences between the 8 × 8 mm and 6 × 6 mm images that may contribute to the decreased classification performance include a relatively larger proportion of the image occupied by the foveal avascular zone, changes in the visibility of the vascular arcades, and the absence of the optic nerve in 6 × 6 mm images. Regardless, the ability of our high-quality ResNet model to achieve an AUC of 85% for 6 × 6 mm images, a configuration on which the model was not trained, suggests that the image quality information encoded in the neural network is applicable beyond the single image size or machine configuration on which it was trained (Table 2). Reassuringly, the class activation maps of the ResNet and AlexNet models highlight the retinal vessels in both the 8 × 8 mm and 6 × 6 mm images, suggesting that the models have learned important information applicable to the classification of both types of OCTA images (Fig. 4).

Lauermann et al. similarly used a deep learning approach for image quality assessment of OCTA images, using the Inception architecture, a different convolutional neural network with skip connections6,32. They also limited their study to images of the superficial capillary plexus, but used only smaller 3 × 3 mm images from the Optovue AngioVue, although patients with various chorioretinal diseases were also included. Our work builds on theirs by including multiple models to handle different image quality thresholds and by validating the results across multiple image sizes. We additionally report AUC metrics for our machine learning models and improve on their already impressive accuracy (90%) with our low-quality (96%) and high-quality (95.7%) models6.
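The AUC metric reported here has a simple probabilistic reading: it is the probability that a randomly chosen positive (e.g., good-quality) image receives a higher model score than a randomly chosen negative one. A minimal sketch of that pairwise definition, using hypothetical scores rather than the paper's data:

```python
def auc_from_scores(scores, labels):
    # AUC = fraction of (positive, negative) pairs where the positive
    # example scores higher; ties count as half a win.
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example with hypothetical model scores:
scores = [0.9, 0.8, 0.3, 0.2]
labels = [1, 1, 0, 1]
print(auc_from_scores(scores, labels))  # 2 of 3 pairs are ranked correctly
```

Unlike raw accuracy, this quantity is insensitive to the choice of decision threshold, which is one reason AUC is a useful complement to the accuracy figures discussed above.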

There are several limitations to this study. First, the images were taken on a single OCTA machine, and only 8 × 8 mm and 6 × 6 mm images of the superficial capillary plexus were included. Images from deeper layers were excluded because projection artifact would have made manual classification more difficult and potentially less consistent. Additionally, the images were taken only in diabetic patients, a patient population for which OCTA is becoming an important diagnostic and prognostic tool33,34. Although we were able to validate our models on images of a different size to ensure the robustness of the results, we were unable to identify a suitable dataset from a different center, which limited our assessment of the models' generalizability. Although drawn from a single center, our images came from patients of diverse ethnic and racial backgrounds, a unique strength of our study. By incorporating this diversity into our training process, we hope that our results will be more broadly generalizable and that we will avoid encoding racial biases into our trained models.

Our study demonstrates that neural networks with skip connections can be trained to achieve high performance in determining OCTA image quality. We provide these models as a tool for further study. Since different quantitative metrics may have different image quality requirements, separate quality control models could be developed for each metric using the framework established here.

Future studies should include images of different sizes, different depths, and different OCTA machines to obtain a deep learning image quality classification process that is generalizable across OCTA platforms and imaging protocols. The current study also relies on supervised deep learning approaches that require manual image evaluation and grading, which can be laborious and time-consuming for large datasets. It remains to be seen whether unsupervised deep learning approaches can adequately separate low-quality images from high-quality images.

As OCTA technology continues to evolve and scanning speed improves, the frequency of imaging artifacts and poor quality images is likely to decrease. Software advancements such as the recent introduction of projection artifact removal will likely alleviate these limitations as well. Nevertheless, many challenges remain, as imaging patients with poor fixation or with significant media opacities will invariably result in image artifacts. As OCTA is increasingly used in clinical trials, careful consideration is needed to establish clear guidelines for the degree of image artifact considered acceptable for image analysis. The application of deep learning approaches to OCTA images shows great promise and further study in this area is needed to develop a robust approach to image quality control.
