Explainable AI for Glaucoma Prediction Analysis to Understand Risk Factors in Treatment Planning

Publication Name

IEEE Transactions on Instrumentation and Measurement

Abstract

Glaucoma causes irreversible blindness. In 2020, about 80 million people worldwide had glaucoma. Existing machine learning (ML) models are limited to glaucoma prediction alone, leaving clinicians, patients, and medical experts unaware of how the data are analyzed and decisions are made. Explainable artificial intelligence (XAI) and interpretable ML (IML) create opportunities to increase user confidence in the decision-making process. This article proposes XAI and IML models for analyzing glaucoma predictions/results. XAI primarily uses an adaptive neuro-fuzzy inference system (ANFIS) and pixel density analysis (PDA) to provide trustworthy explanations for glaucoma predictions from infected and healthy images. IML uses submodular pick local interpretable model-agnostic explanation (SP-LIME) to explain results coherently; SP-LIME interprets the results of a spiking neural network (SNN). Using two publicly available datasets, namely fundus (coherence tomography) images of the eyes and clinical medical records of glaucoma patients, our experimental results show that the XAI and IML models provide convincing and coherent decisions for clinicians/medical experts and patients.
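For readers unfamiliar with SP-LIME, the sketch below illustrates the general technique on tabular clinical records using the open-source lime Python package. It is not the paper's implementation: the feature names are hypothetical, the data are synthetic, and a scikit-learn random forest stands in for the paper's SNN.

# Illustrative SP-LIME sketch (not the paper's code): a scikit-learn
# classifier stands in for the SNN, and the clinical records are mocked.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer
from lime import submodular_pick

# Hypothetical clinical features; the paper's actual fields may differ.
feature_names = ["age", "intraocular_pressure", "cup_disc_ratio", "corneal_thickness"]
rng = np.random.default_rng(0)
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 1] + X[:, 2] > 0).astype(int)  # synthetic glaucoma labels

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=["healthy", "glaucoma"],
    discretize_continuous=True,
)

# Submodular pick selects a small, diverse set of local explanations
# that together cover the model's behavior across the dataset.
sp = submodular_pick.SubmodularPick(
    explainer, X, clf.predict_proba,
    sample_size=100, num_features=4, num_exps_desired=3,
)
for exp in sp.sp_explanations:
    print(exp.as_list())

Each printed explanation lists the features that locally drive one representative prediction, which is the kind of coherent, per-case rationale the abstract attributes to SP-LIME.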

Open Access Status

This publication is not available as open access

Volume

71

Article Number

2509209

Link to publisher version (DOI)

http://dx.doi.org/10.1109/TIM.2022.3171613