
Powerful and transparent tool to optimize applications for patients

Machine learning has enhanced the discovery of complex patterns in OCT data and has shown performance similar to that of humans. Our new tool T-REX contributes to a better understanding of how exactly artificial intelligence learns.

The Traceable Relevance Explainability (T-REX) technique for automatically segmenting OCT images accelerates the development of innovative treatments for blinding retinal diseases, Peter Maloca is convinced. He is Head of the IOB Ophthalmic Imaging and OCT Group and Assistant Professor at the University of Basel. He and his co-workers developed T-REX, a machine learning tool that yields insights into how artificial intelligence (AI) algorithms make decisions under uncertainty.

Machines perform as well as or even better than humans in OCT diagnosis – but can physicians trust them?

Modern ophthalmic diagnostics increasingly rely on imaging, especially the use of optical coherence tomography (OCT) and image analysis. OCT is a non-invasive imaging technology that uses low-coherence laser light to produce cross-sectional images of biological tissues.

Machine learning has been shown to increase OCT information throughput and to perform on a par with human graders in annotating complex OCT images. Not least for this reason, the field of ophthalmology is particularly well suited for machine learning applications.
Machine learning analysis of OCT data has been implemented successfully and has demonstrated high diagnostic accuracy for neovascular age-related macular degeneration, diabetic retinopathy, retinal vein occlusion, and other diseases.
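To make the annotation step more concrete, the following is a minimal, hypothetical sketch of how a trained segmentation network could be applied to a single OCT B-scan to produce per-pixel layer labels. It assumes a generic PyTorch model; the layer names, preprocessing, and model file are placeholders and not the pipeline used in the publication.

```python
# Hypothetical sketch: applying a trained segmentation network to an OCT B-scan.
# Model file, class labels, and preprocessing are illustrative placeholders.
import numpy as np
import torch

LAYER_LABELS = ["background", "retina", "choroid"]  # assumed classes

def segment_bscan(model: torch.nn.Module, bscan: np.ndarray) -> np.ndarray:
    """Return a per-pixel label map (H, W) for a single grayscale B-scan (H, W)."""
    x = torch.from_numpy(bscan).float()
    x = (x - x.mean()) / (x.std() + 1e-6)           # simple intensity normalisation
    x = x.unsqueeze(0).unsqueeze(0)                 # shape (1, 1, H, W)
    with torch.no_grad():
        logits = model(x)                           # shape (1, num_classes, H, W)
    return logits.argmax(dim=1).squeeze(0).numpy()  # per-pixel class index

# model = torch.load("oct_segmentation_model.pt")   # placeholder path
# labels = segment_bscan(model, bscan)
```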

Can we really trust the machine’s diagnosis?

Physicians will only use an AI system for diagnosing and monitoring diseases if they can understand and follow its internal processing. More importantly, physicians will only base a clinical decision on the recommendation of such an AI system if they can fully stand behind it.

How artificial intelligence works is not comprehensible to humans

The discrepancy between how a computer works and how humans think is known as the “black box problem”: in communication technology and engineering, a system is considered a “black box” when it has an input and an output path and shows a particular, or at least statistically definable, behaviour, but the solution is not specified in every detail or cannot be visualized. Its mode of operation therefore remains unidentified, hidden, or not (yet) comprehensible to humans.
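As a rough illustration of this “black box” view, the sketch below wraps an arbitrary predictor so that a user sees only the input and the output, never the internal decision rule. The class name, the stand-in predictor, and the diagnosis labels are invented for illustration and do not correspond to any specific clinical system.

```python
# Minimal illustration of the "black box" view: only input and output are visible.
import numpy as np

class BlackBoxModel:
    """Wraps a predictor so callers see only input -> output, never the internals."""

    def __init__(self, predict_fn):
        self._predict_fn = predict_fn  # hidden internal machinery

    def __call__(self, oct_scan: np.ndarray) -> str:
        # The caller receives a diagnosis label, but no explanation of *why*.
        return self._predict_fn(oct_scan)

# Stand-in predictor: the decision rule is opaque to the user of the model.
model = BlackBoxModel(lambda scan: "neovascular AMD" if scan.mean() > 0.5 else "no pathology")
print(model(np.random.rand(496, 512)))  # prints a label without any justification
```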

This poses a major problem, because knowledge of the algorithm’s internal workings is often incomplete and their interpretability limited.

In medical imaging, different experts may judge the same images slightly differently and come to different conclusions about where, for example, the borders of certain anatomical structures should be marked. In some cases the ambiguity is irresolvable, i.e. there is no unambiguous ground truth, since the exact location of these structures could only be determined with invasive and destructive procedures, which are out of the question in the vast majority of cases.
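One common way to quantify such inter-grader disagreement is an overlap score such as the Dice coefficient between two graders’ annotations. The sketch below uses small synthetic binary masks purely for illustration; real annotations would be pixel masks drawn on OCT B-scans.

```python
# Illustrative sketch: quantifying disagreement between two human graders with
# the Dice coefficient (1.0 = identical annotations). Masks here are synthetic.
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice overlap of two binary masks of equal shape."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    total = mask_a.sum() + mask_b.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

# Two graders mark "the same" retinal layer slightly differently.
grader_1 = np.zeros((8, 8), dtype=bool); grader_1[2:5, :] = True
grader_2 = np.zeros((8, 8), dtype=bool); grader_2[3:6, :] = True
print(f"Inter-grader Dice: {dice(grader_1, grader_2):.2f}")  # < 1.0: no single ground truth
```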


Original publication:

Unraveling the deep learning gearbox in optical coherence tomography image segmentation towards explainable artificial intelligence

Peter M. Maloca, Philipp L. Müller, Aaron Y. Lee, Adnan Tufail, Konstantinos Balaskas, Stephanie Niklaus, Pascal Kaiser, Susanne Suter, Javier Zarranz-Ventura, Catherine Egan, Hendrik P. N. Scholl, Tobias K. Schnitzer, Thomas Singer, Pascal W. Hasler & Nora Denk

Communications Biology, volume 4, Article number: 170 (2021)