Real-time use of artificial intelligence in the evaluation of cancer in Barrett’s oesophagus
Alanna Ebigbo,1 Robert Mendel,2,3 Andreas Probst,1 Johannes Manzeneder,1 Friederike Prinz,1 Luis A de Souza Jr,4 Joao Papa,5 Christoph Palm,2,3 Helmut Messmann1

1 Department of Gastroenterology, Universitätsklinikum Augsburg, Augsburg, Germany
2 Regensburg Medical Image Computing (ReMIC), Ostbayerische Technische Hochschule Regensburg (OTH Regensburg), Regensburg, Germany
3 Regensburg Center of Health Sciences and Technology, OTH Regensburg, Regensburg, Germany
4 Department of Computing, Federal University of São Carlos, São Carlos, Brazil
5 Department of Computing, São Paulo State University, Bauru, Brazil

Correspondence to: Dr Alanna Ebigbo, Department of Gastroenterology, Universitätsklinikum Augsburg, Augsburg 86156, Germany; alanna.ebigbo@uk-augsburg.de; Professor Christoph Palm, Ostbayerische Technische Hochschule Regensburg, Regensburg 93053, Germany; christoph.palm@oth-regensburg.de


Message

Building on our group's previous work with manual annotation of visible Barrett's oesophagus (BE) cancer images, we developed a real-time deep learning artificial intelligence (AI) system. While an expert endoscopist performs the endoscopic assessment of BE, the AI system captures random images from the real-time camera livestream and provides a global prediction (classification) as well as a dense prediction (segmentation), accurately differentiating between normal BE and early oesophageal adenocarcinoma (EAC). The AI system achieved an accuracy of 89.9% on 14 cases with neoplastic BE.
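To make the two output types concrete, the minimal sketch below (not the authors' code) shows how a global, frame-level call and a colour-coded overlay might be derived from a dense per-pixel cancer-probability map; the map itself, the thresholds and the array shapes are all illustrative assumptions.

```python
import numpy as np

# Illustrative stand-in for a dense prediction: per-pixel cancer
# probabilities in [0, 1] on a 256 x 256 frame (shape is an assumption).
prob_map = np.random.rand(256, 256)

PIXEL_THRESHOLD = 0.5   # a pixel counts as suspicious above this value (assumed)
AREA_THRESHOLD = 0.05   # fraction of suspicious pixels triggering a global EAC call (assumed)

suspicious_fraction = float((prob_map > PIXEL_THRESHOLD).mean())
global_prediction = "EAC" if suspicious_fraction > AREA_THRESHOLD else "normal BE"

# Colour-code the probabilities (here simply as a red channel) so they can be
# overlaid on the endoscopic frame as a spatial probability map.
overlay = np.zeros((*prob_map.shape, 3), dtype=np.uint8)
overlay[..., 0] = (prob_map * 255).astype(np.uint8)

print(f"{global_prediction}: {suspicious_fraction:.1%} of pixels suspicious")
```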

In more detail

This paper follows up on our prior publications on the application of AI and deep learning in the evaluation of BE.1 2 In our initial publications, we developed a computer-aided diagnosis (CAD) model and demonstrated promising performance in the classification and segmentation domains during BE assessment.1 2 However, these results were achieved on optimal endoscopic images, which may not sufficiently mirror the real-life situation. To enable the seamless integration of AI-based image classification into the clinical workflow, our previous system was developed further to increase both the speed of image analysis for classification and the resolution of the dense prediction, which shows the colour-coded spatial distribution of cancer probabilities.1 2 The system is still based on deep convolutional neural networks (CNNs) with a residual network (ResNet) architecture, adapted to a state-of-the-art encoder–decoder network (DeepLab V.3+).3 To transfer the endoscopic livestream to the AI system, a capture card (Avermedia, Taiwan) was connected to the endoscopy monitor.
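As a rough illustration of how such a pipeline can be wired together, the sketch below grabs one frame from a capture card exposed as a standard video device and passes it through a ResNet-backed encoder–decoder segmentation network. It uses torchvision's DeepLabV3 with a ResNet-101 backbone as a publicly available stand-in (torchvision does not ship DeepLab V.3+), untrained weights, and an assumed device index; it is not the authors' implementation.

```python
import cv2
import torch
from torchvision.models.segmentation import deeplabv3_resnet101

# The capture card typically appears as an ordinary video device;
# the device index 0 is an assumption.
cap = cv2.VideoCapture(0)

# Stand-in for the authors' network: DeepLabV3 with a ResNet-101 backbone,
# two classes (normal BE vs early adenocarcinoma), randomly initialised weights.
model = deeplabv3_resnet101(weights=None, num_classes=2).eval()

ret, frame_bgr = cap.read()  # grab a single frame from the livestream
if ret:
    frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    x = torch.from_numpy(frame_rgb).permute(2, 0, 1).float().unsqueeze(0) / 255.0

    with torch.no_grad():
        logits = model(x)["out"]                    # dense prediction: [1, 2, H, W]
        probs = torch.softmax(logits, dim=1)[0, 1]  # per-pixel cancer probability

    # A global, frame-level score can then be reduced from the dense map.
    print("maximum cancer probability in frame:", probs.max().item())

cap.release()
```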

Online supplementary video 1 shows the setting of AI-based BE evaluation in the endoscopy room of the University Hospital Augsburg (figure 1). The AI prediction can be started at any time using either a button on the keyboard or a foot switch. The video clip shows examples of …
