Artificial intelligence (AI) has been portrayed as a silver bullet for a number of challenges encountered in gastrointestinal (GI) endoscopy and beyond. Intense research, commercial and media attention has led to the publication of studies with modest patient numbers and comparatively simple technology. There is no doubt that machine learning (ML) will be a defining medical development for the years to come. However, now that the dust has begun to settle, we are at a critical juncture where the focus is shifting from preclinical work toward the role of ML in clinical practice. Current issues relate to the evaluation and testing of AI and ML systems, especially regarding patient outcomes, and to regulatory questions surrounding implementation. Many of these aspects pertain to one overarching question: how can we ensure that preclinical results translate into trustworthy clinical reality?
For the endoscopist, whether as a reader, a reviewer or a potential user of AI, it is increasingly important to understand the technical aspects of these systems and their performance metrics in order to assess their practical value realistically. Therefore, with GI endoscopy ML at the jump-off point from proof-of-principle studies1–7 to clinical trials8–12, van der Sommen et al have provided an accessible guide to understanding, assessing and critically reviewing the current ML endoscopy literature.13
Our commentary highlights selected aspects of this review and AI as a whole and elaborates on the role of the GI endoscopy community and how it may both experience and frame the way ahead. In particular, we advocate a close collaboration of technology scientists and clinicians from early development phases onward to allow for the development of well-tailored AI algorithms and realistic preclinical testing. More transparency is needed with respect to the training data and the algorithm development process. In addition, in the legislative …
Correction notice This article has been corrected since it published Online First. The affiliations for Prof Repici have been amended.
Contributors All authors together wrote the paper and approved the final version.
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests RW: received research support from Siemens. AR: received consultancy from Medtronic and Cosmo and research support from Cosmo, Fujifilm and Pentax. RB: received research support and consultancy from Pentax, Fujifilm and Medtronic. CH: received consultancy from Medtronic and Fujifilm. PS: received consultancy from Medtronic, Olympus, Boston Scientific, Fujifilm and Lumendi and research support from Ironwood, Docbot, Cosmo Pharmaceuticals, CDx Laboratories and Erbe. TR: received research support from Olympus and Fujifilm.
Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.
Provenance and peer review Not commissioned; externally peer reviewed.