Information fusion as an integrative cross-cutting enabler to achieve robust, explainable, and trustworthy medical artificial intelligence

Andreas Holzinger, Matthias Dehmer, Frank Emmert-Streib, Rita Cucchiara, Isabelle Augenstein, Javier Del Ser, Wojciech Samek, Igor Jurisica, Natalia Díaz-Rodríguez

Research output: Contribution to journal › Article › peer-review

128 Citations (Scopus)

Abstract

Medical artificial intelligence (AI) systems have been remarkably successful, even outperforming humans at certain tasks. There is no doubt that AI is important to improve human health in many ways and will disrupt various medical workflows in the future. To use AI to solve problems in medicine beyond the lab, in routine environments, we need to do more than merely improve the performance of existing AI methods. Robust AI solutions must be able to cope with imprecision and with missing and incorrect information, and must explain both the result and the process of how it was obtained to a medical expert. Using conceptual knowledge as a guiding model of reality can help to develop machine learning models that are more robust and explainable, less biased, and ideally able to learn from less data. Achieving these goals will require an orchestrated effort that combines three complementary Frontier Research Areas: (1) complex networks and their inference, (2) graph causal models and counterfactuals, and (3) verification and explainability methods. The goal of this paper is to describe these three areas from a unified view and to motivate how information fusion, applied in a comprehensive and integrative manner, can not only help bring these three areas together but also play a transformative role by bridging the gap between research and practical applications in the context of future trustworthy medical AI. This makes it imperative to include ethical and legal aspects as a cross-cutting discipline, because all future solutions must be not only ethically responsible but also legally compliant.
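The second Frontier Research Area named in the abstract, graph causal models and counterfactuals, can be made concrete with a small sketch. The example below is purely illustrative and not taken from the paper: the variables (treatment, biomarker, outcome), the linear structural equation, and the numeric values are assumptions chosen for brevity. It follows the standard abduction-action-prediction recipe for answering a counterfactual query on a structural causal model.

```python
# Illustrative sketch only (not from the paper): a toy structural causal
# model (SCM) queried with the abduction-action-prediction recipe.
# Variable names and the linear mechanism are assumptions.

def outcome(treatment: int, biomarker: float, noise: float) -> float:
    """Assumed structural equation: Y := 2*T + B + U_y."""
    return 2.0 * treatment + biomarker + noise

def counterfactual_outcome(obs_t: int, obs_b: float, obs_y: float,
                           new_t: int) -> float:
    """Answer: 'What would Y have been for this patient had T been new_t?'"""
    # 1. Abduction: recover the latent noise consistent with the observation.
    u_y = obs_y - 2.0 * obs_t - obs_b
    # 2. Action: intervene on T (do(T = new_t)), overriding its usual causes.
    # 3. Prediction: re-evaluate the structural equation with the same noise.
    return outcome(new_t, obs_b, u_y)

if __name__ == "__main__":
    # Observed patient record: untreated (T=0), biomarker 1.2, outcome 1.5.
    y_cf = counterfactual_outcome(obs_t=0, obs_b=1.2, obs_y=1.5, new_t=1)
    print(f"Counterfactual outcome under treatment: {y_cf:.2f}")  # 3.50
```

Because the latent noise is held fixed during the query, the answer is patient-specific rather than a population-level effect estimate, which is one reason such models are attractive for explainable medical AI.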
Original language: English
Pages (from-to): 263-278
Number of pages: 16
Journal: Information Fusion
Volume: 79
Publication status: Published - Mar 2022

Keywords

  • Artificial intelligence
  • Information fusion
  • Medical AI
  • Explainable AI
  • Robustness
  • Explainability
  • Trust
  • Graph-based machine learning
  • Neural-symbolic learning and reasoning

Project and Funding Information

  • Project ID
  • info:eu-repo/grantAgreement/EC/H2020/826078/EU/Privacy preserving federated machine learning and blockchaining for reduced cyber risks in a world of distributed healthcare/FeatureCloud
  • info:eu-repo/grantAgreement/EC/H2020/965221/EU/Intelligent Total Body Scanner for Early Detection of Melanoma/iToBoS
  • Funding Info
  • Andreas Holzinger acknowledges funding support from the Austrian Science Fund (FWF), Project P-32554 (explainable Artificial Intelligence), and from the European Union’s Horizon 2020 research and innovation programme under grant agreement 826078 (FeatureCloud). This publication reflects only the authors’ view and the European Commission is not responsible for any use that may be made of the information it contains. Natalia Díaz-Rodríguez is supported by the Spanish Government Juan de la Cierva Incorporación contract (IJC2019-039152-I). Isabelle Augenstein’s research is partially funded by a DFF Sapere Aude research leader grant. Javier Del Ser acknowledges funding support from the Basque Government through the ELKARTEK program (3KIA project, KK-2020/00049) and the consolidated research group MATHMODE (ref. T1294-19). Wojciech Samek acknowledges funding support from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 965221 (iToBoS), and the German Federa
