Assessing Trustworthy AI in Times of COVID-19: Deep Learning for Predicting a Multiregional Score Conveying the Degree of Lung Compromise in COVID-19 Patients

Himanshi Allahabadi, Julia Amann, Isabelle Balot, Andrea Beretta, Charles Binkley, Jonas Bozenhard, Frederick Bruneault, James Brusseau, Sema Candemir, Luca Alessandro Cappellini, Subrata Chakraborty, Nicoleta Cherciu, Christina Cociancig, Megan Coffee, Irene Ek, Leonardo Espinosa-Leal, Davide Farina, Genevieve Fieux-Castagnet, Thomas Frauenfelder, Alessio Gallucci, Guya Giuliani, Adam Golda, Irmhild Van Halem, Elisabeth Hildt, Sune Holm, Georgios Kararigas, Sebastien A. Krier, Ulrich Kuhne, Francesca Lizzi, Vince I. Madai, Aniek F. Markus, Serg Masis, Emilie Wiinblad Mathez, Francesco Mureddu, Emanuele Neri, Walter Osika, Matiss Ozols, Cecilia Panigutti, Brendan Parent, Francesca Pratesi, Pedro A. Moreno-Sanchez, Giovanni Sartor, Mattia Savardi, Alberto Signoroni, Hanna Maria Sormunen, Andy Spezzatti, Adarsh Srivastava, Annette F. Stephansen, Lau Bee Theng, Jesmin Jahan Tithi, Jarno Tuominen, Steven Umbrello, Filippo Vaccher, Dennis Vetter, Magnus Westerlund, Renee Wurth, Roberto V. Zicari*
*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

29 Citations (Scopus)

Abstract

This article's main contributions are twofold: 1) to demonstrate how to apply the European Union's High-Level Expert Group (EU HLEG) guidelines for trustworthy AI in practice in the healthcare domain and 2) to investigate the research question of what 'trustworthy AI' means at the time of the COVID-19 pandemic. To this end, we present the results of a post-hoc self-assessment of the trustworthiness of an AI system for predicting a multiregional score conveying the degree of lung compromise in COVID-19 patients, developed and verified during the pandemic by an interdisciplinary team with members from academia, public hospitals, and industry. The AI system aims to help radiologists estimate and communicate the severity of damage in a patient's lung from chest X-rays. It has been experimentally deployed in the radiology department of the ASST Spedali Civili clinic in Brescia, Italy, since December 2020. The methodology we applied for our post-hoc assessment, called Z-Inspection®, uses sociotechnical scenarios to identify ethical, technical, and domain-specific issues in the use of the AI system in the context of the pandemic.

Original language: English
Pages (from-to): 272-289
Number of pages: 18
Journal: IEEE Transactions on Technology and Society
Volume: 3
Issue number: 4
DOIs
Publication status: Published - 1 Dec 2022
Externally published: Yes

Keywords

  • Artificial intelligence
  • COVID-19
  • Z-Inspection®
  • case study
  • ethical tradeoff
  • ethics
  • explainable AI
  • healthcare
  • pandemic
  • radiology
  • trust
  • trustworthy AI
