TY - JOUR
T1 - A design framework for operationalizing trustworthy artificial intelligence in healthcare
T2 - Requirements, tradeoffs and challenges for its clinical adoption
AU - Moreno-Sánchez, Pedro A.
AU - Del Ser, Javier
AU - van Gils, Mark
AU - Hernesniemi, Jussi
N1 - Publisher Copyright:
© 2025 The Author(s).
PY - 2026/3
Y1 - 2026/3
N2 - Artificial Intelligence (AI) holds great promise for transforming healthcare, particularly in disease diagnosis, prognosis, and patient care. The increasing availability of digital medical data, such as images, omics data, biosignals, and electronic health records, combined with advances in computing, has enabled AI models to approach expert-level performance. However, widespread clinical adoption remains limited, primarily due to challenges beyond technical performance, including ethical concerns, regulatory barriers, and lack of trust. To address these issues, medical AI systems must align with the principles of Trustworthy AI (TAI), which emphasize human agency and oversight, algorithmic robustness, privacy and data governance, transparency, bias and discrimination avoidance, and accountability. Yet, the complexity of healthcare processes (e.g., screening, diagnosis, prognosis, and treatment) and the diversity of stakeholders (clinicians, patients, providers, regulators) complicate the integration of TAI principles. To bridge the gap between TAI theory and practical implementation, this paper proposes a design framework to support developers in embedding TAI principles into medical AI systems. To this end, for each stakeholder identified across various healthcare processes, we propose a disease-agnostic collection of requirements that medical AI systems should incorporate to adhere to the principles of TAI. Additionally, we examine the challenges and tradeoffs that may arise when applying these principles in practice. To illustrate the discussion, we focus on cardiovascular diseases, a field marked by both high prevalence and active AI innovation, and demonstrate how TAI principles have been applied and where key obstacles persist.
AB - Artificial Intelligence (AI) holds great promise for transforming healthcare, particularly in disease diagnosis, prognosis, and patient care. The increasing availability of digital medical data, such as images, omics data, biosignals, and electronic health records, combined with advances in computing, has enabled AI models to approach expert-level performance. However, widespread clinical adoption remains limited, primarily due to challenges beyond technical performance, including ethical concerns, regulatory barriers, and lack of trust. To address these issues, medical AI systems must align with the principles of Trustworthy AI (TAI), which emphasize human agency and oversight, algorithmic robustness, privacy and data governance, transparency, bias and discrimination avoidance, and accountability. Yet, the complexity of healthcare processes (e.g., screening, diagnosis, prognosis, and treatment) and the diversity of stakeholders (clinicians, patients, providers, regulators) complicate the integration of TAI principles. To bridge the gap between TAI theory and practical implementation, this paper proposes a design framework to support developers in embedding TAI principles into medical AI systems. To this end, for each stakeholder identified across various healthcare processes, we propose a disease-agnostic collection of requirements that medical AI systems should incorporate to adhere to the principles of TAI. Additionally, we examine the challenges and tradeoffs that may arise when applying these principles in practice. To illustrate the discussion, we focus on cardiovascular diseases, a field marked by both high prevalence and active AI innovation, and demonstrate how TAI principles have been applied and where key obstacles persist.
KW - AI fairness
KW - AI safety
KW - Design framework
KW - Explainable AI
KW - Health stakeholders
KW - Healthcare
KW - Human agency and oversight
KW - Medical AI
KW - Privacy
KW - Trustworthy AI
UR - https://www.scopus.com/pages/publications/105020265953
U2 - 10.1016/j.inffus.2025.103812
DO - 10.1016/j.inffus.2025.103812
M3 - Article
AN - SCOPUS:105020265953
SN - 1566-2535
VL - 127
JO - Information Fusion
JF - Information Fusion
M1 - 103812
ER -