TY - JOUR
T1 - Enhancing trust in automated 3D point cloud data interpretation through explainable counterfactuals
AU - Holzinger, Andreas
AU - Lukač, Niko
AU - Rozajac, Dzemail
AU - Johnston, Emile
AU - Kocic, Veljka
AU - Hoerl, Bernhard
AU - Gollob, Christoph
AU - Nothdurft, Arne
AU - Stampfer, Karl
AU - Schweng, Stefan
AU - Del Ser, Javier
N1 - Publisher Copyright: © 2025
PY - 2025/7
Y1 - 2025/7
AB - This paper introduces a novel framework for augmenting explainability in the interpretation of point cloud data by fusing expert knowledge with counterfactual reasoning. Given the complexity and sheer volume of point cloud datasets, derived predominantly from LiDAR and 3D scanning technologies, achieving interpretability remains a significant challenge, particularly in smart cities, smart agriculture, and smart forestry. This research posits that integrating expert knowledge with counterfactual explanations (speculative scenarios illustrating how altering input data points could lead to different outcomes) can significantly reduce the opacity of deep learning models that process point cloud data. The proposed optimization-driven framework uses expert-informed, ad hoc perturbation techniques to generate meaningful counterfactual scenarios for state-of-the-art deep learning architectures. The optimization process minimizes a multi-criteria objective comprising counterfactual metrics such as similarity, validity, and sparsity, tailored specifically to point cloud datasets. These metrics provide a quantitative lens for evaluating the interpretability of the counterfactuals. Furthermore, the proposed framework places the definition of explicit, interpretable counterfactual perturbations at its core, thereby involving the model's audience in the counterfactual generation pipeline and ultimately improving their trust in the process. Results demonstrate a notable improvement in both the interpretability of the model's decisions and the actionable insights delivered to end users. Additionally, the study explores the role of counterfactual reasoning, coupled with expert input, in enhancing trustworthiness and enabling human-in-the-loop decision-making. By bridging the gap between complex data interpretations and user comprehension, this research advances the field of explainable AI, contributing to the development of transparent, accountable, and human-centered artificial intelligence systems.
KW - Counterfactual reasoning
KW - Explainable AI
KW - Human-centered AI
KW - Information fusion
KW - Interpretability
KW - Point cloud data
UR - http://www.scopus.com/inward/record.url?scp=85218639336&partnerID=8YFLogxK
DO - 10.1016/j.inffus.2025.103032
M3 - Article
AN - SCOPUS:85218639336
SN - 1566-2535
VL - 119
JO - Information Fusion
JF - Information Fusion
M1 - 103032
ER -