TY - JOUR
T1 - Overlap Number of Balls Model-Agnostic CounterFactuals (ONB-MACF): A data-morphology-based counterfactual generation method for trustworthy artificial intelligence
AU - Pascual-Triana, José Daniel
AU - Fernández, Alberto
AU - Del Ser, Javier
AU - Herrera, Francisco
N1 - Publisher Copyright:
© 2025 Elsevier Inc.
PY - 2025/5
Y1 - 2025/5
N2 - Explainable Artificial Intelligence (XAI) is a pivotal research domain aimed at clarifying AI systems, particularly those considered “black boxes” due to their complex, opaque nature. XAI seeks to make these systems more understandable and trustworthy by providing insight into their decision-making processes. By producing clear and comprehensible explanations, XAI enables users, practitioners, and stakeholders to trust a model's decisions. This work analyses the value of data-morphology strategies for generating counterfactual explanations. It introduces the Overlap Number of Balls Model-Agnostic CounterFactuals (ONB-MACF) method, a model-agnostic counterfactual generator that leverages data morphology to estimate a model's decision boundaries. The ONB-MACF method constructs hyperspheres in the data space whose covered points share a class, thereby mapping the decision boundary. Counterfactuals are then generated by incrementally adjusting an instance's attributes towards the nearest alternate-class hypersphere, crossing the decision boundary with minimal modifications. By design, the ONB-MACF method generates feasible and sparse counterfactuals that follow the data distribution. Our comprehensive benchmark, covering both quantitative and qualitative perspectives, shows that the ONB-MACF method outperforms existing state-of-the-art counterfactual generation methods across multiple quality metrics on diverse tabular datasets. This supports our hypothesis and showcases the potential of data-morphology-based explainability strategies for trustworthy AI.
AB - Explainable Artificial Intelligence (XAI) is a pivotal research domain aimed at clarifying AI systems, particularly those considered “black boxes” due to their complex, opaque nature. XAI seeks to make these systems more understandable and trustworthy by providing insight into their decision-making processes. By producing clear and comprehensible explanations, XAI enables users, practitioners, and stakeholders to trust a model's decisions. This work analyses the value of data-morphology strategies for generating counterfactual explanations. It introduces the Overlap Number of Balls Model-Agnostic CounterFactuals (ONB-MACF) method, a model-agnostic counterfactual generator that leverages data morphology to estimate a model's decision boundaries. The ONB-MACF method constructs hyperspheres in the data space whose covered points share a class, thereby mapping the decision boundary. Counterfactuals are then generated by incrementally adjusting an instance's attributes towards the nearest alternate-class hypersphere, crossing the decision boundary with minimal modifications. By design, the ONB-MACF method generates feasible and sparse counterfactuals that follow the data distribution. Our comprehensive benchmark, covering both quantitative and qualitative perspectives, shows that the ONB-MACF method outperforms existing state-of-the-art counterfactual generation methods across multiple quality metrics on diverse tabular datasets. This supports our hypothesis and showcases the potential of data-morphology-based explainability strategies for trustworthy AI.
KW - Counterfactual analysis
KW - Data morphology
KW - Explainable artificial intelligence
KW - Model-agnostic explanations
KW - Trustworthy artificial intelligence
UR - http://www.scopus.com/inward/record.url?scp=85214347728&partnerID=8YFLogxK
U2 - 10.1016/j.ins.2024.121844
DO - 10.1016/j.ins.2024.121844
M3 - Article
AN - SCOPUS:85214347728
SN - 0020-0255
VL - 701
JO - Information Sciences
JF - Information Sciences
M1 - 121844
ER -
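
The abstract above describes the ONB-MACF procedure only at a high level: cover the data with same-class balls, then step an instance towards the nearest alternate-class ball until the model's prediction flips. The Python sketch below illustrates that two-stage idea under simplifying assumptions. The function names (fit_balls, generate_counterfactual), the ball construction (radius set by the distance to the nearest point of a different class), the single-attribute update rule, and the step schedule are all illustrative choices made here, not the authors' reference implementation; model stands for any fitted classifier exposing a scikit-learn-style predict.

import numpy as np

def fit_balls(X, y):
    # Cover the data with same-class balls: each ball is centred on a
    # training point, and its radius is the distance to the nearest point
    # of a different class. This is a simple proxy for the Overlap Number
    # of Balls construction, not the paper's exact algorithm.
    balls = []
    for centre, label in zip(X, y):
        other = X[y != label]  # assumes at least two classes are present
        radius = np.linalg.norm(other - centre, axis=1).min()
        balls.append((centre, radius, label))
    return balls

def generate_counterfactual(x, model, balls, step=0.05, max_iter=200):
    # Nudge x towards the centre of the nearest alternate-class ball,
    # one attribute at a time, until the model's prediction flips.
    pred = model.predict(x.reshape(1, -1))[0]
    # Nearest ball whose class differs from the current prediction,
    # measured as distance to the ball's surface.
    target_centre, _, _ = min(
        (b for b in balls if b[2] != pred),
        key=lambda b: np.linalg.norm(x - b[0]) - b[1],
    )
    cf = x.astype(float).copy()
    for _ in range(max_iter):
        gaps = target_centre - cf
        j = np.argmax(np.abs(gaps))  # attribute with the largest remaining gap
        cf[j] += step * gaps[j]      # small move on that single attribute
        # Updating one attribute per step keeps the counterfactual sparse;
        # the paper defines its own attribute-selection strategy.
        if model.predict(cf.reshape(1, -1))[0] != pred:
            return cf
    return None  # no counterfactual found within the iteration budget

With a fitted classifier clf and training data X_train, y_train, a caller would first build the cover with balls = fit_balls(X_train, y_train) and then request an explanation via generate_counterfactual(X_train[0], clf, balls).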