Understanding the black box: Android malware detection through Explainable AI

  • Alberto Miranda-Garcia*
  • Nerea Gómez Larrakoetxea
  • Iker Pastor-López
  • Borja Sanz Urquijo
  • Pablo García Bringas

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Detecting Android malware is crucial for safeguarding mobile devices and user data. We introduce a novel approach to Android malware detection through convolutional neural networks (CNNs) and Explainable AI. First, we present a methodology for processing Android applications by transforming them into images, and through a series of experiments we demonstrate the efficacy of CNNs in identifying malware within these images. Furthermore, we employ Explainable AI techniques to analyse the decision-making processes of these models. Our approach goes beyond detection: we rigorously analyse the key aspects that distinguish malware, allowing us to improve and validate our data transformation methodology. This emphasis on understanding the models' decision-making is the central aspect of our approach, as it provides insight into the mechanisms of malware, enhancing our understanding of malware detection and reinforcing the robustness of our conclusions. In essence, our work not only provides a comprehensive framework for Android malware detection but also underscores the significance of Explainable AI in cybersecurity research, integrating advanced computational techniques with a detailed understanding of malware behaviour.
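The abstract does not specify the exact application-to-image transformation used by the authors. A common technique in this line of work, shown here purely as an illustrative sketch (the function name and fixed width are assumptions, not the paper's method), is to map the raw bytes of an APK component onto pixel intensities of a grayscale image:

```python
import math
import numpy as np

def bytes_to_grayscale_image(data: bytes, width: int = 256) -> np.ndarray:
    """Map a raw byte stream (e.g. an APK's classes.dex) onto a 2-D
    grayscale image: each byte becomes one pixel intensity in [0, 255]."""
    pixels = np.frombuffer(data, dtype=np.uint8)
    height = math.ceil(len(pixels) / width)
    # Zero-pad the tail so the buffer reshapes into a full rectangle.
    padded = np.zeros(height * width, dtype=np.uint8)
    padded[: len(pixels)] = pixels
    return padded.reshape(height, width)

# A dummy 1000-byte payload becomes a 4x256 image (last 24 pixels padded).
img = bytes_to_grayscale_image(b"\x7f" * 1000)
print(img.shape)  # (4, 256)
```

Images produced this way can then be fed to a standard CNN classifier, and saliency-style Explainable AI methods can highlight which byte regions drove a malware verdict.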

Original language: English
Article number: jzaf018
Journal: Logic Journal of the IGPL
Volume: 34
Issue number: 1
Publication status: Published - 1 Feb 2026
Externally published: Yes

Keywords

  • Android
  • CNN
  • Explainable AI
  • malware detection
