TY - JOUR
T1 - A novel Out-of-Distribution detection approach for Spiking Neural Networks
T2 - Design, fusion, performance evaluation and explainability
AU - Martinez-Seras, Aitor
AU - Del Ser, Javier
AU - L. Lobo, Jesus
AU - Garcia-Bringas, Pablo
AU - Kasabov, Nikola
N1 - Publisher Copyright:
© 2023 Elsevier B.V.
PY - 2023/12
Y1 - 2023/12
N2 - Research around Spiking Neural Networks has surged in recent years due to their advantages over traditional neural networks, including their efficient processing and inherent ability to model complex temporal dynamics. Despite these differences, Spiking Neural Networks face issues similar to those of other neural computation counterparts when deployed in real-world settings. This work addresses one of the practical circumstances that can hinder the trustworthiness of this family of models: the possibility of querying a trained model with samples far from the distribution of its training data (also referred to as Out-of-Distribution or OoD data). Specifically, this work presents a novel OoD detector that can identify whether test examples input to a Spiking Neural Network belong to the distribution of the data over which it was trained. For this purpose, we characterize the internal activations of the hidden layers of the network in the form of spike count patterns, which lay a basis for determining when the activations induced by a test instance are atypical. Furthermore, a local explanation method is devised to produce attribution maps revealing which parts of the input instance push most towards the detection of an example as an OoD sample. Experiments are performed over several classic and event-based image classification datasets to compare the performance of the proposed detector to that of other OoD detection schemes from the literature. Our experiments also assess whether the fusion of our proposed approach with other baseline OoD detection schemes can complement and boost the overall OoD detection capability. As the obtained results clearly show, the proposed detector performs competitively against such alternative schemes and, when fused together, can significantly improve the detection scores of the constituent individual detectors. Furthermore, the explainability technique associated with our proposal is shown to produce relevance attribution maps that conform to expectations for synthetically created OoD instances.
AB - Research around Spiking Neural Networks has surged in recent years due to their advantages over traditional neural networks, including their efficient processing and inherent ability to model complex temporal dynamics. Despite these differences, Spiking Neural Networks face issues similar to those of other neural computation counterparts when deployed in real-world settings. This work addresses one of the practical circumstances that can hinder the trustworthiness of this family of models: the possibility of querying a trained model with samples far from the distribution of its training data (also referred to as Out-of-Distribution or OoD data). Specifically, this work presents a novel OoD detector that can identify whether test examples input to a Spiking Neural Network belong to the distribution of the data over which it was trained. For this purpose, we characterize the internal activations of the hidden layers of the network in the form of spike count patterns, which lay a basis for determining when the activations induced by a test instance are atypical. Furthermore, a local explanation method is devised to produce attribution maps revealing which parts of the input instance push most towards the detection of an example as an OoD sample. Experiments are performed over several classic and event-based image classification datasets to compare the performance of the proposed detector to that of other OoD detection schemes from the literature. Our experiments also assess whether the fusion of our proposed approach with other baseline OoD detection schemes can complement and boost the overall OoD detection capability. As the obtained results clearly show, the proposed detector performs competitively against such alternative schemes and, when fused together, can significantly improve the detection scores of the constituent individual detectors. Furthermore, the explainability technique associated with our proposal is shown to produce relevance attribution maps that conform to expectations for synthetically created OoD instances.
KW - Explainable artificial intelligence
KW - Model fusion
KW - Out-of-Distribution detection
KW - Relevance attribution
KW - Spiking Neural Networks
UR - http://www.scopus.com/inward/record.url?scp=85169930989&partnerID=8YFLogxK
U2 - 10.1016/j.inffus.2023.101943
DO - 10.1016/j.inffus.2023.101943
M3 - Article
AN - SCOPUS:85169930989
SN - 1566-2535
VL - 100
JO - Information Fusion
JF - Information Fusion
M1 - 101943
ER -