TY - GEN
T1 - Comparing metrics to evaluate performance of regression methods for decoding of neural signals
AU - Spüler, Martin
AU - Sarasola-Sanz, Andrea
AU - Birbaumer, Niels
AU - Rosenstiel, Wolfgang
AU - Ramos-Murguialday, Ander
N1 - Publisher Copyright:
© 2015 IEEE.
PY - 2015/11/4
Y1 - 2015/11/4
N2 - The use of regression methods for decoding neural signals has become popular, with its main applications in the field of Brain-Machine Interfaces (BMIs) for control of prosthetic devices and in the area of Brain-Computer Interfaces (BCIs) for cursor control. When new decoding methods are developed, or the parameters of existing methods are optimized to increase performance, a metric is needed that gives an accurate estimate of the prediction error. In this paper, we evaluate different performance metrics regarding their robustness for assessing prediction errors. Using simulated data, we show that different kinds of prediction error (noise, scaling error, bias) affect the different metrics in different ways, and we evaluate which metrics are best suited to assess the overall prediction error as well as the individual types of error. Based on the obtained results, we conclude that the most commonly used metrics, the correlation coefficient (CC) and the normalized root-mean-squared error (NRMSE), are well suited for the evaluation of cross-validated results, but should not be used as the sole criterion for cross-subject or cross-session evaluations.
AB - The use of regression methods for decoding neural signals has become popular, with its main applications in the field of Brain-Machine Interfaces (BMIs) for control of prosthetic devices and in the area of Brain-Computer Interfaces (BCIs) for cursor control. When new decoding methods are developed, or the parameters of existing methods are optimized to increase performance, a metric is needed that gives an accurate estimate of the prediction error. In this paper, we evaluate different performance metrics regarding their robustness for assessing prediction errors. Using simulated data, we show that different kinds of prediction error (noise, scaling error, bias) affect the different metrics in different ways, and we evaluate which metrics are best suited to assess the overall prediction error as well as the individual types of error. Based on the obtained results, we conclude that the most commonly used metrics, the correlation coefficient (CC) and the normalized root-mean-squared error (NRMSE), are well suited for the evaluation of cross-validated results, but should not be used as the sole criterion for cross-subject or cross-session evaluations.
UR - http://www.scopus.com/inward/record.url?scp=84953296123&partnerID=8YFLogxK
U2 - 10.1109/EMBC.2015.7318553
DO - 10.1109/EMBC.2015.7318553
M3 - Conference contribution
C2 - 26736453
AN - SCOPUS:84953296123
T3 - Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS
SP - 1083
EP - 1086
BT - 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2015
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2015
Y2 - 25 August 2015 through 29 August 2015
ER -