TY - GEN
T1 - Towards Chip-in-the-Loop Spiking Neural Network Training via Metropolis-Hastings Sampling
AU - Safa, Ali
AU - Jaltare, Vikrant
AU - Sebt, Samira
AU - Gano, Kameron
AU - Leugering, Johannes
AU - Gielen, Georges
AU - Cauwenberghs, Gert
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - This paper studies the use of Metropolis-Hastings sampling for training Spiking Neural Network (SNN) hardware subject to strong, unknown non-idealities, and compares the proposed approach to the backpropagation-of-error (backprop) algorithm with surrogate gradients, which is widely used to train SNNs in the literature. Simulations are conducted in a chip-in-the-loop training context, where an SNN subject to unknown distortion must be trained to detect cancer from measurement data in a biomedical application setting. Our results show that the proposed approach outperforms backprop by up to 27% higher accuracy when subject to strong hardware non-idealities. Furthermore, the proposed approach outperforms backprop in terms of SNN generalization, requiring more than 10x less training data to achieve effective accuracy. These findings make the proposed training approach well-suited for SNN implementations in analog subthreshold circuits and other emerging technologies where unknown hardware non-idealities can jeopardize backprop.
AB - This paper studies the use of Metropolis-Hastings sampling for training Spiking Neural Network (SNN) hardware subject to strong, unknown non-idealities, and compares the proposed approach to the backpropagation-of-error (backprop) algorithm with surrogate gradients, which is widely used to train SNNs in the literature. Simulations are conducted in a chip-in-the-loop training context, where an SNN subject to unknown distortion must be trained to detect cancer from measurement data in a biomedical application setting. Our results show that the proposed approach outperforms backprop by up to 27% higher accuracy when subject to strong hardware non-idealities. Furthermore, the proposed approach outperforms backprop in terms of SNN generalization, requiring more than 10x less training data to achieve effective accuracy. These findings make the proposed training approach well-suited for SNN implementations in analog subthreshold circuits and other emerging technologies where unknown hardware non-idealities can jeopardize backprop.
KW - Chip-in-the-loop training
KW - Metropolis-Hastings sampling
KW - Spiking Neural Networks
UR - http://www.scopus.com/inward/record.url?scp=85196761161&partnerID=8YFLogxK
U2 - 10.1109/NICE61972.2024.10548355
DO - 10.1109/NICE61972.2024.10548355
M3 - Conference contribution
AN - SCOPUS:85196761161
SN - 979-8-3503-9059-9
T3 - 2024 IEEE Neuro Inspired Computational Elements Conference, NICE 2024 - Proceedings
BT - 2024 IEEE Neuro Inspired Computational Elements Conference, NICE 2024 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2024 IEEE Neuro Inspired Computational Elements Conference, NICE 2024
Y2 - 23 April 2024 through 26 April 2024
ER -