TY - JOUR
T1 - Efficient Spectral Graph Convolutional Network Deployment on Memristive Crossbars
AU - Lyu, Bo
AU - Hamdi, Maher
AU - Yang, Yin
AU - Cao, Yuting
AU - Yan, Zheng
AU - Li, Ke
AU - Wen, Shiping
AU - Huang, Tingwen
N1 - Publisher Copyright:
© 2017 IEEE.
PY - 2023/4/1
Y1 - 2023/4/1
N2 - Graph Neural Networks (GNNs) have attracted increasing research interest for their remarkable capability to model graph-structured knowledge. However, GNNs suffer from intensive data exchange and poor data locality, which cause critical performance and energy bottlenecks on conventional complementary metal-oxide-semiconductor (CMOS)-based von Neumann computing architectures (graphics processing units (GPUs), central processing units (CPUs)) due to the 'Memory Wall' issue. Fortunately, memristive crossbar-based computation has emerged as one of the most promising neuromorphic computing architectures and has been widely studied as a computing platform for convolutional neural networks (CNNs), recurrent neural networks (RNNs), spiking neural networks (SNNs), etc. This paper proposes the deployment of spectral graph convolutional networks (GCNs) on memristive crossbars. Further, based on the structure of GCNs (extremely high sparsity and unbalanced non-zero data distribution) and the neuromorphic characteristics of memristive crossbar circuits, we propose an acceleration method consisting of Sparse Laplace Matrix Reordering and Diagonal Block Matrix Multiplication. Simulated experiments on memristor crossbars achieve 90.3% overall accuracy on the supervised learning graph dataset (QM7), and compared with the original computation, the proposed acceleration computing framework (with half-size diagonal blocks) achieves a 27.3% reduction in memristor count. Additionally, on the unsupervised learning dataset (Karate club), our method shows no loss of accuracy with half-size diagonal block mapping and reaches a 32.2% reduction in memristor count.
AB - Graph Neural Networks (GNNs) have attracted increasing research interest for their remarkable capability to model graph-structured knowledge. However, GNNs suffer from intensive data exchange and poor data locality, which cause critical performance and energy bottlenecks on conventional complementary metal-oxide-semiconductor (CMOS)-based von Neumann computing architectures (graphics processing units (GPUs), central processing units (CPUs)) due to the 'Memory Wall' issue. Fortunately, memristive crossbar-based computation has emerged as one of the most promising neuromorphic computing architectures and has been widely studied as a computing platform for convolutional neural networks (CNNs), recurrent neural networks (RNNs), spiking neural networks (SNNs), etc. This paper proposes the deployment of spectral graph convolutional networks (GCNs) on memristive crossbars. Further, based on the structure of GCNs (extremely high sparsity and unbalanced non-zero data distribution) and the neuromorphic characteristics of memristive crossbar circuits, we propose an acceleration method consisting of Sparse Laplace Matrix Reordering and Diagonal Block Matrix Multiplication. Simulated experiments on memristor crossbars achieve 90.3% overall accuracy on the supervised learning graph dataset (QM7), and compared with the original computation, the proposed acceleration computing framework (with half-size diagonal blocks) achieves a 27.3% reduction in memristor count. Additionally, on the unsupervised learning dataset (Karate club), our method shows no loss of accuracy with half-size diagonal block mapping and reaches a 32.2% reduction in memristor count.
KW - Memristor
KW - graph convolutional network
KW - neural network
KW - neurocomputing
KW - sparse matrix
UR - http://www.scopus.com/inward/record.url?scp=85140791272&partnerID=8YFLogxK
U2 - 10.1109/TETCI.2022.3210998
DO - 10.1109/TETCI.2022.3210998
M3 - Article
AN - SCOPUS:85140791272
SN - 2471-285X
VL - 7
SP - 415
EP - 425
JO - IEEE Transactions on Emerging Topics in Computational Intelligence
JF - IEEE Transactions on Emerging Topics in Computational Intelligence
IS - 2
ER -