TY - JOUR
T1 - AutoGMap: Learning to Map Large-Scale Sparse Graphs on Memristive Crossbars
AU - Lyu, Bo
AU - Wang, Shengbo
AU - Wen, Shiping
AU - Shi, Kaibo
AU - Yang, Yin
AU - Zeng, Lingfang
AU - Huang, Tingwen
N1 - Publisher Copyright:
© 2012 IEEE.
PY - 2023/4/18
AB - The sparse representation of graphs has shown great potential for accelerating the computation of graph applications (e.g., social networks and knowledge graphs) on traditional computing architectures (CPU, GPU, or TPU). However, the exploration of large-scale sparse graph computing on processing-in-memory (PIM) platforms (typically built on memristive crossbars) is still in its infancy. To implement the computation or storage of large-scale or batch graphs on memristive crossbars, a natural assumption is that a large-scale crossbar is required, albeit with low utilization. Some recent works question this assumption; to avoid wasting storage and computational resources, fixed-size or progressively scheduled 'block partition' schemes have been proposed. However, these methods are coarse-grained or static and not effectively sparsity-aware. This work proposes a dynamic sparsity-aware mapping-scheme generation method that models the problem as a sequential decision-making process and optimizes it with a reinforcement learning (RL) algorithm (REINFORCE). Our generation model [a long short-term memory (LSTM) network combined with a dynamic-fill scheme] achieves remarkable mapping performance on small-scale graph/matrix data (a complete mapping costs 43% of the original matrix area) and on two large-scale matrices (22.5% area on qh882 and 17.1% area on qh1484). Our method can be extended to sparse graph computing on other PIM architectures and is not limited to memristive device-based platforms.
KW - Large-scale graph
KW - long short-term memory (LSTM)
KW - memristor
KW - reinforcement learning (RL)
KW - sparsity
UR - http://www.scopus.com/inward/record.url?scp=85153497436&partnerID=8YFLogxK
DO - 10.1109/TNNLS.2023.3265383
M3 - Article
AN - SCOPUS:85153497436
SN - 2162-237X
VL - 35
SP - 12888
EP - 12898
JF - IEEE Transactions on Neural Networks and Learning Systems
IS - 9
ER -