TY - GEN
T1 - In-memory Distributed Matrix Computation Processing and Optimization
AU - Yu, Yongyang
AU - Tang, Mingjie
AU - Aref, Walid G.
AU - Malluhi, Qutaibah M.
AU - Abbas, Mostafa M.
AU - Ouzzani, Mourad
N1 - Publisher Copyright:
© 2017 IEEE.
PY - 2017/5/16
Y1 - 2017/5/16
N2 - The use of large-scale machine learning and data mining methods is becoming ubiquitous in many application domains, ranging from business intelligence and bioinformatics to self-driving cars. These methods rely heavily on matrix computations, so it is critical to make these computations scalable and efficient. These matrix computations are often complex and involve multiple steps that need to be optimized and sequenced properly for efficient execution. This paper presents new efficient and scalable matrix processing and optimization techniques for in-memory distributed clusters. The proposed techniques estimate the sparsity of intermediate matrix-computation results and optimize communication costs. An evaluation plan generator for complex matrix computations is introduced, as well as a distributed plan optimizer that exploits dynamic cost-based analysis and rule-based heuristics to reduce the cost of matrix computations in an in-memory distributed environment. The result of a matrix operation often serves as input to another matrix operation, thus defining the matrix data dependencies within a matrix program. The matrix query plan generator produces query execution plans that minimize memory usage and communication overhead by partitioning the matrix based on the data dependencies in the execution plan. We implemented the proposed matrix processing and optimization techniques in Spark, a distributed in-memory computing platform. Experiments on both real and synthetic data demonstrate that our proposed techniques achieve up to an order-of-magnitude performance improvement over state-of-the-art distributed matrix computation systems on a wide range of applications.
AB - The use of large-scale machine learning and data mining methods is becoming ubiquitous in many application domains, ranging from business intelligence and bioinformatics to self-driving cars. These methods rely heavily on matrix computations, so it is critical to make these computations scalable and efficient. These matrix computations are often complex and involve multiple steps that need to be optimized and sequenced properly for efficient execution. This paper presents new efficient and scalable matrix processing and optimization techniques for in-memory distributed clusters. The proposed techniques estimate the sparsity of intermediate matrix-computation results and optimize communication costs. An evaluation plan generator for complex matrix computations is introduced, as well as a distributed plan optimizer that exploits dynamic cost-based analysis and rule-based heuristics to reduce the cost of matrix computations in an in-memory distributed environment. The result of a matrix operation often serves as input to another matrix operation, thus defining the matrix data dependencies within a matrix program. The matrix query plan generator produces query execution plans that minimize memory usage and communication overhead by partitioning the matrix based on the data dependencies in the execution plan. We implemented the proposed matrix processing and optimization techniques in Spark, a distributed in-memory computing platform. Experiments on both real and synthetic data demonstrate that our proposed techniques achieve up to an order-of-magnitude performance improvement over state-of-the-art distributed matrix computation systems on a wide range of applications.
KW - Matrix computation
KW - Distributed computing
KW - Query optimization
UR - https://www.webofscience.com/api/gateway?GWVersion=2&SrcApp=hbku_researchportal&SrcAuth=WosAPI&KeyUT=WOS:000403398200143&DestLinkType=FullRecord&DestApp=WOS_CPL
UR - http://www.scopus.com/inward/record.url?scp=85021186559&partnerID=8YFLogxK
U2 - 10.1109/ICDE.2017.150
DO - 10.1109/ICDE.2017.150
M3 - Conference contribution
T3 - IEEE International Conference on Data Engineering
SP - 1047
EP - 1058
BT - 2017 IEEE 33rd International Conference on Data Engineering (ICDE 2017)
PB - IEEE
T2 - IEEE 33rd International Conference on Data Engineering (ICDE)
Y2 - 19 April 2017 through 22 April 2017
ER -