TY - GEN
T1 - Advanced computation of sparse precision matrices for big data
AU - Baggag, Abdelkader
AU - Bensmail, Halima
AU - Srivastava, Jaideep
N1 - Publisher Copyright:
© 2017, Springer International Publishing AG.
PY - 2017
Y1 - 2017
N2 - The precision matrix is the inverse of the covariance matrix. Estimating large sparse precision matrices is an interesting and challenging problem in many fields of science, engineering, the humanities, and machine learning in general. Recent applications often involve high dimensionality with a limited number of data points, so the number of covariance parameters greatly exceeds the number of observations and the sample covariance matrix is singular. Several methods have been proposed to deal with this challenging problem, but there is no guarantee that the obtained estimator is positive definite. Furthermore, in many cases, one needs to capture some additional information on the setting of the problem. In this paper, we introduce a criterion that ensures the positive definiteness of the precision matrix, and we propose the inner-outer alternating direction method of multipliers (ADMM) as an efficient method for estimating it. We show that convergence of the algorithm is guaranteed with a sufficiently relaxed stopping criterion in the inner iteration. We also show that the proposed method is robust, accurate, and scalable, as it lends itself to an efficient implementation on parallel computers.
AB - The precision matrix is the inverse of the covariance matrix. Estimating large sparse precision matrices is an interesting and challenging problem in many fields of science, engineering, the humanities, and machine learning in general. Recent applications often involve high dimensionality with a limited number of data points, so the number of covariance parameters greatly exceeds the number of observations and the sample covariance matrix is singular. Several methods have been proposed to deal with this challenging problem, but there is no guarantee that the obtained estimator is positive definite. Furthermore, in many cases, one needs to capture some additional information on the setting of the problem. In this paper, we introduce a criterion that ensures the positive definiteness of the precision matrix, and we propose the inner-outer alternating direction method of multipliers (ADMM) as an efficient method for estimating it. We show that convergence of the algorithm is guaranteed with a sufficiently relaxed stopping criterion in the inner iteration. We also show that the proposed method is robust, accurate, and scalable, as it lends itself to an efficient implementation on parallel computers.
UR - http://www.scopus.com/inward/record.url?scp=85018407886&partnerID=8YFLogxK
U2 - 10.1007/978-3-319-57529-2_3
DO - 10.1007/978-3-319-57529-2_3
M3 - Conference contribution
AN - SCOPUS:85018407886
SN - 9783319575285
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 27
EP - 38
BT - Advances in Knowledge Discovery and Data Mining - 21st Pacific-Asia Conference, PAKDD 2017, Proceedings
A2 - Cao, Longbing
A2 - Shim, Kyuseok
A2 - Lee, Jae-Gil
A2 - Kim, Jinho
A2 - Moon, Yang-Sae
A2 - Lin, Xuemin
PB - Springer Verlag
T2 - 21st Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2017
Y2 - 23 May 2017 through 26 May 2017
ER -