TY - GEN
T1 - Relational variational autoencoder for link prediction with multimedia data
AU - Li, Xiaopeng
AU - She, James
N1 - Publisher Copyright:
© 2017 Association for Computing Machinery.
PY - 2017/10/23
Y1 - 2017/10/23
N2 - As a fundamental task, link prediction has pervasive applications in social networks, webpage networks, information retrieval and bioinformatics. Among link prediction methods, latent variable models, such as the relational topic model and its variants, which jointly model both network structure and node attributes, have shown promising performance for predicting network structures and discovering latent representations. However, these methods are still limited in their representation learning capability from high-dimensional data, or consider only the text modality of the content. Thus they are very limited in the current multimedia scenario. This paper proposes a Bayesian deep generative model called relational variational autoencoder (RVAE) that considers both links and content for link prediction in the multimedia scenario. The model learns deep latent representations from content data in an unsupervised manner, and also learns network structures from both content and link information. Unlike previous deep learning methods with denoising criteria, the proposed RVAE learns a latent distribution for content in latent space, instead of observation space, through an inference network, and can be easily extended to multimedia modalities other than text. Experiments show that RVAE is able to significantly outperform the state-of-the-art link prediction methods with more robust performance.
AB - As a fundamental task, link prediction has pervasive applications in social networks, webpage networks, information retrieval and bioinformatics. Among link prediction methods, latent variable models, such as the relational topic model and its variants, which jointly model both network structure and node attributes, have shown promising performance for predicting network structures and discovering latent representations. However, these methods are still limited in their representation learning capability from high-dimensional data, or consider only the text modality of the content. Thus they are very limited in the current multimedia scenario. This paper proposes a Bayesian deep generative model called relational variational autoencoder (RVAE) that considers both links and content for link prediction in the multimedia scenario. The model learns deep latent representations from content data in an unsupervised manner, and also learns network structures from both content and link information. Unlike previous deep learning methods with denoising criteria, the proposed RVAE learns a latent distribution for content in latent space, instead of observation space, through an inference network, and can be easily extended to multimedia modalities other than text. Experiments show that RVAE is able to significantly outperform the state-of-the-art link prediction methods with more robust performance.
KW - Autoencoder
KW - Bayesian
KW - Deep learning
KW - Generative models
KW - Link prediction
KW - Variational inference
UR - http://www.scopus.com/inward/record.url?scp=85034826515&partnerID=8YFLogxK
U2 - 10.1145/3126686.3126774
DO - 10.1145/3126686.3126774
M3 - Conference contribution
AN - SCOPUS:85034826515
T3 - Thematic Workshops 2017 - Proceedings of the Thematic Workshops of ACM Multimedia 2017, co-located with MM 2017
SP - 93
EP - 100
BT - Thematic Workshops 2017 - Proceedings of the Thematic Workshops of ACM Multimedia 2017, co-located with MM 2017
PB - Association for Computing Machinery, Inc
T2 - 1st International ACM Thematic Workshops, Thematic Workshops 2017
Y2 - 23 October 2017 through 27 October 2017
ER -