TY - GEN
T1 - Domain transfer nonnegative matrix factorization
AU - Wang, Jim Jing Yan
AU - Sun, Yijun
AU - Bensmail, Halima
N1 - Publisher Copyright:
© 2014 IEEE.
PY - 2014/9/3
Y1 - 2014/9/3
N2 - Domain transfer learning aims to learn an effective classifier for a target domain, where only a few labeled samples are available, with the help of many labeled samples from a source domain. The source and target domain samples usually share the same feature and class label spaces, but have significantly different distributions. Nonnegative Matrix Factorization (NMF) has been widely studied and applied as a powerful data representation method. However, NMF is limited to single-domain learning problems; it cannot be directly used in domain transfer learning because of the significant differences between the distributions of the source and target domains. In this paper, we extend the NMF method to the domain transfer learning problem. The Maximum Mean Discrepancy (MMD) criterion is employed to reduce the mismatch between the source and target domain distributions in the coding vector space. Moreover, we also learn a classifier in the coding vector space to directly utilize the class labels from both domains. We construct a unified objective function for learning both the NMF parameters and the classifier parameters, which is optimized alternately in an iterative algorithm. The proposed algorithm is evaluated on two challenging domain transfer tasks, and the encouraging experimental results show its advantage over state-of-the-art domain transfer learning algorithms.
AB - Domain transfer learning aims to learn an effective classifier for a target domain, where only a few labeled samples are available, with the help of many labeled samples from a source domain. The source and target domain samples usually share the same feature and class label spaces, but have significantly different distributions. Nonnegative Matrix Factorization (NMF) has been widely studied and applied as a powerful data representation method. However, NMF is limited to single-domain learning problems; it cannot be directly used in domain transfer learning because of the significant differences between the distributions of the source and target domains. In this paper, we extend the NMF method to the domain transfer learning problem. The Maximum Mean Discrepancy (MMD) criterion is employed to reduce the mismatch between the source and target domain distributions in the coding vector space. Moreover, we also learn a classifier in the coding vector space to directly utilize the class labels from both domains. We construct a unified objective function for learning both the NMF parameters and the classifier parameters, which is optimized alternately in an iterative algorithm. The proposed algorithm is evaluated on two challenging domain transfer tasks, and the encouraging experimental results show its advantage over state-of-the-art domain transfer learning algorithms.
UR - http://www.scopus.com/inward/record.url?scp=84908466104&partnerID=8YFLogxK
U2 - 10.1109/IJCNN.2014.6889428
DO - 10.1109/IJCNN.2014.6889428
M3 - Conference contribution
AN - SCOPUS:84908466104
T3 - Proceedings of the International Joint Conference on Neural Networks
SP - 3605
EP - 3612
BT - Proceedings of the International Joint Conference on Neural Networks
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2014 International Joint Conference on Neural Networks, IJCNN 2014
Y2 - 6 July 2014 through 11 July 2014
ER -