TY - GEN
T1 - Infer the Input to the Generator of Auxiliary Classifier Generative Adversarial Networks
AU - Peng, Xiaoming
AU - Bouzerdoum, Abdesselam
AU - Phung, Son Lam
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020/10
Y1 - 2020/10
N2 - Generative Adversarial Networks (GANs) are deep-learning-based generative models. This paper presents three methods to infer the input to the generator of auxiliary classifier generative adversarial networks (ACGANs), which are a type of conditional GAN. The first two methods, named i-ACGAN-r and i-ACGAN-d, are 'inverting' methods, which obtain an inverse mapping from an image to the class label and the latent sample. By contrast, the third method, referred to as i-ACGAN-e, directly infers both the class label and the latent sample by introducing an encoder into an ACGAN. The three methods were evaluated on two natural scene datasets, using two performance measures: the class recovery accuracy and the image reconstruction error. Experimental results show that i-ACGAN-e outperforms the other two methods in terms of the class recovery accuracy. However, the images generated by the other two methods have smaller image reconstruction errors. The source code is publicly available from https://github.com/XMPeng/Infer-Input-ACGAN.
AB - Generative Adversarial Networks (GANs) are deep-learning-based generative models. This paper presents three methods to infer the input to the generator of auxiliary classifier generative adversarial networks (ACGANs), which are a type of conditional GAN. The first two methods, named i-ACGAN-r and i-ACGAN-d, are 'inverting' methods, which obtain an inverse mapping from an image to the class label and the latent sample. By contrast, the third method, referred to as i-ACGAN-e, directly infers both the class label and the latent sample by introducing an encoder into an ACGAN. The three methods were evaluated on two natural scene datasets, using two performance measures: the class recovery accuracy and the image reconstruction error. Experimental results show that i-ACGAN-e outperforms the other two methods in terms of the class recovery accuracy. However, the images generated by the other two methods have smaller image reconstruction errors. The source code is publicly available from https://github.com/XMPeng/Infer-Input-ACGAN.
KW - ACGANs
KW - encoder
KW - inverse mapping
UR - http://www.scopus.com/inward/record.url?scp=85098643986&partnerID=8YFLogxK
U2 - 10.1109/ICIP40778.2020.9190658
DO - 10.1109/ICIP40778.2020.9190658
M3 - Conference contribution
AN - SCOPUS:85098643986
T3 - Proceedings - International Conference on Image Processing, ICIP
SP - 76
EP - 80
BT - 2020 IEEE International Conference on Image Processing, ICIP 2020 - Proceedings
PB - IEEE Computer Society
T2 - 2020 IEEE International Conference on Image Processing, ICIP 2020
Y2 - 25 September 2020 through 28 September 2020
ER -