TY - JOUR
T1 - Generating Realistic Videos From Keyframes With Concatenated GANs
AU - Wen, Shiping
AU - Liu, Weiwei
AU - Yang, Yin
AU - Huang, Tingwen
AU - Zeng, Zhigang
N1 - Publisher Copyright:
© 2018 IEEE.
PY - 2019/8/1
Y1 - 2019/8/1
N2 - Given two video frames X0 and Xn+1, we aim to generate a series of intermediate frames Y1, Y2, ..., Yn, such that the resulting video consisting of frames X0, Y1 - Yn, and Xn+1 appears realistic to a human viewer. Such video generation has numerous important applications, including video compression, movie production, slow-motion filming, video surveillance, and forensic analysis. Yet, video generation is highly challenging due to the vast search space of possible frames. Previous methods, mostly based on video prediction and/or video interpolation, tend to generate poor-quality videos with severe motion blur. This paper proposes a novel, end-to-end approach to video generation using generative adversarial networks (GANs). In particular, our design involves two concatenated GANs, one capturing motions and the other generating frame details. The loss function is also carefully engineered to include adversarial loss, gradient difference (for motion learning), and normalized product correlation loss (for frame details). Experiments using three video datasets, namely, Google Robotic Push, KTH human actions, and UCF101, demonstrate that the proposed solution generates high-quality, realistic, and sharp videos, whereas all previous solutions output noisy and blurry results.
AB - Given two video frames X0 and Xn+1, we aim to generate a series of intermediate frames Y1, Y2, ..., Yn, such that the resulting video consisting of frames X0, Y1 - Yn, and Xn+1 appears realistic to a human viewer. Such video generation has numerous important applications, including video compression, movie production, slow-motion filming, video surveillance, and forensic analysis. Yet, video generation is highly challenging due to the vast search space of possible frames. Previous methods, mostly based on video prediction and/or video interpolation, tend to generate poor-quality videos with severe motion blur. This paper proposes a novel, end-to-end approach to video generation using generative adversarial networks (GANs). In particular, our design involves two concatenated GANs, one capturing motions and the other generating frame details. The loss function is also carefully engineered to include adversarial loss, gradient difference (for motion learning), and normalized product correlation loss (for frame details). Experiments using three video datasets, namely, Google Robotic Push, KTH human actions, and UCF101, demonstrate that the proposed solution generates high-quality, realistic, and sharp videos, whereas all previous solutions output noisy and blurry results.
KW - Video generation
KW - concatenated GANs
KW - generative adversarial networks (GANs)
UR - http://www.scopus.com/inward/record.url?scp=85052625301&partnerID=8YFLogxK
U2 - 10.1109/TCSVT.2018.2867934
DO - 10.1109/TCSVT.2018.2867934
M3 - Article
AN - SCOPUS:85052625301
SN - 1051-8215
VL - 29
SP - 2337
EP - 2348
JO - IEEE Transactions on Circuits and Systems for Video Technology
JF - IEEE Transactions on Circuits and Systems for Video Technology
IS - 8
ER -