Generating Realistic Videos From Keyframes With Concatenated GANs

Shiping Wen, Weiwei Liu, Yin Yang, Tingwen Huang*, Zhigang Zeng

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

96 Citations (Scopus)

Abstract

Given two video frames X0 and Xn+1, we aim to generate a series of intermediate frames Y1, Y2, ..., Yn, such that the resulting video, consisting of frames X0, Y1, ..., Yn, and Xn+1, appears realistic to a human viewer. Such video generation has numerous important applications, including video compression, movie production, slow-motion filming, video surveillance, and forensic analysis. Yet, video generation is highly challenging due to the vast search space of possible frames. Previous methods, mostly based on video prediction and/or video interpolation, tend to generate poor-quality videos with severe motion blur. This paper proposes a novel, end-to-end approach to video generation using generative adversarial networks (GANs). In particular, our design involves two concatenated GANs, one capturing motion and the other generating frame details. The loss function is also carefully engineered to combine an adversarial loss, a gradient difference loss (for motion learning), and a normalized product correlation loss (for frame details). Experiments on three video datasets, namely Google Robotic Push, KTH human actions, and UCF101, demonstrate that the proposed solution generates high-quality, realistic, and sharp videos, whereas all previous solutions output noisy and blurry results.
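The abstract names three loss terms but the page does not give their exact formulation. The following is a hedged NumPy sketch of how such a composite generator loss might be assembled; the function names, the specific gradient/correlation definitions, and the weighting coefficients are all illustrative assumptions, not the paper's actual equations.

```python
import numpy as np

def gradient_difference_loss(pred, target):
    """L1 difference of spatial gradients between predicted and target
    frames; penalizing blurred edges is the usual intent of a gradient
    difference term (illustrative stand-in for the paper's version)."""
    dy_p, dx_p = np.abs(np.diff(pred, axis=0)), np.abs(np.diff(pred, axis=1))
    dy_t, dx_t = np.abs(np.diff(target, axis=0)), np.abs(np.diff(target, axis=1))
    return np.mean(np.abs(dy_p - dy_t)) + np.mean(np.abs(dx_p - dx_t))

def normalized_product_correlation_loss(pred, target, eps=1e-8):
    """1 minus the normalized cross-correlation of the two frames,
    so perfectly correlated frame detail gives (near-)zero loss."""
    p = pred - pred.mean()
    t = target - target.mean()
    ncc = np.sum(p * t) / (np.linalg.norm(p) * np.linalg.norm(t) + eps)
    return 1.0 - ncc

def total_generator_loss(pred, target, adv_score,
                         w_adv=0.05, w_gdl=1.0, w_npc=1.0):
    """Weighted sum of adversarial, gradient-difference, and normalized
    product correlation terms; the weights here are arbitrary placeholders."""
    adversarial = -np.log(adv_score + 1e-8)  # non-saturating generator loss
    return (w_adv * adversarial
            + w_gdl * gradient_difference_loss(pred, target)
            + w_npc * normalized_product_correlation_loss(pred, target))

# Toy usage: an identical predicted frame zeroes the GDL and NPC terms.
frame = np.random.rand(8, 8)
loss = total_generator_loss(frame, frame, adv_score=0.9)
```

In practice these terms would be computed on minibatches inside the training loop, with the adversarial score supplied by the discriminator of the corresponding GAN stage.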

Original language: English
Pages (from-to): 2337-2348
Number of pages: 12
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Volume: 29
Issue number: 8
DOIs
Publication status: Published - 1 Aug 2019

Keywords

  • Video generation
  • concatenated GANs
  • generative adversarial networks (GANs)
