Multi-Attention Guided SKFHDRNet For HDR Video Reconstruction

Ehsan Ullah, Marius Pedersen, Kjartan Sebastian Waaseth, Bernt Erik Baltzersen

Research output: Contribution to journal › Article › peer-review

Abstract

We propose a three-stage learning-based approach for High Dynamic Range (HDR) video reconstruction from sequences with alternating exposures. The first stage aligns the neighboring frames to the reference frame by estimating the flows between them; the second stage refines the aligned features with multi-attention modules and a pyramid cascading deformable alignment module; and the final stage merges the features and estimates the HDR frame using a series of dilated selective kernel fusion residual dense blocks (DSKFRDBs) that fill over-exposed regions with detail. On a dynamic dataset, the proposed model variants achieve HDR-VDP-2 scores of 79.12, 78.49, and 78.89, respectively, compared to 79.09 for Chen et al. [“HDR video reconstruction: A coarse-to-fine network and a real-world benchmark dataset,” Proc. IEEE/CVF Int'l. Conf. on Computer Vision (IEEE, Piscataway, NJ, 2021), pp. 2502-2511], 78.69 for Yan et al. [“Attention-guided network for ghost-free high dynamic range imaging,” Proc. IEEE/CVF Conf. on Computer Vision and Pattern Recognition (IEEE, Piscataway, NJ, 2019), pp. 1751-1760], 70.36 for Kalantari et al. [“Patch-based high dynamic range video,” ACM Trans. Graph. 32 (2013) 202-1], and 77.91 for Kalantari et al. [“Deep HDR video from sequences with alternating exposures,” Computer Graphics Forum (Wiley Online Library, 2019), Vol. 38, pp. 193-205]. The proposed method achieves better detail reproduction and alignment in over-exposed regions than state-of-the-art methods, with fewer parameters.
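The abstract describes the architecture only at a high level, so the following PyTorch sketch merely illustrates the three-stage structure it outlines; it is not the authors' implementation. All class names (ThreeStageHDRSketch, SpatialAttention, DilatedSKFBlock), channel counts, and the 6-channel frame convention are illustrative assumptions, the flow estimator and attention module are drastically simplified stand-ins, and the paper's pyramid cascading deformable alignment module is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv(in_ch, out_ch, dilation=1):
    """3x3 convolution that preserves spatial size."""
    return nn.Conv2d(in_ch, out_ch, 3, padding=dilation, dilation=dilation)


class SpatialAttention(nn.Module):
    """Attention sketch: weight aligned-neighbor features by their
    agreement with the reference features (hypothetical layer sizes)."""
    def __init__(self, ch):
        super().__init__()
        self.net = nn.Sequential(conv(2 * ch, ch), nn.ReLU(inplace=True),
                                 conv(ch, ch), nn.Sigmoid())

    def forward(self, neighbor_feat, ref_feat):
        mask = self.net(torch.cat([neighbor_feat, ref_feat], dim=1))
        return neighbor_feat * mask


class DilatedSKFBlock(nn.Module):
    """Stand-in for a dilated selective kernel fusion block: two dilated
    branches fused by learned softmax weights, plus a residual path."""
    def __init__(self, ch):
        super().__init__()
        self.b1 = conv(ch, ch, dilation=1)
        self.b2 = conv(ch, ch, dilation=2)
        self.gate = nn.Linear(ch, 2)

    def forward(self, x):
        u1, u2 = F.relu(self.b1(x)), F.relu(self.b2(x))
        s = (u1 + u2).mean(dim=(2, 3))          # global channel descriptor
        w = F.softmax(self.gate(s), dim=1)      # per-branch selection weights
        fused = w[:, 0, None, None, None] * u1 + w[:, 1, None, None, None] * u2
        return x + fused


class ThreeStageHDRSketch(nn.Module):
    """Minimal end-to-end sketch of the three stages described above.
    Each frame is assumed to be 6 channels (LDR image plus its
    linearized version), a common convention in HDR-video work."""
    def __init__(self, ch=32):
        super().__init__()
        self.flow = nn.Sequential(conv(12, ch), nn.ReLU(inplace=True),
                                  conv(ch, 2))
        self.feat = nn.Sequential(conv(6, ch), nn.ReLU(inplace=True))
        self.attn = SpatialAttention(ch)
        self.merge = nn.Sequential(conv(3 * ch, ch), nn.ReLU(inplace=True),
                                   DilatedSKFBlock(ch), DilatedSKFBlock(ch),
                                   conv(ch, 3), nn.Sigmoid())

    def warp(self, img, flow):
        """Backward-warp img with a dense flow field via grid_sample."""
        n, _, h, w = img.shape
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, h, device=img.device),
                                torch.linspace(-1, 1, w, device=img.device),
                                indexing="ij")
        grid = torch.stack([xs, ys], -1).unsqueeze(0).expand(n, -1, -1, -1)
        norm = torch.tensor([w / 2.0, h / 2.0], device=img.device)
        return F.grid_sample(img, grid + flow.permute(0, 2, 3, 1) / norm,
                             align_corners=True)

    def forward(self, prev, ref, nxt):
        # Stage 1: estimate flow and align each neighbor to the reference.
        aligned = [self.warp(nb, self.flow(torch.cat([nb, ref], 1)))
                   for nb in (prev, nxt)]
        # Stage 2: refine aligned features with attention against the reference.
        f_ref = self.feat(ref)
        f_nbs = [self.attn(self.feat(a), f_ref) for a in aligned]
        # Stage 3: fuse everything and predict the HDR frame.
        return self.merge(torch.cat([f_nbs[0], f_ref, f_nbs[1]], 1))


if __name__ == "__main__":
    frames = [torch.rand(1, 6, 64, 64) for _ in range(3)]  # prev, ref, next
    hdr = ThreeStageHDRSketch()(*frames)
    print(hdr.shape)  # torch.Size([1, 3, 64, 64])
```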

Original language: English
Journal: Journal of Imaging Science and Technology
Volume: 67
Issue number: 5
DOIs
Publication status: Published - Sept 2023
Externally published: Yes
