TY - JOUR
T1 - A novel deep learning-based approach for video quality enhancement
AU - Zilouchian Moghaddam, Parham
AU - Modarressi, Mehdi
AU - Sadeghi, Mohammad Amin
N1 - Publisher Copyright:
© 2025 Elsevier Ltd
PY - 2025/3/15
Y1 - 2025/3/15
N2 - Video content has surged in popularity and now dominates both internet traffic and Internet of Things (IoT) networks. Video compression has long been the primary means of managing the substantial multimedia traffic generated by video-capturing devices. However, video compression algorithms demand significant computation to achieve high compression ratios. This complexity makes it challenging to implement efficient video coding standards in resource-constrained embedded systems, such as IoT edge-node cameras. To address this challenge, this paper introduces a deep-learning model designed to mitigate the compression artifacts produced by lossy codecs, significantly improving the perceptual quality of low-bit-rate videos. With the proposed model, the video encoder at the capturing node can reduce its output quality, generating low-bit-rate videos and thereby cutting both computation and bandwidth requirements at the edge. On the decoder side, which is typically less resource-constrained, the proposed model is applied after the video decoder to compensate for artifacts and approximate the quality of the original video. Experimental results confirm the model's effectiveness in enhancing the perceptual quality of videos, especially those streamed at low bit rates.
AB - Video content has surged in popularity and now dominates both internet traffic and Internet of Things (IoT) networks. Video compression has long been the primary means of managing the substantial multimedia traffic generated by video-capturing devices. However, video compression algorithms demand significant computation to achieve high compression ratios. This complexity makes it challenging to implement efficient video coding standards in resource-constrained embedded systems, such as IoT edge-node cameras. To address this challenge, this paper introduces a deep-learning model designed to mitigate the compression artifacts produced by lossy codecs, significantly improving the perceptual quality of low-bit-rate videos. With the proposed model, the video encoder at the capturing node can reduce its output quality, generating low-bit-rate videos and thereby cutting both computation and bandwidth requirements at the edge. On the decoder side, which is typically less resource-constrained, the proposed model is applied after the video decoder to compensate for artifacts and approximate the quality of the original video. Experimental results confirm the model's effectiveness in enhancing the perceptual quality of videos, especially those streamed at low bit rates.
KW - Auto-encoder
KW - Deep generative models
KW - Diffusion models
KW - Internet of Video Things
KW - Video compression
UR - http://www.scopus.com/inward/record.url?scp=85216515063&partnerID=8YFLogxK
U2 - 10.1016/j.engappai.2025.110118
DO - 10.1016/j.engappai.2025.110118
M3 - Article
AN - SCOPUS:85216515063
SN - 0952-1976
VL - 144
JO - Engineering Applications of Artificial Intelligence
JF - Engineering Applications of Artificial Intelligence
M1 - 110118
ER -