
Light-VQA: A Multi-Dimensional Quality Assessment Model for Low-Light Video Enhancement

  • Yunlong Dong
  • Xiaohong Liu
  • Yixuan Gao
  • Xunchu Zhou
  • Tao Tan
  • Guangtao Zhai

  Shanghai Jiao Tong University

Research output: Conference contribution · Peer-reviewed

12 citations (Scopus)

Abstract

Recently, User Generated Content (UGC) videos have become ubiquitous in our daily lives. However, due to the limitations of photographic equipment and techniques, UGC videos often contain various degradations, among which one of the most visually unfavorable is underexposure. Therefore, corresponding video enhancement algorithms such as Low-Light Video Enhancement (LLVE) have been proposed to deal with this specific degradation. However, unlike video enhancement algorithms, almost all existing Video Quality Assessment (VQA) models are built generally rather than specifically, measuring the quality of a video from a comprehensive perspective. To the best of our knowledge, there is no VQA model specially designed for videos enhanced by LLVE algorithms. To this end, we first construct a Low-Light Video Enhancement Quality Assessment (LLVE-QA) dataset, in which 254 original low-light videos are collected and then enhanced by 8 LLVE algorithms to obtain 2,060 videos in total. Moreover, we propose a quality assessment model specialized for LLVE, named Light-VQA. More concretely, since brightness and noise have the greatest impact on low-light enhanced VQA, we handcraft corresponding features and integrate them with deep-learning-based semantic features as the overall spatial information. As for temporal information, in addition to deep-learning-based motion features, we also investigate handcrafted brightness consistency among video frames, and the overall temporal information is their concatenation. Subsequently, the spatial and temporal information is fused to obtain the quality-aware representation of a video. Extensive experimental results show that our Light-VQA achieves the best performance against the current State-Of-The-Art (SOTA) on LLVE-QA and a public dataset. The dataset and code can be found at https://github.com/wenzhouyidu/Light-VQA.
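The feature pipeline described in the abstract — handcrafted brightness and noise features concatenated with deep semantic features (spatial), handcrafted brightness consistency concatenated with deep motion features (temporal), and a final fusion of both — can be sketched as follows. This is a minimal illustration only; the feature dimensions and function names are hypothetical, and the actual Light-VQA model fuses these representations with learned layers rather than plain concatenation.

```python
import numpy as np

def fuse_features(semantic, brightness, noise, motion, brightness_consistency):
    """Sketch of Light-VQA-style feature aggregation (hypothetical dims)."""
    # Spatial information: deep semantic features integrated with
    # handcrafted brightness and noise features.
    spatial = np.concatenate([semantic, brightness, noise])
    # Temporal information: deep motion features concatenated with the
    # handcrafted brightness-consistency feature across frames.
    temporal = np.concatenate([motion, brightness_consistency])
    # Fuse spatial and temporal information into one quality-aware vector.
    return np.concatenate([spatial, temporal])

# Example with placeholder dimensions (not taken from the paper):
rep = fuse_features(np.zeros(512), np.zeros(16), np.zeros(16),
                    np.zeros(256), np.zeros(16))
print(rep.shape)  # (816,)
```

In the paper itself, a regression head maps the fused quality-aware representation to a scalar quality score; the concatenation above only illustrates how the spatial and temporal branches are combined.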

Original language: English
Title of host publication: MM 2023 - Proceedings of the 31st ACM International Conference on Multimedia
Publisher: Association for Computing Machinery, Inc
Pages: 1088-1097
Number of pages: 10
ISBN (electronic): 9798400701085
DOIs
Publication status: Published - 27 Oct 2023
Event: 31st ACM International Conference on Multimedia, MM 2023 - Ottawa, Canada
Duration: 29 Oct 2023 - 3 Nov 2023

Publication series

Name: MM 2023 - Proceedings of the 31st ACM International Conference on Multimedia

Conference

Conference: 31st ACM International Conference on Multimedia, MM 2023
Country/Territory: Canada
City: Ottawa
Period: 29/10/23 - 3/11/23

