Abstract
Light field (LF) cameras produce sub-aperture images that capture 3D scenes from multiple perspectives, containing both spatial and angular information. This information can be used to improve image super-resolution (SR). However, existing multi-view LF image SR methods neglect the correlation among all LF views, which is captured by the global-view information. Moreover, due to the diversity of LF views, it is essential to model an adaptation network for each LF image to incorporate complementary information from other views. To address these issues, we propose a global-view information adaptation-guided network (LF-GIANet) for LF image SR. Using information from the global views as guidance, our network dynamically aligns the features of each view to the global domain, and then effectively fuses the spatial and angular information of all LF views through an attention mechanism. LF-GIANet comprises two modules. The global-view adaptation-guided module (GAGM) consists of two segments: global-view information extraction (GIE), which extracts global-view information and constructs guidance factors for each view, and information fusion (IF), which achieves global feature-level alignment by using these factors as the offsets of deformable convolutions. The multi-domain information fusion module (MIFM) handles the high-dimensional LF information and supplements distinctive spatial and angular information from the different LF views. We assess our approach on various synthetic and real scenes and show that it exceeds other state-of-the-art approaches in SR quality and performance, handling both realistic and synthetic LF scenarios well.
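The abstract's key mechanism is using the guidance factors as deformable-convolution offsets to warp each view's features toward the global domain. The core of that operation is per-pixel offset sampling with bilinear interpolation, sketched below in NumPy. This is an illustrative reconstruction, not the paper's implementation: the function names, the single-channel feature map, and the constant offset field are all assumptions for demonstration.

```python
import numpy as np

def bilinear_sample(feat, y, x):
    """Sample a 2-D feature map at fractional coordinates (y, x)."""
    H, W = feat.shape
    y = np.clip(y, 0, H - 1)
    x = np.clip(x, 0, W - 1)
    y0 = np.floor(y).astype(int)
    x0 = np.floor(x).astype(int)
    y1 = np.minimum(y0 + 1, H - 1)
    x1 = np.minimum(x0 + 1, W - 1)
    wy, wx = y - y0, x - x0
    return ((1 - wy) * (1 - wx) * feat[y0, x0]
            + (1 - wy) * wx * feat[y0, x1]
            + wy * (1 - wx) * feat[y1, x0]
            + wy * wx * feat[y1, x1])

def align_view(feat, offsets):
    """Warp one view's feature map using per-pixel (dy, dx) offsets,
    the sampling step a deformable convolution performs internally
    when guidance factors are supplied as its offsets."""
    H, W = feat.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    return bilinear_sample(feat, ys + offsets[0], xs + offsets[1])

# Toy example: a constant offset of (0, 1) shifts every sample
# one pixel to the right (edges are clipped to the border).
feat = np.arange(16, dtype=float).reshape(4, 4)
offsets = np.stack([np.zeros((4, 4)), np.ones((4, 4))])
aligned = align_view(feat, offsets)
```

In the network itself, the offsets would be predicted per view by the GIE segment from global-view features rather than fixed, and the sampling is fused into learned deformable-convolution layers (e.g. `torchvision.ops.deform_conv2d`) rather than applied as a standalone warp.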
Original language | English |
---|---|
Article number | 73 |
Journal | Multimedia Systems |
Volume | 31 |
Issue number | 1 |
DOIs | |
Publication status | Published - Feb 2025 |