Progressive Multi-Scale Fusion Network for Light Field Super-Resolution

Wei Zhang, Wei Ke, Hao Sheng, Zhang Xiong

Research output: Contribution to journal › Article › peer-review



Light field (LF) cameras record multiple views of a single scene, providing both spatial and angular information that can improve image super-resolution (SR). However, incorporating the distinctive information carried by different LF views is challenging, and, due to the limited resolution of the image sensor, spatial and angular resolution involve an inherent trade-off. In this paper, we propose a progressive multi-scale fusion network (PMFN) to improve LF SR performance. Specifically, a progressive feature fusion block (PFFB) built on an encoder-decoder structure is designed to implicitly align disparities and integrate complementary information across views. The core module of the PFFB is a dual-branch multi-scale fusion module (DMFM), which fuses features from a reference view and auxiliary views into a single fused feature. Each DMFM consists of two parallel branches with different receptive fields, which fuse hierarchical features from the complementary views. Three densely connected DMFMs are used in the PFFB, fully exploiting multi-level features to improve SR performance. Experimental results on both synthetic and real-world datasets demonstrate that the proposed model achieves state-of-the-art performance among existing methods. Moreover, qualitative results show that our method also generates faithful details.
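The abstract does not include an implementation, but the fusion idea it describes (two parallel branches with different receptive fields whose outputs are merged, repeated over three densely connected stages) can be sketched in plain NumPy. Everything below is a hypothetical illustration: the function names (`dmfm_fuse`, `pffb`), the kernel sizes 3 and 5, and the use of simple box filters in place of learned convolutions are assumptions, not the authors' architecture.

```python
import numpy as np

def box_filter(x, k):
    """k x k mean filter with zero padding (stride 1); a stand-in for a
    convolution branch whose receptive field is k."""
    pad = k // 2
    xp = np.pad(x, pad, mode="constant")
    h, w = x.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = xp[i:i + k, j:j + k].mean()
    return out

def dmfm_fuse(ref, aux, ks=(3, 5)):
    """Hypothetical dual-branch multi-scale fusion: merge reference and
    auxiliary features, filter the result at two receptive fields in
    parallel, then combine the branch outputs."""
    merged = 0.5 * (ref + aux)  # stand-in for concatenation + 1x1 conv
    branches = [box_filter(merged, k) for k in ks]
    return sum(branches) / len(branches)

def pffb(ref, aux, stages=3):
    """Three fusion stages with dense connections: each stage sees an
    aggregate of all earlier outputs, loosely mirroring the 'three DMFMs
    with a dense connection' design."""
    feats = [ref]
    for _ in range(stages):
        feats.append(dmfm_fuse(sum(feats) / len(feats), aux))
    return feats[-1]

# Toy usage on 8x8 single-channel "features" from two views.
ref = np.random.rand(8, 8)
aux = np.random.rand(8, 8)
fused = pffb(ref, aux)
```

In the paper the branches are learned layers and the views are feature maps with channels; the sketch only shows the data flow, i.e. how multi-scale branch outputs and densely connected stage outputs are combined.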

Original language: English
Article number: 7135
Journal: Applied Sciences (Switzerland)
Issue number: 14
Publication status: Published - Jul 2022


  • complementary information
  • feature fusion
  • light field
  • multi-scale
  • super-resolution


