Light-field image super-resolution with depth feature by multiple-decouple and fusion module

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Light-field (LF) images capture a live scene from multiple perspectives, offering richer feature information and additional normal vectors that can support super-resolution (SR) image processing. With the benefit of machine learning, established deep CNN models for LF image SR are typically tailored to a single, fixed resolution. However, the angular resolution of LF instruments varies considerably, which makes such fixed-resolution approaches inflexible in practical LF applications. A more general neural network is therefore needed, one in which a single CNN-based model can super-resolve LF images of different resolutions from the given features. In this work, a preprocessing step that computes a depth channel from the given LF information is first presented; a multiple-decouple and fusion module is then introduced to integrate the VGGreNet for LF image SR. The module collects global-to-local information according to the CNN kernel size and dynamically constructs each view through a global view module. In addition, the generated features are transformed into a uniform space for the final fusion, achieving global alignment for precise extraction of angular information. Experimental results show that the proposed method handles benchmark LF datasets with various angular and spatial resolutions, demonstrating its effectiveness and potential performance.
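
To make the described pipeline more concrete, the following minimal PyTorch sketch illustrates one plausible reading of the multiple-decouple and fusion idea: parallel convolutions with different kernel sizes gather global-to-local features, a 1×1 convolution projects them into a uniform space for fusion, and a depth channel produced by the preprocessing step is appended to each view as an extra input channel. This is an assumption-laden illustration, not the authors' released code; the class and variable names (MultiDecoupleFusion, depth, view) are hypothetical.

```python
# Illustrative sketch only: parallel kernel sizes decouple global-to-local
# features, which are then projected into a uniform space and fused.
import torch
import torch.nn as nn


class MultiDecoupleFusion(nn.Module):
    """Decouple features at several kernel sizes, then fuse them in a shared space."""

    def __init__(self, in_ch: int, feat_ch: int = 64, kernel_sizes=(3, 5, 7)):
        super().__init__()
        # One branch per kernel size: larger kernels capture more global context,
        # smaller kernels capture local detail (global-to-local decoupling).
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, feat_ch, k, padding=k // 2) for k in kernel_sizes]
        )
        # A 1x1 convolution maps the concatenated branch outputs to a uniform
        # feature space before the final fusion.
        self.fuse = nn.Conv2d(feat_ch * len(kernel_sizes), feat_ch, 1)
        self.act = nn.LeakyReLU(0.1, inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [self.act(branch(x)) for branch in self.branches]
        return self.act(self.fuse(torch.cat(feats, dim=1)))


if __name__ == "__main__":
    # One LF sub-aperture view (RGB) with a depth channel appended as a fourth
    # input channel, as the abstract's preprocessing step suggests.
    rgb = torch.rand(1, 3, 64, 64)
    depth = torch.rand(1, 1, 64, 64)  # placeholder depth estimate
    view = torch.cat([rgb, depth], dim=1)
    out = MultiDecoupleFusion(in_ch=4)(view)
    print(out.shape)  # torch.Size([1, 64, 64, 64])
```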

Original language: English
Article number: e13019
Journal: Electronics Letters
Volume: 60
Issue number: 1
DOIs
Publication status: Published - Jan 2024

Keywords

  • adaptive signal processing
  • image fusion
  • image processing
  • neural net architecture
  • spatial filters
