Abstract
Light-field (LF) images can capture scene features from multiple perspectives and also provide additional normal vectors for super-resolution (SR) image processing. Existing deep CNN models for LF image SR are typically trained separately for each target resolution. This rigidity limits their use in practical LF applications, where angular resolution varies considerably across LF devices. A more flexible CNN-based model is therefore needed to super-resolve LF images of different resolutions from the given features. In this work, a preprocessing step that computes a depth channel from the given LF information is first presented. A multiple-decouple and fusion module is then introduced to integrate the VGGreNet for LF image SR; it collects global-to-local information according to the CNN kernel size and dynamically constructs each view through a global view module. In addition, the generated features are transformed into a uniform space for final fusion, achieving global alignment for precise extraction of angular information. Experimental results show that the proposed method handles benchmark LF datasets with various angular and spatial resolutions, demonstrating its effectiveness and potential.
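The abstract describes extracting features at multiple scales (global-to-local, driven by the CNN kernel size) and fusing them in a uniform feature space. The snippet below is a minimal, hypothetical PyTorch sketch of that general idea only; the class and parameter names (`MultiDecoupleFusion`, `kernel_sizes`, the 4-channel RGB+depth input) are illustrative assumptions and do not reproduce the paper's actual VGGreNet architecture.

```python
import torch
import torch.nn as nn


class MultiDecoupleFusion(nn.Module):
    """Extract features at several kernel sizes and fuse them in one space (illustrative sketch)."""

    def __init__(self, in_ch: int = 4, feat_ch: int = 32, kernel_sizes=(7, 5, 3)):
        super().__init__()
        # One branch per kernel size: larger kernels gather more global
        # context, smaller kernels preserve local detail.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, feat_ch, k, padding=k // 2),
                nn.ReLU(inplace=True),
            )
            for k in kernel_sizes
        ])
        # Project the concatenated branch outputs into a single uniform
        # feature space before any final fusion / upsampling stage.
        self.fuse = nn.Conv2d(feat_ch * len(kernel_sizes), feat_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [branch(x) for branch in self.branches]
        return self.fuse(torch.cat(feats, dim=1))


if __name__ == "__main__":
    # A single LF sub-aperture view: RGB plus a precomputed depth channel.
    view = torch.randn(1, 4, 64, 64)
    fused = MultiDecoupleFusion()(view)
    print(fused.shape)  # torch.Size([1, 32, 64, 64])
```

In this sketch the per-kernel branches stand in for the "multiple-decouple" step and the 1x1 convolution for the transform into a uniform space; how the paper actually aligns and fuses angular views is not specified in the abstract.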
| Original language | English |
| --- | --- |
| Article number | e13019 |
| Journal | Electronics Letters |
| Volume | 60 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - Jan 2024 |