Light field (LF) images captured by plenoptic cameras record both spatial and angular information of real-world scenes, and fully integrating these two kinds of information is beneficial for image super-resolution (SR). However, most existing approaches to LF image SR cannot fully fuse information at the spatial and angular levels. Moreover, SR performance is limited by the ability to incorporate distinctive information from different views and to extract informative features from each view. To address these core issues, we propose a fusion and allocation network (LF-FANet) for LF image SR. Specifically, we design an angular fusion operator (AFO) to fuse distinctive features among different views, and a spatial fusion operator (SFO) to extract deep representation features for each view. Building on these two operators, we further propose a fusion and allocation strategy to incorporate and propagate the fused features. In the fusion stage, the interaction information fusion block (IIFB) fully supplements distinctive and informative features among all views. In the allocation stage, the fused output features are allocated to the next AFO and SFO to further distill valid information. Experimental results on both synthetic and real-world datasets demonstrate that our method achieves performance comparable to state-of-the-art methods. Moreover, our method preserves the parallax structure of LF and generates faithful details of LF images.
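The abstract describes an alternating fusion-and-allocation pipeline: an angular operator fuses features across views, a spatial operator refines each view, a fusion block merges the two branches, and the merged features are allocated to the next pair of operators. The paper itself does not give code, so the sketch below is only an illustrative NumPy analogue of that data flow; the function names, the simple averaging stand-ins for the learned AFO/SFO/IIFB modules, and the 3×3 angular grid are all assumptions, not the authors' implementation.

```python
import numpy as np

def angular_fusion(lf):
    # AFO stand-in: supplement each view with the mean of all other views,
    # so distinctive information is shared across the angular dimension.
    total = lf.sum(axis=0, keepdims=True)
    n_views = lf.shape[0]
    others_mean = (total - lf) / (n_views - 1)
    return 0.5 * (lf + others_mean)

def spatial_fusion(lf):
    # SFO stand-in: a per-view 3x3 box filter in place of learned
    # deep spatial features.
    pad = np.pad(lf, ((0, 0), (1, 1), (1, 1)), mode="edge")
    out = np.zeros_like(lf)
    for dy in range(3):
        for dx in range(3):
            out += pad[:, dy:dy + lf.shape[1], dx:dx + lf.shape[2]]
    return out / 9.0

def fusion_and_allocation(lf, stages=2):
    # IIFB stand-in (fusion stage): merge the angular and spatial branches;
    # allocation stage: feed the fused features to the next AFO/SFO pair.
    feat = lf
    for _ in range(stages):
        fused = 0.5 * (angular_fusion(feat) + spatial_fusion(feat))
        feat = fused
    return feat

# 3x3 angular grid of 16x16 views, flattened along the first axis.
views = np.random.rand(9, 16, 16)
out = fusion_and_allocation(views)
```

In the real network each stand-in would be a learned convolutional module and the output would be upsampled to the target resolution; this sketch only mirrors the alternating fusion/allocation structure the abstract outlines.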
- distinctive information
- interaction operator
- light field
Faculty of Applied Sciences Researcher Provides New Insights into Mathematics (Fusion and Allocation Network for Light Field Image Super-Resolution)
Zewei Wu & Wei Ke