TY - GEN
T1 - Disentangling Local and Global Information for Light Field Depth Estimation
AU - Yang, Xueting
AU - Deng, Junli
AU - Chen, Rongshan
AU - Cong, Ruixuan
AU - Ke, Wei
AU - Sheng, Hao
N1 - Funding Information:
This study is partially supported by the National Key R&D Program of China (No. 2022YFC3803600), the National Natural Science Foundation of China (No. 61872025), and the Open Fund of the State Key Laboratory of Software Development Environment (No. SKLSDE-2021ZX-03). We thank the HAWKEYE Group for their support.
Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Accurate depth estimation from light field images is essential for various applications. Deep learning-based techniques have shown great potential in addressing this problem, but they still face challenges such as sensitivity to occlusions and difficulty in handling untextured areas. To overcome these limitations, we propose a novel approach that utilizes both local and global features in the cost volume for depth estimation. Specifically, our hybrid cost volume network consists of two complementary sub-modules: a 2D ContextNet for global context information and a matching cost volume for local feature information. We also introduce an occlusion-aware loss that accounts for occlusion areas to improve depth estimation quality. We demonstrate the effectiveness of our approach on the UrbanLF and HCInew datasets, showing significant improvements over existing methods, especially in occluded and untextured regions. Our method explicitly disentangles local features from global semantic information, reducing reconstruction error in occluded and untextured areas and improving the accuracy of depth estimation.
AB - Accurate depth estimation from light field images is essential for various applications. Deep learning-based techniques have shown great potential in addressing this problem, but they still face challenges such as sensitivity to occlusions and difficulty in handling untextured areas. To overcome these limitations, we propose a novel approach that utilizes both local and global features in the cost volume for depth estimation. Specifically, our hybrid cost volume network consists of two complementary sub-modules: a 2D ContextNet for global context information and a matching cost volume for local feature information. We also introduce an occlusion-aware loss that accounts for occlusion areas to improve depth estimation quality. We demonstrate the effectiveness of our approach on the UrbanLF and HCInew datasets, showing significant improvements over existing methods, especially in occluded and untextured regions. Our method explicitly disentangles local features from global semantic information, reducing reconstruction error in occluded and untextured areas and improving the accuracy of depth estimation.
UR - http://www.scopus.com/inward/record.url?scp=85170827099&partnerID=8YFLogxK
U2 - 10.1109/CVPRW59228.2023.00344
DO - 10.1109/CVPRW59228.2023.00344
M3 - Conference contribution
AN - SCOPUS:85170827099
T3 - IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
SP - 3419
EP - 3427
BT - Proceedings - 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2023
PB - IEEE Computer Society
T2 - 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2023
Y2 - 18 June 2023 through 22 June 2023
ER -