VPRF: Visual Perceptual Radiance Fields for Foveated Image Synthesis

Zijun Wang, Jian Wu, Runze Fan, Wei Ke, Lili Wang

Research output: Contribution to journal › Article › peer-review

Abstract

Neural radiance fields (NeRF) have achieved a revolutionary breakthrough in novel view synthesis for complex 3D scenes. However, this new paradigm struggles to meet the requirements of real-time rendering and high perceptual quality in virtual reality. In this paper, we propose VPRF, a novel visual-perception-based radiance field representation method, which for the first time integrates the visual acuity and contrast sensitivity models of the human visual system (HVS) into the radiance field rendering framework. First, we encode both the appearance and the visual sensitivity information of the scene into our radiance field representation. Then, we propose a visual perceptual sampling strategy that allocates computational resources according to the HVS's sensitivity to different regions. Finally, we propose a sampling-weight-constrained training scheme to ensure the effectiveness of our sampling strategy and to improve the radiance field representation based on the scene content. Experimental results demonstrate that our method renders more efficiently, with higher PSNR and SSIM in the foveal and salient regions, than the state-of-the-art FoV-NeRF. The results of a user study confirm that our rendering results exhibit high-fidelity visual perception.
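The abstract's core idea of allocating computation by HVS sensitivity can be illustrated with a short sketch. The code below assigns each camera ray a sample budget that falls off with angular eccentricity from the gaze direction, using a hyperbolic acuity falloff with a half-resolution constant `e2`. The function names, the parameter values, and the acuity model itself are illustrative assumptions, not the paper's actual VPRF strategy, which additionally incorporates contrast sensitivity and learned sampling weights.

```python
import numpy as np

def relative_acuity(ecc_deg, e2=2.3):
    """Assumed hyperbolic acuity falloff: 1.0 at the fovea, halved at
    eccentricity e2 (degrees), decaying toward the periphery. e2 is an
    illustrative constant, not a value taken from the paper."""
    return e2 / (e2 + ecc_deg)

def allocate_ray_samples(ray_dirs, gaze_dir, n_min=8, n_max=128):
    """Map each ray's angular distance from the gaze direction to a
    per-ray sample count, so foveal rays are sampled densely and
    peripheral rays sparsely. All direction vectors are unit length."""
    cos_angle = np.clip(ray_dirs @ gaze_dir, -1.0, 1.0)
    ecc_deg = np.degrees(np.arccos(cos_angle))
    acuity = relative_acuity(ecc_deg)
    return np.round(n_min + (n_max - n_min) * acuity).astype(int)

# Example: a ray at the gaze point vs. one about 30 degrees out.
gaze = np.array([0.0, 0.0, 1.0])
rays = np.array([[0.0, 0.0, 1.0],
                 [0.5, 0.0, np.sqrt(1 - 0.25)]])
print(allocate_ray_samples(rays, gaze))  # [128  17]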

Original language: English
Journal: IEEE Transactions on Visualization and Computer Graphics
Publication status: Accepted/In press - 2024
Externally published: Yes

Keywords

  • Contrast sensitivity
  • Foveated rendering
  • Virtual reality
  • Visual perceptual
