Article

HyperNeRF: A Higher-Dimensional Representation for Topologically Varying Neural Radiance Fields

Journal

ACM TRANSACTIONS ON GRAPHICS
Volume 40, Issue 6, Pages -

Publisher

ASSOC COMPUTING MACHINERY
DOI: 10.1145/3478513.3480487

Keywords

Neural Radiance Fields; Novel View Synthesis; 3D Synthesis; Dynamic Scenes; Neural Rendering

Funding

  1. UW Reality Lab
  2. Google
  3. Amazon
  4. Futurewei
  5. Facebook


The HyperNeRF method addresses the challenge of modeling topological changes in non-rigid scenes by lifting NeRFs into a higher-dimensional space and representing the 5D radiance field of each input image as a slice through this hyper-space. Experimental results show that HyperNeRF outperforms existing methods on both smooth interpolation and novel-view synthesis tasks.
Neural Radiance Fields (NeRF) are able to reconstruct scenes with unprecedented fidelity, and various recent works have extended NeRF to handle dynamic scenes. A common approach to reconstruct such non-rigid scenes is through the use of a learned deformation field mapping from coordinates in each input image into a canonical template coordinate space. However, these deformation-based approaches struggle to model changes in topology, as topological changes require a discontinuity in the deformation field, but these deformation fields are necessarily continuous. We address this limitation by lifting NeRFs into a higher dimensional space, and by representing the 5D radiance field corresponding to each individual input image as a slice through this hyper-space. Our method is inspired by level set methods, which model the evolution of surfaces as slices through a higher dimensional surface. We evaluate our method on two tasks: (i) interpolating smoothly between moments, i.e., configurations of the scene, seen in the input images while maintaining visual plausibility, and (ii) novel-view synthesis at fixed moments. We show that our method, which we dub HyperNeRF, outperforms existing methods on both tasks. Compared to Nerfies, HyperNeRF reduces average error rates by 4.1% for interpolation and 8.6% for novel-view synthesis, as measured by LPIPS. Additional videos, results, and visualizations are available at hypernerf.github.io.
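The core idea in the abstract — evaluating a template radiance field at a point lifted into a higher-dimensional "ambient" space, so each input image corresponds to a slice through the hyper-space field — can be sketched as follows. This is a minimal, illustrative sketch, not the authors' implementation: the network shapes, the 8-dimensional per-image latent, and the 2-dimensional ambient space are assumptions chosen for clarity.

```python
import numpy as np

# Illustrative sketch of HyperNeRF's ambient-slicing idea (not the paper's code).
# An ambient network maps a 3D point plus a per-image latent code to coordinates
# in a higher-dimensional ambient space; a template radiance field is then
# evaluated at the concatenated (3D point, ambient) coordinates, so each input
# image selects its own slice through the hyper-space field.

rng = np.random.default_rng(0)

def mlp(in_dim, out_dim, hidden=32):
    """A tiny random two-layer MLP standing in for the learned networks."""
    w1 = rng.standard_normal((in_dim, hidden)) * 0.1
    w2 = rng.standard_normal((hidden, out_dim)) * 0.1
    return lambda x: np.maximum(x @ w1, 0.0) @ w2

AMBIENT_DIM = 2                           # assumed size of the ambient coordinates
LATENT_DIM = 8                            # assumed size of the per-image embedding
ambient_net = mlp(3 + LATENT_DIM, AMBIENT_DIM)    # (point, latent) -> ambient coords
template_field = mlp(3 + AMBIENT_DIM, 4)          # (point, ambient) -> (RGB, density)

def render_point(x, image_latent):
    """Evaluate the hyper-space radiance field on one image's slice."""
    w = ambient_net(np.concatenate([x, image_latent]))   # slice coordinates
    out = template_field(np.concatenate([x, w]))
    rgb, density = out[:3], out[3]
    return rgb, density

x = np.array([0.1, -0.2, 0.5])            # a 3D sample point along a camera ray
latent = rng.standard_normal(LATENT_DIM)  # hypothetical per-image embedding
rgb, density = render_point(x, latent)
print(rgb.shape, density.shape)           # (3,) ()
```

Because the ambient coordinates vary smoothly with the per-image latent, interpolating between two images' latents moves the slice continuously through the hyper-space, which is how the method sidesteps the discontinuities that topological changes would force on a pure deformation field.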
