Article

Local Light Field Fusion: Practical View Synthesis with Prescriptive Sampling Guidelines

Journal

ACM TRANSACTIONS ON GRAPHICS
Volume 38, Issue 4, Pages: -

Publisher

ASSOC COMPUTING MACHINERY
DOI: 10.1145/3306346.3322980

Keywords

view synthesis; plenoptic sampling; light fields; image-based rendering; deep learning

Funding

  1. Hertz Foundation Fellowship
  2. NSF [1617234, 1617794]
  3. ONR [N000141712687]
  4. Google Research Awards
  5. Alfred P. Sloan Foundation Fellowship
  6. NSF Directorate for Computer & Information Science & Engineering, Division of Information & Intelligent Systems [1617234, 1617794] Funding Source: National Science Foundation

Abstract

We present a practical and robust deep learning solution for capturing and rendering novel views of complex real-world scenes for virtual exploration. Previous approaches either require intractably dense view sampling or provide little to no guidance for how users should sample views of a scene to reliably render high-quality novel views. Instead, we propose an algorithm for view synthesis from an irregular grid of sampled views that first expands each sampled view into a local light field via a multiplane image (MPI) scene representation, then renders novel views by blending adjacent local light fields. We extend traditional plenoptic sampling theory to derive a bound that specifies precisely how densely users should sample views of a given scene when using our algorithm. In practice, we apply this bound to capture and render views of real-world scenes that achieve the perceptual quality of Nyquist rate view sampling while using up to 4000x fewer views. We demonstrate our approach's practicality with an augmented reality smartphone app that guides users to capture input images of a scene and viewers that enable real-time virtual exploration on desktop and mobile platforms.
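The abstract's two core rendering operations, compositing an MPI into a single view and blending renderings from adjacent local light fields, can be illustrated with a short sketch. The following is a minimal NumPy mock-up, not the paper's implementation: the actual renderer first reprojects each MPI plane into the novel viewpoint via planar homographies before compositing, and the function names and the distance-based blending weights here are illustrative assumptions.

```python
import numpy as np

def composite_mpi(planes):
    """Alpha-composite an MPI's RGBA planes, back to front, into one RGB image.

    planes: (D, H, W, 4) array of RGBA layers ordered farthest-first,
    alpha in [0, 1]. (Illustrative only: the paper's renderer first warps
    each plane into the novel camera via a planar homography.)
    """
    rgb = np.zeros(planes.shape[1:3] + (3,))
    for layer in planes:                           # farthest -> nearest
        color, alpha = layer[..., :3], layer[..., 3:]
        rgb = color * alpha + rgb * (1.0 - alpha)  # standard "over" operator
    return rgb

def blend_local_light_fields(renders, weights):
    """Blend candidate renderings of the same novel view from adjacent MPIs.

    renders: (N, H, W, 3); weights: (N,) scalars, e.g. falling off with
    distance from each sampled view (a hypothetical weighting scheme).
    """
    w = np.asarray(weights, dtype=float)
    w /= w.sum()                                   # normalize to sum to 1
    return np.tensordot(w, renders, axes=1)        # weighted average of renders

# Toy usage with random data: 8-plane MPIs from two adjacent sampled views.
D, H, W = 8, 4, 4
view_a = composite_mpi(np.random.rand(D, H, W, 4))
view_b = composite_mpi(np.random.rand(D, H, W, 4))
novel = blend_local_light_fields(np.stack([view_a, view_b]), [0.3, 0.7])
```

Loosely stated, the paper's prescriptive bound ties the allowable spacing between adjacent input views to the number of MPI depth planes D: representing each view as a D-plane MPI permits sampling roughly D times more sparsely per axis than the Nyquist rate, which, compounded across dimensions and combined with the target rendering resolution, yields the quoted reduction of up to 4000x.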

