Article

Generative Modelling of BRDF Textures from Flash Images

Journal

ACM TRANSACTIONS ON GRAPHICS
Volume 40, Issue 6, Pages: -

Publisher

ASSOC COMPUTING MACHINERY
DOI: 10.1145/3478513.3480507

Keywords

material capture; appearance capture; SVBRDF; deep learning; generative model; unsupervised learning

Funding

  1. ERC
  2. Google AR/VR Research Award
  3. EPSRC [EP/N006259/1]


The study introduces a method for learning a latent space that captures and reproduces visual material appearance efficiently. Input images are converted into latent material codes, from which BRDF model parameters are generated, allowing realistic appearance to be rendered under various scenes and lighting conditions.
We learn a latent space for easy capture, consistent interpolation, and efficient reproduction of visual material appearance. When a user provides a photo of a stationary natural material captured under flashlight illumination, it is first converted into a latent material code. Then, in the second step, conditioned on the material code, our method produces an infinite and diverse spatial field of BRDF model parameters (diffuse albedo, normals, roughness, specular albedo) that subsequently allows rendering in complex scenes and illuminations, matching the appearance of the input photograph. Technically, we jointly embed all flash images into a latent space using a convolutional encoder and, conditioned on these latent codes, convert random spatial fields into fields of BRDF parameters using a convolutional neural network (CNN). We constrain these BRDF parameters to match the visual characteristics (statistics and spectra of visual features) of the input under matching light. A user study compares our approach favorably to previous work, even methods with access to BRDF supervision. Project webpage: https://henzler.github.io/publication/neuralmaterial/.
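The two-stage pipeline in the abstract (encode a flash photo to a latent material code, then decode a random spatial field into BRDF parameter maps conditioned on that code) can be sketched in miniature. This is a toy illustration only: the layer shapes, the 4-channel noise field, and the linear maps standing in for the paper's convolutional encoder and CNN decoder are all assumptions, not the authors' architecture.

```python
# Toy sketch of the two-stage pipeline described in the abstract.
# Hypothetical shapes and layers; the paper uses full convolutional
# networks, not the linear stand-ins below.
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 64      # assumed size of the latent material code
BRDF_CHANNELS = 8    # diffuse albedo (3) + normals (3) + roughness (1) + specular albedo (1)

# Stand-in "encoder": global-average-pool the flash photo, then a linear map.
W_enc = rng.standard_normal((3, LATENT_DIM)) * 0.1

def encode(flash_image):
    """Map an H x W x 3 flash photo to a latent material code."""
    pooled = flash_image.mean(axis=(0, 1))           # (3,)
    return pooled @ W_enc                            # (LATENT_DIM,)

# Stand-in "decoder": a per-pixel linear map (a 1x1 convolution) over a
# random spatial field, conditioned on the latent code by concatenation.
W_dec = rng.standard_normal((LATENT_DIM + 4, BRDF_CHANNELS)) * 0.1

def decode(latent_code, height, width):
    """Turn a random spatial field into a field of BRDF parameters.
    Because the noise field can be sampled at any size, the output
    texture is unbounded: any resolution, no tiling."""
    noise = rng.standard_normal((height, width, 4))  # random spatial field
    cond = np.broadcast_to(latent_code, (height, width, LATENT_DIM))
    feats = np.concatenate([noise, cond], axis=-1)   # (H, W, LATENT_DIM + 4)
    return feats @ W_dec                             # (H, W, BRDF_CHANNELS)

flash = rng.random((32, 32, 3))       # stand-in for a flash photograph
z = encode(flash)
brdf = decode(z, 64, 64)              # larger than the input: the field is "infinite"
print(brdf.shape)                     # (64, 64, 8)
```

The key design point the sketch preserves is that the decoder's input size is decoupled from the photo's size: the same latent code can condition a BRDF field of any spatial extent.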

