Journal
ACM TRANSACTIONS ON GRAPHICS
Volume 37, Issue 4
Publisher
ASSOC COMPUTING MACHINERY
DOI: 10.1145/3197517.3201288
Keywords
Point cloud processing; deep neural network; point-wise displacement; point set transform
Funding
- NSFC [61522213, 61761146002, 61861130365]
- 973 Program [2015CB352501]
- Guangdong Science Program [2015A030312015]
- Shenzhen Innovation Program [KQJSCX20170727101233642, JCYJ20151015151249564]
- ISF-NSFC Joint Research Program [2217/15, 2472/17]
- Israel Science Foundation [2366/16]
- NSERC [611370]
Abstract
We introduce P2P-NET, a general-purpose deep neural network which learns geometric transformations between point-based shape representations from two domains, e.g., meso-skeletons and surfaces, partial and complete scans, etc. The architecture of P2P-NET is that of a bi-directional point displacement network, which transforms a source point set into a prediction of the target point set with the same cardinality, and vice versa, by applying point-wise displacement vectors learned from data. P2P-NET is trained on paired shapes from the source and target domains, but without relying on point-to-point correspondences between the source and target point sets. The training loss combines two uni-directional geometric losses, each enforcing a shape-wise similarity between the predicted and the target point sets, and a cross-regularization term to encourage consistency between displacement vectors going in opposite directions. We develop and present several applications enabled by our general-purpose bidirectional P2P-NET to highlight the effectiveness, versatility, and potential of our network in solving a variety of point-based shape transformation problems.
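The abstract describes a loss with two uni-directional geometric terms and a cross-regularization term, but does not give formulas. The sketch below is an illustrative NumPy reconstruction, not the paper's exact loss: it assumes a symmetric Chamfer distance for the shape-wise geometric terms and a nearest-neighbor pairing for the cross-regularization on opposing displacement fields. The function names, the pairing scheme, and the weight `lam` are my own assumptions.

```python
import numpy as np

def chamfer_distance(P, Q):
    """Symmetric Chamfer distance between point sets P (n,3) and Q (m,3):
    mean nearest-neighbor squared distance from each set to the other."""
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1) ** 2
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def p2p_loss(src, tgt, disp_fwd, disp_bwd, lam=0.1):
    """Illustrative bi-directional displacement loss (assumed form).

    src, tgt:   source/target point sets, shape (n,3) and (m,3)
    disp_fwd:   per-point displacements predicted on src, shape (n,3)
    disp_bwd:   per-point displacements predicted on tgt, shape (m,3)
    """
    # Uni-directional geometric terms: displaced source should match the
    # target set, and displaced target should match the source set.
    pred_tgt = src + disp_fwd
    pred_src = tgt + disp_bwd
    geo = chamfer_distance(pred_tgt, tgt) + chamfer_distance(pred_src, src)

    # Cross-regularization (one plausible form): pair each displaced source
    # point with its nearest target point and penalize the case where the
    # forward and backward displacements fail to cancel out.
    idx = np.linalg.norm(pred_tgt[:, None, :] - tgt[None, :, :],
                         axis=-1).argmin(axis=1)
    cross = np.mean(np.sum((disp_fwd + disp_bwd[idx]) ** 2, axis=1))
    return geo + lam * cross
```

With identical point sets and zero displacements, both the geometric terms and the cross term vanish, so the loss is zero; any residual displacement inconsistency raises the cross term even when the Chamfer terms are small.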