Article

Dynamic Hair Capture using Spacetime Optimization

Journal

ACM TRANSACTIONS ON GRAPHICS
Volume 33, Issue 6, Pages -

Publisher

ASSOC COMPUTING MACHINERY
DOI: 10.1145/2661229.2661284

Keywords

image-based hair modeling; dynamic capture

Funding

  1. National Natural Science Foundation of China [61272348, 61202235]
  2. Ph.D. Program Foundation of Ministry of Education of China [20111102110018]
  3. National Key Technology Research & Development Program of China [2014BAK18B01]

Abstract

Dynamic hair strands have complex structures and experience intricate collisions and occlusion, posing significant challenges for high-quality reconstruction of their motions. We present a comprehensive dynamic hair capture system for reconstructing realistic hair motions from multiple synchronized video sequences. To recover hair strands' temporal correspondence, we propose a motion-path analysis algorithm that can robustly track local hair motions in input videos. To ensure the spatial and temporal coherence of the dynamic capture, we formulate the global hair reconstruction as a spacetime optimization problem solved iteratively. Demonstrated using a range of real-world hairstyles driven by different wind conditions and head motions, our approach is able to reconstruct complex hair dynamics matching closely with video recordings both in terms of geometry and motion details.
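
As a rough illustration only (the notation and the specific energy terms below are assumptions, not the paper's formulation), a spacetime objective of this kind typically couples a per-frame data term with spatial and temporal coherence regularizers over all strand vertex positions x_{i,t}, and is minimized iteratively, e.g. by alternating per-frame fitting with temporal smoothing passes:

    E(\{x_{i,t}\}) = \sum_{t} E_{\mathrm{data}}(X_t)
      + \lambda_s \sum_{t} \sum_{(i,j) \in \mathcal{N}} \| (x_{i,t} - x_{j,t}) - (\bar{x}_i - \bar{x}_j) \|^2
      + \lambda_t \sum_{i} \sum_{t} \| x_{i,t+1} - 2 x_{i,t} + x_{i,t-1} \|^2

Here X_t stacks the vertices of frame t, \bar{x} denotes a reference strand configuration, \mathcal{N} a set of neighboring vertex pairs, and \lambda_s, \lambda_t weight the spatial and temporal coherence terms.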

Recommended

Article Computer Science, Software Engineering

StyleCariGAN: Caricature Generation via StyleGAN Feature Map Modulation

Wonjong Jang, Gwangjin Ju, Yucheol Jung, Jiaolong Yang, Xin Tong, Seungyong Lee

Summary: Our framework, StyleCariGAN, uses shape and style manipulation with StyleGAN feature map modulation to automatically generate realistic and detailed caricatures with optional controls. Experimental results show that StyleCariGAN produces more realistic and detailed caricatures than current state-of-the-art methods. StyleCariGAN also supports other StyleGAN-based image manipulations, such as facial expression control.

ACM TRANSACTIONS ON GRAPHICS (2021)

Article Computer Science, Software Engineering

VirtualCube: An Immersive 3D Video Communication System

Yizhong Zhang, Jiaolong Yang, Zhen Liu, Ruicheng Wang, Guojun Chen, Xin Tong, Baining Guo

Summary: The VirtualCube system is a 3D video conference system that utilizes virtual cubicles and advanced rendering techniques to enable realistic interactions and eye contact with remote participants.

IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS (2022)

Editorial Material Computer Science, Software Engineering

Message from the Best Paper Award Committee

Ming C. Lin, Xin Tong, Wenping Wang

COMPUTATIONAL VISUAL MEDIA (2022)

Article Computer Science, Software Engineering

Sparse Ellipsometry: Portable Acquisition of Polarimetric SVBRDF and Shape with Unstructured Flash Photography

Inseung Hwang, Daniel S. Jeon, Adolfo Munoz, Diego Gutierrez, Xin Tong, Min H. Kim

Summary: Ellipsometry techniques are used to measure the polarization information of materials, but traditional methods are time-consuming and require cumbersome devices. This paper presents a sparse ellipsometry method that can capture both polarimetric reflectance information and the 3D shape of objects using a portable device. The results are in strong agreement with a ground-truth dataset of polarimetric BRDFs of real-world objects.

ACM TRANSACTIONS ON GRAPHICS (2022)

Article Computer Science, Software Engineering

ComplexGen: CAD Reconstruction by B-Rep Chain Complex Generation

Haoxiang Guo, Shilin Liu, Hao Pan, Yang Liu, Xin Tong, Baining Guo

Summary: This study views the reconstruction of CAD models as the detection of geometric primitives and their correspondence, and proposes a novel neural network framework for more complete and regularized reconstructions. By solving a global optimization and applying geometric refinements, it achieves more accurate and complete CAD B-Rep models.

ACM TRANSACTIONS ON GRAPHICS (2022)

Article Computer Science, Software Engineering

Dual Octree Graph Networks for Learning Adaptive Volumetric Shape Representations

Peng-Shuai Wang, Yang Liu, Xin Tong

Summary: This paper presents an adaptive deep representation of volumetric fields of 3D shapes and an efficient approach to learn this representation for high-quality 3D shape reconstruction and auto-encoding. The method encodes the volumetric field with an adaptive feature volume organized by an octree and applies a compact multilayer perceptron network for mapping the features to the field value. The approach effectively encodes shape details, enables fast 3D shape reconstruction, and exhibits good generality.

ACM TRANSACTIONS ON GRAPHICS (2022)
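
The summary above mentions mapping octree-organized features to field values with a compact MLP. Below is a minimal, hypothetical NumPy sketch of that final mapping step only; the octree construction, graph convolutions, feature interpolation, and all sizes and names are assumptions rather than the paper's implementation.

    import numpy as np

    # Illustrative sizes and randomly initialized weights; all values are assumptions.
    rng = np.random.default_rng(0)
    feature_dim, hidden = 32, 64
    W1, b1 = rng.normal(size=(hidden, feature_dim)) * 0.1, np.zeros(hidden)
    W2, b2 = rng.normal(size=(1, hidden)) * 0.1, np.zeros(1)

    def field_value(interpolated_feature):
        # Map an octree-interpolated feature at a query point to a scalar field value
        # (e.g. occupancy or signed distance) with a single ReLU hidden layer.
        h = np.maximum(W1 @ interpolated_feature + b1, 0.0)
        return float((W2 @ h + b2)[0])

    # In the adaptive setting the feature would be interpolated from the octree nodes
    # surrounding the query point; a random vector stands in for it here.
    print(field_value(rng.normal(size=feature_dim)))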

Article Computer Science, Software Engineering

SDF-StyleGAN: Implicit SDF-Based StyleGAN for 3D Shape Generation

X. Zheng, Y. Liu, P. Wang, X. Tong

Summary: We propose a StyleGAN2-based deep learning approach, SDF-StyleGAN, for 3D shape generation. By extending StyleGAN2 to 3D generation and utilizing the implicit signed distance function as the shape representation, we introduce global and local shape discriminators to improve the geometry and visual quality of the shapes. We use shading-image-based Frechet inception distance scores to evaluate the visual quality and shape distribution of the generated shapes.

COMPUTER GRAPHICS FORUM (2022)

Editorial Material Computer Science, Information Systems

Three-dimensional shape space learning for visual concept construction: challenges and research progress

Xin Tong

FRONTIERS OF INFORMATION TECHNOLOGY & ELECTRONIC ENGINEERING (2022)

Article Computer Science, Artificial Intelligence

Face Restoration via Plug-and-Play 3D Facial Priors

Xiaobin Hu, Wenqi Ren, Jiaolong Yang, Xiaochun Cao, David Wipf, Bjoern Menze, Xin Tong, Hongbin Zha

Summary: This paper proposes an improved face restoration method that embeds 3D morphable face priors into the network in a plug-and-play fashion, enhancing performance on facial restoration tasks. Experimental results demonstrate the superior performance of this method in face super-resolution and deblurring.

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE (2022)

Article Computer Science, Software Engineering

Semi-supervised 3D shape segmentation with multilevel consistency and part substitution

Chun-Yu Sun, Yu-Qi Yang, Hao-Xiang Guo, Peng-Shuai Wang, Xin Tong, Yang Liu, Heung-Yeung Shum

Summary: The lack of fine-grained 3D shape segmentation data is a major challenge for developing learning-based 3D segmentation techniques. In this study, we propose an effective semi-supervised method that learns 3D segmentations using a combination of labeled and unlabeled data. Our approach incorporates a novel multilevel consistency loss to ensure consistent network predictions across different levels of perturbed 3D shapes, and a part substitution scheme to augment labeled data for improved training. Extensive validation on different tasks demonstrates the superior performance of our method compared to existing semi-supervised and unsupervised pre-training approaches.

COMPUTATIONAL VISUAL MEDIA (2023)
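
As a rough sketch of the multilevel consistency idea described in the summary above (the paper's actual loss, perturbation scheme, and part substitution are not reproduced; inputs and names here are assumptions), the snippet below scores the agreement between label predictions on two perturbed copies of a shape, once per point and once after pooling per part.

    import numpy as np

    def consistency_loss(probs_a, probs_b, part_ids):
        # Point-level consistency: predictions on the two perturbed copies should agree.
        point_level = np.mean(np.sum((probs_a - probs_b) ** 2, axis=1))
        # Part-level consistency: pooled per-part predictions should also agree.
        parts = np.unique(part_ids)
        part_level = sum(
            np.sum((probs_a[part_ids == p].mean(axis=0) - probs_b[part_ids == p].mean(axis=0)) ** 2)
            for p in parts
        ) / len(parts)
        return point_level + part_level

    # Dummy label distributions for 100 points over 4 classes, with hypothetical part ids.
    rng = np.random.default_rng(1)
    a, b = rng.dirichlet(np.ones(4), size=100), rng.dirichlet(np.ones(4), size=100)
    print(consistency_loss(a, b, rng.integers(0, 3, size=100)))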

Proceedings Paper Computer Science, Artificial Intelligence

GRAM: Generative Radiance Manifolds for 3D-Aware Image Generation

Yu Deng, Jiaolong Yang, Jianfeng Xiang, Xin Tong

Summary: The study aims to generate 3D-consistent images with controllable camera poses through 3D-aware image generative modeling. A novel approach is proposed that regulates point sampling and radiance field learning on 2D manifolds, addressing the limitations of existing generators in handling fine details and maintaining stable training.

2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) (2022)

Article Computer Science, Artificial Intelligence

SkeletonNet: A Topology-Preserving Solution for Learning Mesh Reconstruction of Object Surfaces From RGB Images

Jiapeng Tang, Xiaoguang Han, Mingkui Tan, Xin Tong, Kui Jia

Summary: This paper focuses on the challenging task of learning 3D object surface reconstructions from RGB images and proposes a method that learns and uses a topology-preserved skeletal shape representation to assist the surface reconstruction. Through experiments, the proposed method is shown to be effective and outperforms existing methods in the surface reconstruction task.

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE (2022)

Proceedings Paper Computer Science, Artificial Intelligence

High-Resolution Optical Flow from 1D Attention and Correlation

Haofei Xu, Jiaolong Yang, Jianfei Cai, Juyong Zhang, Xin Tong

Summary: This paper proposes a new method for high-resolution optical flow estimation inspired by Transformers, using 1D attention and correlation operations to achieve the effect of 2D correspondence modeling while significantly reducing computational complexity. Experimental results demonstrate the effectiveness and superiority of the proposed method in handling high-resolution images.

2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021) (2021)
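
To make the 1D-correlation idea in the summary above concrete, here is a small NumPy sketch that builds a horizontal-only cost volume; it is an assumed illustration (names, normalization, and the omitted vertical and attention passes are not the paper's implementation). Keeping two 1D volumes of size 2*max_disp+1 per pixel is far cheaper than a full local 2D cost volume of size (2*max_disp+1)^2.

    import numpy as np

    def correlation_1d_horizontal(feat1, feat2, max_disp):
        # feat1, feat2: (H, W, C) feature maps of two frames.
        # Returns an (H, W, 2*max_disp+1) cost volume over horizontal displacements only.
        H, W, C = feat1.shape
        cost = np.zeros((H, W, 2 * max_disp + 1), dtype=feat1.dtype)
        for k, d in enumerate(range(-max_disp, max_disp + 1)):
            shifted = np.zeros_like(feat2)
            if d >= 0:
                shifted[:, d:, :] = feat2[:, :W - d, :]
            else:
                shifted[:, :W + d, :] = feat2[:, -d:, :]
            cost[:, :, k] = np.sum(feat1 * shifted, axis=2) / np.sqrt(C)
        return cost

    f1 = np.random.rand(4, 8, 16).astype(np.float32)
    f2 = np.random.rand(4, 8, 16).astype(np.float32)
    print(correlation_1d_horizontal(f1, f2, max_disp=3).shape)  # (4, 8, 7)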

Proceedings Paper Computer Science, Artificial Intelligence

Indoor Scene Generation from a Collection of Semantic-Segmented Depth Images

Ming-Jia Yang, Yu-Xiao Guo, Bin Zhou, Xin Tong

Summary: The method presented in this study utilizes a generative model trained on semantic-segmented depth images to automatically generate 3D indoor scenes, modeling each scene as a 3D semantic volume and learning from 2.5D partial observations. Compared to existing methods, it reduces modeling and acquisition workload, producing improved object shapes and layouts.

2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021) (2021)

Proceedings Paper Computer Science, Artificial Intelligence

Learning High-Fidelity Face Texture Completion without Complete Face Texture

Jongyoo Kim, Jiaolong Yang, Xin Tong

Summary: This study introduces a new method for completing invisible textures in single face images without using any complete textures, achieved through unsupervised learning using a large corpus of face images. The proposed DSD-GAN method utilizes two discriminators in UV map space and image space to learn both facial structures and texture details in a complementary manner, demonstrating the importance of their combination for high-fidelity results. Despite never seeing complete facial appearances, the network is able to generate compelling full textures from single images.

2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021) (2021)
