Article

ROSEFusion: Random Optimization for Online Dense Reconstruction under Fast Camera Motion

Journal

ACM TRANSACTIONS ON GRAPHICS
Volume 40, Issue 4

Publisher

ASSOC COMPUTING MACHINERY
DOI: 10.1145/3450626.3459676


Funding

  1. National Key Research and Development Program of China [2018AAA0102200]
  2. NSFC [61532003, 62002376]
  3. NUDT Research Grants [ZK19-30]

Abstract

The proposed method tackles fast-motion camera tracking with random optimization, in particular Particle Filter Optimization (PFO). By moving and rescaling a pre-sampled particle swarm template (PST), it efficiently drives tens of thousands of particles to quickly and robustly locate and cover a good local optimum. A fitness function based on depth-model conformance lets the method track camera poses under fast motion while mitigating the effects of motion blur.
Online reconstruction based on RGB-D sequences has thus far been restricted to relatively slow camera motions (< 1 m/s). Under very fast camera motion (e.g., 3 m/s), the reconstruction can easily crumble even for state-of-the-art methods. Fast motion brings two challenges to depth fusion: 1) the high nonlinearity of camera pose optimization due to large inter-frame rotations and 2) the lack of reliably trackable features due to motion blur. We propose to tackle the difficulties of fast-motion camera tracking in the absence of inertial measurements using random optimization, in particular, Particle Filter Optimization (PFO). To surmount the computation-intensive particle sampling and update in standard PFO, we accelerate the randomized search by updating a particle swarm template (PST): a set of particles pre-sampled uniformly within the unit sphere of the 6D camera-pose space. By moving and rescaling the pre-sampled PST under the guidance of swarm intelligence, our method drives tens of thousands of particles to locate and cover a good local optimum extremely fast and robustly. The particles, representing candidate poses, are evaluated with a fitness function based on depth-model conformance. Our method is therefore depth-only and correspondence-free, which mitigates the motion-blur impediment, since (ToF-based) depth is often resilient to motion blur. Thanks to the efficient template-based particle-set evolution and the effective fitness function, our method attains good-quality pose tracking under fast camera motion (up to 4 m/s) at a real-time frame rate, without loop closure or global pose optimization. Through extensive evaluations on public RGB-D datasets, and especially on a newly proposed benchmark of fast camera motion, we demonstrate the significant advantage of our method over the state of the art.
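The PST idea described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the shrink schedule, and the toy fitness (a stand-in for depth-model conformance) are all assumptions. It only shows the core mechanism of pre-sampling particles once in the 6D unit ball and then repeatedly moving and rescaling that fixed template toward the best candidate pose found so far.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_unit_ball(n, dim=6):
    """Pre-sample n points uniformly inside the unit ball in R^dim
    (3 rotation + 3 translation parameters for a camera pose)."""
    x = rng.normal(size=(n, dim))
    x /= np.linalg.norm(x, axis=1, keepdims=True)   # project to the sphere
    r = rng.random(n) ** (1.0 / dim)                # radii for uniform density
    return x * r[:, None]

def pst_optimize(fitness, n_particles=1000, iters=30, scale=1.0, shrink=0.8):
    """Move and rescale a fixed particle swarm template toward the optimum.
    The template is sampled once and reused, avoiding per-iteration resampling."""
    pst = sample_unit_ball(n_particles)
    center = np.zeros(6)                            # current pose estimate
    best, best_fit = center, fitness(center)
    for _ in range(iters):
        candidates = center + scale * pst           # translate + rescale template
        fits = np.apply_along_axis(fitness, 1, candidates)
        i = np.argmax(fits)
        if fits[i] > best_fit:
            best, best_fit = candidates[i], fits[i]
        center = best                               # re-center on the best pose
        scale *= shrink                             # contract the search region
    return best, best_fit

# Toy fitness: negated distance to a hypothetical "true" 6D pose; the paper
# instead scores each candidate pose by depth-model conformance.
target = np.array([0.3, -0.2, 0.1, 0.05, -0.4, 0.2])
pose, fit = pst_optimize(lambda p: -np.linalg.norm(p - target))
```

In the paper the template additionally follows swarm-intelligence guidance and an anisotropic rescaling; the greedy re-center-and-shrink loop above is a deliberate simplification of that scheme.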

