Article

Deep learning for detecting robotic grasps

Journal

INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH
Volume 34, Issue 4-5, Pages 705-724

Publisher

SAGE PUBLICATIONS LTD
DOI: 10.1177/0278364914549607

Keywords

Robotic grasping; deep learning; RGB-D multi-modal data; Baxter; PR2; 3D feature learning

Funding

  1. ARO [W911-NF12-1-0267]
  2. Microsoft
  3. NSF CAREER
  4. Google

Abstract

We consider the problem of detecting robotic grasps in an RGB-D view of a scene containing objects. In this work, we apply a deep learning approach to solve this problem, which avoids time-consuming hand-design of features. This presents two main challenges. First, we need to evaluate a huge number of candidate grasps. To make detection fast and robust, we present a two-step cascaded system with two deep networks, where the top detections from the first are re-evaluated by the second. The first network has fewer features, is faster to run, and can effectively prune out unlikely candidate grasps. The second, with more features, is slower but has to run only on the top few detections. Second, we need to handle multimodal inputs effectively, for which we present a method that applies structured multimodal group regularization to the network weights. We show that our method improves performance on an RGB-D robotic grasping dataset, and can be used to successfully execute grasps on two different robotic platforms.
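The two ideas in the abstract — a fast/slow cascade over candidate grasps, and a regularizer that groups weights by input modality — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the dimensions, the `top_k` cutoff, and the logistic linear scorers standing in for the two deep networks are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each candidate grasp is a feature vector whose
# entries come from several modalities (e.g. depth, colour, surface normals).
N_CANDIDATES, N_FEATURES, N_MODALITIES = 1000, 24, 3
X = rng.standard_normal((N_CANDIDATES, N_FEATURES))

def score(X, w):
    """Logistic graspability score under a linear model
    (a stand-in for a deep network's output)."""
    return 1.0 / (1.0 + np.exp(-X @ w))

# Stage 1: a small, fast model prunes the huge candidate set.
w_small = rng.standard_normal(N_FEATURES) * 0.1
top_k = 100
keep = np.argsort(score(X, w_small))[-top_k:]

# Stage 2: a larger, slower model re-scores only the survivors.
w_large = rng.standard_normal(N_FEATURES) * 0.1
best = keep[np.argmax(score(X[keep], w_large))]

# Multimodal group regularization (sketch): split a unit's weights into
# per-modality groups and penalize the sum of group L2 norms, which
# encourages dropping whole modalities rather than scattered features.
def group_reg(w, n_groups):
    groups = np.split(w, n_groups)
    return sum(np.linalg.norm(g) for g in groups)

penalty = group_reg(w_large, N_MODALITIES)
```

The cascade trades a cheap full pass for an expensive pass over only `top_k` candidates; the group penalty is the structured-sparsity idea behind the paper's multimodal regularization, shown here for a single weight vector.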
