Article

A neural learning approach for simultaneous object detection and grasp detection in cluttered scenes

Journal

Publisher

FRONTIERS MEDIA SA
DOI: 10.3389/fncom.2023.1110889

Keywords

grasp detection; object detection; RGB-D image; deep neural network; robotic manipulation


Object detection and grasp detection are essential for unmanned systems working in cluttered real-world environments. Detecting a grasp configuration for each object in the scene would enable reasoned manipulation. However, finding the relationships between objects and grasp configurations remains a challenging problem. To this end, we propose a novel neural learning approach, SOGD, that predicts the best grasp configuration for each detected object from an RGB-D image. The cluttered background is first filtered out via a 3D-plane-based approach. Two separate branches then detect objects and grasp candidates, respectively, and the relationship between object proposals and grasp candidates is learned by an additional alignment module. A series of experiments on two public datasets (the Cornell Grasp Dataset and the Jacquard Dataset) demonstrates the superior performance of SOGD against state-of-the-art methods in predicting reasonable grasp configurations from a cluttered scene.
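The abstract mentions filtering the cluttered background with a 3D-plane-based approach before detection. The paper's exact procedure is not given here; a common way to realize this idea is to fit the dominant plane (e.g. the tabletop) in the depth-derived point cloud with RANSAC and discard its inliers. The sketch below is a minimal NumPy illustration under that assumption; the function names (`fit_plane_ransac`, `remove_background`) and all parameters are hypothetical, not from the paper.

```python
import numpy as np

def fit_plane_ransac(points, n_iters=200, dist_thresh=0.01, rng=None):
    """Fit a dominant plane to an (N, 3) point cloud with a simple RANSAC loop.

    Returns ((normal, d), inlier_mask) for the plane n.x + d = 0.
    """
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iters):
        # Sample 3 distinct points and compute the plane through them.
        p0, p1, p2 = points[rng.choice(len(points), size=3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:  # degenerate (near-collinear) sample, skip
            continue
        normal /= norm
        d = -normal @ p0
        # Inliers are points within dist_thresh of the candidate plane.
        inliers = np.abs(points @ normal + d) < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers

def remove_background(points, **kwargs):
    """Drop the dominant plane (e.g. the tabletop) from the cloud,
    keeping only points that could belong to graspable objects."""
    _, inliers = fit_plane_ransac(points, **kwargs)
    return points[~inliers]
```

For example, a cloud of 500 tabletop points at z = 0 plus a small object cluster at z ≈ 0.2 would be reduced to roughly the object cluster after `remove_background`. In practice a library routine such as Open3D's plane segmentation would serve the same purpose.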

