Article

SE-ResUNet: A Novel Robotic Grasp Detection Method

Journal

IEEE ROBOTICS AND AUTOMATION LETTERS
Volume 7, Issue 2, Pages 5238-5245

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/LRA.2022.3145064

Keywords

Robot; grasp detection; convolutional neural network; attentional mechanism

Funding

  1. National Natural Science Foundation of China [61803033, 62173035, 61836001]
  2. ChinaUnicom Innovation Ecological Cooperation Plan


In this letter, a novel grasp detection neural network, Squeeze-and-Excitation ResUNet (SE-ResUNet), is developed, which integrates residual blocks with channel attention. The proposed framework not only generates grasp poses from RGB-D images, but also predicts a quality score for each grasp pose. The experimental results show accuracies of 98.2% on the Cornell dataset and 95.7% on the Jacquard dataset. Processing of RGB-D images reaches 30 fps, demonstrating good real-time performance. In the comparison study, the proposed method also obtains better performance, improving both accuracy and time efficiency. Finally, the method is demonstrated by physical grasping on the Baxter robot, where the average grasp success rate is 96.3%.
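The channel attention that SE-ResUNet integrates into its residual blocks follows the generic squeeze-and-excitation pattern: global average pooling produces one statistic per channel, a small two-layer bottleneck maps those statistics to per-channel gates in (0, 1), and the gates rescale the feature map. The sketch below illustrates only this generic mechanism in NumPy; the function and weight names are hypothetical and it is not the letter's actual network.

```python
import numpy as np

def se_block(x, w1, b1, w2, b2):
    """Squeeze-and-Excitation channel attention over a (C, H, W) feature map.

    Squeeze: global average pooling per channel.
    Excitation: reduction FC + ReLU, then expansion FC + sigmoid,
    giving per-channel gates that rescale the input.
    """
    s = x.mean(axis=(1, 2))                    # squeeze: (C,)
    h = np.maximum(0.0, w1 @ s + b1)           # reduction FC + ReLU: (C//r,)
    e = 1.0 / (1.0 + np.exp(-(w2 @ h + b2)))   # expansion FC + sigmoid: (C,)
    return x * e[:, None, None]                # channel-wise rescaling

# Toy example: 4 channels, reduction ratio 2, random weights.
rng = np.random.default_rng(0)
C, r = 4, 2
x = rng.standard_normal((C, 8, 8))
w1, b1 = rng.standard_normal((C // r, C)), np.zeros(C // r)
w2, b2 = rng.standard_normal((C, C // r)), np.zeros(C)
y = se_block(x, w1, b1, w2, b2)
```

Because each gate lies strictly between 0 and 1, the block can only attenuate channels; in a residual block the skip connection preserves the ungated signal, which is the combination the letter exploits.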



Recommended

Article Automation & Control Systems

EGNet: Efficient Robotic Grasp Detection Network

Sheng Yu, Di-Hua Zhai, Yuanqing Xia

Summary: In this article, a novel grasp detection network called efficient grasp detection network (EGNet) is proposed to address the challenges of grasping in stacked scenes. It combines object detection, grasp detection, and manipulation relationship detection tasks. The EGNet adopts the EfficientDet for object detection and modifies some hyperparameters. It introduces a novel grasp detection module that utilizes the feature map from bidirectional feature pyramid network (BiFPN) to output the grasp position and quality score. The EGNet also incorporates manipulation relation analysis and achieves high detection accuracy in different datasets and practical scenarios.

IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS (2023)

Article Automation & Control Systems

CGNet: Robotic Grasp Detection in Heavily Cluttered Scenes

Sheng Yu, Di-Hua Zhai, Yuanqing Xia

Summary: This article proposes a novel robotic grasp detection method, CGNet, which can accurately detect grasp rectangles in cluttered scenes. By introducing an attention module, a grasp region proposal module, and a position focal loss, the detection accuracy is improved. The experiments demonstrate the effectiveness of the proposed method on various datasets and in the real world.

IEEE-ASME TRANSACTIONS ON MECHATRONICS (2023)

Article Automation & Control Systems

SKGNet: Robotic Grasp Detection With Selective Kernel Convolution

Sheng Yu, Di-Hua Zhai, Yuanqing Xia

Summary: This paper proposes a new robotic grasp detection network called SKGNet, which achieves high accuracy and real-time performance by integrating an attention mechanism and multi-scale fused features. Trained and tested on public datasets, SKGNet achieves superior accuracy and high detection speed compared to existing methods. Real-world experiments demonstrate the generalization performance and effectiveness of SKGNet.

IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING (2022)

Article Automation & Control Systems

Robotic Grasp Detection Based on Category-Level Object Pose Estimation With Self-Supervised Learning

Sheng Yu, Di-Hua Zhai, Yuanqing Xia

Summary: The paper proposes a new category-level object pose estimation network, SCNet, which introduces the prior point cloud and RGB images to estimate and refine the pose of target objects. Experimental results show that the proposed method outperforms other methods in object pose estimation and achieves good performance in robotic grasp tasks.

IEEE-ASME TRANSACTIONS ON MECHATRONICS (2023)

Article Automation & Control Systems

FANet: Fast and Accurate Robotic Grasp Detection Based on Keypoints

Di-Hua Zhai, Sheng Yu, Yuanqing Xia

Summary: This research proposes a network called FANet based on grasp keypoints, which achieves real-time and accurate robotic grasp detection. It includes a local refinement module, a global feature refinement module, and a grasp keypoint optimization module. Results show that FANet achieves excellent detection performance on multiple datasets and has a high success rate in real-world grasp experiments with a Baxter robot.

IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING (2023)

Article Computer Science, Artificial Intelligence

A Novel Robotic Pushing and Grasping Method Based on Vision Transformer and Convolution

Sheng Yu, Di-Hua Zhai, Yuanqing Xia

Summary: This article proposes a robotic grasping network (GN) that combines pushing and grasping actions to solve the problem of grasping in cluttered scenes. The pushing action is performed by a vision transformer network (PTNet) to predict the object position, while the grasping detection is achieved by a cross dense fusion network (CDFNet) to accurately locate the optimal grasping position. Through simulations and actual robot experiments, the proposed network achieves state-of-the-art performance.

IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (2023)

Proceedings Paper Automation & Control Systems

Object recognition and robot grasping technology based on RGB-D data

Sheng Yu, Di-Hua Zhai, Haocun Wu, Hongda Yang, Yuanqing Xia

PROCEEDINGS OF THE 39TH CHINESE CONTROL CONFERENCE (2020)
