Article

Visual-Tactile Fusion for Object Recognition

Journal

IEEE Transactions on Automation Science and Engineering

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/TASE.2016.2549552

Keywords

Joint sparse coding; object recognition; tactile perception; visual perception

Funding

  1. National Key Project for Basic Research of China [2013CB329403]
  2. National Natural Science Foundation of China [61327809]
  3. National High-Tech Research and Development Plan [2015AA042306]


The camera provides rich visual information about objects and has become one of the most widely used sensors in the automation community. However, it is often inapplicable when objects cannot be visually distinguished. Tactile sensors, on the other hand, can capture multiple object properties, such as texture, roughness, spatial features, compliance, and friction, and therefore provide another important modality for perception. Nevertheless, effectively combining the visual and tactile modalities remains a challenging problem. In this paper, we develop a visual-tactile fusion framework for object recognition tasks. We use a multivariate-time-series model to represent the tactile sequence and a covariance descriptor to characterize the image. Further, we design a joint group kernel sparse coding (JGKSC) method to tackle the intrinsically weak pairing problem in visual-tactile data samples. Finally, we develop a visual-tactile data set composed of 18 household objects for validation. The experimental results show that considering both visual and tactile inputs is beneficial and that the proposed method provides an effective fusion strategy.

Note to Practitioners: Visual and tactile measurements offer complementary properties that make them particularly suitable for fusion toward robust and accurate object recognition, a necessity for many automation systems. In this paper, we investigate a widely applicable scenario in grasp manipulation. When identifying an object, the manipulator may see it with a camera and touch it with its hand, yielding a pair of test samples: one image sample and one tactile sample. The manipulator then uses this sample pair to identify the object with a classifier constructed from previously collected training samples. However, when collecting training samples, the image samples and the tactile samples may be gathered separately. In other words, the training samples may be unpaired even though the test samples are paired. This paper addresses this practical problem by developing the JGKSC method, which encourages the visual and tactile codes to draw on the same group of dictionary atoms while allowing the specific atoms within that group to differ. Although our focus is on combining visual and tactile information, the described problem setting is common in the automation community, so the algorithm can also handle weak pairings between a variety of other sensors.
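The covariance descriptor mentioned in the abstract is a standard region-representation technique: per-pixel feature vectors are pooled into a single covariance matrix, giving a fixed-size summary regardless of image size. The sketch below is a minimal illustration assuming a simple five-dimensional feature (pixel position, intensity, and gradient magnitudes); the paper's exact feature set may differ.

    import numpy as np

    def covariance_descriptor(image):
        """5x5 covariance descriptor of a grayscale image (illustrative).

        Per-pixel features: (x, y, intensity, |dI/dx|, |dI/dy|).
        The region is summarized by the covariance of these features,
        a compact representation independent of the pixel count.
        """
        img = np.asarray(image, dtype=np.float64)
        h, w = img.shape
        ys, xs = np.mgrid[0:h, 0:w]          # per-pixel coordinates
        gy, gx = np.gradient(img)            # derivatives along y, then x
        feats = np.stack([xs.ravel(), ys.ravel(), img.ravel(),
                          np.abs(gx).ravel(), np.abs(gy).ravel()])
        return np.cov(feats)                 # 5x5 symmetric PSD matrix

Because covariance matrices lie on a manifold of symmetric positive-definite matrices, such descriptors are typically compared with a log-Euclidean or affine-invariant metric rather than the plain Euclidean distance, which is also why kernel-based coding methods pair naturally with them.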
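The group-structured sparsity underlying JGKSC can be illustrated with a plain group-lasso coder. The sketch below is a simplification under stated assumptions, not the authors' algorithm: it is single-modality and non-kernel, solved by proximal gradient descent (ISTA), where the hypothetical `groups` array labels each dictionary atom with its object class so the penalty favors activating atoms from a single class group.

    import numpy as np

    def group_sparse_code(y, D, groups, lam=0.1, n_iter=300):
        """ISTA solver for a group-lasso coding problem (illustrative):

            min_c  0.5 * ||y - D c||^2  +  lam * sum_g ||c_g||_2

        groups[j] gives the class group of dictionary atom j; the
        l2,1 penalty drives the code to concentrate on atoms from one
        group, mirroring in plain form the group-level agreement that
        JGKSC enforces across the visual and tactile modalities.
        """
        c = np.zeros(D.shape[1])
        L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
        for _ in range(n_iter):
            z = c - D.T @ (D @ c - y) / L    # gradient step on the quadratic term
            for g in np.unique(groups):      # group soft-thresholding (prox step)
                idx = groups == g
                norm = np.linalg.norm(z[idx])
                scale = max(0.0, 1.0 - lam / (L * norm)) if norm > 0 else 0.0
                z[idx] *= scale
            c = z
        return c

    # Example usage with a random dictionary of 6 classes, 10 atoms each.
    rng = np.random.default_rng(0)
    D = rng.standard_normal((32, 60))
    D /= np.linalg.norm(D, axis=0)           # unit-norm atoms
    groups = np.repeat(np.arange(6), 10)
    y = D[:, 3] + 0.01 * rng.standard_normal(32)
    code = group_sparse_code(y, D, groups)

Classification then typically proceeds in the usual sparse-representation style: reconstruct the paired test sample using each class group's atoms in turn and assign the class with the smallest reconstruction residual.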

