4.6 Article

Object-difference drived graph convolutional networks for visual question answering

Journal

MULTIMEDIA TOOLS AND APPLICATIONS
Volume 80, Issue 11, Pages 16247-16265

Publisher

SPRINGER
DOI: 10.1007/s11042-020-08790-0

Keywords

Visual question answering; Graph convolutional networks; Object-difference

Funding

  1. National Key Research and Development Program of China [2016QY03D0505]
  2. National Natural Science Foundation of China [U19A2057]


This work addresses the VQA task by proposing an object-difference based graph learner, combined with a soft-attention mechanism and Graph Convolutional Networks. Experimental results show that the model outperforms baseline methods on the VQA 2.0 dataset.

Visual Question Answering (VQA), an important task for evaluating the cross-modal understanding capability of an Artificial Intelligence model, has become a prominent research topic in both the computer vision and natural language processing communities. Recently, graph-based models have received growing interest in VQA for their potential to model the relationships between objects and for their strong interpretability. However, these approaches mainly define the relationship between objects through their semantic similarity, largely ignoring that the differences between objects can provide more information for establishing relations between nodes in the graph. To exploit this, we propose an object-difference based graph learner, which learns question-adaptive semantic relations by computing inter-object differences under the guidance of the question. With the learned relations, the input image can be represented as an object graph encoding structural dependencies between objects. In addition, existing graph-based models conveniently use object features pre-extracted by an object detection model as node features, but these detections suffer from redundancy. To reduce the redundant objects, we introduce a soft-attention mechanism that magnifies question-related objects. Finally, we incorporate the object-difference based graph learner into soft-attention based Graph Convolutional Networks to capture question-specific objects and their interactions for answer prediction. Experimental results on the VQA 2.0 dataset demonstrate that our model performs significantly better than baseline methods.
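A minimal PyTorch sketch of the pipeline described in the abstract: pairwise object differences fused with the question embedding to learn a question-adaptive adjacency matrix, soft attention to downweight redundant objects, and one graph-convolution step before answer classification. All module names, dimensions, and the candidate-answer count (3129 is a common choice for VQA 2.0) are illustrative assumptions rather than the authors' published implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F


class ObjectDifferenceGraphLearner(nn.Module):
    """Learns a question-adaptive adjacency matrix from pairwise object differences."""

    def __init__(self, obj_dim: int, q_dim: int, hidden_dim: int = 512):
        super().__init__()
        self.diff_proj = nn.Linear(obj_dim, hidden_dim)  # projects object-difference vectors
        self.q_proj = nn.Linear(q_dim, hidden_dim)       # projects the question embedding
        self.score = nn.Linear(hidden_dim, 1)            # scalar edge score per object pair

    def forward(self, objs: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
        # objs: (B, N, obj_dim) detected object features; q: (B, q_dim) question embedding
        diff = objs.unsqueeze(2) - objs.unsqueeze(1)                      # (B, N, N, obj_dim)
        fused = torch.tanh(self.diff_proj(diff) + self.q_proj(q)[:, None, None, :])
        logits = self.score(fused).squeeze(-1)                            # (B, N, N)
        return F.softmax(logits, dim=-1)                                  # row-normalized adjacency


class SoftAttentionGCN(nn.Module):
    """Question-guided soft attention over objects followed by one graph convolution."""

    def __init__(self, obj_dim: int, q_dim: int, hidden_dim: int = 512, num_answers: int = 3129):
        super().__init__()
        self.graph_learner = ObjectDifferenceGraphLearner(obj_dim, q_dim, hidden_dim)
        self.att = nn.Linear(obj_dim + q_dim, 1)          # soft attention over detected objects
        self.gcn = nn.Linear(obj_dim, hidden_dim)         # one graph-convolution layer: A X W
        self.classifier = nn.Linear(hidden_dim + q_dim, num_answers)

    def forward(self, objs: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
        B, N, _ = objs.shape
        # Soft attention magnifies question-related objects and suppresses redundant ones.
        att_in = torch.cat([objs, q.unsqueeze(1).expand(B, N, -1)], dim=-1)
        att = torch.softmax(self.att(att_in).squeeze(-1), dim=-1)         # (B, N)
        objs = objs * att.unsqueeze(-1)
        # Build the question-adaptive object graph and propagate features over it.
        adj = self.graph_learner(objs, q)                                 # (B, N, N)
        h = F.relu(self.gcn(torch.bmm(adj, objs)))                        # (B, N, hidden_dim)
        pooled = h.sum(dim=1)                                             # graph-level image vector
        return self.classifier(torch.cat([pooled, q], dim=-1))           # answer logits

For example, with 36 detected objects of dimension 2048 and a 1024-dimensional question embedding, SoftAttentionGCN(2048, 1024)(torch.randn(8, 36, 2048), torch.randn(8, 1024)) returns an (8, 3129) tensor of answer scores.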

