Article

Multilevel Attention Residual Neural Network for Multimodal Online Social Network Rumor Detection

Journal

FRONTIERS IN PHYSICS
Volume 9

Publisher

FRONTIERS MEDIA SA
DOI: 10.3389/fphy.2021.711221

Keywords

online social networks; rumor detection; neural networks; multimodal fusion; attention residual network

Funding

  1. National Key Research and Development Program of China [2017YFB0803001]
  2. National Natural Science Foundation of China [61572459]

Abstract

This article proposes a multimodal online social network rumor detection model based on a multilevel attention residual neural network (MARN), which enhances the image representation through cross-attention and self-attention residual mechanisms while retaining the unique attributes of the original modal features. Experiments on the Weibo dataset show that MARN outperforms state-of-the-art models in accuracy and F1 score.
In recent years, with the rapid rise of social networks such as Weibo and Twitter, multimodal social network rumors have also spread widely. Unlike traditional unimodal rumor detection, the main difficulty of multimodal rumor detection lies in exploiting the complementarity of different modal features without introducing noise. In this article, we propose a multimodal online social network rumor detection model based on a multilevel attention residual neural network (MARN). First, text and image features are extracted by BERT and ResNet-18, respectively, and a cross-attention residual mechanism is used to enhance the image representation with the text vector. Second, the enhanced image vector and the text vector are concatenated and fused by a self-attention residual mechanism. Finally, the fused image-text vector is classified into two categories (rumor or non-rumor). The attention mechanisms effectively enhance the image representation and further improve the fusion of image and text, while the residual connections retain the unique attributes of each original modal feature. To assess the performance of MARN, we conduct experiments on the Weibo dataset, and the results show that MARN outperforms state-of-the-art models in terms of accuracy and F1 score.
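
To make the pipeline described above concrete, the following is a minimal PyTorch sketch of the three steps named in the abstract: cross-attention residual enhancement of image features by text, concatenation, and self-attention residual fusion followed by binary classification. The class names, projection layers, embedding dimensions, and attention head counts are illustrative assumptions rather than the authors' implementation, and the pretrained BERT and ResNet-18 encoders are represented only by placeholder projection layers.

```python
# Minimal sketch of the MARN pipeline (assumed dimensions and layer choices).
import torch
import torch.nn as nn


class CrossAttentionResidual(nn.Module):
    """Enhance image features with text features via cross-attention,
    then add the original image features back (residual connection)."""

    def __init__(self, dim: int = 768, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, image_feats, text_feats):
        # Query: image tokens; Key/Value: text tokens.
        attended, _ = self.attn(image_feats, text_feats, text_feats)
        # Residual: keep the original image representation.
        return self.norm(image_feats + attended)


class SelfAttentionResidual(nn.Module):
    """Fuse the concatenated image-text sequence with self-attention
    plus a residual connection."""

    def __init__(self, dim: int = 768, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, fused_feats):
        attended, _ = self.attn(fused_feats, fused_feats, fused_feats)
        return self.norm(fused_feats + attended)


class MARNSketch(nn.Module):
    """Text encoder + image encoder -> cross-attention residual ->
    concatenation -> self-attention residual -> binary classifier."""

    def __init__(self, dim: int = 768, num_classes: int = 2):
        super().__init__()
        # Placeholders for the pretrained encoders, projected to a shared dim:
        self.text_proj = nn.Linear(768, dim)   # e.g. BERT token embeddings (768-d)
        self.image_proj = nn.Linear(512, dim)  # e.g. ResNet-18 feature maps (512-d)
        self.cross_attn = CrossAttentionResidual(dim)
        self.self_attn = SelfAttentionResidual(dim)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, text_feats, image_feats):
        t = self.text_proj(text_feats)        # (batch, text_len, dim)
        v = self.image_proj(image_feats)      # (batch, img_len, dim)
        v_enhanced = self.cross_attn(v, t)    # text-guided image features
        fused = torch.cat([t, v_enhanced], dim=1)
        fused = self.self_attn(fused)
        # Pool over the fused sequence and classify as rumor / non-rumor.
        return self.classifier(fused.mean(dim=1))
```

The key design choice the abstract emphasizes is visible in both blocks: each attention output is added back onto the original features, so the unique attributes of each modality are preserved alongside the cross-modal information.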
