Journal
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
Volume 32, Issue 6, Pages 2676-2690
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TNNLS.2020.3007534
Keywords
Object detection; Data models; Saliency detection; Feature extraction; Object oriented modeling; Computational modeling; Optical imaging; Cross attention; inter and intraframe saliency; salient object; video saliency
Funding
- National Key Research and Development Program of China [2018YFB1003800, 2018YFB1003805]
- National Natural Science Foundation of China [61972112, 61832004]
- Shenzhen Science and Technology Program [JCYJ20170413105929681, JCYJ20170811161545863]
This article introduces CASNet, a novel cross-attention-based encoder-decoder model for video salient object detection that incorporates self- and cross-attention modules to improve accuracy and consistency. Extensive experimental results demonstrate that CASNet surpasses existing image- and video-based methods on multiple datasets.
Recent works on video salient object detection have demonstrated that directly transferring the generalization ability of image-based models to video data without modeling spatial-temporal information remains nontrivial and challenging. Considering both intraframe accuracy and interframe consistency of saliency detection, this article presents a novel cross-attention-based encoder-decoder model under the Siamese framework (CASNet) for video salient object detection. A baseline encoder-decoder model trained with the Lovász softmax loss function is adopted as a backbone network to guarantee the accuracy of intraframe salient object detection. Self- and cross-attention modules are incorporated into the model to preserve saliency correlation and improve interframe saliency detection consistency. Extensive experimental results obtained by ablation analysis and cross-dataset validation demonstrate the effectiveness of the proposed method. Quantitative results indicate that CASNet outperforms 19 state-of-the-art image- and video-based methods on six benchmark datasets.
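The abstract does not give the module's equations, but the core idea of cross-attention between two frames of a video pair can be illustrated with generic scaled dot-product attention: queries come from one frame's features while keys and values come from the other, so each position aggregates saliency context from the sibling frame. The sketch below is a minimal, illustrative assumption (names, shapes, and random projections are not the paper's implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(feat_a, feat_b, w_q, w_k, w_v):
    """Attend from frame A's features to frame B's features.

    feat_a, feat_b: (N, d) flattened spatial features of two frames.
    w_q, w_k, w_v:  (d, d_k) projection matrices (random here, learned in practice).
    Returns: (N, d_k) features for frame A enriched with frame B's context.
    """
    q = feat_a @ w_q                                 # queries from frame A
    k = feat_b @ w_k                                 # keys from frame B
    v = feat_b @ w_v                                 # values from frame B
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))   # (N, N) cross-frame affinity
    return attn @ v                                  # aggregate frame B's context

rng = np.random.default_rng(0)
N, d, d_k = 16, 32, 8
feat_a = rng.normal(size=(N, d))
feat_b = rng.normal(size=(N, d))
w_q, w_k, w_v = (rng.normal(size=(d, d_k)) for _ in range(3))
out = cross_attention(feat_a, feat_b, w_q, w_k, w_v)  # shape (16, 8)
```

Self-attention is the special case where `feat_a` and `feat_b` are the same frame; running both directions over a frame pair (A→B and B→A) is how a Siamese setup can encourage interframe consistency.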