Article

From Deterministic to Generative: Multimodal Stochastic RNNs for Video Captioning

Journal

IEEE Transactions on Neural Networks and Learning Systems
Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/TNNLS.2018.2851077

Keywords

Recurrent neural network (RNN); uncertainty; video captioning

Funding

  1. Fundamental Research Funds for the Central Universities [ZYGX2014J063, ZYGX2014Z007]
  2. National Natural Science Foundation of China [61772116, 61502080, 61632007, 61602049, 61761130079]
  3. National Key Research and Development Program of China [2018YFB1107400]
  4. 111 Project [B17008]

Video captioning is, in essence, a complex natural process affected by various uncertainties stemming from video content, subjective judgment, and other factors. In this paper, we build on recent progress in encoder-decoder frameworks for video captioning and address what we find to be a critical deficiency of existing methods: most decoders propagate deterministic hidden states, and such complex uncertainty cannot be modeled efficiently by deterministic models. We propose a generative approach, referred to as the multimodal stochastic recurrent neural network (MS-RNN), which models the uncertainty observed in the data using latent stochastic variables. MS-RNN can therefore improve the performance of video captioning and generate multiple sentences describing a video under different random factors. Specifically, a multimodal long short-term memory (LSTM) is first proposed to interact with both visual and textual features and capture a high-level representation. A backward stochastic LSTM is then proposed to support uncertainty propagation by introducing latent variables. Experimental results on two challenging data sets, Microsoft Video Description (MSVD) and Microsoft Research Video-to-Text (MSR-VTT), show that our proposed MS-RNN approach outperforms state-of-the-art video captioning methods.
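The abstract's key idea is that the decoder propagates a latent stochastic variable rather than a purely deterministic hidden state. A minimal NumPy sketch of one such stochastic recurrent step is given below; all names, weight shapes, and the simple tanh recurrence are illustrative assumptions, not the paper's actual LSTM formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_step(h, x, params, rng):
    """One step of a (hypothetical) stochastic recurrent cell.

    A latent variable z is sampled from a Gaussian whose mean and
    log-variance are inferred from the previous hidden state, then fed
    back into the recurrence (VAE-style reparameterization), so the
    hidden state carries uncertainty instead of being deterministic.
    """
    W_mu, W_lv, W_h = params
    mu = h @ W_mu                          # latent mean
    logvar = h @ W_lv                      # latent log-variance
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * logvar) * eps    # reparameterization trick
    # Simple tanh recurrence over [x, z]; stands in for the LSTM gates.
    h_next = np.tanh(np.concatenate([x, z]) @ W_h)
    return h_next, z

d_h, d_x, d_z = 8, 4, 3                    # illustrative dimensions
params = (
    rng.standard_normal((d_h, d_z)) * 0.1,        # W_mu
    rng.standard_normal((d_h, d_z)) * 0.1,        # W_lv
    rng.standard_normal((d_x + d_z, d_h)) * 0.1,  # W_h
)
h = np.zeros(d_h)
x = rng.standard_normal(d_x)
h, z = stochastic_step(h, x, params, rng)
```

Because a fresh `z` is drawn at every step, repeated decoding passes over the same video features yield different hidden-state trajectories, which is how a model of this kind can emit multiple distinct captions for one video.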
