4.6 Article

Weakly Labelled AudioSet Tagging With Attention Neural Networks

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TASLP.2019.2930913

Keywords

Audio tagging; AudioSet; attention neural network; weakly labelled data; multiple instance learning

Funding

  1. EPSRC [EP/N014111/1] (Funding Source: UKRI)
  2. China Scholarship Council [201406150082]
  3. EPSRC Doctoral Training Partnership [1976218, EP/N509772/1]

Abstract

Audio tagging is the task of predicting the presence or absence of sound classes within an audio clip. Previous work in audio tagging focused on relatively small datasets limited to recognizing a small number of sound classes. We investigate audio tagging on AudioSet, which is a dataset consisting of over 2 million audio clips and 527 classes. AudioSet is weakly labelled, in that only the presence or absence of sound classes is known for each clip, whereas the onset and offset times are unknown. To address the weakly labelled audio tagging problem, we propose attention neural networks as a way to attend to the most salient parts of an audio clip. We establish a connection between attention neural networks and multiple instance learning (MIL) methods, and propose decision-level and feature-level attention neural networks for audio tagging. We investigate attention neural networks modelled by different functions, depths, and widths. Experiments on AudioSet show that the feature-level attention neural network achieves a state-of-the-art mean average precision of 0.369, outperforming the best MIL method of 0.317 and Google's deep neural network baseline of 0.314. In addition, we discover that the audio tagging performance on AudioSet embedding features has only a weak correlation with the number of training samples and the label quality of each sound class.
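The abstract describes decision-level attention as a pooling strategy that weights per-segment predictions when aggregating them into a clip-level prediction. Below is a minimal PyTorch sketch of one way such decision-level attention pooling over segment embeddings could look; the class name, layer shapes, and softmax normalisation over segments are illustrative assumptions, not the authors' exact architecture.

import torch
import torch.nn as nn

class DecisionLevelAttentionPooling(nn.Module):
    """Sketch of decision-level attention pooling (hypothetical layout).

    Input:  x of shape (batch, segments, feature_dim), e.g. per-second
            AudioSet embedding frames for one clip.
    Output: clip-level class probabilities of shape (batch, num_classes).
    """

    def __init__(self, feature_dim: int, num_classes: int):
        super().__init__()
        # Per-segment classifier: probability of each class in each segment.
        self.classifier = nn.Linear(feature_dim, num_classes)
        # Per-segment attention head: how much each segment contributes
        # to the clip-level decision for each class.
        self.attention = nn.Linear(feature_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        seg_prob = torch.sigmoid(self.classifier(x))      # (B, T, C)
        att = torch.softmax(self.attention(x), dim=1)      # normalise over segments
        clip_prob = torch.sum(att * seg_prob, dim=1)        # weighted average -> (B, C)
        return clip_prob

# Example usage: 10 embedding frames of dimension 128, 527 AudioSet classes.
model = DecisionLevelAttentionPooling(feature_dim=128, num_classes=527)
clip_probs = model(torch.randn(4, 10, 128))  # shape (4, 527)

Feature-level attention, by contrast, would apply the attention weights to the segment embeddings themselves before a final classifier; the sketch above only illustrates the decision-level variant.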
