4.7 Article

Multimodal Affective States Recognition Based on Multiscale CNNs and Biologically Inspired Decision Fusion Model

Journal

IEEE TRANSACTIONS ON AFFECTIVE COMPUTING
Volume 14, Issue 2, Pages 1391-1403

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TAFFC.2021.3093923

Keywords

Physiology; Brain modeling; Feature extraction; Electroencephalography; Biological system modeling; Convolution; Reliability; Multimodal affective states recognition; convolutional neural network; decision fusion model; physiological signals


This article introduces a method for multimodal affective states recognition based on multiple physiological signals. It uses Multiscale Convolutional Neural Networks (Multiscale CNNs) and a biologically inspired decision fusion model to improve recognition accuracy. The fusion model achieves recognition accuracies of 98.52% and 99.89% on the DEAP and AMIGOS datasets, respectively.
There has been encouraging progress in recent years on affective states recognition models based on single-modality signals such as electroencephalogram (EEG) signals or peripheral physiological signals. However, affective states recognition methods based on multimodal physiological signals have not yet been thoroughly explored. Here we propose Multiscale Convolutional Neural Networks (Multiscale CNNs) and a biologically inspired decision fusion model for multimodal affective states recognition. First, the raw signals are pre-processed with baseline signals. Then, the High Scale CNN and Low Scale CNN in the Multiscale CNNs are used to predict the probability of affective states for the EEG signal and for each peripheral physiological signal, respectively. Finally, the fusion model calculates the reliability of each single-modality signal from the Euclidean distance between the various class labels and the classification probability output by the Multiscale CNNs; the decision is made by the more reliable modality while the information from the other modalities is retained. We use this model to classify four affective states from the arousal-valence plane on the DEAP and AMIGOS datasets. The results show that the fusion model significantly improves the accuracy of affective states recognition compared with the results on single-modality signals, and the recognition accuracy of the fusion result reaches 98.52% and 99.89% on the DEAP and AMIGOS datasets, respectively.
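The abstract's reliability-based decision step can be illustrated with a minimal sketch. This is not the authors' published code: the function names, the example probability vectors, and the exact definition of reliability (negative Euclidean distance from a modality's probability vector to the nearest one-hot class label, so that a more confident prediction is more reliable) are assumptions made for illustration.

```python
import math

def reliability(probs):
    """Reliability of one modality's prediction, taken here as the
    negative Euclidean distance from its class-probability vector to
    the nearest one-hot class label (closer = more reliable)."""
    n_classes = len(probs)
    dists = []
    for k in range(n_classes):
        # Distance to the one-hot label for class k.
        d = math.sqrt(sum((1.0 - p if j == k else p) ** 2
                          for j, p in enumerate(probs)))
        dists.append(d)
    return -min(dists)

def fuse(modality_probs):
    """Make the final decision using the most reliable modality."""
    best = max(modality_probs, key=reliability)
    return max(range(len(best)), key=lambda i: best[i])

# Hypothetical example with four affective states: the EEG branch is
# confident about class 2, a peripheral-signal branch is not, so the
# fused decision follows the EEG prediction.
eeg = [0.05, 0.05, 0.85, 0.05]
gsr = [0.30, 0.30, 0.25, 0.15]
print(fuse([eeg, gsr]))  # -> 2
```

A sharply peaked probability vector lies close to a one-hot label, so under this definition the confident modality dominates the decision, matching the abstract's description of choosing the more reliable modality.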

