Article

Exploring EEG Features in Cross-Subject Emotion Recognition

Journal

FRONTIERS IN NEUROSCIENCE
Volume 12, Issue -, Pages -

Publisher

FRONTIERS MEDIA SA
DOI: 10.3389/fnins.2018.00162

Keywords

EEG; emotion recognition; feature engineering; DEAP dataset; SEED dataset

Funding

  1. Chinese National Program on Key Basic Research Project (973 Program) [2014CB744604]
  2. Natural Science Foundation of China [U1636203, 61772363]
  3. European Union [721321]

Abstract

Recognizing cross-subject emotions from brain imaging data, e.g., EEG, has always been difficult due to the poor generalizability of features across subjects. Thus, systematically exploring the ability of different EEG features to identify emotional information across subjects is crucial. Prior related work has explored this question based only on one or two kinds of features, and different findings and conclusions have been presented. In this work, we aim at a more comprehensive investigation of this question with a wider range of feature types, covering 18 kinds of linear and non-linear EEG features. The effectiveness of these features was examined on two publicly accessible datasets, namely, the dataset for emotion analysis using physiological signals (DEAP) and the SJTU emotion EEG dataset (SEED). We adopted the support vector machine (SVM) approach and the leave-one-subject-out validation strategy to evaluate recognition performance. Using automatic feature selection methods, the highest mean recognition accuracies reached were 59.06% (AUC = 0.605) on the DEAP dataset and 83.33% (AUC = 0.904) on the SEED dataset. Furthermore, using manual feature selection on the SEED dataset, we explored the importance of different EEG features in cross-subject emotion recognition from multiple perspectives, including different channels, brain regions, rhythms, and feature types. For example, we found that the Hjorth mobility parameter in the beta rhythm achieved the best mean recognition accuracy among the examined features. Through a pilot correlation analysis, we further examined the highly correlated features to better understand what information in those features allows cross-subject emotions to be differentiated. Several notable observations emerged. The results validate the feasibility of identifying robust EEG features for cross-subject emotion recognition.
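The abstract describes the evaluation protocol only at a high level: band-specific features such as Hjorth mobility in the beta rhythm, an SVM classifier, and leave-one-subject-out cross-validation. The sketch below illustrates that protocol under stated assumptions; it is not the authors' code. It assumes SciPy and scikit-learn, uses synthetic arrays in place of the DEAP/SEED recordings, and the helper names (beta_bandpass, hjorth_mobility, extract_features) are hypothetical.

```python
# Illustrative sketch (not the authors' implementation): leave-one-subject-out
# SVM evaluation on Hjorth-mobility features extracted from the beta band.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score


def beta_bandpass(x, fs, low=13.0, high=30.0, order=4):
    """Zero-phase Butterworth band-pass filter isolating the beta rhythm."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=-1)


def hjorth_mobility(x):
    """Hjorth mobility: std of the first difference divided by std of the signal."""
    dx = np.diff(x, axis=-1)
    return np.std(dx, axis=-1) / np.std(x, axis=-1)


def extract_features(trials, fs):
    """trials: (n_trials, n_channels, n_samples) -> (n_trials, n_channels) features."""
    return hjorth_mobility(beta_bandpass(trials, fs))


# Hypothetical placeholder data; replace with real DEAP/SEED epochs and labels.
rng = np.random.default_rng(0)
n_subjects, trials_per_subject, n_channels, fs, n_samples = 15, 20, 62, 200, 2000
trials = rng.standard_normal((n_subjects * trials_per_subject, n_channels, n_samples))
labels = rng.integers(0, 3, size=n_subjects * trials_per_subject)   # e.g. 3 emotion classes
groups = np.repeat(np.arange(n_subjects), trials_per_subject)       # subject index per trial

X = extract_features(trials, fs)
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))

# Leave-one-subject-out: each fold trains on all subjects but one, tests on the held-out one.
scores = cross_val_score(clf, X, labels, groups=groups, cv=LeaveOneGroupOut())
print(f"Mean accuracy across held-out subjects: {scores.mean():.3f}")
```

Because every fold tests on a subject never seen during training, the averaged score measures cross-subject rather than within-subject performance, which is the setting the reported 59.06% (DEAP) and 83.33% (SEED) accuracies refer to.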



Recommended

Article Computer Science, Theory & Methods

A quantum-inspired multimodal sentiment analysis framework

Yazhou Zhang, Dawei Song, Peng Zhang, Panpan Wang, Jingfei Li, Xiang Li, Benyou Wang

THEORETICAL COMPUTER SCIENCE (2018)

Article Computer Science, Artificial Intelligence

A quantum-inspired sentiment representation model for twitter sentiment analysis

Yazhou Zhang, Dawei Song, Peng Zhang, Xiang Li, Panpan Wang

APPLIED INTELLIGENCE (2019)

Article Neurosciences

Latent Factor Decoding of Multi-Channel EEG for Emotion Recognition Through Autoencoder-Like Neural Networks

Xiang Li, Zhigang Zhao, Dawei Song, Yazhou Zhang, Jingshan Pan, Lu Wu, Jidong Huo, Chunyang Mu, Di Wang

FRONTIERS IN NEUROSCIENCE (2020)

Article Computer Science, Artificial Intelligence

A quantum-like multimodal network framework for modeling interaction dynamics in multiparty conversational sentiment analysis

Yazhou Zhang, Dawei Song, Xiang Li, Peng Zhang, Panpan Wang, Lu Rong, Guangliang Yu, Bo Wang

INFORMATION FUSION (2020)

Article Computer Science, Artificial Intelligence

Learning interaction dynamics with an interactive LSTM for conversational sentiment analysis

Yazhou Zhang, Prayag Tiwari, Dawei Song, Xiaoliu Mao, Panpan Wang, Xiang Li, Hari Mohan Pandey

Summary: Conversational sentiment analysis is an emerging and challenging task that aims to discover the affective state and sentimental change of each person in a conversation. Current sentiment analysis approaches are inadequate for this task due to the lack of benchmark datasets and the difficulty of modeling interactions between individuals.

NEURAL NETWORKS (2021)

Article Computer Science, Artificial Intelligence

CFN: A Complex-Valued Fuzzy Network for Sarcasm Detection in Conversations

Yazhou Zhang, Yaochen Liu, Qiuchi Li, Prayag Tiwari, Benyou Wang, Yuhua Li, Hari Mohan Pandey, Peng Zhang, Dawei Song

Summary: Sarcasm detection in conversation is a challenging artificial intelligence task that aims to discover ironically, contemptuously, and metaphorically implied information in daily conversations. A complex-valued fuzzy network leveraging quantum theory and fuzzy logic is proposed to address the intrinsic vagueness and uncertainty of human language in emotional expression and understanding, outperforming strong baselines in extensive experiments.

IEEE TRANSACTIONS ON FUZZY SYSTEMS (2021)

Article Computer Science, Information Systems

Stance-level Sarcasm Detection with BERT and Stance-centered Graph Attention Networks

Yazhou Zhang, Dan Ma, Prayag Tiwari, Chen Zhang, Mehedi Masud, Mohammad Shorfuzzaman, Dawei Song

Summary: Computational Linguistics (CL) associated with the Internet of Multimedia Things (IoMT) faces challenges in real-time speech understanding, deep fake video detection, emotion recognition, and home automation. The emergence of machine translation has greatly expanded CL solutions for natural language processing (NLP) applications. Sarcasm detection, a recent AI and NLP task, aims to uncover sarcastic information in texts generated in the IoMT. However, existing approaches often overlook the stance behind texts, which limits their effectiveness. In this research, we propose a new task called stance-level sarcasm detection (SLSD) and introduce an integral framework using BERT and a novel stance-centered graph attention network (SCGAT). Experimental results demonstrate the effectiveness of the SCGAT framework over state-of-the-art baselines.

ACM TRANSACTIONS ON INTERNET TECHNOLOGY (2023)

Article Computer Science, Information Systems

Affective Interaction: Attentive Representation Learning for Multi-Modal Sentiment Classification

Yazhou Zhang, Prayag Tiwari, Lu Rong, Rui Chen, Nojoom A. Alnajem, M. Shamim Hossain

Summary: The recent development of artificial intelligence applications has led to the creation of a large number of multi-modal records of human communication, which contain latent subjective attitudes and opinions. Sentiment and emotion analysis is of great value in improving affective services. Finding an optimal way to learn people's sentiments and emotional representations has been a challenging problem. In this work, a multi-task representation learning network called KAMT is proposed to address the challenges of multi-modal fused representation and the interaction between sentiment and emotion.

ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS (2022)

Article Computer Science, Artificial Intelligence

A Multitask learning model for multimodal sarcasm, sentiment and emotion recognition in conversations

Yazhou Zhang, Jinglin Wang, Yaochen Liu, Lu Rong, Qian Zheng, Dawei Song, Prayag Tiwari, Jing Qin

Summary: Sarcasm, sentiment, and emotion are closely related in the research of artificial intelligence and affective computing. This paper proposes a multimodal multitask learning model (M2Seq2Seq) to address the challenges of context dependency, multimodal fusion, and multitask interaction. The model incorporates encoder-decoder architecture and attention mechanisms to capture contextual dependency and multimodal interactions. Experimental results demonstrate the effectiveness of M2Seq2Seq over state-of-the-art baselines.

INFORMATION FUSION (2023)

Article Engineering, Civil

A Multimodal Coupled Graph Attention Network for Joint Traffic Event Detection and Sentiment Classification

Yazhou Zhang, Prayag Tiwari, Qian Zheng, Abdulmotaleb El Saddik, M. Shamim Hossain

Summary: Traffic events are a major cause of traffic accidents, and detecting these events poses a challenge in traffic management and intelligent transportation systems (ITSs). This paper proposes a multimodal coupled graph attention network (MCGAT) that extracts valuable information from various traffic data sources and represents it in a graphical structure. The proposed model outperforms state-of-the-art baselines in terms of F1 and accuracy, with significant improvements.

IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS (2023)

Proceedings Paper Computer Science, Theory & Methods

QPIN: A Quantum-inspired Preference Interactive Network for E-commerce Recommendation

Panpan Wang, Zhao Li, Yazhou Zhang, Yuexian Hou, Liangzhu Ge

PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT (CIKM '19) (2019)

Proceedings Paper Computer Science, Information Systems

Unsupervised Sentiment Analysis of Twitter Posts Using Density Matrix Representation

Yazhou Zhang, Dawei Song, Xiang Li, Peng Zhang

ADVANCES IN INFORMATION RETRIEVAL (ECIR 2018) (2018)
