4.6 Article

Speech emotion recognition based on an improved brain emotion learning model

Journal

NEUROCOMPUTING
Volume 309, Pages 145-156

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2018.05.005

Keywords

Speech; Emotion recognition; Brain-inspired; Brain emotion learning; Genetic algorithm

Funding

  1. National Natural Science Foundation of China [61403422, 61273102]
  2. Hubei Provincial Natural Science Foundation of China [2018CFB447, 2015CFA010]
  3. 111 project [B17040]
  4. Wuhan Science and Technology Project [2017010201010133]
  5. Fundamental Research Funds for National University, China University of Geosciences (Wuhan) [1810491T07]

Abstract

Human-robot emotional interaction has developed rapidly in recent years, and speech emotion recognition plays a significant role in it. In this paper, a speech emotion recognition method based on an improved brain emotional learning (BEL) model is proposed, inspired by the emotional processing mechanism of the limbic system in the brain. However, the reinforcement learning rule of the BEL model limits its adaptability and degrades its performance. To address this, a Genetic Algorithm (GA) is employed to update the weights of the BEL model. The method is evaluated on the CASIA Chinese emotion corpus, the SAVEE emotion corpus, and the FAU Aibo dataset, from which MFCC-related features and their first-order delta coefficients are extracted. In addition, the method is evaluated on the INTERSPEECH 2009 standard feature set, using three dimensionality reduction methods: Linear Discriminant Analysis (LDA), Principal Component Analysis (PCA), and PCA+LDA. The experimental results show that the proposed method achieves average recognition accuracies of 90.28% (CASIA), 76.40% (SAVEE), and 71.05% (FAU Aibo) for speaker-dependent (SD) speech emotion recognition, and highest average accuracies of 38.55% (CASIA), 44.18% (SAVEE), and 64.60% (FAU Aibo) for speaker-independent (SI) speech emotion recognition, demonstrating that the proposed method is feasible for speech emotion recognition. (C) 2018 Elsevier B.V. All rights reserved.
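
The core idea of replacing the BEL model's reinforcement learning rule with GA-based weight optimization can be sketched as follows. This is a minimal illustration, assuming a simplified BEL-style classifier (amygdala weights V, orbitofrontal weights W, per-class emotional output E = A - O) and a basic GA with tournament selection, single-point crossover, Gaussian mutation, and elitism; the paper's actual model structure, GA operators, and hyperparameters may differ.

    import numpy as np

    # Hedged sketch: GA-optimized BEL-style classifier. The amygdala weight
    # matrix V and orbitofrontal weight matrix W are encoded in one flat
    # chromosome and evolved to maximize training accuracy. All names and
    # hyperparameters below are illustrative, not the paper's exact choices.

    def bel_scores(X, V, W):
        # Amygdala output A = X @ V, orbitofrontal output O = X @ W;
        # the emotional output per class is E = A - O.
        return X @ V - X @ W

    def decode(chrom, n_feats, n_classes):
        # Split a flat chromosome into the two weight matrices.
        half = n_feats * n_classes
        return (chrom[:half].reshape(n_feats, n_classes),
                chrom[half:].reshape(n_feats, n_classes))

    def fitness(chrom, X, y, n_feats, n_classes):
        # Fitness = classification accuracy on the training data.
        V, W = decode(chrom, n_feats, n_classes)
        preds = bel_scores(X, V, W).argmax(axis=1)
        return (preds == y).mean()

    def ga_train(X, y, n_classes, pop_size=60, generations=200,
                 crossover_rate=0.8, mutation_rate=0.05, seed=0):
        rng = np.random.default_rng(seed)
        n_feats = X.shape[1]
        chrom_len = 2 * n_feats * n_classes
        pop = rng.normal(0.0, 0.1, size=(pop_size, chrom_len))
        for _ in range(generations):
            fit = np.array([fitness(c, X, y, n_feats, n_classes) for c in pop])
            # Tournament selection: the fitter of two random individuals survives.
            a = rng.integers(0, pop_size, pop_size)
            b = rng.integers(0, pop_size, pop_size)
            parents = pop[np.where(fit[a] >= fit[b], a, b)]
            # Single-point crossover on consecutive parent pairs.
            children = parents.copy()
            for i in range(0, pop_size - 1, 2):
                if rng.random() < crossover_rate:
                    cut = rng.integers(1, chrom_len)
                    children[i, cut:] = parents[i + 1, cut:]
                    children[i + 1, cut:] = parents[i, cut:]
            # Gaussian mutation on a small fraction of genes.
            mask = rng.random(children.shape) < mutation_rate
            children[mask] += rng.normal(0.0, 0.1, size=int(mask.sum()))
            # Elitism: carry the best individual over unchanged.
            children[0] = pop[fit.argmax()]
            pop = children
        fit = np.array([fitness(c, X, y, n_feats, n_classes) for c in pop])
        return decode(pop[fit.argmax()], n_feats, n_classes)

In the paper, X would hold MFCC-related features with their first-order delta coefficients, or the INTERSPEECH 2009 feature set after PCA/LDA reduction; for this sketch, any numeric feature matrix with integer class labels y suffices.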
