What makes the difference? An empirical comparison of fusion strategies for multimodal language analysis
Published 2020
Keywords
Multimodal human language understanding, Video sentiment analysis, Emotion recognition, Reproducibility in multimodal machine learning
Journal
Information Fusion
Volume 66, Pages 184-197
Publisher
Elsevier BV
Online
2020-09-18
DOI
10.1016/j.inffus.2020.09.005
References
Related references
Note: Only part of the references is listed.
- Dialogue systems with audio context. (2020) Tom Young et al., Neurocomputing.
- Fuzzy commonsense reasoning for multimodal sentiment analysis. (2019) Iti Chaturvedi et al., Pattern Recognition Letters.
- Locally Confined Modality Fusion Network With a Global Perspective for Multimodal Human Affective Computing. (2019) Sijie Mai et al., IEEE Transactions on Multimedia.
- Multimodal Machine Learning: A Survey and Taxonomy. (2018) Tadas Baltrusaitis et al., IEEE Transactions on Pattern Analysis and Machine Intelligence.
- Electroencephalogram Emotion Recognition Based on Empirical Mode Decomposition and Optimal Feature Selection. (2018) Zhen-Tao Liu et al., IEEE Transactions on Cognitive and Developmental Systems.
- Deep Multimodal Learning: A Survey on Recent Advances and Trends. (2017) Dhanesh Ramachandram et al., IEEE Signal Processing Magazine.
- A review of affective computing: From unimodal analysis to multimodal fusion. (2017) Soujanya Poria et al., Information Fusion.
- Multimodal Sentiment Intensity Analysis in Videos: Facial Gestures and Verbal Messages. (2016) Amir Zadeh et al., IEEE Intelligent Systems.
- Affective Computing and Sentiment Analysis. (2016) Erik Cambria, IEEE Intelligent Systems.
- A Review and Meta-Analysis of Multimodal Affect Detection Systems. (2015) Sidney K. D'Mello et al., ACM Computing Surveys.
- Medical image fusion: A survey of the state of the art. (2014) Alex Pappachen James et al., Information Fusion.
- A survey of multi-view machine learning. (2013) Shiliang Sun, Neural Computing & Applications.
- Multimodal fusion for multimedia analysis: a survey. (2010) Pradeep K. Atrey et al., Multimedia Systems.
- Speaker identification on the SCOTUS corpus. (2008) Jiahong Yuan et al., Journal of the Acoustical Society of America.
- IEMOCAP: interactive emotional dyadic motion capture database. (2008) Carlos Busso et al., Language Resources and Evaluation.