Human Emotion Understanding using a multimodal dataset.
☆110 · Jul 27, 2020 · Updated 5 years ago
Alternatives and similar repositories for MULTIMODAL-EMOTION-RECOGNITION
Users interested in MULTIMODAL-EMOTION-RECOGNITION are comparing it to the libraries listed below.
- A multimodal approach on emotion recognition using audio and text. ☆188 · Jun 15, 2020 · Updated 5 years ago
- TensorFlow implementation of "Multimodal Speech Emotion Recognition using Audio and Text," IEEE SLT-18 ☆297 · Jun 17, 2024 · Updated last year
- A real time Multimodal Emotion Recognition web app for text, sound and video inputs ☆1,070 · Apr 29, 2021 · Updated 4 years ago
- A Tensorflow implementation of Speech Emotion Recognition using Audio signals and Text Data ☆12 · May 16, 2022 · Updated 3 years ago
- This repository contains the code for the paper `End-to-End Multimodal Emotion Recognition using Deep Neural Networks`. ☆251 · Jan 22, 2021 · Updated 5 years ago
- MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversation ☆1,009 · Mar 10, 2024 · Updated 2 years ago
- Implementation of the paper "Emotion Identification from raw speech signals using DNNs" ☆14 · Jun 11, 2020 · Updated 5 years ago
- Multimodal (text, acoustic, visual) Sentiment Analysis and Emotion Recognition on CMU-MOSEI dataset. ☆29 · Nov 8, 2020 · Updated 5 years ago
- Multi-modal Emotion detection from IEMOCAP on Speech, Text, Motion-Capture Data using Neural Nets. ☆171 · Dec 13, 2020 · Updated 5 years ago
- The code for our IEEE ACCESS (2020) paper Multimodal Emotion Recognition with Transformer-Based Self Supervised Feature Fusion. ☆123 · Sep 20, 2021 · Updated 4 years ago
- Attention-based multimodal fusion for sentiment analysis ☆367 · Apr 8, 2024 · Updated last year
- 🔆 📝 A reading list focused on Multimodal Emotion Recognition (MER) 👂👄 👀 💬 ☆128 · Oct 6, 2020 · Updated 5 years ago
- Official implementation of the paper "MSAF: Multimodal Split Attention Fusion" ☆81 · Jun 16, 2021 · Updated 4 years ago
- [AAAI 2020] Official implementation of VAANet for Emotion Recognition ☆83 · Oct 3, 2023 · Updated 2 years ago
- MISA: Modality-Invariant and -Specific Representations for Multimodal Sentiment Analysis ☆274 · Mar 14, 2023 · Updated 2 years ago
- ☆14 · Sep 24, 2021 · Updated 4 years ago
- This repository contains the code for our ICASSP paper `Speech Emotion Recognition using Semantic Information` https://arxiv.org/pdf/2103… ☆27 · Mar 18, 2021 · Updated 4 years ago
- My implementation for the paper Context-Aware Emotion Recognition Networks ☆30 · Mar 12, 2022 · Updated 3 years ago
- ☆10 · Jul 24, 2019 · Updated 6 years ago
- Modality-Transferable-MER, a multimodal emotion recognition model with zero-shot and few-shot abilities. ☆66 · Apr 23, 2021 · Updated 4 years ago
- A fine multimodality fusion network :) ☆11 · Aug 9, 2021 · Updated 4 years ago
- ☆11 · Sep 6, 2020 · Updated 5 years ago
- The code repository for the NAACL 2021 paper "Multimodal End-to-End Sparse Model for Emotion Recognition". ☆108 · Feb 9, 2023 · Updated 3 years ago
- Code for the paper "Learning Modality-Specific Representations with Self-Supervised Multi-Task Learning for Multimodal Sentiment Analysis" ☆240 · Jun 25, 2022 · Updated 3 years ago
- Multimodal Affective Analysis Using Hierarchical Attention Strategy ☆12 · Dec 7, 2018 · Updated 7 years ago
- Multi-task Learning for Multi-modal Emotion Recognition and Sentiment Analysis ☆13 · Mar 17, 2021 · Updated 4 years ago
- ☆11 · Sep 29, 2020 · Updated 5 years ago
- This repository contains various models targeting multimodal representation learning, multimodal fusion for downstream tasks such as mul… ☆906 · Mar 15, 2023 · Updated 2 years ago
- [CVPR 2023] Code for "Learning Emotion Representations from Verbal and Nonverbal Communication" ☆53 · Feb 19, 2025 · Updated last year
- [ACL'19] [PyTorch] Multimodal Transformer ☆961 · Sep 12, 2022 · Updated 3 years ago
- Multimodal datasets. ☆34 · Jan 26, 2024 · Updated 2 years ago
- This is the repository for "Efficient Low-rank Multimodal Fusion with Modality-Specific Factors", Liu and Shen, et al., ACL 2018 ☆273 · May 31, 2020 · Updated 5 years ago
- Multimodal Speech Recognition for phoneme level prediction using Audio-Visual data from TCDTIMIT dataset implementing RNNs with LSTMs for… ☆15 · Jul 27, 2023 · Updated 2 years ago
- SpeechGLUE is a speech version of the GLUE benchmark, driven by text-to-speech. ☆13 · Jun 2, 2023 · Updated 2 years ago
- MultiModal Sentiment Analysis architectures for CMU-MOSEI. ☆58 · Dec 9, 2022 · Updated 3 years ago
- Speech_Emotion_detection-SVM,RF,DT,MLP ☆20 · Dec 2, 2022 · Updated 3 years ago
- ☆14 · Aug 24, 2018 · Updated 7 years ago
- 🎙️ Automatically transcribe audio/video into high-quality, speaker-specific Text-To-Speech datasets ✨ ☆17 · May 20, 2025 · Updated 9 months ago
- Multi-modal classifications of digits with image and audio modality. One shot learning with Siamese network is used to predict if the giv… ☆15 · Mar 25, 2023 · Updated 2 years ago