declare-lab / MELD
MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversation
☆848 · Updated 10 months ago
Alternatives and similar repositories for MELD:
Users interested in MELD are comparing it to the libraries listed below.
- This repo contains implementations of different architectures for emotion recognition in conversations. ☆1,388 · Updated 10 months ago
- Lightweight and interpretable ML model for speech emotion recognition and ambiguity resolution (trained on the IEMOCAP dataset). ☆409 · Updated last year
- A comprehensive reading list for Emotion Recognition in Conversations. ☆265 · Updated 11 months ago
- TensorFlow implementation of "Multimodal Speech Emotion Recognition using Audio and Text," IEEE SLT-18. ☆269 · Updated 7 months ago
- Multi-modal emotion detection on IEMOCAP using speech, text, and motion-capture data with neural nets. ☆162 · Updated 4 years ago
- Attention-based multimodal fusion for sentiment analysis. ☆334 · Updated 9 months ago
- This repository contains the code for the paper "End-to-End Multimodal Emotion Recognition using Deep Neural Networks." ☆239 · Updated 3 years ago
- Understanding emotions from audio files using neural networks and multiple datasets. ☆416 · Updated last year
- A multimodal approach to emotion recognition using audio and text. ☆169 · Updated 4 years ago
- Speech emotion recognition using convolutional recurrent networks, based on IEMOCAP. ☆391 · Updated 5 years ago
- A real-time multimodal emotion recognition web app for text, sound, and video inputs. ☆913 · Updated 3 years ago
- Multimodal Sarcasm Detection Dataset. ☆323 · Updated 4 months ago
- A short tutorial on using the CMU-MultimodalSDK. ☆81 · Updated 5 years ago
- Human emotion understanding using a multimodal dataset. ☆91 · Updated 4 years ago
- A repository for emotion recognition from speech, text, and mocap data from the IEMOCAP dataset. ☆12 · Updated 6 years ago
- A Transformer-based joint encoding for emotion recognition and sentiment analysis. ☆120 · Updated 2 years ago
- 🔆 📝 A reading list focused on Multimodal Emotion Recognition (MER) 👂👄 👀 💬 ☆120 · Updated 4 years ago
- Official PyTorch implementation of Multilogue-Net (best paper runner-up at Challenge-HML @ ACL 2020). ☆57 · Updated 2 years ago
- A collection of datasets for emotion recognition/detection in speech. ☆309 · Updated 3 months ago
- The code for our INTERSPEECH 2020 paper, Jointly Fine-Tuning "BERT-like" Self-Supervised Models to Improve Multimodal Speech Emotion R… ☆116 · Updated 3 years ago
- This repository contains the dataset and the PyTorch implementations of the models from the paper Recognizing Emotion Cause in Conversati… ☆175 · Updated 2 years ago
- Bidirectional LSTM network for speech emotion recognition. ☆262 · Updated 5 years ago
- Multi-modal speech emotion recognition on the IEMOCAP dataset. ☆86 · Updated last year
- Crowd Sourced Emotional Multimodal Actors Dataset (CREMA-D). ☆386 · Updated 2 years ago
- ☆199 · Updated 3 years ago
- Speaker-independent emotion recognition. ☆314 · Updated 6 months ago
- Baseline scripts for the Audio/Visual Emotion Challenge 2019. ☆77 · Updated 2 years ago
- Using convolutional neural networks for speech emotion recognition on the RAVDESS audio dataset. ☆138 · Updated 3 years ago
- Dialogue model that produces empathetic responses when trained on the EmpatheticDialogues dataset. ☆464 · Updated 3 years ago