CheyneyComputerScience / CREMA-D
Crowd Sourced Emotional Multimodal Actors Dataset (CREMA-D)
☆397 · Updated 2 years ago
Alternatives and similar repositories for CREMA-D:
Users interested in CREMA-D are comparing it to the repositories listed below:
- A collection of datasets for the purpose of emotion recognition/detection in speech. ☆310 · Updated 4 months ago
- Official implementation for the paper Exploring Wav2vec 2.0 fine-tuning for improved speech emotion recognition ☆144 · Updated 3 years ago
- [ICASSP 2023] Official Tensorflow implementation of "Temporal Modeling Matters: A Novel Temporal Emotional Modeling Approach for Speech E…" ☆168 · Updated 9 months ago
- This is the GitHub page for publicly available emotional speech data. ☆333 · Updated 3 years ago
- Official implementation of INTERSPEECH 2021 paper 'Emotion Recognition from Speech Using Wav2vec 2.0 Embeddings' ☆127 · Updated last month
- Code for the INTERSPEECH 2023 paper: MMER: Multimodal Multi-task learning for Speech Emotion Recognition ☆70 · Updated 11 months ago
- Python package for openSMILE ☆263 · Updated 2 months ago
- Multi-modal Speech Emotion Recognition on IEMOCAP dataset ☆88 · Updated last year
- Wav2Vec for speech recognition, classification, and audio classification ☆256 · Updated 2 years ago
- Repository with the code of the paper: A proposal for Multimodal Emotion Recognition using aural transformers and Action Units on RAVDESS … ☆101 · Updated 10 months ago
- ☆104 · Updated 2 years ago
- Multilingual datasets with raw audio for speech emotion recognition ☆22 · Updated 3 years ago
- A PyTorch implementation of the Deep Audio-Visual Speech Recognition paper. ☆219 · Updated last year
- Baseline scripts for the Audio/Visual Emotion Challenge 2019 ☆77 · Updated 2 years ago
- Lightweight and Interpretable ML Model for Speech Emotion Recognition and Ambiguity Resolution (trained on IEMOCAP dataset) ☆412 · Updated last year
- A multimodal approach to emotion recognition using audio and text. ☆171 · Updated 4 years ago
- This repository contains PyTorch implementations of 4 different models for speech emotion classification. ☆196 · Updated 2 years ago
- Repository for the OMG Emotion Challenge ☆88 · Updated last month
- The code for our INTERSPEECH 2020 paper - Jointly Fine-Tuning "BERT-like" Self-Supervised Models to Improve Multimodal Speech Emotion R… ☆117 · Updated 3 years ago
- Audio-Visual Speech Separation with Cross-Modal Consistency ☆226 · Updated last year
- 😎 Awesome lists about Speech Emotion Recognition ☆79 · Updated last month
- Deep-Learning-Based Audio-Visual Speech Enhancement and Separation ☆205 · Updated last year
- ☆47 · Updated last year
- Supporting code for "Emotion Recognition in Speech using Cross-Modal Transfer in the Wild" ☆102 · Updated 5 years ago
- Emotion Recognition ToolKit (ERTK): tools for emotion recognition, including dataset processing, feature extraction, and experiments. ☆55 · Updated 3 months ago
- Disentangled Speech Embeddings using Cross-Modal Self-Supervision ☆156 · Updated 4 years ago
- Code for Speech Emotion Recognition with Co-Attention based Multi-level Acoustic Information ☆137 · Updated last year
- ☆129 · Updated 5 months ago
- Deep speaker embeddings in PyTorch, including x-vectors. Code used in this work: https://arxiv.org/abs/2007.16196 ☆309 · Updated 4 years ago
- TensorFlow implementation of "Multimodal Speech Emotion Recognition using Audio and Text," IEEE SLT-18 ☆272 · Updated 7 months ago