CheyneyComputerScience / CREMA-D
Crowd Sourced Emotional Multimodal Actors Dataset (CREMA-D)
☆492 · Updated 10 months ago
Alternatives and similar repositories for CREMA-D
Users interested in CREMA-D are comparing it to the repositories listed below.
- A collection of datasets for emotion recognition/detection in speech. ☆395 · Updated last year
- [ICASSP 2023] Official TensorFlow implementation of "Temporal Modeling Matters: A Novel Temporal Emotional Modeling Approach for Speech E…" ☆187 · Updated last year
- Official implementation of the INTERSPEECH 2021 paper "Emotion Recognition from Speech Using Wav2vec 2.0 Embeddings". ☆140 · Updated last year
- Official implementation for the paper "Exploring Wav2vec 2.0 fine-tuning for improved speech emotion recognition". ☆153 · Updated 4 years ago
- A PyTorch implementation of the Deep Audio-Visual Speech Recognition paper. ☆239 · Updated last year
- Repository with the code of the paper "A proposal for Multimodal Emotion Recognition using aural transformers and Action Units on RAVDESS …" ☆113 · Updated last year
- This is the GitHub page for publicly available emotional speech data. ☆378 · Updated 4 years ago
- Python package for openSMILE. ☆301 · Updated 2 months ago
- A multimodal approach to emotion recognition using audio and text. ☆187 · Updated 5 years ago
- ☆112 · Updated 3 years ago
- PyTorch implementations of 4 different models for classifying emotions in speech. ☆211 · Updated 3 years ago
- Code for the INTERSPEECH 2023 paper "MMER: Multimodal Multi-task Learning for Speech Emotion Recognition". ☆81 · Updated last year
- Multilingual datasets with raw audio for speech emotion recognition. ☆30 · Updated 4 years ago
- Code for our INTERSPEECH 2020 paper "Jointly Fine-Tuning 'BERT-like' Self-Supervised Models to Improve Multimodal Speech Emotion R…" ☆118 · Updated 4 years ago
- ☆49 · Updated 2 years ago
- TensorFlow implementation of "Multimodal Speech Emotion Recognition using Audio and Text", IEEE SLT-18. ☆298 · Updated last year
- ACM MM 2021: "Is Someone Speaking? Exploring Long-term Temporal Features for Audio-visual Active Speaker Detection". ☆440 · Updated 2 years ago
- Lightweight and interpretable ML model for speech emotion recognition and ambiguity resolution (trained on the IEMOCAP dataset). ☆438 · Updated 2 years ago
- Baseline scripts for the Audio/Visual Emotion Challenge 2019. ☆80 · Updated 3 years ago
- Emotion Recognition ToolKit (ERTK): tools for emotion recognition, including dataset processing, feature extraction, and experiments. ☆55 · Updated 3 months ago
- Code for "Speech Emotion Recognition with Co-Attention Based Multi-level Acoustic Information". ☆164 · Updated 2 years ago
- Official implementation of SpeechFormer, written in Python (PyTorch). ☆79 · Updated 2 years ago
- Multimodal speech emotion recognition on the IEMOCAP dataset. ☆95 · Updated 2 years ago
- Wav2Vec for speech recognition, classification, and audio classification. ☆271 · Updated 3 years ago
- ☆109 · Updated 3 years ago
- Feature extraction from speech signals. ☆388 · Updated 7 months ago
- The Munich Open-Source Large-Scale Multimedia Feature Extractor. ☆758 · Updated 2 months ago
- PyTorch implementation of "Audio-Visual Domain Adaptation Feature Fusion for Speech Emotion Recognition". ☆12 · Updated 3 years ago
- Phoneme recognition using the pre-trained models Wav2vec2, HuBERT, and WavLM. Throughout this project, we specifically compared three differen… ☆257 · Updated 3 years ago
- ☆176 · Updated last year