CheyneyComputerScience / CREMA-D
Crowd Sourced Emotional Multimodal Actors Dataset (CREMA-D)
☆434 · Updated 2 months ago
Alternatives and similar repositories for CREMA-D
Users interested in CREMA-D are comparing it to the repositories listed below.
- A collection of datasets for the purpose of emotion recognition/detection in speech. ☆343 · Updated 8 months ago
- Python package for openSMILE. ☆279 · Updated 5 months ago
- This is the GitHub page for publicly available emotional speech data. ☆350 · Updated 3 years ago
- Official implementation of INTERSPEECH 2021 paper "Emotion Recognition from Speech Using Wav2vec 2.0 Embeddings". ☆132 · Updated 4 months ago
- Official implementation for the paper "Exploring Wav2vec 2.0 Fine-Tuning for Improved Speech Emotion Recognition". ☆150 · Updated 3 years ago
- A PyTorch implementation of the Deep Audio-Visual Speech Recognition paper. ☆231 · Updated last year
- Lightweight and interpretable ML model for speech emotion recognition and ambiguity resolution (trained on the IEMOCAP dataset). ☆417 · Updated last year
- [ICASSP 2023] Official TensorFlow implementation of "Temporal Modeling Matters: A Novel Temporal Emotional Modeling Approach for Speech E…" ☆174 · Updated last year
- The code for our INTERSPEECH 2020 paper: Jointly Fine-Tuning "BERT-like" Self Supervised Models to Improve Multimodal Speech Emotion R… ☆120 · Updated 4 years ago
- Wav2Vec for speech recognition, classification, and audio classification. ☆263 · Updated 3 years ago
- TensorFlow implementation of "Multimodal Speech Emotion Recognition Using Audio and Text," IEEE SLT-18. ☆287 · Updated 11 months ago
- Feature extraction from speech signals. ☆374 · Updated this week
- A multimodal approach to emotion recognition using audio and text. ☆179 · Updated 4 years ago
- ☆108 · Updated 2 years ago
- Repository with the code of the paper: A Proposal for Multimodal Emotion Recognition Using Aural Transformers and Action Units on RAVDESS … ☆105 · Updated last year
- This repository contains PyTorch implementations of 4 different models for classifying emotions in speech. ☆203 · Updated 2 years ago
- Understanding emotions from audio files using neural networks and multiple datasets. ☆418 · Updated last year
- ☆163 · Updated 10 months ago
- Speaker embedding (d-vector) trained with GE2E loss. ☆282 · Updated last year
- ☆132 · Updated 9 months ago
- Multilingual datasets with raw audio for speech emotion recognition. ☆25 · Updated 3 years ago
- Baseline scripts for the Audio/Visual Emotion Challenge 2019. ☆79 · Updated 3 years ago
- ☆50 · Updated last year
- VGGSound: A Large-Scale Audio-Visual Dataset. ☆319 · Updated 3 years ago
- Official implementation of VQMIVC: One-Shot (Any-to-Any) Voice Conversion @ INTERSPEECH 2021, with an online demo. ☆351 · Updated 3 years ago
- Speech emotion recognition using convolutional recurrent networks, based on IEMOCAP. ☆398 · Updated 5 years ago
- Audio-Visual Speech Separation with Cross-Modal Consistency. ☆231 · Updated last year
- Code for the INTERSPEECH 2023 paper: MMER: Multimodal Multi-task Learning for Speech Emotion Recognition. ☆74 · Updated last year
- INTERSPEECH 2023-2024 Papers: A complete collection of influential and exciting research papers from the INTERSPEECH 2023-24 conference. … ☆673 · Updated 5 months ago
- The Emotional Voices Database: Towards Controlling the Emotional Expressiveness in Voice Generation Systems. ☆268 · Updated last year