CheyneyComputerScience / CREMA-D
Crowd Sourced Emotional Multimodal Actors Dataset (CREMA-D)
☆461 · Updated 5 months ago
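CREMA-D clip filenames encode the actor, sentence, emotion, and intensity of each recording. A minimal sketch of parsing them, assuming the documented `ActorID_SentenceID_Emotion_Level` naming convention (e.g. `1001_DFA_ANG_XX.wav`):

```python
from pathlib import Path

# Emotion and intensity codes used in CREMA-D filenames
# (assumption: codes follow the dataset's documented convention).
EMOTIONS = {"ANG": "anger", "DIS": "disgust", "FEA": "fear",
            "HAP": "happy", "NEU": "neutral", "SAD": "sad"}
LEVELS = {"LO": "low", "MD": "medium", "HI": "high", "XX": "unspecified"}

def parse_crema_filename(path):
    """Split a CREMA-D clip name into actor, sentence, emotion, and level."""
    actor, sentence, emo, level = Path(path).stem.split("_")
    return {"actor": actor, "sentence": sentence,
            "emotion": EMOTIONS[emo], "level": LEVELS[level]}

print(parse_crema_filename("1001_DFA_ANG_XX.wav"))
```

This makes it easy to build a labeled file list for any of the emotion-recognition pipelines below without a separate metadata file.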
Alternatives and similar repositories for CREMA-D
Users interested in CREMA-D are comparing it to the repositories listed below
- A collection of datasets for emotion recognition/detection in speech. ☆375 · Updated 11 months ago
- A PyTorch implementation of the Deep Audio-Visual Speech Recognition paper. ☆237 · Updated last year
- Official implementation of the INTERSPEECH 2021 paper "Emotion Recognition from Speech Using Wav2vec 2.0 Embeddings". ☆136 · Updated 8 months ago
- [ICASSP 2023] Official TensorFlow implementation of "Temporal Modeling Matters: A Novel Temporal Emotional Modeling Approach for Speech E… ☆177 · Updated last year
- Official implementation of the paper "Exploring Wav2vec 2.0 Fine-Tuning for Improved Speech Emotion Recognition". ☆152 · Updated 3 years ago
- Python package for openSMILE. ☆290 · Updated last month
- GitHub page for publicly available emotional speech data. ☆365 · Updated 3 years ago
- Repository with the code of the paper "A Proposal for Multimodal Emotion Recognition Using Aural Transformers and Action Units on RAVDESS"… ☆107 · Updated last year
- ☆109 · Updated 3 years ago
- Audio-Visual Speech Separation with Cross-Modal Consistency. ☆235 · Updated 2 years ago
- Wav2Vec for speech recognition, classification, and audio classification. ☆267 · Updated 3 years ago
- Visual Speech Recognition for Multiple Languages. ☆435 · Updated 2 years ago
- PyTorch implementations of 4 different models for speech emotion classification. ☆208 · Updated 2 years ago
- A multimodal approach to emotion recognition using audio and text. ☆184 · Updated 5 years ago
- ACM MM 2021: "Is Someone Speaking? Exploring Long-term Temporal Features for Audio-visual Active Speaker Detection". ☆417 · Updated last year
- Feature extraction from speech signals. ☆380 · Updated 3 months ago
- Multilingual datasets with raw audio for speech emotion recognition. ☆28 · Updated 3 years ago
- Emotion Recognition ToolKit (ERTK): tools for emotion recognition — dataset processing, feature extraction, experiments. ☆56 · Updated 10 months ago
- INTERSPEECH 2023-2024 Papers: A complete collection of influential and exciting research papers from the INTERSPEECH 2023-24 conference. … ☆682 · Updated 8 months ago
- ☆138 · Updated last year
- 😎 Awesome lists about Speech Emotion Recognition. ☆96 · Updated 8 months ago
- ☆172 · Updated last year
- ☆48 · Updated last year
- Official implementation of SpeechFormer in Python (PyTorch). ☆81 · Updated 2 years ago
- [ACL 2024] Official PyTorch code for extracting features and training downstream models with emotion2vec: Self-Supervised Pre-Training fo… ☆948 · Updated 8 months ago
- Code for our INTERSPEECH 2020 paper "Jointly Fine-Tuning 'BERT-like' Self-Supervised Models to Improve Multimodal Speech Emotion R… ☆120 · Updated 4 years ago
- Code for the INTERSPEECH 2023 paper "MMER: Multimodal Multi-task Learning for Speech Emotion Recognition". ☆75 · Updated last year
- The main repository of open-sourced speech technology by Huawei Noah's Ark Lab. ☆594 · Updated last year
- ICASSP'22 Training Strategies for Improved Lip-Reading; ICASSP'21 Towards Practical Lipreading with Distilled and Efficient Models; ICASS… ☆418 · Updated 2 years ago
- A self-supervised learning framework for audio-visual speech. ☆937 · Updated last year