lstappen / MuSe-Toolbox
A Python toolbox to fuse multiple continuous emotion annotations from several raters and discretise them into classes!
☆15 · Updated 3 years ago
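The description covers two steps: fusing several raters' continuous annotations into a single gold-standard signal, then discretising that signal into classes. A minimal sketch of that pipeline is below; it uses an agreement-weighted mean for fusion and equal-frequency binning for discretisation. The function names and method choices here are illustrative assumptions, not the MuSe-Toolbox API.

```python
# Sketch of the two steps named in the toolbox description:
# (1) fuse raters' continuous annotations, (2) discretise into classes.
# fuse_annotations / discretise are hypothetical names, not MuSe-Toolbox calls.
import numpy as np

def fuse_annotations(ratings: np.ndarray) -> np.ndarray:
    """Fuse raters by inter-rater-agreement weighting.

    ratings: shape (n_raters, n_timesteps). Each rater is weighted by
    their mean correlation with the other raters, so outlier raters
    contribute less to the fused signal.
    """
    n = ratings.shape[0]
    corr = np.corrcoef(ratings)                   # (n, n) rater agreement
    weights = (corr.sum(axis=1) - 1.0) / (n - 1)  # mean corr with others
    weights = np.clip(weights, 0.0, None)         # ignore anti-correlated raters
    weights /= weights.sum()
    return weights @ ratings                      # weighted mean per timestep

def discretise(signal: np.ndarray, n_classes: int = 3) -> np.ndarray:
    """Map a continuous signal to class indices via equal-frequency bins."""
    edges = np.quantile(signal, np.linspace(0, 1, n_classes + 1)[1:-1])
    return np.digitize(signal, edges)

# Toy data: four noisy raters annotating the same underlying trace.
rng = np.random.default_rng(0)
base = np.sin(np.linspace(0, 4 * np.pi, 200))
raters = base + 0.2 * rng.standard_normal((4, 200))
gold = fuse_annotations(raters)
classes = discretise(gold, n_classes=3)
```

Quantile binning yields balanced classes; a clustering-based discretisation (as some toolboxes use) would instead let the data decide the class boundaries.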
Alternatives and similar repositories for MuSe-Toolbox:
Users interested in MuSe-Toolbox are comparing it to the repositories listed below.
- Accompanying code to reproduce the baselines of the International Multimodal Sentiment Analysis Challenge (MuSe 2020). ☆16 · Updated 2 years ago
- The code repository for NAACL 2021 paper "Multimodal End-to-End Sparse Model for Emotion Recognition". ☆102 · Updated 2 years ago
- A survey of deep multimodal emotion recognition. ☆52 · Updated 2 years ago
- ☆110 · Updated 2 years ago
- Multimodal Emotion Recognition in a video using feature level fusion of audio and visual modalities. ☆15 · Updated 6 years ago
- Group Gated Fusion on Attention-based Bidirectional Alignment for Multimodal Emotion Recognition. ☆13 · Updated 2 years ago
- ☆17 · Updated 2 years ago
- Reproducing the baselines of the 2nd Multimodal Sentiment Analysis Challenge (MuSe 2021). ☆40 · Updated 3 years ago
- Baseline scripts of the 8th Audio/Visual Emotion Challenge (AVEC 2018). ☆58 · Updated 6 years ago
- PyTorch implementation for Audio-Visual Domain Adaptation Feature Fusion for Speech Emotion Recognition. ☆12 · Updated 3 years ago
- Repository for my paper: Dimensional Speech Emotion Recognition Using Acoustic Features and Word Embeddings using Multitask Learning. ☆16 · Updated 8 months ago
- Emotion Recognition ToolKit (ERTK): tools for emotion recognition. Dataset processing, feature extraction, experiments, … ☆58 · Updated 5 months ago
- IEEE T-BIOM: "Audio-Visual Fusion for Emotion Recognition in the Valence-Arousal Space Using Joint Cross-Attention". ☆38 · Updated 4 months ago
- Multimodal SER Model meant to be trained on recognising emotions from speech (text + acoustic data). Fine-tuned the DeBERTaV3 model, resp… ☆10 · Updated 10 months ago
- Multimodal sentiment analysis using hierarchical fusion with context modeling. ☆44 · Updated 2 years ago
- This is the official code for paper "Speech Emotion Recognition with Global-Aware Fusion on Multi-scale Feature Representation" published… ☆46 · Updated 3 years ago
- ☆12 · Updated 4 years ago
- TensorFlow implementation of "Attentive Modality Hopping for Speech Emotion Recognition," ICASSP-20. ☆32 · Updated 4 years ago
- Baseline scripts for the Audio/Visual Emotion Challenge 2019. ☆79 · Updated 3 years ago
- Baseline scripts for AVEC 2019, Depression Detection Sub-challenge. ☆15 · Updated 5 years ago
- ☆14 · Updated 3 years ago
- ☆11 · Updated 4 years ago
- FG2021: Cross Attentional AV Fusion for Dimensional Emotion Recognition. ☆28 · Updated 4 months ago
- ABAW3 (CVPRW): A Joint Cross-Attention Model for Audio-Visual Fusion in Dimensional Emotion Recognition. ☆44 · Updated last year
- The code for our IEEE ACCESS (2020) paper Multimodal Emotion Recognition with Transformer-Based Self Supervised Feature Fusion. ☆119 · Updated 3 years ago
- Prosody-Aware Graph Neural Networks for Speech Emotion Recognition. ☆9 · Updated last year
- Modulated Fusion using Transformer for Linguistic-Acoustic Emotion Recognition. ☆30 · Updated 4 years ago
- Modality-Transferable-MER, multimodal emotion recognition model with zero-shot and few-shot abilities. ☆64 · Updated 4 years ago
- The code for our INTERSPEECH 2020 paper - Jointly Fine-Tuning "BERT-like" Self Supervised Models to Improve Multimodal Speech Emotion R… ☆120 · Updated 4 years ago
- ☆16 · Updated last month