Vincent-ZHQ / DMER
A survey of deep multimodal emotion recognition.
☆54 · Updated 3 years ago
Alternatives and similar repositories for DMER
Users interested in DMER are comparing it to the libraries listed below:
- The code repository for the NAACL 2021 paper "Multimodal End-to-End Sparse Model for Emotion Recognition". ☆105 · Updated 2 years ago
- Baseline scripts for the Audio/Visual Emotion Challenge 2019. ☆81 · Updated 3 years ago
- IEEE T-BIOM: "Audio-Visual Fusion for Emotion Recognition in the Valence-Arousal Space Using Joint Cross-Attention". ☆43 · Updated 10 months ago
- A Transformer-based joint-encoding for Emotion Recognition and Sentiment Analysis. ☆126 · Updated 7 months ago
- A short tutorial for using the CMU-MultimodalSDK. ☆85 · Updated 6 years ago
- FG2021: Cross Attentional AV Fusion for Dimensional Emotion Recognition. ☆32 · Updated 10 months ago
- ☆14 · Updated 4 years ago
- The code for our IEEE Access (2020) paper "Multimodal Emotion Recognition with Transformer-Based Self-Supervised Feature Fusion". ☆121 · Updated 4 years ago
- Modulated Fusion using Transformer for Linguistic-Acoustic Emotion Recognition. ☆29 · Updated 4 years ago
- AuxFormer: Robust Approach to Audiovisual Emotion Recognition. ☆14 · Updated 2 years ago
- A PyTorch implementation of emotion recognition from videos. ☆19 · Updated 5 years ago
- This repository provides an implementation of the paper "Self-attention fusion for audiovisual emotion recognition with incomplete data". ☆146 · Updated last year
- ABAW3 (CVPRW): A Joint Cross-Attention Model for Audio-Visual Fusion in Dimensional Emotion Recognition. ☆47 · Updated last year
- Emotion Recognition ToolKit (ERTK): tools for emotion recognition, including dataset processing, feature extraction, and experiments. ☆56 · Updated 11 months ago
- Modality-Transferable-MER, a multimodal emotion recognition model with zero-shot and few-shot abilities. ☆65 · Updated 4 years ago
- ☆28 · Updated 3 years ago
- PyTorch implementation of Audio-Visual Domain Adaptation Feature Fusion for Speech Emotion Recognition. ☆12 · Updated 3 years ago
- We achieved the 2nd and 3rd places in ABAW3 and ABAW5, respectively. ☆31 · Updated last year
- ☆70 · Updated last year
- ☆94 · Updated 2 years ago
- MultiModal Sentiment Analysis architectures for CMU-MOSEI. ☆50 · Updated 2 years ago
- ☆26 · Updated 3 years ago
- Code for the InterSpeech 2023 paper "MMER: Multimodal Multi-task Learning for Speech Emotion Recognition". ☆75 · Updated last year
- [AAAI 2020] Official implementation of VAANet for Emotion Recognition. ☆80 · Updated 2 years ago
- TensorFlow implementation of "Attentive Modality Hopping for Speech Emotion Recognition", ICASSP 2020. ☆34 · Updated 5 years ago
- ☆36 · Updated last year
- CM-BERT: Cross-Modal BERT for Text-Audio Sentiment Analysis (MM 2020). ☆114 · Updated 4 years ago
- Reproduction of DepAudioNet by Ma et al., "DepAudioNet: An Efficient Deep Model for Audio based Depression Classification" (https://dl.acm.…). ☆82 · Updated 4 years ago
- ABAW6 (CVPR-W): We achieved second place in the valence-arousal challenge of ABAW6. ☆29 · Updated last year
- The official code for the paper "Speech Emotion Recognition with Global-Aware Fusion on Multi-scale Feature Representation", published… ☆48 · Updated 3 years ago