liaorongfan / DeepPersonality
Benchmark for personality trait prediction with neural networks
☆46 · Updated last month
Related projects
Alternatives and complementary repositories for DeepPersonality
- Explainable Multimodal Emotion Reasoning (EMER) and AffectGPT ☆120 · Updated 6 months ago
- This repository provides the codes for MMA-DFER: multimodal (audiovisual) emotion recognition method. This is an official implementation … ☆21 · Updated 2 months ago
- [AAAI 2020] Official implementation of VAANet for Emotion Recognition ☆75 · Updated last year
- A survey of deep multimodal emotion recognition. ☆51 · Updated 2 years ago
- IEEE T-BIOM: "Audio-Visual Fusion for Emotion Recognition in the Valence-Arousal Space Using Joint Cross-Attention" ☆31 · Updated 9 months ago
- Pytorch implementation for Tailor Versatile Multi-modal Learning for Multi-label Emotion Recognition ☆55 · Updated 2 years ago
- ☆83 · Updated last year
- Toolkits for Multimodal Emotion Recognition ☆163 · Updated 5 months ago
- GPT-4V with Emotion ☆84 · Updated 11 months ago
- MAE-DFER: Efficient Masked Autoencoder for Self-supervised Dynamic Facial Expression Recognition (ACM MM 2023) ☆93 · Updated last month
- Official repository for "eMotions: A Large-Scale Dataset for Emotion Recognition in Short Videos" ☆31 · Updated 5 months ago
- FG2021: Cross Attentional AV Fusion for Dimensional Emotion Recognition ☆26 · Updated last year
- The code repository for NAACL 2021 paper "Multimodal End-to-End Sparse Model for Emotion Recognition". ☆96 · Updated last year
- A Fully End2End Multimodal System for Fast Yet Effective Video Emotion Recognition ☆30 · Updated 3 months ago
- This repository provides implementation for the paper "Self-attention fusion for audiovisual emotion recognition with incomplete data". ☆109 · Updated 2 months ago
- We achieved the 2nd and 3rd places in ABAW3 and ABAW5, respectively. ☆23 · Updated 8 months ago
- [CVPR 2023] Code for "Learning Emotion Representations from Verbal and Nonverbal Communication" ☆39 · Updated last year
- ☆48 · Updated 3 months ago
- ABAW3 (CVPRW): A Joint Cross-Attention Model for Audio-Visual Fusion in Dimensional Emotion Recognition ☆35 · Updated 10 months ago
- Code for paper "A Facial Expression-Aware Multimodal Multi-task Learning Framework for Emotion Recognition in Multi-party Conversations" ☆57 · Updated 2 weeks ago
- A Large-scale, Multi-modal, Compound Affective Database for Dynamic Facial Expression Recognition in the Wild. ☆36 · Updated 2 months ago
- [AAAI 2023] AVCAffe: A Large Scale Audio-Visual Dataset of Cognitive Load and Affect for Remote Work ☆17 · Updated last year
- Source code for ICASSP 2022 paper "MM-DFN: Multimodal Dynamic Fusion Network For Emotion Recognition in Conversations" ☆83 · Updated last year
- This repository contains the implementation of the paper -- Bi-Bimodal Modality Fusion for Correlation-Controlled Multimodal Sentiment An… ☆65 · Updated last year
- An official implementation of "Decoupled Multimodal Distilling for Emotion Recognition" in PyTorch. (CVPR 2023 highlight) ☆96 · Updated last year
- A Pytorch implementation of emotion recognition from videos ☆16 · Updated 4 years ago
- ☆13 · Updated 6 months ago
- ☆28 · Updated 2 years ago
- Reproducing the baselines of the 2nd Multimodal Sentiment Analysis Challenge (MuSe 2021) ☆38 · Updated 2 years ago
- 🔆 📝 A reading list focused on Multimodal Emotion Recognition (MER) 👂👄 👀 💬 ☆119 · Updated 4 years ago