AmirSh15 / graph_emotion_recognition
☆28 · Updated 2 years ago
Alternatives and similar repositories for graph_emotion_recognition
Users interested in graph_emotion_recognition are comparing it to the repositories listed below.
- FG2021: Cross Attentional AV Fusion for Dimensional Emotion Recognition ☆32 · Updated 10 months ago
- ABAW3 (CVPRW): A Joint Cross-Attention Model for Audio-Visual Fusion in Dimensional Emotion Recognition ☆47 · Updated last year
- [AAAI 2023] AVCAffe: A Large Scale Audio-Visual Dataset of Cognitive Load and Affect for Remote Work ☆22 · Updated 2 years ago
- A survey of deep multimodal emotion recognition ☆54 · Updated 3 years ago
- Official implementation of our NeurIPS 2021 paper "Relative Uncertainty Learning for Facial Expression Recognition" ☆55 · Updated 2 years ago
- ☆16 · Updated last year
- [AAAI 2020] Official implementation of VAANet for Emotion Recognition ☆80 · Updated 2 years ago
- ☆14 · Updated 4 years ago
- Benchmark for personality traits prediction with neural networks ☆64 · Updated last year
- Official implementation of the paper "MSAF: Multimodal Split Attention Fusion" ☆82 · Updated 4 years ago
- Modulated Fusion using Transformer for Linguistic-Acoustic Emotion Recognition ☆29 · Updated 4 years ago
- IEEE T-BIOM: "Audio-Visual Fusion for Emotion Recognition in the Valence-Arousal Space Using Joint Cross-Attention" ☆43 · Updated 10 months ago
- ☆19 · Updated 2 years ago
- [ECCV 2022] The official repository of Emotion-aware Multi-view Contrastive Learning for Facial Emotion Recognition ☆24 · Updated 2 years ago
- GCNet, official PyTorch implementation of our paper "GCNet: Graph Completion Network for Incomplete Multimodal Learning in Conversation" ☆92 · Updated 5 months ago
- ☆93 · Updated 2 years ago
- 🔆 📝 A reading list focused on Multimodal Emotion Recognition (MER) 👂👄 👀 💬 ☆122 · Updated 5 years ago
- We achieved the 2nd and 3rd places in ABAW3 and ABAW5, respectively ☆31 · Updated last year
- My implementation of the paper "Context-Aware Emotion Recognition Networks" ☆30 · Updated 3 years ago
- ☆13 · Updated last year
- Multi-modal fusion framework based on a Transformer encoder ☆16 · Updated 4 years ago
- The code for our IEEE Access (2020) paper "Multimodal Emotion Recognition with Transformer-Based Self Supervised Feature Fusion" ☆121 · Updated 4 years ago
- ☆41 · Updated 2 years ago
- [BMVC 2022] The official code of our paper "Revisiting Self-Supervised Contrastive Learning for Facial Expression Recognition" ☆23 · Updated last year
- Detecting depression levels in employees from videos of the DAIC-WOZ dataset, using LSTMs with Facial Action Units as input ☆28 · Updated 6 years ago
- PyTorch implementation of the models described in the IEEE ICASSP 2022 paper "Is cross-attention preferable to self-attention for multi-m…" ☆61 · Updated 6 months ago
- Code for selecting an action based on multimodal inputs; in this case, the inputs are voice and text ☆73 · Updated 4 years ago
- ☆27 · Updated 4 years ago
- [IJCAI 2022] Unsupervised Voice-Face Representation Learning by Cross-Modal Prototype Contrast ☆20 · Updated last year
- AuxFormer: Robust Approach to Audiovisual Emotion Recognition ☆14 · Updated 2 years ago