AmirSh15 / graph_emotion_recognition
☆28 · Updated 3 years ago
Alternatives and similar repositories for graph_emotion_recognition
Users interested in graph_emotion_recognition are comparing it to the repositories listed below.
- ABAW3 (CVPRW): A Joint Cross-Attention Model for Audio-Visual Fusion in Dimensional Emotion Recognition ☆48 · Updated last year
- FG2021: Cross Attentional AV Fusion for Dimensional Emotion Recognition ☆33 · Updated last year
- ☆16 · Updated last year
- ☆19 · Updated 3 years ago
- A survey of deep multimodal emotion recognition. ☆54 · Updated 3 years ago
- [AAAI 2023] AVCAffe: A Large Scale Audio-Visual Dataset of Cognitive Load and Affect for Remote Work ☆21 · Updated 2 weeks ago
- [ECCV2022] The official repository of Emotion-aware Multi-view Contrastive Learning for Facial Emotion Recognition ☆24 · Updated 2 years ago
- Code for the BEEU challenge winning paper. ☆21 · Updated 3 years ago
- [AAAI 2020] Official implementation of VAANet for Emotion Recognition ☆81 · Updated 2 years ago
- Benchmark for personality trait prediction with neural networks ☆66 · Updated last year
- Official implementation of our NeurIPS 2021 paper: Relative Uncertainty Learning for Facial Expression Recognition ☆56 · Updated 3 years ago
- Modulated Fusion using Transformer for Linguistic-Acoustic Emotion Recognition ☆31 · Updated 4 years ago
- ☆14 · Updated 4 years ago
- ☆94 · Updated 3 years ago
- Official implementation of the paper "MSAF: Multimodal Split Attention Fusion" ☆81 · Updated 4 years ago
- Submission to the Affective Behavior Analysis in-the-wild (ABAW) 2020 competition. ☆37 · Updated 2 years ago
- Multi-modal fusion framework based on Transformer Encoder ☆16 · Updated 4 years ago
- Official TensorFlow implementation of the paper "Uncertainty-aware Label Distribution Learning for Facial Expression Recognition" ☆25 · Updated 3 years ago
- IEEE T-BIOM: "Audio-Visual Fusion for Emotion Recognition in the Valence-Arousal Space Using Joint Cross-Attention" ☆45 · Updated last year
- My implementation of the paper Context-Aware Emotion Recognition Networks ☆30 · Updated 3 years ago
- Code for the NAACL 2021 paper: MTAG: Modal-Temporal Attention Graph for Unaligned Human Multimodal Language Sequences ☆42 · Updated 2 years ago
- [BMVC 2022] The official code of our paper "Revisiting Self-Supervised Contrastive Learning for Facial Expression Recognition" ☆23 · Updated last year
- [IJCAI2022] Unsupervised Voice-Face Representation Learning by Cross-Modal Prototype Contrast ☆21 · Updated 2 years ago
- ☆41 · Updated 3 years ago
- Code for selecting an action based on multimodal inputs; in this case, the inputs are voice and text. ☆73 · Updated 4 years ago
- Reproducing the baselines of the 2nd Multimodal Sentiment Analysis Challenge (MuSe 2021) ☆40 · Updated 4 years ago
- Official code of "IRNet: Iterative Refinement Network for Noisy Partial Label Learning" ☆21 · Updated last month
- Baseline scripts for the Audio/Visual Emotion Challenge 2019 ☆80 · Updated 3 years ago
- Detecting depression levels in employees from videos of the DAIC-WOZ dataset using LSTMs and Facial Action Units as input. ☆28 · Updated 6 years ago
- PyTorch implementation of Tailor Versatile Multi-modal Learning for Multi-label Emotion Recognition ☆65 · Updated 3 years ago