sunlicai / HiCMAE
[Information Fusion 2024] HiCMAE: Hierarchical Contrastive Masked Autoencoder for Self-Supervised Audio-Visual Emotion Recognition
☆98 · Updated 4 months ago
Alternatives and similar repositories for HiCMAE:
Users interested in HiCMAE are comparing it to the repositories listed below
- GPT-4V with Emotion ☆89 · Updated last year
- Toolkits for Multimodal Emotion Recognition ☆186 · Updated 9 months ago
- This repository provides the code for MMA-DFER, a multimodal (audio-visual) emotion recognition method. This is an official implementation … ☆30 · Updated 5 months ago
- Official repository for "eMotions: A Large-Scale Dataset for Emotion Recognition in Short Videos" ☆32 · Updated 8 months ago
- Explainable Multimodal Emotion Reasoning (EMER) and AffectGPT ☆131 · Updated 10 months ago
- [CVPR 2023] Official code repository for "How you feelin'? Learning Emotions and Mental States in Movie Scenes". https://arxiv.org/abs/23… ☆56 · Updated 4 months ago
- [BMVC'23] Prompting Visual-Language Models for Dynamic Facial Expression Recognition ☆118 · Updated 3 months ago
- [CVPR 2023] This is the official implementation of "Weakly Supervised Video Emotion Detection and Prediction via Cross-Modal Temporal Era… ☆37 · Updated last month
- av-SALMONN: Speech-Enhanced Audio-Visual Large Language Models ☆14 · Updated 9 months ago
- Emotion Recognition ToolKit (ERTK): tools for emotion recognition. Dataset processing, feature extraction, experiments, ☆57 · Updated 3 months ago
- [CVPR 2023] Code for "Learning Emotion Representations from Verbal and Nonverbal Communication" ☆45 · Updated 2 weeks ago
- NeurIPS'2023 official implementation code ☆59 · Updated last year
- ☆16 · Updated 8 months ago
- Frame-Level Emotional State Alignment Method for Speech Emotion Recognition ☆18 · Updated 2 months ago
- [EMNLP 2023] Conversation Understanding using Relational Temporal Graph Neural Networks with Auxiliary Cross-Modality Interaction ☆61 · Updated 7 months ago
- Code for the InterSpeech 2023 paper: MMER: Multimodal Multi-task Learning for Speech Emotion Recognition ☆71 · Updated 11 months ago
- [INTERSPEECH 2024] EmoBox: Multilingual Multi-corpus Speech Emotion Recognition Toolkit and Benchmark ☆206 · Updated 8 months ago
- A Facial Expression-Aware Multimodal Multi-task Learning Framework for Emotion Recognition in Multi-party Conversations (ACL 2023) ☆63 · Updated 4 months ago
- A Unimodal Valence-Arousal Driven Contrastive Learning Framework for Multimodal Multi-Label Emotion Recognition (ACM MM 2024 oral) ☆15 · Updated 4 months ago
- A list of papers (with available code), tutorials, and surveys on recent AI for emotion recognition (AI4ER) ☆19 · Updated 9 months ago
- ☆22 · Updated last year
- We achieved 2nd and 3rd place in ABAW3 and ABAW5, respectively. ☆27 · Updated 11 months ago
- IEEE T-BIOM: "Audio-Visual Fusion for Emotion Recognition in the Valence-Arousal Space Using Joint Cross-Attention" ☆36 · Updated 3 months ago
- FG2021: Cross Attentional AV Fusion for Dimensional Emotion Recognition ☆26 · Updated 3 months ago
- [CVPR 2024] EmoVIT: Revolutionizing Emotion Insights with Visual Instruction Tuning ☆26 · Updated 5 months ago
- SpeechFormer++ in PyTorch ☆47 · Updated last year
- GCNet: official PyTorch implementation of our paper "GCNet: Graph Completion Network for Incomplete Multimodal Learning in Conversation" ☆76 · Updated last year
- ☆22 · Updated last year
- PyTorch implementation of the code in "Noise Imitation Based Adversarial Training for Robust Multimodal Sentiment Analysis" (Accepted by IEEE… ☆11 · Updated last year
- ☆20 · Updated 4 months ago