24DavidHuang / Emotion-Qwen
Welcome to the official repository of Emotion-Qwen.
☆23 · Updated 7 months ago
Alternatives and similar repositories for Emotion-Qwen
Users interested in Emotion-Qwen are comparing it to the repositories listed below.
- ☆20 · Updated 7 months ago
- Explainable Multimodal Emotion Reasoning (EMER), OV-MER (ICML), and AffectGPT (ICML, Oral) ☆325 · Updated 5 months ago
- [ECCV’24] Official Implementation for CAT: Enhancing Multimodal Large Language Model to Answer Questions in Dynamic Audio-Visual Scenario… ☆58 · Updated last year
- [CVPR 2024] EmoVIT: Revolutionizing Emotion Insights with Visual Instruction Tuning ☆38 · Updated 9 months ago
- Why We Feel: Breaking Boundaries in Emotional Reasoning with Multimodal Large Language Models ☆25 · Updated 4 months ago
- [AAAI 2025] Empowering LLMs with Pseudo-Untrimmed Videos for Audio-Visual Temporal Understanding ☆34 · Updated 10 months ago
- ☆22 · Updated last year
- [CVPR 2023] Code for "Learning Emotion Representations from Verbal and Nonverbal Communication" ☆54 · Updated 11 months ago
- ☆27 · Updated 9 months ago
- HumanOmni ☆216 · Updated 10 months ago
- [ACM ICMR'25] Official repository for "eMotions: A Large-Scale Dataset for Emotion Recognition in Short Videos" ☆37 · Updated 6 months ago
- Official repository for the paper “MME-Emotion: A Holistic Evaluation Benchmark for Emotional Intelligence in Multimodal Large Language M… ☆20 · Updated 2 weeks ago
- [ICML'25 Spotlight] Catch Your Emotion: Sharpening Emotion Perception in Multimodal Large Language Models ☆45 · Updated last week
- [CVPR 2023] This is the official implementation of "Weakly Supervised Video Emotion Detection and Prediction via Cross-Modal Temporal Era… ☆40 · Updated last year
- GPT-4V with Emotion ☆96 · Updated 2 years ago
- ☆31 · Updated 3 weeks ago
- [ICCV 2025] This repo is the official implementation of "Music Grounding by Short Video" ☆26 · Updated 4 months ago
- Unified Audio-Visual Perception for Multi-Task Video Localization ☆30 · Updated last year
- Toolkits for Multimodal Emotion Recognition ☆284 · Updated 8 months ago
- EmoLLM: Multimodal Emotional Understanding Meets Large Language Models ☆19 · Updated last year
- A Unimodal Valence-Arousal Driven Contrastive Learning Framework for Multimodal Multi-Label Emotion Recognition (ACM MM 2024 Oral) ☆26 · Updated last year
- Emotion-LLaMA: Multimodal Emotion Recognition and Reasoning with Instruction Tuning ☆518 · Updated 2 months ago
- [CVPR 2025] Crab: A Unified Audio-Visual Scene Understanding Model with Explicit Cooperation ☆80 · Updated last month
- A Large-scale, Multi-modal, Compound Affective Database for Dynamic Facial Expression Recognition in the Wild. ☆62 · Updated last month
- ☆37 · Updated 6 months ago
- [WACV'25 Oral] Enhancing Zero-Shot Facial Expression Recognition by LLM Knowledge Transfer ☆55 · Updated 11 months ago
- Code for "Modeling Multimodal Social Interactions: New Challenges and Baselines with Densely Aligned Representations" (CVPR 2024 Oral) ☆18 · Updated last year
- MAE-DFER: Efficient Masked Autoencoder for Self-supervised Dynamic Facial Expression Recognition (ACM MM 2023) ☆140 · Updated 2 months ago
- This paper presents our winning submission to Subtask 2 of SemEval 2024 Task 3 on multimodal emotion cause analysis in conversations. ☆24 · Updated last year
- The official implementation of the ECCV 2024 paper "Facial Affective Behavior Analysis with Instruction Tuning" ☆29 · Updated last year