FunAudioLLM / MME-Emotion
Official repository for the paper “MME-Emotion: A Holistic Evaluation Benchmark for Emotional Intelligence in Multimodal Large Language Models”
☆15 · Updated 3 months ago
Alternatives and similar repositories for MME-Emotion
Users interested in MME-Emotion are comparing it to the repositories listed below.
- [ICML'25 Spotlight] Catch Your Emotion: Sharpening Emotion Perception in Multimodal Large Language Models ☆41 · Updated 2 months ago
- An official implementation of "Incomplete Multimodality-Diffused Emotion Recognition" in PyTorch (NeurIPS 2023) ☆59 · Updated last year
- [CVPR'25 Highlight] A ChatGPT-Prompted Visual Hallucination Evaluation Dataset, featuring over 100,000 data samples and four advanced eval… ☆26 · Updated 7 months ago
- [ECCV'24] Official implementation of CAT: Enhancing Multimodal Large Language Model to Answer Questions in Dynamic Audio-Visual Scenario… ☆57 · Updated last year
- A Python implementation of Certifiable Robust Multi-modal Training ☆18 · Updated 5 months ago
- TCL-MAP, a method for multimodal intent recognition (AAAI 2024) ☆52 · Updated last year
- ☆23 · Updated 7 months ago
- GPT-4V with Emotion ☆97 · Updated last year
- [ACM ICMR'25] Official repository for "eMotions: A Large-Scale Dataset for Emotion Recognition in Short Videos" ☆35 · Updated 4 months ago
- The repo for "On-the-fly Modulation for Balanced Multimodal Learning", T-PAMI 2024 ☆18 · Updated last year
- ☆37 · Updated 3 years ago
- EmoLLM: Multimodal Emotional Understanding Meets Large Language Models ☆19 · Updated last year
- [CVPR'25] EMOE: Modality-Specific Enhanced Dynamic Emotion Experts ☆86 · Updated 4 months ago
- GCNet, official PyTorch implementation of the paper "GCNet: Graph Completion Network for Incomplete Multimodal Learning in Conversation" ☆92 · Updated 6 months ago
- A curated list of balanced multimodal learning methods ☆131 · Updated last week
- Explainable Multimodal Emotion Reasoning (EMER), OV-MER (ICML), and AffectGPT (ICML, Oral) ☆286 · Updated 3 months ago
- The official implementation of the ECCV 2024 paper "Facial Affective Behavior Analysis with Instruction Tuning" ☆28 · Updated 10 months ago
- The first released survey paper on hallucinations of large vision-language models (LVLMs). To keep track of this field and contin… ☆87 · Updated last year
- Multimodal Prompting with Missing Modalities for Visual Recognition, CVPR'23 ☆223 · Updated last year
- Official implementation of "Everything at Once - Multi-modal Fusion Transformer for Video Retrieval", CVPR 2022 ☆115 · Updated 3 years ago
- Video Chain of Thought: code for the ICML 2024 paper "Video-of-Thought: Step-by-Step Video Reasoning from Perception to Cognition" ☆169 · Updated 8 months ago
- ☆77 · Updated 2 years ago
- An official implementation of "Decoupled Multimodal Distilling for Emotion Recognition" in PyTorch (CVPR 2023 Highlight) ☆141 · Updated 2 years ago
- Code for the ACM MM 2024 paper "A Picture Is Worth a Graph: A Blueprint Debate Paradigm for Multimodal Reasoning" ☆19 · Updated 11 months ago
- Cross-modal Active Complementary Learning with Self-refining Correspondence (NeurIPS 2023, PyTorch code) ☆16 · Updated last year
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models ☆150 · Updated last year
- The repo for "Enhancing Multi-modal Cooperation via Sample-level Modality Valuation", CVPR 2024 ☆57 · Updated last year
- [ICML 2024] Official implementation of "HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding" ☆101 · Updated 11 months ago
- [CVPR 2024] Retrieval-Augmented Image Captioning with External Visual-Name Memory for Open-World Comprehension ☆59 · Updated last year
- [ICCV 2023] The repo for "Boosting Multi-modal Model Performance with Adaptive Gradient Modulation" ☆27 · Updated last year