HumanMLLM / Omni-Emotion
☆20 · Updated 5 months ago
Alternatives and similar repositories for Omni-Emotion
Users interested in Omni-Emotion are comparing it to the repositories listed below.
- GPT-4V with Emotion ☆93 · Updated last year
- ☆32 · Updated 3 weeks ago
- [ECCV’24] Official Implementation for CAT: Enhancing Multimodal Large Language Model to Answer Questions in Dynamic Audio-Visual Scenario… ☆53 · Updated 9 months ago
- ☆21 · Updated last month
- Explainable Multimodal Emotion Reasoning (EMER), Open-vocabulary MER (OV-MER), and AffectGPT ☆192 · Updated last month
- Repository for the ACL 2025 Findings paper "From Specific-MLLMs to Omni-MLLMs: A Survey on MLLMs Aligned with Multi-modalities" ☆36 · Updated last week
- [CVPR 2024] EmoVIT: Revolutionizing Emotion Insights with Visual Instruction Tuning ☆32 · Updated 2 months ago
- OpenOmni: Official implementation of Advancing Open-Source Omnimodal Large Language Models with Progressive Multimodal Alignment and Rea… ☆62 · Updated 3 weeks ago
- ☆30 · Updated 8 months ago
- ☆14 · Updated last year
- [ICLR 2025] CREMA: Generalizable and Efficient Video-Language Reasoning via Multimodal Modular Fusion ☆46 · Updated 5 months ago
- [CVPR 2025] Crab: A Unified Audio-Visual Scene Understanding Model with Explicit Cooperation ☆42 · Updated 3 weeks ago
- NeurIPS 2023 official implementation code ☆64 · Updated last year
- MIntRec2.0 is the first large-scale dataset for multimodal intent recognition and out-of-scope detection in multi-party conversations (IC… ☆48 · Updated last week
- TCL-MAP is a powerful method for multimodal intent recognition (AAAI 2024) ☆43 · Updated last year
- A fully open-source implementation of a GPT-4o-like speech-to-speech video understanding model ☆20 · Updated 2 months ago
- Narrative movie understanding benchmark ☆72 · Updated 2 weeks ago
- WorldSense: Evaluating Real-world Omnimodal Understanding for Multimodal LLMs ☆25 · Updated 2 months ago
- Evaluation code for the paper "AV-Odyssey: Can Your Multimodal LLMs Really Understand Audio-Visual Information?" ☆26 · Updated 6 months ago
- [ACM ICMR'25] Official repository for "eMotions: A Large-Scale Dataset for Emotion Recognition in Short Videos" ☆33 · Updated last year
- UnifiedMLLM: Enabling Unified Representation for Multi-modal Multi-tasks With Large Language Model ☆22 · Updated 10 months ago
- Code repository for the ICASSP 2023 paper "MMCosine: Multi-Modal Cosine Loss Towards Balanced Audio-Visual Fine-Grained Learning" ☆21 · Updated 2 years ago
- ☆22 · Updated 2 months ago
- EchoInk-R1: Exploring Audio-Visual Reasoning in Multimodal LLMs via Reinforcement Learning [🔥The Exploration of R1 for General Audio-Vi… ☆36 · Updated last month
- Sparrow: Data-Efficient Video-LLM with Text-to-Image Augmentation ☆30 · Updated 2 months ago
- Official repository for "Boosting Audio Visual Question Answering via Key Semantic-Aware Cues" (ACM MM 2024) ☆16 · Updated 8 months ago
- Towards Long Form Audio-visual Video Understanding ☆15 · Updated 2 months ago
- av-SALMONN: Speech-Enhanced Audio-Visual Large Language Models ☆13 · Updated last year
- [CVPR 2024 Highlight] Official implementation of the paper "Cooperation Does Matter: Exploring Multi-Order Bilateral Relations for Audio-… ☆39 · Updated 2 months ago
- A Unimodal Valence-Arousal Driven Contrastive Learning Framework for Multimodal Multi-Label Emotion Recognition (ACM MM 2024 oral) ☆23 · Updated 7 months ago