This is the repository for the ACL 2025 Findings paper: From Specific-MLLMs to Omni-MLLMs: A Survey on MLLMs Aligned with Multi-modalities
☆92 (updated Jan 3, 2026)
Alternatives and similar repositories for Awesome-Omni-MLLMs
Users interested in Awesome-Omni-MLLMs are comparing it to the libraries listed below.
- A curated list of the latest models, datasets, and benchmarks for streaming/online video understanding (☆24, updated Oct 19, 2025)
- [ICLR'25] Official repository for "AVHBench: A Cross-Modal Hallucination Evaluation for Audio-Visual Large Language Models" (☆20, updated Mar 8, 2026)
- EchoInk-R1: Exploring Audio-Visual Reasoning in Multimodal LLMs via Reinforcement Learning [🔥The Exploration of R1 for General Audio-Vi… (☆75, updated May 18, 2025)
- LongVALE: Vision-Audio-Language-Event Benchmark Towards Time-Aware Omni-Modal Perception of Long Videos (CVPR 2025) (☆57, updated Jun 9, 2025)
- ☆186 (updated Feb 8, 2025)
- Official repository for "Boosting Audio Visual Question Answering via Key Semantic-Aware Cues" (ACM MM 2024) (☆16, updated Oct 25, 2024)
- [NeurIPS 2024] MoME: Mixture of Multimodal Experts for Generalist Multimodal Large Language Models (☆80, updated Dec 27, 2025)
- A Unimodal Valence-Arousal Driven Contrastive Learning Framework for Multimodal Multi-Label Emotion Recognition (ACM MM 2024 oral) (☆27, updated Nov 4, 2024)
- 🔥🔥 [NeurIPS 2025] Exploring and mitigating semantic hallucinations in scene text perception and reasoning (☆27, updated Dec 11, 2025)
- [CVPR 2026] Thinking with Programming Vision: Towards a Unified View for Thinking with Images (☆63, updated Jan 23, 2026)
- SFT+RL boosts multimodal reasoning (☆47, updated Jun 27, 2025)
- [CVPR'25] 🌟🌟 EgoTextVQA: Towards Egocentric Scene-Text Aware Video Question Answering (☆46, updated Jun 19, 2025)
- 2nd-place solution for the KDD 2024 AQA competition (☆12, updated Jul 21, 2024)
- ☆17 (updated Jul 22, 2024)
- Code for the AAAI 2025 paper "Dense Audio-Visual Event Localization under Cross-Modal Consistency and Multi-Temporal …" (☆23, updated Aug 18, 2025)
- [CVPR 2023] Context De-confounded Emotion Recognition (☆18, updated Jul 23, 2023)
- [ECCV'24] Official implementation of CAT: Enhancing Multimodal Large Language Model to Answer Questions in Dynamic Audio-Visual Scenario… (☆58, updated Sep 4, 2024)
- Evaluating Durability: Benchmark Insights into Multimodal Watermarking (☆12, updated Jun 7, 2024)
- Implementation of "Large Language Model Can Transcribe Speech in Multi-Talker Scenarios with Versatile Instructions" (☆50, updated Apr 7, 2025)
- The first large audio language model with native in-depth thinking, trained on large-scale audio chain-of-thought data (☆285, updated May 15, 2025)
- ☆27 (updated Dec 1, 2025)
- [ICLR 2024] Scaling for Training Time and Post-hoc Out-of-distribution Detection Enhancement (☆15, updated Mar 12, 2024)
- ☆13 (updated Feb 26, 2024)
- ☆24 (updated Jan 29, 2026)
- 🔥 An open-source survey of the latest video reasoning tasks, paradigms, and benchmarks (☆154, updated this week)
- Multimodal Chain-of-Thought Reasoning: A Comprehensive Survey (☆960, updated Nov 14, 2025)
- 🔥🔥🔥 The latest papers, code, and datasets on Video-LMM post-training (☆266, updated Mar 3, 2026)
- Official code for "Dance-to-Music Generation with Encoder-based Textual Inversion" (☆22, updated Jun 17, 2025)
- Cross-modal generation of molecules from gene expression inputs (Briefings in Bioinformatics 2024) (☆11, updated May 3, 2025)
- ☆41 (updated Sep 9, 2025)
- ViDRiP-LLaVA: A Dataset and Benchmark for Diagnostic Reasoning from Pathology Videos (☆23, updated May 21, 2025)
- MIO: A Foundation Model on Multimodal Tokens (☆34, updated Dec 13, 2024)
- ☆44 (updated Oct 20, 2025)
- Baichuan-Omni: Towards Capable Open-source Omni-modal LLM 🌊 (☆272, updated Jan 27, 2025)
- Deep Learning for Recurrence Score (☆18, updated Jun 25, 2024)
- [NeurIPS 2025] Official repository of Omni-R1: Reinforcement Learning for Omnimodal Reasoning via Two-System Collaboration (☆116, updated Dec 3, 2025)
- [CVPR 2026] UFVideo: Towards Unified Fine-Grained Video Cooperative Understanding with Large Language Models (☆37, updated Feb 21, 2026)
- [ICLR 2026] Data pipeline, models, and benchmark for Omni-Captioner (☆118, updated Oct 17, 2025)
- [ICLR 2026] Official implementation of ProxyThinker: Test-Time Guidance through Small Visual Reasoners (☆20, updated Sep 24, 2025)