NJU-LINK / OmniVideoBench
The Source Code for OmniVideoBench
★53 · Updated 2 months ago
Alternatives and similar repositories for OmniVideoBench
Users interested in OmniVideoBench are comparing it to the repositories listed below.
- EchoInk-R1: Exploring Audio-Visual Reasoning in Multimodal LLMs via Reinforcement Learning [🔥The Exploration of R1 for General Audio-Vi…] · ★70 · Updated 8 months ago
- This is for the ACL 2025 Findings paper: From Specific-MLLMs to Omni-MLLMs: A Survey on MLLMs Aligned with Multi-modalities · ★86 · Updated 2 weeks ago
- [CVPR 2024 Highlight] Official implementation of the paper: Cooperation Does Matter: Exploring Multi-Order Bilateral Relations for Audio-… · ★40 · Updated 9 months ago
- ★37 · Updated 6 months ago
- A project for tri-modal LLM benchmarking and instruction tuning · ★54 · Updated 9 months ago
- A list of current audio-visual multimodal work with awesome resources (papers, applications, data, reviews, surveys, etc.) · ★31 · Updated 2 years ago
- ★18 · Updated 6 months ago
- [NeurIPS 2024] MoME: Mixture of Multimodal Experts for Generalist Multimodal Large Language Models · ★77 · Updated 3 weeks ago
- WorldSense: Evaluating Real-world Omnimodal Understanding for Multimodal LLMs · ★37 · Updated 2 months ago
- ★39 · Updated 4 months ago
- Official PyTorch implementation of EMOVA in CVPR 2025 (https://arxiv.org/abs/2409.18042) · ★76 · Updated 10 months ago
- [ACM-MM 2025 Workshop] More Is Better: A MoE-Based Emotion Recognition Framework with Human Preference Alignment · ★25 · Updated last month
- LongVALE: Vision-Audio-Language-Event Benchmark Towards Time-Aware Omni-Modal Perception of Long Videos (CVPR 2025) · ★54 · Updated 7 months ago
- [ECCV'24] Official Implementation for CAT: Enhancing Multimodal Large Language Model to Answer Questions in Dynamic Audio-Visual Scenario… · ★57 · Updated last year
- [AAAI 2024] AVSegFormer: Audio-Visual Segmentation with Transformer · ★73 · Updated 10 months ago
- [CVPR 2025] Crab: A Unified Audio-Visual Scene Understanding Model with Explicit Cooperation · ★80 · Updated 3 weeks ago
- [ICCV 2025] ONLY: One-Layer Intervention Sufficiently Mitigates Hallucinations in Large Vision-Language Models · ★47 · Updated 6 months ago
- ★22 · Updated last year
- (NeurIPS 2025) OpenOmni: Official implementation of Advancing Open-Source Omnimodal Large Language Models with Progressive Multimodal Align… · ★121 · Updated 2 months ago
- Question-Aware Gaussian Experts for Audio-Visual Question Answering -- Official PyTorch Implementation (CVPR'25, Highlight) · ★25 · Updated 7 months ago
- Code for DeCo: Decoupling token compression from semantic abstraction in multimodal large language models · ★76 · Updated 6 months ago
- Official Implementation of "Open-Vocabulary Audio-Visual Semantic Segmentation" [ACM MM 2024 Oral] · ★35 · Updated last year
- Mitigating Shortcuts in Visual Reasoning with Reinforcement Learning · ★45 · Updated 6 months ago
- UnifiedMLLM: Enabling Unified Representation for Multi-modal Multi-tasks With Large Language Model · ★22 · Updated last year
- OmniZip: Audio-Guided Dynamic Token Compression for Fast Omnimodal Large Language Models · ★47 · Updated last month
- [ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation · ★130 · Updated 4 months ago
- [ICLR 2025] γ-MOD: Mixture-of-Depth Adaptation for Multimodal Large Language Models · ★41 · Updated 2 months ago
- Official Implementation of CODE · ★16 · Updated last year
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs · ★161 · Updated last year
- ★43 · Updated 8 months ago