zhourax / VEGA
☆36 · Updated 8 months ago
Alternatives and similar repositories for VEGA:
Users interested in VEGA are comparing it to the repositories listed below.
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs ☆46 · Updated 2 weeks ago
- Official repository of the MMDU dataset ☆86 · Updated 5 months ago
- [NeurIPS'24] Official PyTorch Implementation of Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment ☆57 · Updated 6 months ago
- ☆95 · Updated last year
- ☆61 · Updated last year
- ✨✨ [ICLR 2025] MME-RealWorld: Could Your Multimodal LLM Challenge High-Resolution Real-World Scenarios that are Difficult for Humans? ☆99 · Updated 3 weeks ago
- A Comprehensive Survey on Evaluating Reasoning Capabilities in Multimodal Large Language Models ☆44 · Updated last week
- A collection of visual instruction tuning datasets ☆76 · Updated last year
- ☆91 · Updated last year
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆49 · Updated 8 months ago
- The Next Step Forward in Multimodal LLM Alignment ☆135 · Updated 3 weeks ago
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆113 · Updated 4 months ago
- The official GitHub page for "What Makes for Good Visual Instructions? Synthesizing Complex Visual Reasoning Instructions for Visual Ins…" ☆19 · Updated last year
- The official implementation of RAR ☆82 · Updated last year
- ☆24 · Updated 10 months ago
- [CVPR 2024] LION: Empowering Multimodal Large Language Model with Dual-Level Visual Knowledge ☆134 · Updated 8 months ago
- [ICLR 2025] TRACE: Temporal Grounding Video LLM via Causal Event Modeling ☆76 · Updated 2 months ago
- [SCIS 2024] The official implementation of the paper "MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Di…" ☆48 · Updated 4 months ago
- Official implementation of MIA-DPO ☆54 · Updated 2 months ago
- A Self-Training Framework for Vision-Language Reasoning ☆71 · Updated 2 months ago
- [ICLR'25] Official code for the paper "MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs" ☆89 · Updated last week
- VideoHallucer: the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) ☆27 · Updated 9 months ago
- ☆31 · Updated 8 months ago
- ☆64 · Updated 9 months ago
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs ☆109 · Updated 4 months ago
- R1-Vision: Let's first take a look at the image ☆39 · Updated last month
- ☆69 · Updated 4 months ago
- FreeVA: Offline MLLM as Training-Free Video Assistant ☆57 · Updated 9 months ago
- ☆143 · Updated 4 months ago
- Repository of the paper "Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models" ☆37 · Updated last year