dongyh20 / Insight-V
[CVPR 2025] Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models
☆155 · Updated this week
Alternatives and similar repositories for Insight-V:
Users interested in Insight-V are comparing it to the repositories listed below.
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆96 · Updated 3 weeks ago
- [NeurIPS 2024] Evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models?" ☆168 · Updated 5 months ago
- [CVPR 2025] Official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models" ☆134 · Updated 2 weeks ago
- [CVPR 2025] PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ☆80 · Updated 2 weeks ago
- The official repository for "2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining" ☆146 · Updated this week
- Official implementation of the Law of Vision Representation in MLLMs ☆151 · Updated 4 months ago
- [NeurIPS 2024] Repo for the paper "ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models" ☆152 · Updated 2 months ago
- [TMLR] Public code repo for the paper "A Single Transformer for Scalable Vision-Language Modeling" ☆130 · Updated 4 months ago
- Official implementation of MIA-DPO ☆54 · Updated 2 months ago
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context ☆149 · Updated 5 months ago
- A Self-Training Framework for Vision-Language Reasoning ☆70 · Updated last month
- A Survey on Benchmarks of Multimodal Large Language Models ☆90 · Updated this week
- Official repository of the MMDU dataset ☆86 · Updated 5 months ago
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆48 · Updated 8 months ago
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ☆199 · Updated 2 months ago
- Official repository for the paper "MG-LLaVA: Towards Multi-Granularity Visual Instruction Tuning" (https://arxiv.org/abs/2406.17770) ☆154 · Updated 5 months ago
- [ACL 2024 Oral] Tuning Large Multimodal Models for Videos using Reinforcement Learning from AI Feedback ☆63 · Updated 6 months ago
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs ☆108 · Updated 4 months ago
- [NeurIPS 2024] Dense Connector for MLLMs ☆157 · Updated 5 months ago
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆113 · Updated 3 months ago
- ✨✨ [ICLR 2025] MME-RealWorld: Could Your Multimodal LLM Challenge High-Resolution Real-World Scenarios that are Difficult for Humans? ☆97 · Updated 2 weeks ago
- EVE Series: Encoder-Free Vision-Language Models from BAAI ☆313 · Updated 3 weeks ago