PhoenixZ810 / MG-LLaVA
Official repository for the paper "MG-LLaVA: Towards Multi-Granularity Visual Instruction Tuning" (https://arxiv.org/abs/2406.17770).
☆159 · Updated Sep 27, 2024
Alternatives and similar repositories for MG-LLaVA
Users interested in MG-LLaVA are comparing it to the repositories listed below.
- 【NeurIPS 2024】Dense Connector for MLLMs ☆180 · Updated Oct 14, 2024
- DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception ☆159 · Updated Dec 6, 2024
- ☆124 · Updated Jul 29, 2024
- [NIPS 2025 DB Oral] Official repository of the paper: Envisioning Beyond the Pixels: Benchmarking Reasoning-Informed Visual Editing ☆140 · Updated this week
- ✨✨Beyond LLaVA-HD: Diving into High-Resolution Large Multimodal Models ☆164 · Updated Dec 26, 2024
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆43 · Updated Jun 28, 2024
- [ICLR 2025] LLaVA-HR: High-Resolution Large Language-Vision Assistant ☆246 · Updated Aug 14, 2024
- EVE Series: Encoder-Free Vision-Language Models from BAAI ☆367 · Updated Jul 24, 2025
- Official repo for the OMG-LLaVA and OMG-Seg codebase [CVPR-24 and NeurIPS-24] ☆1,342 · Updated Oct 15, 2025
- Code for the ICLR 2025 paper: Towards Semantic Equivalence of Tokenization in Multimodal LLM ☆79 · Updated Apr 19, 2025
- The official implementation of ADDP (ICLR 2024) ☆12 · Updated Mar 27, 2024
- Visual self-questioning for large vision-language assistants ☆45 · Updated Jul 23, 2025
- [TCSVT] State-of-the-art open-vocabulary detector on COCO/LVIS/V3Det ☆32 · Updated Jun 3, 2025
- [COLM'25] Official implementation of the Law of Vision Representation in MLLMs ☆176 · Updated Oct 6, 2025
- The code for "TokenPacker: Efficient Visual Projector for Multimodal LLM", IJCV 2025 ☆276 · Updated May 26, 2025
- [TACL] Do Vision and Language Models Share Concepts? A Vector Space Alignment Study ☆16 · Updated Nov 22, 2024
- Official code for the paper "Mantis: Multi-Image Instruction Tuning" [TMLR 2024 Best Paper] ☆239 · Updated Jan 3, 2026
- [ICLR 2024 & ECCV 2024] The All-Seeing Projects: Towards Panoptic Visual Recognition & Understanding and General Relation Comprehension of … ☆504 · Updated Aug 9, 2024
- Official repository of the ACL 2025 paper OmniAlign-V: Towards Enhanced Alignment of MLLMs with Human Preference ☆145 · Updated Mar 2, 2025
- [CVPR 2025 🔥] A Large Multimodal Model for Pixel-Level Visual Grounding in Videos ☆96 · Updated Apr 14, 2025
- LLaVA-UHD v3: Progressive Visual Compression for Efficient Native-Resolution Encoding in MLLMs ☆413 · Updated Dec 20, 2025
- When do we not need larger vision models? ☆412 · Updated Feb 8, 2025
- [NeurIPS 2024] Official implementation of the paper "Interfacing Foundation Models' Embeddings" ☆129 · Updated Aug 21, 2024
- Official implementation of the ICCV 2023 paper SegPrompt: Boosting Open-World Segmentation via Category-level Prompt Learning ☆111 · Updated May 28, 2025
- [NeurIPS 2024] Vision Model Pre-training on Interleaved Image-Text Data via Latent Compression Learning ☆72 · Updated Feb 11, 2025
- Official code for the paper: Auto Cherry-Picker: Learning from High-quality Generative Data Driven by Language ☆29 · Updated Feb 28, 2025
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design ☆1,985 · Updated Nov 7, 2025
- Official repo of the Griffon series, including v1 (ECCV 2024), v2 (ICCV 2025), G, and R, plus the RL tool Vision-R1 ☆249 · Updated Aug 12, 2025
- [ICCV 2025] GroundingSuite: Measuring Complex Multi-Granular Pixel Grounding ☆73 · Updated Jun 26, 2025
- An RLHF Infrastructure for Vision-Language Models ☆196 · Updated Nov 15, 2024
- [NeurIPS 2024] A Unified Pixel-level Vision LLM for Understanding, Generating, Segmenting, Editing ☆579 · Updated Oct 20, 2024
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆943 · Updated Aug 5, 2025
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆123 · Updated Nov 25, 2024
- [ECCV 2024] The official code of the paper "Open-Vocabulary SAM" ☆1,028 · Updated Aug 4, 2025
- Long Context Transfer from Language to Vision ☆398 · Updated Mar 18, 2025
- [ICLR'26] Traceable Evidence Enhanced Visual Grounded Reasoning: Evaluation and Methodology ☆73 · Updated Jan 26, 2026
- ☆4,552 · Updated Sep 14, 2025
- [CVPR 2025 Highlight] Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models ☆233 · Updated Nov 7, 2025
- [CVPR 2024] PixelLM is an effective and efficient LMM for pixel-level reasoning and understanding ☆253 · Updated Feb 11, 2025