PhoenixZ810 / MG-LLaVA
Official repository for the paper "MG-LLaVA: Towards Multi-Granularity Visual Instruction Tuning" (https://arxiv.org/abs/2406.17770).
Related projects
Alternatives and complementary repositories for MG-LLaVA
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts
- Diffusion Feedback Helps CLIP See Better
- [NeurIPS 2024] Dense Connector for MLLMs
- [NeurIPS 2024] Evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models?"
- [CVPR 2024] PixelLM: an effective and efficient LMM for pixel-level reasoning and understanding
- LLaVA-HR: High-Resolution Large Language-Vision Assistant
- [NeurIPS'24 Spotlight] EVE: Encoder-Free Vision-Language Models
- Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want
- [NeurIPS 2024] Repository for the paper "ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models"
- PyTorch code for the paper "From CLIP to DINO: Visual Encoders Shout in Multi-modal Large Language Models"
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model
- [CVPR 2024] Official implementation of "ViTamin: Designing Scalable Vision Models in the Vision-language Era"
- [TMLR 2024] Official code for the paper "Mantis: Multi-Image Instruction Tuning"
- Official implementation of the paper "Law of Vision Representation in MLLMs"
- Pink: Unveiling the Power of Referential Comprehension for Multi-modal LLMs
- [ECCV 2024] VISA: Reasoning Video Object Segmentation via Large Language Model
- [ECCV 2024] ShareGPT4V: Improving Large Multi-modal Models with Better Captions
- Beyond LLaVA-HD: Diving into High-Resolution Large Multimodal Models
- PG-Video-LLaVA: Pixel Grounding in Large Multimodal Video Models
- [ECCV 2024] The official repository of the Griffon series
- [TMLR] Public code for the paper "A Single Transformer for Scalable Vision-Language Modeling"
- [CVPR 2024] Code for running inference and training for "Segment and Caption Anything" (SCA)
- [CVPR 2024] CapsFusion: Rethinking Image-Text Data at Scale
- SVIT: Scaling up Visual Instruction Tuning
- DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception
- LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs
- [NeurIPS'24 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought …