[CVPR2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts
⭐336 · Jul 17, 2024 · Updated last year
Alternatives and similar repositories for ViP-LLaVA
Users interested in ViP-LLaVA are comparing it to the repositories listed below.
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model · ⭐281 · Jun 25, 2024 · Updated last year
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… · ⭐945 · Aug 5, 2025 · Updated 6 months ago
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills · ⭐763 · Feb 1, 2024 · Updated 2 years ago
- 【NeurIPS 2024】Dense Connector for MLLMs · ⭐181 · Oct 14, 2024 · Updated last year
- 【TMM 2025🔥】Mixture-of-Experts for Large Vision-Language Models · ⭐2,303 · Jul 15, 2025 · Updated 7 months ago
- [CVPR'24] HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(… · ⭐326 · Oct 14, 2025 · Updated 4 months ago
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models · ⭐155 · Apr 30, 2024 · Updated last year
- [ICLR2025] LLaVA-HR: High-Resolution Large Language-Vision Assistant · ⭐246 · Aug 14, 2024 · Updated last year
- Official implementation of "Why are Visually-Grounded Language Models Bad at Image Classification?" (NeurIPS 2024) · ⭐96 · Oct 19, 2024 · Updated last year
- Official repository for the paper PLLaVA · ⭐676 · Jul 28, 2024 · Updated last year
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024) · ⭐322 · Jan 20, 2025 · Updated last year
- [NeurIPS 2024] This repo contains evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models" · ⭐203 · Sep 26, 2024 · Updated last year
- LLaVA-UHD v3: Progressive Visual Compression for Efficient Native-Resolution Encoding in MLLMs · ⭐415 · Dec 20, 2025 · Updated 2 months ago
- An RLHF Infrastructure for Vision-Language Models · ⭐196 · Nov 15, 2024 · Updated last year
- (ECCVW 2025) GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest · ⭐551 · Jun 3, 2025 · Updated 8 months ago
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback · ⭐305 · Sep 11, 2024 · Updated last year
- [ECCV2024 Oral🔥] Official Implementation of "GiT: Towards Generalist Vision Transformer through Universal Language Interface" · ⭐360 · Jan 14, 2025 · Updated last year
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design. · ⭐1,986 · Nov 7, 2025 · Updated 3 months ago
- A family of lightweight multimodal models. · ⭐1,052 · Nov 18, 2024 · Updated last year
- 🔥🔥 LLaVA++: Extending LLaVA with Phi-3 and LLaMA-3 (LLaVA LLaMA-3, LLaVA Phi-3) · ⭐848 · Aug 5, 2025 · Updated 6 months ago
- Code/Data for the paper: "LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding" · ⭐269 · Jun 12, 2024 · Updated last year
- LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models (ECCV 2024) · ⭐859 · Jul 29, 2024 · Updated last year
- [CVPR2025] Code Release of F-LMM: Grounding Frozen Large Multimodal Models · ⭐108 · May 29, 2025 · Updated 9 months ago
- [CVPR2024] The code for "Osprey: Pixel Understanding with Visual Instruction Tuning" · ⭐837 · Aug 19, 2025 · Updated 6 months ago
- PG-Video-LLaVA: Pixel Grounding in Large Multimodal Video Models · ⭐262 · Aug 5, 2025 · Updated 6 months ago
- [NeurIPS'24 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought … · ⭐426 · Dec 22, 2024 · Updated last year
- When do we not need larger vision models? · ⭐413 · Feb 8, 2025 · Updated last year
- [ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning · ⭐296 · Mar 13, 2024 · Updated last year
- [CVPR 2024] VCoder: Versatile Vision Encoders for Multimodal Large Language Models · ⭐279 · Apr 17, 2024 · Updated last year
- PyTorch Implementation of "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs" · ⭐690 · Jan 7, 2024 · Updated 2 years ago
- [ECCV 2024] ControlCap: Controllable Region-level Captioning · ⭐80 · Oct 25, 2024 · Updated last year
- Emu Series: Generative Multimodal Models from BAAI · ⭐1,765 · Jan 12, 2026 · Updated last month
- SVIT: Scaling up Visual Instruction Tuning · ⭐166 · Jun 20, 2024 · Updated last year
- [ICML2024] Repo for the paper "Evaluating and Analyzing Relationship Hallucinations in Large Vision-Language Models" · ⭐22 · Jan 1, 2025 · Updated last year
- Aligning LMMs with Factually Augmented RLHF · ⭐392 · Nov 1, 2023 · Updated 2 years ago