WayneTomas / VPP-LLaVA
[TMM 2025] This is the official PyTorch code for our paper "Visual Position Prompt for MLLM based Visual Grounding".
☆22 · Updated 2 weeks ago
Alternatives and similar repositories for VPP-LLaVA
Users that are interested in VPP-LLaVA are comparing it to the libraries listed below
- [NeurIPS 2024] Vision Model Pre-training on Interleaved Image-Text Data via Latent Compression Learning ☆69 · Updated 6 months ago
- ☆91 · Updated last year
- ☆118 · Updated last year
- Latest open-source "Thinking with images" (O3/O4-mini) papers, covering training-free, SFT-based, and RL-enhanced methods for "fine-grain… ☆81 · Updated last month
- ✨✨ [ICLR 2025] MME-RealWorld: Could Your Multimodal LLM Challenge High-Resolution Real-World Scenarios that are Difficult for Humans? ☆129 · Updated 5 months ago
- Distilling Large Vision-Language Model with Out-of-Distribution Generalizability (ICCV 2023) ☆58 · Updated last year
- A collection of visual instruction tuning datasets. ☆76 · Updated last year
- [NeurIPS 2024] Classification Done Right for Vision-Language Pre-Training ☆212 · Updated 4 months ago
- MLLM-DataEngine: An Iterative Refinement Approach for MLLM ☆47 · Updated last year
- The official implementation of RAR ☆90 · Updated last year
- [CVPR 2024] LION: Empowering Multimodal Large Language Model with Dual-Level Visual Knowledge ☆150 · Updated last year
- Repository of paper: Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models ☆37 · Updated last year
- [NeurIPS'24] Official implementation of paper "Unveiling the Tapestry of Consistency in Large Vision-Language Models". ☆36 · Updated 9 months ago
- ☆152 · Updated 9 months ago
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆71 · Updated last year
- ☆86 · Updated last year
- Repository for the paper: Teaching Structured Vision & Language Concepts to Vision & Language Models ☆46 · Updated last year
- ☆85 · Updated 7 months ago
- SVIT: Scaling up Visual Instruction Tuning ☆164 · Updated last year
- [ICLR'25] Reconstructive Visual Instruction Tuning ☆102 · Updated 4 months ago
- The official GitHub page for "What Makes for Good Visual Instructions? Synthesizing Complex Visual Reasoning Instructions for Visual Ins… ☆19 · Updated last year
- Official repository for CoMM Dataset ☆45 · Updated 7 months ago
- ☆133 · Updated last year
- A temporary webpage for our survey on AGI for computer vision ☆119 · Updated last year
- ☆31 · Updated last year
- DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception ☆150 · Updated 8 months ago
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs ☆48 · Updated 5 months ago
- [ICLR 2025] LLaVA-MoD: Making LLaVA Tiny via MoE-Knowledge Distillation ☆190 · Updated 4 months ago
- [ECCV 2024] ControlCap: Controllable Region-level Captioning ☆78 · Updated 9 months ago
- ☆45 · Updated 7 months ago