zzzhhzzz / Ground-R1
☆38 · Updated 6 months ago
Alternatives and similar repositories for Ground-R1
Users interested in Ground-R1 are comparing it to the repositories listed below:
- [NeurIPS 2025] Official repository for “FlowCut: Rethinking Redundancy via Information Flow for Efficient Vision-Language Models” ☆28 · Updated last month
- [ICLR'25] Reconstructive Visual Instruction Tuning ☆133 · Updated 9 months ago
- ☆132 · Updated 10 months ago
- [NeurIPS 2024] Repo for the paper "ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models" ☆205 · Updated 6 months ago
- [ICLR 2025] See What You Are Told: Visual Attention Sink in Large Multimodal Models ☆87 · Updated 11 months ago
- [CVPR 2025] Adaptive Keyframe Sampling for Long Video Understanding ☆167 · Updated last month
- 🔥 CVPR 2025 Multimodal Large Language Models Paper List ☆154 · Updated 10 months ago
- [CVPR 2025 Highlight] Your Large Vision-Language Model Only Needs A Few Attention Heads For Visual Grounding ☆58 · Updated 5 months ago
- [ICCV 2025 Oral] Token Activation Map to Visually Explain Multimodal LLMs ☆162 · Updated last month
- Official codebase for the paper "Latent Visual Reasoning" ☆98 · Updated 3 months ago
- ☆155 · Updated 11 months ago
- [CVPR 2025] Devils in Middle Layers of Large Vision-Language Models: Interpreting, Detecting and Mitigating Object Hallucinations via Att… ☆62 · Updated 3 months ago
- [EMNLP 2025 Main] Video Compression Commander: Plug-and-Play Inference Acceleration for Video Large Language Models ☆57 · Updated this week
- [CVPR 2025] RAP: Retrieval-Augmented Personalization ☆78 · Updated 2 months ago
- Code for the paper "CoReS: Orchestrating the Dance of Reasoning and Segmentation" ☆21 · Updated 2 months ago
- [CVPR 2025] PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ☆141 · Updated 10 months ago
- [NeurIPS 2024] Mitigating Object Hallucination via Concentric Causal Attention ☆66 · Updated 5 months ago
- MADTP: Multimodal Alignment-Guided Dynamic Token Pruning for Accelerating Vision-Language Transformer ☆49 · Updated last year
- [ICLR'25] Official code for the paper "MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs" ☆331 · Updated 9 months ago
- Reinforcement Learning Tuning for VideoLLMs: Reward Design and Data Efficiency ☆60 · Updated 7 months ago
- ☆106 · Updated 5 months ago
- [ICCV 2025] ONLY: One-Layer Intervention Sufficiently Mitigates Hallucinations in Large Vision-Language Models ☆47 · Updated 6 months ago
- [NeurIPS 2025] Reinforcing Spatial Reasoning in Vision-Language Models with Interwoven Thinking and Visual Drawing ☆90 · Updated 6 months ago
- Latest open-source "Thinking with images" (O3/O4-mini) papers, covering training-free, SFT-based, and RL-enhanced methods for "fine-grain… ☆110 · Updated 5 months ago
- [CVPR 2025 Oral] Mitigating Hallucinations in Large Vision-Language Models via DPO: On-Policy Data Hold the Key ☆102 · Updated 3 weeks ago
- ✨✨ [ICLR 2025] MME-RealWorld: Could Your Multimodal LLM Challenge High-Resolution Real-World Scenarios that are Difficult for Humans? ☆152 · Updated 3 months ago
- [ICCV 2025] Official implementation of "InstructSeg: Unifying Instructed Visual Segmentation with Multi-modal Large Language Models" ☆53 · Updated 11 months ago
- [ECCV 2024] API: Attention Prompting on Image for Large Vision-Language Models ☆110 · Updated last year
- [ACL 2025] PruneVid: Visual Token Pruning for Efficient Video Large Language Models ☆66 · Updated 8 months ago
- [NeurIPS 2025 Spotlight] Think or Not Think: A Study of Explicit Thinking in Rule-Based Visual Reinforcement Fine-Tuning ☆78 · Updated 4 months ago