zzzhhzzz / Ground-R1
☆32 · Updated 2 months ago
Alternatives and similar repositories for Ground-R1
Users interested in Ground-R1 are comparing it to the repositories listed below.
- [ICCV 2025 Oral] Token Activation Map to Visually Explain Multimodal LLMs ☆78 · Updated last month
- ☆122 · Updated 6 months ago
- 🔥 CVPR 2025 Multimodal Large Language Models Paper List ☆153 · Updated 6 months ago
- [NeurIPS 2024] Repo for the paper "ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models" ☆192 · Updated 2 months ago
- [ICLR 2025] Official code for the paper "MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs" ☆265 · Updated 5 months ago
- [ICLR 2025] Reconstructive Visual Instruction Tuning ☆116 · Updated 5 months ago
- [CVPR 2025] PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ☆126 · Updated 6 months ago
- [LLaVA-Video-R1] ✨ First Adaptation of R1 to LLaVA-Video (2025-03-18) ☆32 · Updated 4 months ago
- [CVPR 2025] Adaptive Keyframe Sampling for Long Video Understanding ☆104 · Updated last month
- Official PyTorch code of ReKV (ICLR 2025) ☆52 · Updated 6 months ago
- MADTP: Multimodal Alignment-Guided Dynamic Token Pruning for Accelerating Vision-Language Transformer ☆46 · Updated last year
- Official repository of the paper "A Glimpse to Compress: Dynamic Visual Token Pruning for Large Vision-Language Models" ☆68 · Updated 2 weeks ago
- [NeurIPS 2025 D&B Oral] Official repository of the paper "Envisioning Beyond the Pixels: Benchmarking Reasoning-Informed Visual Editing" ☆96 · Updated last week
- [CVPR 2025] RAP: Retrieval-Augmented Personalization ☆70 · Updated last month
- [NeurIPS 2025] Official repository for "FlowCut: Rethinking Redundancy via Information Flow for Efficient Vision-Language Models" ☆23 · Updated last week
- [ICLR 2025] See What You Are Told: Visual Attention Sink in Large Multimodal Models ☆49 · Updated 7 months ago
- [NeurIPS 2024] Visual Perception by Large Language Model's Weights ☆45 · Updated 5 months ago
- [NeurIPS 2025 Spotlight] Think or Not Think: A Study of Explicit Thinking in Rule-Based Visual Reinforcement Fine-Tuning ☆66 · Updated last week
- [ICCV 2025] Official implementation of LLaVA-KD: A Framework of Distilling Multimodal Large Language Models ☆97 · Updated 2 months ago
- [ICCV 2025] ONLY: One-Layer Intervention Sufficiently Mitigates Hallucinations in Large Vision-Language Models ☆38 · Updated 2 months ago
- [ICCV 2025] Official implementation of "InstructSeg: Unifying Instructed Visual Segmentation with Multi-modal Large Language Models" ☆47 · Updated 7 months ago
- ✨✨ [ICLR 2025] MME-RealWorld: Could Your Multimodal LLM Challenge High-Resolution Real-World Scenarios that are Difficult for Humans? ☆133 · Updated 6 months ago
- [CVPR 2025] FlashSloth: Lightning Multimodal Large Language Models via Embedded Visual Compression ☆49 · Updated 7 months ago
- Official repository for VisionZip (CVPR 2025) ☆351 · Updated 2 months ago
- SpaceR: The first MLLM empowered by SG-RLVR for video spatial reasoning ☆80 · Updated 2 months ago
- [CVPR 2024] GSVA: Generalized Segmentation via Multimodal Large Language Models ☆149 · Updated last year
- CrossLMM: Decoupling Long Video Sequences from LMMs via Dual Cross-Attention Mechanisms ☆24 · Updated 3 months ago
- Official code for NeurIPS 2025 paper "GRIT: Teaching MLLMs to Think with Images" ☆137 · Updated last month
- [NeurIPS 2024 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought … ☆374 · Updated 9 months ago
- The official implementation of "PixelThink: Towards Efficient Chain-of-Pixel Reasoning" (arXiv 2025) ☆37 · Updated 3 months ago