eric-ai-lab / GRIT
Official code for NeurIPS 2025 paper "GRIT: Teaching MLLMs to Think with Images"
☆172 · Updated this week
Alternatives and similar repositories for GRIT
Users interested in GRIT are comparing it to the repositories listed below.
- [CVPR 2025] Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training ☆98 · Updated 5 months ago
- [ICLR2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want ☆93 · Updated last month
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ☆131 · Updated 5 months ago
- [NeurIPS 2025] Pixel-Level Reasoning Model trained with RL ☆260 · Updated 2 months ago
- ☆132 · Updated 9 months ago
- [ECCV 2024] ControlCap: Controllable Region-level Captioning ☆80 · Updated last year
- [ICCV 2025] VisRL: Intention-Driven Visual Perception via Reinforced Reasoning ☆42 · Updated 2 months ago
- [ICLR'25] Reconstructive Visual Instruction Tuning ☆133 · Updated 9 months ago
- Official implementation of "Traceable Evidence Enhanced Visual Grounded Reasoning: Evaluation and Methodology" ☆72 · Updated 2 months ago
- [CVPR 2025 🔥] A Large Multimodal Model for Pixel-Level Visual Grounding in Videos ☆94 · Updated 9 months ago
- [CVPR2025 Highlight] Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models ☆232 · Updated 2 months ago
- ✨✨ [ICLR 2025] MME-RealWorld: Could Your Multimodal LLM Challenge High-Resolution Real-World Scenarios that are Difficult for Humans? ☆151 · Updated 2 months ago
- Reinforcement Learning Tuning for VideoLLMs: Reward Design and Data Efficiency ☆60 · Updated 7 months ago
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context ☆168 · Updated last year
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆64 · Updated 5 months ago
- ☆124 · Updated last year
- [NeurIPS2024] Repo for the paper "ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models" ☆203 · Updated 5 months ago
- [CVPR2025] Code Release of F-LMM: Grounding Frozen Large Multimodal Models ☆109 · Updated 7 months ago
- Official implementation of MIA-DPO ☆70 · Updated 11 months ago
- [ICLR2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models ☆92 · Updated last year
- [NeurIPS'25] Time-R1: Post-Training Large Vision Language Model for Temporal Video Grounding ☆71 · Updated last month
- [CVPR 2025] RAP: Retrieval-Augmented Personalization ☆78 · Updated last month
- [ECCV 2024] Elysium: Exploring Object-level Perception in Videos via MLLM ☆86 · Updated last year
- The official implementation of RAR ☆92 · Updated last month
- [CVPR 2025] Adaptive Keyframe Sampling for Long Video Understanding ☆158 · Updated 3 weeks ago
- [NeurIPS 2024] One Token to Seg Them All: Language Instructed Reasoning Segmentation in Videos ☆143 · Updated last year
- [EMNLP-2025 Oral] ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration ☆69 · Updated last month
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding ☆66 · Updated 7 months ago
- TinyLLaVA-Video-R1: Towards Smaller LMMs for Video Reasoning ☆110 · Updated 3 weeks ago
- ☆96 · Updated 6 months ago