ligeng0197 / Awesome-Thinking-With-Images
A curated list of recent open-source "Thinking with Images" (O3/O4-mini-style) papers, covering training-free, SFT-based, and RL-enhanced methods for fine-grained visual understanding.
☆52 · Updated this week
Alternatives and similar repositories for Awesome-Thinking-With-Images
Users interested in Awesome-Thinking-With-Images are comparing it to the repositories listed below.
- [ICLR'25] Reconstructive Visual Instruction Tuning ☆92 · Updated 2 months ago
- Think or Not Think: A Study of Explicit Thinking in Rule-Based Visual Reinforcement Fine-Tuning ☆49 · Updated last month
- ☆86 · Updated 3 months ago
- VCR-Bench: A Comprehensive Evaluation Framework for Video Chain-of-Thought Reasoning ☆31 · Updated 2 months ago
- The official implementation of RAR ☆88 · Updated last year
- MME-CoT: Benchmarking Chain-of-Thought in LMMs for Reasoning Quality, Robustness, and Efficiency ☆111 · Updated last month
- Reinforcement Learning Tuning for VideoLLMs: Reward Design and Data Efficiency ☆40 · Updated 2 weeks ago
- [CVPR 2025] LLaVA-ST: A Multimodal Large Language Model for Fine-Grained Spatial-Temporal Understanding ☆46 · Updated 2 weeks ago
- Official repo for "Streaming Video Understanding and Multi-round Interaction with Memory-enhanced Knowledge" (ICLR 2025) ☆55 · Updated 3 months ago
- ☆91 · Updated last year
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆65 · Updated 11 months ago
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ☆87 · Updated 2 weeks ago
- [CVPR 2025] Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training ☆47 · Updated 2 months ago
- MME-Unify: A Comprehensive Benchmark for Unified Multimodal Understanding and Generation Models ☆40 · Updated 2 months ago
- The official repository for the ACL 2025 paper "PruneVid: Visual Token Pruning for Efficient Video Large Language Models" ☆46 · Updated last month
- Repository of the paper "Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models" ☆37 · Updated last year
- FreeVA: Offline MLLM as Training-Free Video Assistant ☆60 · Updated last year
- Pink: Unveiling the Power of Referential Comprehension for Multi-modal LLMs ☆91 · Updated 5 months ago
- Mitigating Shortcuts in Visual Reasoning with Reinforcement Learning ☆23 · Updated last week
- Awesome papers on multi-modal LLMs with grounding ability ☆17 · Updated 10 months ago
- Official code for the paper "GRIT: Teaching MLLMs to Think with Images" ☆91 · Updated last week
- [CVPR 2024] The code of "UniPT: Universal Parallel Tuning for Transfer Learning with Efficient Parameter and Memory" ☆67 · Updated 8 months ago
- [ICLR 2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want ☆77 · Updated 2 weeks ago
- ☆115 · Updated 10 months ago
- VideoHallucer, the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) ☆32 · Updated 2 months ago
- ☆58 · Updated last year
- ☆64 · Updated 2 months ago
- [ECCV 2024] ControlCap: Controllable Region-level Captioning ☆77 · Updated 8 months ago
- [LLaVA-Video-R1] ✨ First adaptation of R1 to LLaVA-Video (2025-03-18) ☆29 · Updated last month
- Evaluation code for Ref-L4, a new REC benchmark in the LMM era ☆35 · Updated 5 months ago