ligeng0197 / Awesome-Thinking-With-Images
Latest open-source "Thinking with images" papers (in the style of O3/O4-mini), covering training-free, SFT-based, and RL-enhanced methods for fine-grained visual understanding.
☆95 · Updated 2 months ago
Alternatives and similar repositories for Awesome-Thinking-With-Images
Users interested in Awesome-Thinking-With-Images are comparing it to the repositories listed below.
- ☆125 · Updated 7 months ago
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆75 · Updated last year
- ✨✨ [ICLR 2025] MME-RealWorld: Could Your Multimodal LLM Challenge High-Resolution Real-World Scenarios that are Difficult for Humans? ☆134 · Updated this week
- Collections of Papers and Projects for Multimodal Reasoning ☆105 · Updated 6 months ago
- The official implementation of RAR ☆92 · Updated last year
- The Next Step Forward in Multimodal LLM Alignment ☆183 · Updated 5 months ago
- [NeurIPS 2025 Spotlight] Think or Not Think: A Study of Explicit Thinking in Rule-Based Visual Reinforcement Fine-Tuning ☆69 · Updated last month
- MME-CoT: Benchmarking Chain-of-Thought in LMMs for Reasoning Quality, Robustness, and Efficiency ☆131 · Updated 2 months ago
- [NeurIPS 2024] Repo for the paper 'ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models' ☆194 · Updated 3 months ago
- R1-Vision: Let's first take a look at the image ☆48 · Updated 8 months ago
- A Comprehensive Survey on Evaluating Reasoning Capabilities in Multimodal Large Language Models ☆69 · Updated 7 months ago
- 🔥 CVPR 2025 Multimodal Large Language Models Paper List ☆156 · Updated 7 months ago
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs ☆147 · Updated 11 months ago
- (CVPR 2025) PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ☆132 · Updated 7 months ago
- Evaluation code for Ref-L4, a new REC benchmark in the LMM era ☆49 · Updated 9 months ago
- Reinforcement Learning Tuning for VideoLLMs: Reward Design and Data Efficiency ☆55 · Updated 4 months ago
- [NeurIPS 2024] Visual Perception by Large Language Model's Weights ☆52 · Updated 6 months ago
- [ICLR 2025] LLaVA-MoD: Making LLaVA Tiny via MoE-Knowledge Distillation ☆204 · Updated 6 months ago
- [ICLR'25] Reconstructive Visual Instruction Tuning ☆121 · Updated 6 months ago
- [NeurIPS 2025] MINT-CoT: Enabling Interleaved Visual Tokens in Mathematical Chain-of-Thought Reasoning ☆85 · Updated last month
- Code for DeCo: Decoupling Token Compression from Semantic Abstraction in Multimodal Large Language Models ☆74 · Updated 3 months ago
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ☆125 · Updated 2 months ago
- ☆143 · Updated 8 months ago
- [CVPR 2025 Oral] Mitigating Hallucinations in Large Vision-Language Models via DPO: On-Policy Data Hold the Key ☆83 · Updated 3 weeks ago
- [LLaVA-Video-R1] ✨ First Adaptation of R1 to LLaVA-Video (2025-03-18) ☆33 · Updated 5 months ago
- Official repository for the CoMM Dataset ☆48 · Updated 9 months ago
- [ACM MM 2025] TimeChat-Online: 80% Visual Tokens are Naturally Redundant in Streaming Videos ☆86 · Updated last month
- [CVPR 2025 Highlight] Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models ☆224 · Updated 3 months ago
- [ICLR'25] Official code for the paper 'MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs' ☆278 · Updated 6 months ago
- [CVPR 2025] LamRA: Large Multimodal Model as Your Advanced Retrieval Assistant ☆167 · Updated 3 months ago