ligeng0197 / Awesome-Thinking-With-Images
Latest open-source papers on "thinking with images" (in the style of O3/O4-mini), covering training-free, SFT-based, and RL-enhanced methods for fine-grained visual understanding.
⭐107 · Updated 4 months ago
Alternatives and similar repositories for Awesome-Thinking-With-Images
Users interested in Awesome-Thinking-With-Images are comparing it to the repositories listed below.
- MME-CoT: Benchmarking Chain-of-Thought in LMMs for Reasoning Quality, Robustness, and Efficiency ⭐135 · Updated 4 months ago
- 🔥 CVPR 2025 Multimodal Large Language Models Paper List ⭐153 · Updated 9 months ago
- ⭐133 · Updated 9 months ago
- A Comprehensive Survey on Evaluating Reasoning Capabilities in Multimodal Large Language Models ⭐71 · Updated 9 months ago
- Collections of Papers and Projects for Multimodal Reasoning ⭐106 · Updated 8 months ago
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ⭐77 · Updated last year
- [NeurIPS 2024] Repo for the paper "ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models" ⭐201 · Updated 5 months ago
- ✨✨ [ICLR 2025] MME-RealWorld: Could Your Multimodal LLM Challenge High-Resolution Real-World Scenarios that are Difficult for Humans? ⭐150 · Updated 2 months ago
- The Next Step Forward in Multimodal LLM Alignment ⭐193 · Updated 7 months ago
- [NeurIPS 2025 Spotlight] Think or Not Think: A Study of Explicit Thinking in Rule-Based Visual Reinforcement Fine-Tuning ⭐78 · Updated 3 months ago
- [NeurIPS 2025] MINT-CoT: Enabling Interleaved Visual Tokens in Mathematical Chain-of-Thought Reasoning ⭐93 · Updated 3 months ago
- R1-Vision: Let's first take a look at the image ⭐48 · Updated 10 months ago
- ⭐153 · Updated 10 months ago
- [ICLR'25] Reconstructive Visual Instruction Tuning ⭐133 · Updated 8 months ago
- Code for DeCo: Decoupling token compression from semantic abstraction in multimodal large language models ⭐75 · Updated 5 months ago
- [ICLR'25] Official code for the paper "MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs" ⭐310 · Updated 8 months ago
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs ⭐157 · Updated last year
- Reinforcement Learning Tuning for VideoLLMs: Reward Design and Data Efficiency ⭐59 · Updated 6 months ago
- [NeurIPS 2024] This repo contains evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models" ⭐201 · Updated last year
- ⭐54 · Updated 3 weeks ago
- [NeurIPS'24 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought … ⭐414 · Updated last year
- [CVPR 2025 (Oral)] Mitigating Hallucinations in Large Vision-Language Models via DPO: On-Policy Data Hold the Key ⭐95 · Updated 2 weeks ago
- [NeurIPS 2024] Visual Perception by Large Language Model's Weights ⭐55 · Updated 8 months ago
- (CVPR 2025) PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ⭐135 · Updated 9 months ago
- [ICLR 2025] LLaVA-MoD: Making LLaVA Tiny via MoE-Knowledge Distillation ⭐214 · Updated 8 months ago
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ⭐131 · Updated 4 months ago
- R1-like Video-LLM for Temporal Grounding ⭐130 · Updated 6 months ago
- ⭐62 · Updated 7 months ago
- A Survey on Benchmarks of Multimodal Large Language Models ⭐145 · Updated 5 months ago
- [LLaVA-Video-R1] ✨ First Adaptation of R1 to LLaVA-Video (2025-03-18) ⭐36 · Updated 7 months ago