zhaochen0110 / Awesome_Think_With_Images
Resources and paper list for "Thinking with Images for LVLMs". This repository accompanies our survey on how LVLMs can leverage visual information for complex reasoning, planning, and generation.
☆126 · Updated this week
Alternatives and similar repositories for Awesome_Think_With_Images
Users interested in Awesome_Think_With_Images are comparing it to the repositories listed below.
- [NeurIPS 2024] Repo for the paper "ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models" ☆172 · Updated last week
- ☆84 · Updated 2 months ago
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs ☆120 · Updated 7 months ago
- ☆147 · Updated 7 months ago
- ☆119 · Updated 3 months ago
- [CVPR '25] Interleaved-Modal Chain-of-Thought ☆45 · Updated last month
- [NeurIPS 2024] This repo contains evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models" ☆181 · Updated 8 months ago
- A Comprehensive Survey on Evaluating Reasoning Capabilities in Multimodal Large Language Models ☆62 · Updated 2 months ago
- [ICLR '25] Official code for the paper "MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs" ☆203 · Updated last month
- 🔥 CVPR 2025 Multimodal Large Language Models Paper List ☆142 · Updated 2 months ago
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆74 · Updated 11 months ago
- LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models ☆130 · Updated last year
- Beyond Hallucinations: Enhancing LVLMs through Hallucination-Aware Direct Preference Optimization ☆88 · Updated last year
- A RLHF Infrastructure for Vision-Language Models ☆176 · Updated 6 months ago
- [ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation ☆82 · Updated 5 months ago
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆63 · Updated 10 months ago
- MME-CoT: Benchmarking Chain-of-Thought in LMMs for Reasoning Quality, Robustness, and Efficiency ☆107 · Updated last month
- [CVPR 2024 Highlight] Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding ☆278 · Updated 7 months ago
- SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models ☆112 · Updated last month
- ☆74 · Updated 11 months ago
- Code for "Strengthening Multimodal Large Language Model with Bootstrapped Preference Optimization" ☆55 · Updated 9 months ago
- ☆24 · Updated 3 months ago
- [ICML 2024] Official implementation for "HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding" ☆88 · Updated 6 months ago
- TimeChat-Online: 80% Visual Tokens are Naturally Redundant in Streaming Videos ☆42 · Updated 2 weeks ago
- MMICL, a state-of-the-art VLM with in-context learning ability, from PKU ☆47 · Updated last year
- [CVPR 2025] VoCo-LLaMA: Official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models" ☆163 · Updated last week
- ☆100 · Updated last month
- 📖 A repository for organizing papers, code, and other resources related to unified multimodal models ☆218 · Updated last week
- Visualizing the attention of vision-language models ☆176 · Updated 3 months ago
- [LLaVA-Video-R1] ✨ First adaptation of R1 to LLaVA-Video (2025-03-18) ☆27 · Updated 3 weeks ago