GAIR-NLP / thinking-with-generated-images
Doodling our way to AGI ✏️ 🖼️ 🧠
☆63 · Updated 3 weeks ago
Alternatives and similar repositories for thinking-with-generated-images
Users interested in thinking-with-generated-images are comparing it to the repositories listed below.
- ☆37 · Updated last month
- SophiaVL-R1: Reinforcing MLLMs Reasoning with Thinking Reward ☆54 · Updated this week
- [ArXiv] V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding ☆50 · Updated 6 months ago
- NoisyRollout: Reinforcing Visual Reasoning with Data Augmentation ☆69 · Updated 2 weeks ago
- ☆44 · Updated 5 months ago
- ☆37 · Updated 11 months ago
- VisRL: Intention-Driven Visual Perception via Reinforced Reasoning ☆29 · Updated last week
- Official repository of the paper: Envisioning Beyond the Pixels: Benchmarking Reasoning-Informed Visual Editing ☆60 · Updated 2 weeks ago
- ☆78 · Updated 5 months ago
- [NeurIPS'24] Official PyTorch implementation of Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment ☆56 · Updated 8 months ago
- Official implementation of MIA-DPO ☆58 · Updated 5 months ago
- MME-Unify: A Comprehensive Benchmark for Unified Multimodal Understanding and Generation Models ☆40 · Updated 2 months ago
- [ICLR2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models ☆78 · Updated 9 months ago
- Official repository for the CoMM dataset ☆36 · Updated 5 months ago
- GitHub repository for "Bring Reason to Vision: Understanding Perception and Reasoning through Model Merging" (ICML 2025) ☆60 · Updated 2 weeks ago
- A Collection of Papers on Diffusion Language Models ☆81 · Updated last week
- [SCIS 2024] The official implementation of the paper "MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Di… ☆54 · Updated 7 months ago
- ☆66 · Updated last week
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆76 · Updated last year
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆65 · Updated 11 months ago
- [MM2024, oral] "Self-Supervised Visual Preference Alignment" https://arxiv.org/abs/2404.10501 ☆55 · Updated 10 months ago
- HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data (accepted by CVPR 2024) ☆45 · Updated 11 months ago
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs ☆47 · Updated 3 months ago
- MM-PRM: Enhancing Multimodal Mathematical Reasoning with Scalable Step-Level Supervision ☆22 · Updated 3 weeks ago
- VideoHallucer: The first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) ☆32 · Updated 2 months ago
- MME-CoT: Benchmarking Chain-of-Thought in LMMs for Reasoning Quality, Robustness, and Efficiency ☆111 · Updated last month
- Fast-Slow Thinking for Large Vision-Language Model Reasoning ☆15 · Updated last month
- Code for "Strengthening Multimodal Large Language Model with Bootstrapped Preference Optimization" ☆55 · Updated 10 months ago
- ☆152 · Updated last week
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension ☆68 · Updated last year