mm-vl / ULM-R1
Co-Reinforcement Learning for Unified Multimodal Understanding and Generation
☆30 · Updated 3 months ago
Alternatives and similar repositories for ULM-R1
Users interested in ULM-R1 are comparing it to the repositories listed below
- [NeurIPS 2025] NoisyRollout: Reinforcing Visual Reasoning with Data Augmentation ☆95 · Updated last month
- Official implementation of MIA-DPO ☆66 · Updated 9 months ago
- MME-Unify: A Comprehensive Benchmark for Unified Multimodal Understanding and Generation Models ☆41 · Updated 6 months ago
- MM-PRM: Enhancing Multimodal Mathematical Reasoning with Scalable Step-Level Supervision ☆25 · Updated 5 months ago
- GitHub repository for "Bring Reason to Vision: Understanding Perception and Reasoning through Model Merging" (ICML 2025) ☆80 · Updated last month
- [NeurIPS 2025] MINT-CoT: Enabling Interleaved Visual Tokens in Mathematical Chain-of-Thought Reasoning ☆86 · Updated last month
- (ICLR 2025 Spotlight) Official code repository for Interleaved Scene Graph. ☆29 · Updated 3 months ago
- Official repository of the video reasoning benchmark MMR-V. Can Your MLLMs "Think with Video"? ☆36 · Updated 4 months ago
- A Self-Training Framework for Vision-Language Reasoning ☆85 · Updated 9 months ago
- Official repo for "PAPO: Perception-Aware Policy Optimization for Multimodal Reasoning" ☆94 · Updated 2 months ago
- MME-CoT: Benchmarking Chain-of-Thought in LMMs for Reasoning Quality, Robustness, and Efficiency ☆133 · Updated 3 months ago
- [ArXiv] V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding ☆57 · Updated 10 months ago
- [NeurIPS 2025 Spotlight] Think or Not Think: A Study of Explicit Thinking in Rule-Based Visual Reinforcement Fine-Tuning ☆71 · Updated last month
- This repository is the official implementation of "Look-Back: Implicit Visual Re-focusing in MLLM Reasoning". ☆68 · Updated 4 months ago
- A Massive Multi-Discipline Lecture Understanding Benchmark ☆30 · Updated last week
- Doodling our way to AGI ✏️ 🖼️ 🧠 ☆109 · Updated 5 months ago
- Official repository of 'ScaleCap: Inference-Time Scalable Image Captioning via Dual-Modality Debiasing' ☆57 · Updated 4 months ago
- G1: Bootstrapping Perception and Reasoning Abilities of Vision-Language Model via Reinforcement Learning ☆87 · Updated 5 months ago
- [Blog 1] Recording a bug in grpo_trainer found in some R1 projects ☆21 · Updated 8 months ago
- ☆45 · Updated 10 months ago
- [CVPR 2025] Interleaved-Modal Chain-of-Thought ☆90 · Updated this week
- [ICLR 2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models ☆88 · Updated last year
- Fast-Slow Thinking for Large Vision-Language Model Reasoning ☆19 · Updated 6 months ago
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs ☆49 · Updated 8 months ago
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆80 · Updated 2 weeks ago
- ☆98 · Updated 10 months ago
- Code for DeCo: Decoupling token compression from semantic abstraction in multimodal large language models ☆74 · Updated 3 months ago
- The official code of "VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning" [NeurIPS 2025] ☆164 · Updated 5 months ago
- ☆14 · Updated last week
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆75 · Updated last year