[NeurIPS 2025] Official code implementation of Perception R1: Pioneering Perception Policy with Reinforcement Learning
☆286 · Jul 15, 2025 · Updated 7 months ago
Alternatives and similar repositories for PR1
Users interested in PR1 are comparing it to the repositories listed below.
- [ICLR 2025] Official code implementation of Video-UTR: Unhackable Temporal Rewarding for Scalable Video MLLMs ☆61 · Feb 27, 2025 · Updated last year
- [NeurIPS 2025] The official repository for our paper, "Open Vision Reasoner: Transferring Linguistic Cognitive Behavior for Visual Reason… ☆154 · Sep 12, 2025 · Updated 5 months ago
- Official repository of 'Visual-RFT: Visual Reinforcement Fine-Tuning' & 'Visual-ARFT: Visual Agentic Reinforcement Fine-Tuning' ☆2,305 · Oct 29, 2025 · Updated 4 months ago
- EVE Series: Encoder-Free Vision-Language Models from BAAI ☆368 · Jul 24, 2025 · Updated 7 months ago
- ☆107 · Jun 10, 2025 · Updated 8 months ago
- ☆46 · Dec 30, 2024 · Updated last year
- [NeurIPS 2025] Official implementation (PyTorch) of "DeepVideo-R1" ☆31 · Feb 22, 2026 · Updated last week
- [ICLR 2026] The first paper to explore how to effectively use R1-like RL for MLLMs, introducing Vision-R1, a reasoning MLLM that… ☆773 · Jan 26, 2026 · Updated last month
- Project page for "Seg-Zero: Reasoning-Chain Guided Segmentation via Cognitive Reinforcement" ☆604 · Jan 17, 2026 · Updated last month
- Solve Visual Understanding with Reinforced VLMs ☆5,850 · Oct 21, 2025 · Updated 4 months ago
- ☆28 · Apr 8, 2025 · Updated 10 months ago
- Kimi-VL: Mixture-of-Experts Vision-Language Model for Multimodal Reasoning, Long-Context Understanding, and Strong Agent Capabilities ☆1,164 · Jul 15, 2025 · Updated 7 months ago
- ☆13 · Jul 20, 2024 · Updated last year
- MM-EUREKA: Exploring the Frontiers of Multimodal Reasoning with Rule-based Reinforcement Learning ☆770 · Sep 7, 2025 · Updated 5 months ago
- Official repo of the Griffon series, including v1 (ECCV 2024), v2 (ICCV 2025), G, and R, plus the RL tool Vision-R1 ☆248 · Aug 12, 2025 · Updated 6 months ago
- [NeurIPS 2025 Spotlight] Think or Not Think: A Study of Explicit Thinking in Rule-Based Visual Reinforcement Fine-Tuning ☆80 · Sep 19, 2025 · Updated 5 months ago
- Video-R1: Reinforcing Video Reasoning in MLLMs [🔥 the first paper to explore R1 for video] ☆831 · Dec 14, 2025 · Updated 2 months ago
- Eagle: Frontier Vision-Language Models with Data-Centric Strategies ☆930 · Oct 25, 2025 · Updated 4 months ago
- [NeurIPS 2024] Dense Connector for MLLMs ☆181 · Oct 14, 2024 · Updated last year
- VisionReasoner: Unified Reasoning-Integrated Visual Perception via Reinforcement Learning ☆321 · Feb 9, 2026 · Updated 3 weeks ago
- INF-LLaVA: Dual-perspective Perception for High-Resolution Multimodal Large Language Model ☆42 · Aug 4, 2024 · Updated last year
- [NeurIPS'24] MemVLT: Vision-Language Tracking with Adaptive Memory-based Prompts ☆18 · Oct 7, 2024 · Updated last year
- ☆10 · Apr 7, 2025 · Updated 10 months ago
- Official implementation of the paper "LTrack: Generalizing Multiple Object Tracking to Unseen Domains by Introducing Natural Language Rep… ☆12 · Jul 26, 2023 · Updated 2 years ago
- ✨ First open-source R1-like Video-LLM [2025/02/18] ☆381 · Feb 23, 2025 · Updated last year
- Explore the Multimodal "Aha Moment" on a 2B Model ☆623 · Mar 18, 2025 · Updated 11 months ago
- [ECCV 2024] Official code implementation of Merlin: Empowering Multimodal LLMs with Foresight Minds ☆96 · Jul 4, 2024 · Updated last year
- Code for the paper "Reinforced Vision Perception with Tools" ☆71 · Oct 3, 2025 · Updated 5 months ago
- [ICCV 2025] Dynamic-VLM ☆28 · Dec 16, 2024 · Updated last year
- Code for "Scaling Language-Free Visual Representation Learning" (WebSSL) ☆245 · Apr 24, 2025 · Updated 10 months ago
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding ☆66 · Jun 10, 2025 · Updated 8 months ago
- Video-Holmes: Can MLLM Think Like Holmes for Complex Video Reasoning? ☆87 · Jul 13, 2025 · Updated 7 months ago
- TinyLLaVA-Video-R1: Towards Smaller LMMs for Video Reasoning ☆114 · Dec 24, 2025 · Updated 2 months ago
- EasyR1: An Efficient, Scalable, Multi-Modality RL Training Framework based on veRL ☆4,649 · Updated this week
- ☆14 · Dec 18, 2024 · Updated last year
- [NeurIPS 2025 Spotlight 🔥] Official implementation of 🛸 "UFO: A Unified Approach to Fine-grained Visual Perception via Open-ended Langu… ☆268 · Nov 5, 2025 · Updated 3 months ago
- Seed1.5-VL, a vision-language foundation model designed to advance general-purpose multimodal understanding and reasoning, achieving stat… ☆1,548 · Jun 14, 2025 · Updated 8 months ago
- ☆1,137 · Nov 20, 2025 · Updated 3 months ago
- A fork to add multimodal model training to open-r1 ☆1,493 · Feb 8, 2025 · Updated last year