wangqinsi1 / Vision-Zero
This is the official Python version of Vision-Zero: Scalable VLM Self-Improvement via Strategic Gamified Self-Play.
☆105 · Updated 2 months ago
Alternatives and similar repositories for Vision-Zero
Users interested in Vision-Zero are comparing it to the repositories listed below.
- OpenVLThinker: An Early Exploration to Vision-Language Reasoning via Iterative Self-Improvement ☆123 · Updated 5 months ago
- [MTI-LLM@NeurIPS 2025] Official implementation of "PyVision: Agentic Vision with Dynamic Tooling." ☆141 · Updated 5 months ago
- Geometric-Mean Policy Optimization ☆95 · Updated last month
- The official repository of "R-4B: Incentivizing General-Purpose Auto-Thinking Capability in MLLMs via Bi-Mode Integration" ☆130 · Updated 3 months ago
- X-Reasoner: Towards Generalizable Reasoning Across Modalities and Domains ☆50 · Updated 7 months ago
- [NeurIPS 2025] Thinkless: LLM Learns When to Think ☆246 · Updated 3 months ago
- [ACL 2025] A Generalizable and Purely Unsupervised Self-Training Framework ☆71 · Updated 6 months ago
- The code and data of We-Math 2.0 ☆163 · Updated 4 months ago
- A reproduction of the Deepseek-OCR model, including training ☆200 · Updated last month
- 🚀ReVisual-R1 is a 7B open-source multimodal language model that follows a three-stage curriculum—cold-start pre-training, multimodal rei… ☆191 · Updated 2 weeks ago
- ☆68 · Updated 3 months ago
- [NeurIPS 2025] Elevating Visual Perception in Multimodal LLMs with Visual Embedding Distillation ☆68 · Updated 2 months ago
- ☆84 · Updated 8 months ago
- ☆363 · Updated last month
- Demystifying Reinforcement Learning in Agentic Reasoning ☆137 · Updated 2 months ago
- Reinforcement Learning of Vision Language Models with Self Visual Perception Reward ☆154 · Updated 3 months ago
- ☆41 · Updated 7 months ago
- [EMNLP 2025 Main] AlphaOne: Reasoning Models Thinking Slow and Fast at Test Time ☆88 · Updated 6 months ago
- ☆140 · Updated 2 months ago
- ☆105 · Updated 6 months ago
- ☆226 · Updated 10 months ago
- Official implementation of "Grasp Any Region: Towards Precise, Contextual Pixel Understanding for Multimodal LLMs" ☆95 · Updated last month
- ☆56 · Updated last year
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models ☆227 · Updated last month
- LIMI: Less is More for Agency ☆155 · Updated 2 months ago
- ☆148 · Updated 3 months ago
- Probing Scientific General Intelligence of LLMs with Scientist-Aligned Workflows ☆119 · Updated this week
- Ring-V2 is a reasoning MoE LLM provided and open-sourced by InclusionAI ☆86 · Updated 2 months ago
- Visual Planning: Let's Think Only with Images ☆289 · Updated 7 months ago
- [NeurIPS 2024] A task generation and model evaluation system for multimodal language models ☆73 · Updated last year