Visual-Agent / DeepEyesV2
☆507 · Updated last week
Alternatives and similar repositories for DeepEyesV2
Users interested in DeepEyesV2 are comparing it to the repositories listed below.
- A Scientific Multimodal Foundation Model ☆629 · Updated 4 months ago
- This repository collects and organises state‑of‑the‑art papers on spatial reasoning for Multimodal Vision–Language Models (MVLMs). ☆270 · Updated last week
- OpenThinkIMG is an end-to-end open-source framework that empowers LVLMs to think with images. ☆349 · Updated 8 months ago
- Step3-VL-10B: A compact yet frontier multimodal model achieving SOTA performance at the 10B scale, matching open-source models 10-20x its… ☆378 · Updated 2 weeks ago
- Fully Open Framework for Democratized Multimodal Training ☆710 · Updated last month
- Official Repository for "Glyph: Scaling Context Windows via Visual-Text Compression" ☆558 · Updated 3 months ago
- Official implementation of GDPO: Group reward-Decoupled Normalization Policy Optimization for Multi-reward RL Optimization ☆349 · Updated 3 weeks ago
- [NeurIPS 2025] Thinkless: LLM Learns When to Think ☆250 · Updated 4 months ago
- 🚀ReVisual-R1 is a 7B open-source multimodal language model that follows a three-stage curriculum—cold-start pre-training, multimodal rei… ☆194 · Updated last month
- Code for R-Zero: Self-Evolving Reasoning LLM from Zero Data (https://www.arxiv.org/pdf/2508.05004) ☆746 · Updated last month
- [ArXiv 2025] DiffusionVL: Translating Any Autoregressive Models into Diffusion Vision Language Models ☆127 · Updated last month
- A reproduction of the Deepseek-OCR model including training ☆206 · Updated 2 months ago
- DiffThinker: Towards Generative Multimodal Reasoning with Diffusion Models ☆166 · Updated last month
- The code and data of We-Math 2.0. ☆164 · Updated 5 months ago
- The official repo for "Parallel-R1: Towards Parallel Thinking via Reinforcement Learning" ☆255 · Updated 2 months ago
- The official repository of "R-4B: Incentivizing General-Purpose Auto-Thinking Capability in MLLMs via Bi-Mode Integration" ☆136 · Updated 5 months ago
- MiMo-VL ☆622 · Updated 5 months ago
- (ICLR 2026) An official implementation of "CapRL: Stimulating Dense Image Caption Capabilities via Reinforcement Learning" ☆182 · Updated last week
- Survey and paper list on efficiency-guided LLM agents (memory, tool learning, planning). ☆122 · Updated this week
- qqr is an RL training framework for open-ended agents. ☆198 · Updated 2 weeks ago
- MMSearch-R1 is an end-to-end RL framework that enables LMMs to perform on-demand, multi-turn search with real-world multimodal search too… ☆387 · Updated 5 months ago
- Official Code for "Mini-o3: Scaling Up Reasoning Patterns and Interaction Turns for Visual Search" ☆395 · Updated last week
- A curated collection of papers, datasets, and resources on Scientific Datasets and Large Language Models (LLMs) ☆433 · Updated 4 months ago
- [MTI-LLM@NeurIPS 2025] Official implementation of "PyVision: Agentic Vision with Dynamic Tooling." ☆147 · Updated 6 months ago
- Probing Scientific General Intelligence of LLMs with Scientist-Aligned Workflows ☆147 · Updated 2 weeks ago
- Agent0 Series: Self-Evolving Agents from Zero Data ☆1,021 · Updated last month
- This is the official Python version of Vision-Zero: Scalable VLM Self-Improvement via Strategic Gamified Self-Play. ☆110 · Updated 3 months ago
- OmniVinci is an omni-modal LLM for joint understanding of vision, audio, and language. ☆631 · Updated 3 months ago
- [CVPR2025 Highlight] Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models ☆233 · Updated 2 months ago
- OpenVLThinker: An Early Exploration to Vision-Language Reasoning via Iterative Self-Improvement ☆129 · Updated 6 months ago