Visual-Agent / DeepEyesV2
☆413 · Updated last week
Alternatives and similar repositories for DeepEyesV2
Users who are interested in DeepEyesV2 are comparing it to the libraries listed below.
- A Scientific Multimodal Foundation Model ☆608 · Updated 2 months ago
- This repository collects and organises state‑of‑the‑art papers on spatial reasoning for Multimodal Vision–Language Models (MVLMs). ☆244 · Updated this week
- The official repo for "Parallel-R1: Towards Parallel Thinking via Reinforcement Learning" ☆237 · Updated 2 weeks ago
- Official Repository for "Glyph: Scaling Context Windows via Visual-Text Compression" ☆521 · Updated last month
- Agent0 Series: Self-Evolving Agents from Zero Data ☆767 · Updated this week
- Code for R-Zero: Self-Evolving Reasoning LLM from Zero Data (https://www.arxiv.org/pdf/2508.05004) ☆687 · Updated last month
- 🛠️ DeepAgent: A General Reasoning Agent with Scalable Toolsets ☆837 · Updated last month
- 🚀ReVisual-R1 is a 7B open-source multimodal language model that follows a three-stage curriculum—cold-start pre-training, multimodal rei… ☆193 · Updated last month
- Code and implementations for the paper "AgentGym-RL: Training LLM Agents for Long-Horizon Decision Making through Multi-Turn Reinforcemen… ☆511 · Updated 2 months ago
- Chain-of-Agents: End-to-End Agent Foundation Models via Multi-Agent Distillation and Agentic RL. ☆497 · Updated 2 months ago
- Next paradigm for LLM Agent. Unify plan and action through recursive code generation for adaptive, human-like decision-making. ☆452 · Updated this week
- OpenThinkIMG is an end-to-end open-source framework that empowers LVLMs to think with images. ☆329 · Updated 6 months ago
- OpenCUA: Open Foundations for Computer-Use Agents ☆582 · Updated last week
- ☆843 · Updated 2 months ago
- Fully Open Framework for Democratized Multimodal Training ☆640 · Updated last week
- [EMNLP 2025] Awesome RAG Reasoning Resources ☆360 · Updated 4 months ago
- GELab: GUI Exploration Lab. One of the best GUI agent solutions in the galaxy, built by the StepFun-GELab team and powered by Step’s rese… ☆549 · Updated this week
- [NeurIPS 2025] Thinkless: LLM Learns When to Think ☆242 · Updated 2 months ago
- A general memory system for agents, powered by deep-research ☆660 · Updated this week
- An Open-Source Large-Scale Reinforcement Learning Project for Search Agents ☆504 · Updated last week
- OpenVLThinker: An Early Exploration to Vision-Language Reasoning via Iterative Self-Improvement ☆122 · Updated 4 months ago
- Latent Collaboration in Multi-Agent Systems (LatentMAS) ☆142 · Updated last week
- The official repository of "R-4B: Incentivizing General-Purpose Auto-Thinking Capability in MLLMs via Bi-Mode Integration" ☆125 · Updated 3 months ago
- A curated collection of papers, datasets, and resources on Scientific Datasets and Large Language Models (LLMs) ☆412 · Updated 2 months ago
- NEO Series: Native Vision-Language Models from First Principles ☆225 · Updated last month
- MiMo-VL ☆594 · Updated 3 months ago
- OmniVinci is an omni-modal LLM for joint understanding of vision, audio, and language. ☆591 · Updated last month
- Tiny Model, Big Logic: Diversity-Driven Optimization Elicits Large-Model Reasoning Ability in VibeThinker-1.5B ☆527 · Updated 2 weeks ago
- ☆174 · Updated 2 weeks ago