InternRobotics / InternVLA-M1
InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy
☆335 · Updated 2 weeks ago
Alternatives and similar repositories for InternVLA-M1
Users interested in InternVLA-M1 are comparing it to the repositories listed below.
- [NeurIPS 2025 Spotlight] SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation ☆221 · Updated 6 months ago
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model ☆333 · Updated 3 months ago
- Unified Vision-Language-Action Model ☆260 · Updated 3 months ago
- Official implementation of "OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning" ☆206 · Updated 7 months ago
- [NeurIPS 2025] DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge ☆279 · Updated last week
- ICCV2025 ☆146 · Updated last month
- Single-file implementation to advance vision-language-action (VLA) models with reinforcement learning. ☆382 · Updated 2 months ago
- InstructVLA: Vision-Language-Action Instruction Tuning from Understanding to Manipulation ☆90 · Updated 3 months ago
- [ICLR 2025 Oral] Seer: Predictive Inverse Dynamics Models are Scalable Learners for Robotic Manipulation ☆273 · Updated 6 months ago
- LLaVA-VLA: A Simple Yet Powerful Vision-Language-Action Model [Actively Maintained🔥] ☆174 · Updated 2 months ago
- Official repository of LIBERO-plus, a generalized benchmark for in-depth robustness analysis of vision-language-action models. ☆181 · Updated last month
- [NeurIPS 2025] Official implementation of "RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics" ☆220 · Updated last month
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLAs, embodied agents, and VLMs. ☆365 · Updated 2 months ago
- Official code for "Embodied-R1: Reinforced Embodied Reasoning for General Robotic Manipulation" ☆117 · Updated 4 months ago
- RynnVLA-001: Using Human Demonstrations to Improve Robot Manipulation ☆277 · Updated last month
- A curated list of large VLM-based VLA models for robotic manipulation. ☆313 · Updated 3 weeks ago
- Official code for VLA-OS. ☆132 · Updated 6 months ago
- OpenHelix: An Open-source Dual-System VLA Model for Robotic Manipulation ☆337 · Updated 4 months ago
- Official PyTorch Implementation of Unified Video Action Model (RSS 2025) ☆315 · Updated 5 months ago
- StarVLA: A Lego-like Codebase for Vision-Language-Action Model Development ☆806 · Updated this week
- The official implementation of "Soft-Prompted Transformer as Scalable Cross-Embodiment Vision-Language-Action Model" ☆436 · Updated this week
- F1: A Vision-Language-Action Model Bridging Understanding and Generation to Actions ☆154 · Updated 2 weeks ago
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆155 · Updated 9 months ago
- [AAAI 2026 Oral] CronusVLA: Towards Efficient and Robust Manipulation via Multi-Frame Vision-Language-Action Modeling ☆77 · Updated last week
- EVOLVE-VLA: Test-Time Training from Environment Feedback for Vision-Language-Action Models ☆58 · Updated last month
- ☆360 · Updated this week
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆438 · Updated 11 months ago
- 🔥 SpatialVLA: a spatially enhanced vision-language-action model trained on 1.1 million real robot episodes. Accepted at RSS 2025. ☆632 · Updated 6 months ago
- [ICCV 2025 Oral] Latent Motion Token as the Bridging Language for Learning Robot Manipulation from Videos ☆159 · Updated 3 months ago
- A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation ☆397 · Updated 2 months ago