InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy
☆374 · Updated Feb 11, 2026
Alternatives and similar repositories for InternVLA-M1
Users interested in InternVLA-M1 are comparing it to the libraries listed below.
- [ICLR 2026] InstructVLA: Vision-Language-Action Instruction Tuning from Understanding to Manipulation · ☆102 · Updated Jan 27, 2026
- F1: A Vision Language Action Model Bridging Understanding and Generation to Actions · ☆161 · Updated Jan 2, 2026
- [ICLR 2025 Oral] Seer: Predictive Inverse Dynamics Models are Scalable Learners for Robotic Manipulation · ☆280 · Updated Jul 8, 2025
- [CVPR 2025] Official implementation of "GenManip: LLM-driven Simulation for Generalizable Instruction-Following Manipulation" · ☆144 · Updated Jan 15, 2026
- VLAC: A Vision-Language-Action-Critic Model for Robotic Real-World Reinforcement Learning · ☆275 · Updated Jan 23, 2026
- [AAAI 2026 Oral] CronusVLA: Towards Efficient and Robust Manipulation via Multi-Frame Vision-Language-Action Modeling · ☆88 · Updated Jan 11, 2026
- ☆31 · Updated Dec 17, 2025
- [RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions · ☆990 · Updated Nov 19, 2025
- Official code of RDT 2 · ☆724 · Updated Feb 7, 2026
- Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success · ☆1,051 · Updated Sep 9, 2025
- StarVLA: A Lego-like Codebase for Vision-Language-Action Model Development · ☆1,169 · Updated this week
- Vlaser: Vision-Language-Action Model with Synergistic Embodied Reasoning · ☆41 · Updated this week
- OpenHelix: An Open-source Dual-System VLA Model for Robotic Manipulation · ☆346 · Updated Aug 27, 2025
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model · ☆338 · Updated Oct 3, 2025
- [ICLR 2026] SimpleVLA-RL: Scaling VLA Training via Reinforcement Learning · ☆1,422 · Updated Jan 6, 2026
- Evaluating and reproducing real-world robot manipulation policies (e.g., RT-1, RT-1-X, Octo) in simulation under common setups (e.g., Goo… · ☆980 · Updated Dec 20, 2025
- [ICLR 2026] Unified Vision-Language-Action Model · ☆277 · Updated Oct 15, 2025
- [Actively Maintained🔥] A list of Embodied AI papers accepted by top conferences (ICLR, NeurIPS, ICML, RSS, CoRL, ICRA, IROS, CVPR, ICCV,… · ☆483 · Updated Dec 1, 2025
- ☆16 · Updated Mar 26, 2025
- ☆68 · Updated Jan 8, 2025
- OpenVLA: An open-source vision-language-action model for robotic manipulation. · ☆5,317 · Updated Mar 23, 2025
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" · ☆300 · Updated Apr 22, 2024
- An all-in-one robot manipulation learning suite for policy model training and evaluation on various datasets and benchmarks. · ☆169 · Updated Oct 15, 2025
- RDT-1B: a Diffusion Foundation Model for Bimanual Manipulation · ☆1,622 · Updated Jan 21, 2026
- A simulation platform for versatile Embodied AI research and development. · ☆1,209 · Updated Sep 4, 2025
- Official implementation of the paper "NavDP: Learning Sim-to-Real Navigation Diffusion Policy with Privileged Information Guidance" · ☆529 · Updated Jan 12, 2026
- RoboVerse: Towards a Unified Platform, Dataset and Benchmark for Scalable and Generalizable Robot Learning · ☆1,670 · Updated this week
- 🎁 A collection of utilities for LeRobot. · ☆873 · Updated Feb 7, 2026
- StereoVLA: powered by stereo vision, supporting flexible deployment with high tolerance to camera pose variations. · ☆52 · Updated Jan 12, 2026
- InternRobotics' open platform for building generalized navigation foundation models. · ☆688 · Updated Feb 11, 2026
- InternVLA-A1: Unifying Understanding, Generation, and Action for Robotic Manipulation · ☆342 · Updated Feb 14, 2026
- Official code of "RoboOmni: Proactive Robot Manipulation in Omni-modal Context" · ☆82 · Updated Nov 17, 2025
- Building General-Purpose Robots Based on Embodied Foundation Models · ☆774 · Updated Feb 11, 2026
- Code of the paper "HyperVLA: Efficient Inference in Vision-Language-Action Models via Hypernetworks" · ☆22 · Updated Oct 8, 2025
- 1st-place solution of the 2025 BEHAVIOR Challenge · ☆247 · Updated Jan 24, 2026
- Official implementation of the paper (From reactive to cognitive: brain-inspired spatial intelligence for embodied… · ☆79 · Updated Nov 6, 2025
- 🔥 SpatialVLA: a spatially enhanced vision-language-action model trained on 1.1 million real robot episodes. Accepted at RSS 2025. · ☆657 · Updated Jun 23, 2025
- ✨✨ [NeurIPS 2025] Official implementation of BridgeVLA · ☆176 · Updated Sep 20, 2025
- A curated list of state-of-the-art research in embodied AI, focusing on vision-language-action (VLA) models, vision-language navigation (… · ☆2,607 · Updated this week