InternRobotics / InternVLA-A1
InternVLA-A1: Unifying Understanding, Generation, and Action for Robotic Manipulation
☆51 · Updated 2 months ago
Alternatives and similar repositories for InternVLA-A1
Users interested in InternVLA-A1 are comparing it to the repositories listed below.
- Visual Embodied Brain: Let Multimodal Large Language Models See, Think, and Control in Spaces ☆85 · Updated 5 months ago
- [NeurIPS 2025] Official implementation of "RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics" ☆195 · Updated 3 weeks ago
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning ☆76 · Updated 6 months ago
- Official code for "Embodied-R1: Reinforced Embodied Reasoning for General Robotic Manipulation" ☆102 · Updated 3 months ago
- ☆61 · Updated 9 months ago
- Unified Vision-Language-Action Model ☆226 · Updated last month
- LLaVA-VLA: A Simple Yet Powerful Vision-Language-Action Model [Actively Maintained🔥] ☆172 · Updated 3 weeks ago
- ICCV2025 ☆142 · Updated last week
- [ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction ☆110 · Updated 7 months ago
- ☆95 · Updated last month
- 🦾 A Dual-System VLA with System2 Thinking ☆116 · Updated 3 months ago
- ☆86 · Updated last year
- The official repo for the paper "VQ-VLA: Improving Vision-Language-Action Models via Scaling Vector-Quantized Action Tokenizers" (ICCV 2025) ☆94 · Updated last week
- VLA-RFT: Vision-Language-Action Models with Reinforcement Fine-Tuning ☆81 · Updated last month
- [ICCV2025] RoBridge: A Hierarchical Architecture Bridging Cognition and Execution for General Robotic Manipulation ☆33 · Updated 4 months ago
- [ICCV2025 Oral] Latent Motion Token as the Bridging Language for Learning Robot Manipulation from Videos ☆147 · Updated last month
- F1: A Vision Language Action Model Bridging Understanding and Generation to Actions ☆135 · Updated last month
- Official code of the paper "DeeR-VLA: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution" ☆117 · Updated 9 months ago
- ☆84 · Updated 6 months ago
- [NeurIPS 2025] DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge ☆224 · Updated 2 months ago
- The repo of the paper "RoboMamba: Multimodal State Space Model for Efficient Robot Reasoning and Manipulation" ☆139 · Updated 11 months ago
- Code for MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World ☆134 · Updated last year
- ☆60 · Updated 11 months ago
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆151 · Updated 7 months ago
- [RSS 2024] Learning Manipulation by Predicting Interaction ☆115 · Updated 4 months ago
- [NeurIPS 2024] CLOVER: Closed-Loop Visuomotor Control with Generative Expectation for Robotic Manipulation ☆129 · Updated 2 months ago
- A comprehensive collection of resources on dual-system VLA models, including papers, code, and related websites ☆83 · Updated 4 months ago
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation"☆44Updated last year
- Being-H0: Vision-Language-Action Pretraining from Large-Scale Human Videos☆179Updated 2 months ago
- InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy☆269Updated last week