alibaba-damo-academy / RynnVLA-001
RynnVLA-001: A Vision-Language-Action Model Boosted by Generative Priors
☆153 · Updated 2 weeks ago
Alternatives and similar repositories for RynnVLA-001
Users interested in RynnVLA-001 are comparing it to the libraries listed below.
- WorldVLA: Towards Autoregressive Action World Model ☆363 · Updated last month
- ICCV2025 ☆114 · Updated this week
- Being-H0: Vision-Language-Action Pretraining from Large-Scale Human Videos ☆138 · Updated 3 weeks ago
- Official implementation of "OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning" ☆178 · Updated 3 months ago
- Unified Vision-Language-Action Model ☆181 · Updated last month
- The official repo for "SpatialBot: Precise Spatial Understanding with Vision Language Models." ☆297 · Updated 3 months ago
- [ICCV2025 Oral] Latent Motion Token as the Bridging Language for Learning Robot Manipulation from Videos ☆122 · Updated 3 months ago
- Video Prediction Policy: A Generalist Robot Policy with Predictive Visual Representations https://video-prediction-policy.github.io ☆252 · Updated 3 months ago
- 🦾 A Dual-System VLA with System2 Thinking ☆99 · Updated last week
- Official Code For VLA-OS. ☆101 · Updated 2 months ago
- ☆200 · Updated this week
- The official repo of the paper "RoboMamba: Multimodal State Space Model for Efficient Robot Reasoning and Manipulation" ☆132 · Updated 8 months ago
- Embodied-Reasoner: Synergizing Visual Search, Reasoning, and Action for Embodied Interactive Tasks ☆163 · Updated 3 months ago
- ☆55 · Updated 6 months ago
- Official implementation of "RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics" ☆138 · Updated last month
- ☆48 · Updated 5 months ago
- LLaVA-VLA: A Simple Yet Powerful Vision-Language-Action Model [Actively Maintained🔥] ☆134 · Updated this week
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning ☆73 · Updated 3 months ago
- DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge ☆158 · Updated this week
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model ☆277 · Updated 2 months ago
- [CVPR 2025] The official implementation of "Universal Actions for Enhanced Embodied Foundation Models" ☆197 · Updated 5 months ago
- Official PyTorch Implementation of Unified Video Action Model (RSS 2025) ☆259 · Updated last month
- Code for FLIP: Flow-Centric Generative Planning for General-Purpose Manipulation Tasks ☆72 · Updated 8 months ago
- A curated list of large VLM-based VLA models for robotic manipulation. ☆84 · Updated this week
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆363 · Updated 7 months ago
- [CVPR 2025] Lift3D Foundation Policy: Lifting 2D Large-Scale Pretrained Models for Robust 3D Robotic Manipulation ☆155 · Updated 2 months ago
- Galaxea's first VLA release ☆158 · Updated this week
- Official code of the paper "DeeR-VLA: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution" ☆104 · Updated 6 months ago
- ☆109 · Updated last month
- Paper list for the survey "A Survey on Vision-Language-Action Models: An Action Tokenization Perspective" ☆214 · Updated last month