AgibotTech / Genie-Envisioner
☆367 · Updated last week
Alternatives and similar repositories for Genie-Envisioner
Users interested in Genie-Envisioner are comparing it to the libraries listed below.
- Official PyTorch Implementation of Unified Video Action Model (RSS 2025) ☆319 · Updated 6 months ago
- Spirit-v1.5: A Robotic Foundation Model by Spirit AI ☆465 · Updated 2 weeks ago
- Video Prediction Policy: A Generalist Robot Policy with Predictive Visual Representations https://video-prediction-policy.github.io ☆339 · Updated 8 months ago
- InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy ☆344 · Updated 3 weeks ago
- RynnVLA-001: Using Human Demonstrations to Improve Robot Manipulation ☆280 · Updated last week
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model ☆336 · Updated 3 months ago
- The official implementation of "Soft-Prompted Transformer as Scalable Cross-Embodiment Vision-Language-Action Model" ☆461 · Updated last week
- [CVPR 2025] The official implementation of "Universal Actions for Enhanced Embodied Foundation Models" ☆226 · Updated 2 months ago
- Galaxea's first VLA release ☆503 · Updated 2 weeks ago
- Being-H0.5: Scaling Human-Centric Robot Learning for Cross-Embodiment Generalization ☆313 · Updated this week
- ☆426 · Updated 2 months ago
- Official implementation of "OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning" ☆207 · Updated 8 months ago
- 🔥 SpatialVLA: a spatial-enhanced vision-language-action model that is trained on 1.1 million real robot episodes. Accepted at RSS 2025. ☆637 · Updated 7 months ago
- F1: A Vision Language Action Model Bridging Understanding and Generation to Actions ☆156 · Updated 3 weeks ago
- VITRA: Scalable Vision-Language-Action Model Pretraining for Robotic Manipulation with Real-Life Human Activity Videos ☆273 · Updated last week
- A Pragmatic VLA Foundation Model ☆247 · Updated this week
- InternVLA-A1: Unifying Understanding, Generation, and Action for Robotic Manipulation ☆305 · Updated last week
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" ☆301 · Updated last year
- [ICLR 2026] Unified Vision-Language-Action Model ☆268 · Updated 3 months ago
- Single-file implementation to advance vision-language-action (VLA) models with reinforcement learning. ☆388 · Updated 2 months ago
- [ICLR 2025 Oral] Seer: Predictive Inverse Dynamics Models are Scalable Learners for Robotic Manipulation ☆277 · Updated 6 months ago
- [CoRL 2025] GraspVLA: a Grasping Foundation Model Pre-trained on Billion-scale Synthetic Action Data ☆326 · Updated last month
- Ctrl-World: A Controllable Generative World Model for Robot Manipulation ☆259 · Updated last month
- Official code for EnerVerse-AC: Envisioning Embodied Environments with Action Condition ☆144 · Updated 6 months ago
- [NeurIPS 2025 Spotlight] SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation ☆221 · Updated 7 months ago
- A curated list of large VLM-based VLA models for robotic manipulation. ☆328 · Updated last month
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆452 · Updated last year
- [RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions ☆968 · Updated 2 months ago
- A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation ☆401 · Updated 3 months ago
- Official code for EWMBench: Evaluating Scene, Motion, and Semantic Quality in Embodied World Models ☆96 · Updated 7 months ago