bytedance / GR-1
Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation"
☆257 · Updated last year
Alternatives and similar repositories for GR-1
Users interested in GR-1 are comparing it to the repositories listed below.
- [ICLR 2025 Oral] Seer: Predictive Inverse Dynamics Models are Scalable Learners for Robotic Manipulation ☆183 · Updated last month
- Reimplementation of GR-1, a generalized policy for robotics manipulation. ☆137 · Updated 8 months ago
- ☆349 · Updated 4 months ago
- [RSS 2024] Code for "Multimodal Diffusion Transformer: Learning Versatile Behavior from Multimodal Goals" for CALVIN experiments with pre… ☆140 · Updated 7 months ago
- A simple testbed for robotics manipulation policies ☆90 · Updated last month
- ☆181 · Updated last year
- Official codebase for "Any-point Trajectory Modeling for Policy Learning" ☆219 · Updated 9 months ago
- [CVPR 2025] The official implementation of "Universal Actions for Enhanced Embodied Foundation Models" ☆169 · Updated 2 months ago
- Official implementation of GR-MG ☆80 · Updated 4 months ago
- A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation ☆265 · Updated last month
- Official implementation of "Data Scaling Laws in Imitation Learning for Robotic Manipulation" ☆172 · Updated 6 months ago
- Code for the paper "3D Diffuser Actor: Policy Diffusion with 3D Scene Representations" ☆318 · Updated 9 months ago
- DROID Policy Learning and Evaluation ☆194 · Updated last month
- Official PyTorch Implementation of Unified Video Action Model (RSS 2025) ☆199 · Updated 2 months ago
- Embodied Chain of Thought: a robotic policy that reasons to solve the task. ☆254 · Updated last month
- GraspVLA: a Grasping Foundation Model Pre-trained on Billion-scale Synthetic Action Data ☆107 · Updated 3 weeks ago
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆283 · Updated 4 months ago
- ☆62 · Updated 3 months ago
- DexGraspVLA: A Vision-Language-Action Framework Towards General Dexterous Grasping ☆262 · Updated last week
- OpenVLA: An open-source vision-language-action model for robotic manipulation. ☆197 · Updated 2 months ago
- [RSS 2025] Official implementation of DemoGen: Synthetic Demonstration Generation for Data-Efficient Visuomotor Policy Learning ☆152 · Updated last month
- Official repository of Learning to Act from Actionless Videos through Dense Correspondences. ☆216 · Updated last year
- [RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions ☆351 · Updated this week
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model ☆222 · Updated last month
- 🔥 SpatialVLA: a spatial-enhanced vision-language-action model trained on 1.1 million real robot episodes. Accepted at RSS 2025. ☆326 · Updated this week
- [NeurIPS 2024] CLOVER: Closed-Loop Visuomotor Control with Generative Expectation for Robotic Manipulation ☆115 · Updated 5 months ago
- Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success ☆415 · Updated last month
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLAs, embodied agents, and VLMs. ☆227 · Updated this week
- 🚀 A collection of utilities and tools for LeRobot. ☆208 · Updated last week
- ☆77 · Updated last month