GR1-Manipulation / GR-1
Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation"
☆44 · Updated last year
Alternatives and similar repositories for GR-1
Users interested in GR-1 are comparing it to the libraries listed below.
- [ICRA 2023] Grounding Language with Visual Affordances over Unstructured Data ☆45 · Updated 2 years ago
- ☆33 · Updated last year
- ☆79 · Updated last year
- ☆74 · Updated last year
- [IROS 2024 Oral] ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models ☆98 · Updated last year
- Code for the paper "Predicting Point Tracks from Internet Videos enables Diverse Zero-Shot Manipulation" ☆100 · Updated last year
- [ICRA 2025] In-Context Imitation Learning via Next-Token Prediction ☆102 · Updated 9 months ago
- ☆129 · Updated 2 years ago
- MOKA: Open-World Robotic Manipulation through Mark-based Visual Prompting (RSS 2024) ☆92 · Updated last year
- ☆44 · Updated last year
- Official implementation of GROOT, CoRL 2023 ☆66 · Updated 2 years ago
- ☆60 · Updated last year
- [ICRA 2025] RACER: Rich Language-Guided Failure Recovery Policies for Imitation Learning ☆40 · Updated last year
- ☆27 · Updated last year
- [MMM 2025 Best Paper] RoLD: Robot Latent Diffusion for Multi-Task Policy Modeling ☆22 · Updated last year
- [CoRL 2024] Official repo of `A3VLM: Actionable Articulation-Aware Vision Language Model` ☆121 · Updated last year
- ☆15 · Updated 9 months ago
- ☆100 · Updated 2 months ago
- ☆61 · Updated 10 months ago
- ☆94 · Updated 2 years ago
- InterPreT: Interactive Predicate Learning from Language Feedback for Generalizable Task Planning (RSS 2024) ☆31 · Updated last year
- Code release for paper "Autonomous Improvement of Instruction Following Skills via Foundation Models" | CoRL 2024 ☆76 · Updated 2 months ago
- ☆47 · Updated last year
- [ECCV 2024] 🎉 Official repository of "Robo-ABC: Affordance Generalization Beyond Categories via Semantic Correspondence for Robot Manipu…" ☆92 · Updated last year
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning ☆79 · Updated 7 months ago
- Repo for Bring Your Own Vision-Language-Action (VLA) model, arXiv 2024 ☆33 · Updated 11 months ago
- ☆89 · Updated last year
- NeurIPS 2022 paper "VLMbench: A Compositional Benchmark for Vision-and-Language Manipulation" ☆97 · Updated 7 months ago
- [CoRL 2023] Official PyTorch implementation of PolarNet: 3D Point Clouds for Language-Guided Robotic Manipulation ☆42 · Updated last year
- [CVPR 2025 🎉] Official implementation for paper "Point-Level Visual Affordance Guided Retrieval and Adaptation for Cluttered Garments Man…" ☆41 · Updated 9 months ago