vlc-robot / hiveformer-corl
PyTorch implementation of the Hiveformer research paper
☆48 · Updated last year
Alternatives and similar repositories for hiveformer-corl
Users interested in hiveformer-corl are comparing it to the repositories listed below.
- Hierarchical Universal Language Conditioned Policies ☆72 · Updated last year
- ☆31 · Updated 7 months ago
- [ICRA 2023] Grounding Language with Visual Affordances over Unstructured Data ☆42 · Updated last year
- NeurIPS 2022 paper "VLMbench: A Compositional Benchmark for Vision-and-Language Manipulation" ☆91 · Updated this week
- Decomposing the Generalization Gap in Imitation Learning for Visual Robotic Manipulation (2023) ☆38 · Updated last year
- VP2 Benchmark (A Control-Centric Benchmark for Video Prediction, ICLR 2023) ☆26 · Updated 2 months ago
- Repository for a thorough empirical evaluation of pre-trained vision model performance across different downstream policy learning methods ☆23 · Updated last year
- Code for the paper "Towards More Generalizable One-Shot Visual Imitation Learning" (ICRA 2022) ☆20 · Updated 3 years ago
- Official repository for "VIP: Towards Universal Visual Reward and Representation via Value-Implicit Pre-Training" ☆155 · Updated last year
- ☆69 · Updated 6 months ago
- ☆84 · Updated 11 months ago
- ☆40 · Updated last year
- [CoRL 2023] XSkill: cross-embodiment skill discovery ☆60 · Updated last year
- ☆15 · Updated last year
- [ICRA 2025] RACER: Rich Language-Guided Failure Recovery Policies for Imitation Learning ☆29 · Updated 7 months ago
- Simulations used in "Concept2Robot: Learning Manipulation Concepts from Instructions and Human Demonstrations" ☆26 · Updated 2 years ago
- ☆32 · Updated last year
- A Benchmark for Evaluating Generalization in Robotic Manipulation ☆117 · Updated 2 months ago
- Code for https://jangirrishabh.github.io/lookcloser/ ☆37 · Updated 2 years ago
- Bottom-Up Skill Discovery from Unsegmented Demonstrations for Long-Horizon Robot Manipulation (BUDS) ☆49 · Updated 3 years ago
- ☆41 · Updated last year
- Code for the Behavior Retrieval paper ☆34 · Updated last year
- SpawnNet: Learning Generalizable Visuomotor Skills from Pre-trained Networks ☆36 · Updated last year
- Repository for the ICML 2023 paper "On Pre-Training for Visuo-Motor Control: Revisiting a Learning-from-Scratch Baseline" ☆23 · Updated last year
- Coarse-to-fine Q-Network ☆47 · Updated 9 months ago
- InterPreT: Interactive Predicate Learning from Language Feedback for Generalizable Task Planning (RSS 2024) ☆30 · Updated 10 months ago
- Official code repo for GENIMA ☆71 · Updated 7 months ago
- Voltron Evaluation: Diverse Evaluation Tasks for Robotic Representation Learning ☆36 · Updated last year
- Official code for the long-horizon language-conditioned robotic manipulation benchmark LoHoRavens ☆14 · Updated 7 months ago
- MOKA: Open-World Robotic Manipulation through Mark-based Visual Prompting (RSS 2024) ☆78 · Updated 9 months ago