Dobb·E: An open-source, general framework for learning household robotic manipulation
☆613, updated Oct 15, 2024
Alternatives and similar repositories for dobb-e
Users interested in dobb-e are comparing it to the repositories listed below.
- An open, modular framework for zero-shot, language-conditioned pick-and-drop tasks in arbitrary homes. (☆574, updated Mar 4, 2024)
- Robot Utility Models are trained on a diverse set of environments and objects, and can then be deployed in novel environments with novel … (☆243, updated Jan 19, 2026)
- Mobile manipulation research tools for roboticists (☆1,189, updated Jun 8, 2024)
- Octo is a transformer-based robot policy trained on a diverse mix of 800k robot trajectories. (☆1,552, updated Jul 31, 2024)
- Code for Teach a Robot to FISH: Versatile Imitation from One Minute of Demonstrations (☆77, updated Sep 14, 2023)
- Repository to train and evaluate RoboAgent (☆360, updated Apr 2, 2024)
- Generating Robotic Simulation Tasks via Large Language Models (☆347, updated Mar 23, 2024)
- Official implementation for the paper "EquiBot: SIM(3)-Equivariant Diffusion Policy for Generalizable and Data Efficient Learning" (☆169, updated Jul 2, 2024)
- Official code for "Behavior Generation with Latent Actions" (ICML 2024 Spotlight) (☆197, updated Feb 28, 2024)
- Official code for RVT-2 and RVT (☆398, updated Feb 14, 2025)
- PyTorch implementation of YAY Robot (☆169, updated Apr 7, 2024)
- Universal Manipulation Interface: In-The-Wild Robot Teaching Without In-The-Wild Robots (☆1,261, updated Jul 21, 2025)
- Code for Point Policy: Unifying Observations and Actions with Key Points for Robot Manipulation (☆90, updated Jul 21, 2025)
- Simulation environments used as part of the MimicGen project (☆548, updated Aug 16, 2025)
- PyTorch implementation of the models RT-1-X and RT-2-X from the paper "Open X-Embodiment: Robotic Learning Datasets and RT-X Models" (☆237, updated Feb 20, 2026)
- [ICCV 2023] ARNOLD: Language-Grounded Robot Manipulation with Continuous Object States in Realistic 3D Scenes (☆181, updated Mar 16, 2025)
- Official repository of Learning to Act from Actionless Videos through Dense Correspondences (☆248, updated Apr 25, 2024)
- A generative and self-guided robotic agent that endlessly proposes and masters new skills. (☆1,149, updated May 31, 2024)
- EdgeVLA: An open-source edge vision-language-action model for robotics. (☆102, updated Apr 29, 2025)
- Imitation learning algorithms with co-training for Mobile ALOHA: ACT, Diffusion Policy, VINN (☆3,562, updated May 15, 2024)
- [CoRL 2024] Open-TeleVision: Teleoperation with Immersive Active Visual Feedback (☆1,197, updated Sep 27, 2024)
- "MimicPlay: Long-Horizon Imitation Learning by Watching Human Play" code repository (☆306, updated Apr 23, 2024)
- Official implementation for VIOLA (☆120, updated Jun 18, 2023)
- Code for the paper "3D Diffuser Actor: Policy Diffusion with 3D Scene Representations" (☆384, updated Aug 17, 2024)
- Repo for Bring Your Own Vision-Language-Action (VLA) model, arXiv 2024 (☆36, updated Jan 22, 2025)
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" (☆300, updated Apr 22, 2024)
- 🤗 LeRobot: Making AI for Robotics more accessible with end-to-end learning (☆21,780, updated this week)
- Mobile ALOHA: Learning Bimanual Mobile Manipulation with Low-Cost Whole-Body Teleoperation (☆4,365, updated Jun 22, 2024)
- Official implementation of "Data Scaling Laws in Imitation Learning for Robotic Manipulation" (☆204, updated Nov 13, 2024)
- OpenVLA: An open-source vision-language-action model for robotic manipulation. (☆5,317, updated Mar 23, 2025)