☆63 · Updated Feb 23, 2025
Alternatives and similar repositories for BUMBLE
Users interested in BUMBLE are comparing it to the repositories listed below.
- OKAMI: Teaching Humanoid Robots Manipulation Skills through Single Video Imitation (☆32, updated Jun 18, 2025)
- Official PyTorch implementation of Doduo: Dense Visual Correspondence from Unsupervised Semantic-Aware Flow (☆45, updated Feb 1, 2024)
- ☆27 (updated Mar 6, 2025)
- Official codebase for LEGATO (Learning with a Handheld Grasping Tool) (☆72, updated Aug 19, 2025)
- TeleMoMa: A Modular and Versatile Teleoperation System for Mobile Manipulation (☆80, updated Jan 27, 2026)
- Learning Hierarchical Interactive Multi-Object Search for Mobile Manipulation. Project website: http://himos.cs.uni-freiburg.de (☆21, updated Oct 21, 2024)
- [CoRL 2024] Im2Flow2Act: Flow as the Cross-domain Manipulation Interface (☆150, updated Oct 17, 2024)
- Code for "DrS: Learning Reusable Dense Rewards for Multi-Stage Tasks" (☆22, updated Apr 26, 2024)
- Code for BAKU: An Efficient Transformer for Multi-Task Policy Learning (☆129, updated Mar 16, 2025)
- Language-based navigation project (☆22, updated Feb 9, 2024)
- ☆74 (updated Aug 29, 2025)
- ☆46 (updated Jan 11, 2024)
- Agent-to-Sim: Learning Interactive Behavior from Casual Videos (☆48, updated Oct 16, 2024)
- ☆10 (updated Jul 5, 2024)
- ☆12 (updated Mar 17, 2025)
- An end-to-end fully parametric method for image-goal navigation that leverages self-supervised and manifold learning to replace the topol… (☆11, updated Jun 18, 2024)
- Accompanying code for training VisuoSkin policies as described in the paper (☆31, updated Oct 25, 2024)
- A framework for integrated task and motion planning from perception (☆28, updated Dec 31, 2024)
- Subtask-Aware Visual Reward Learning from Segmented Demonstrations (ICLR 2025) (☆18, updated Apr 11, 2025)
- ☆56 (updated Jun 21, 2024)
- Code for "FF-LOGO: Cross-Modality Registration with Feature Filtering and Local to Global Optimization" (☆11, updated Sep 14, 2023)
- Collection of MuJoCo robotics environments equipped with both vision and tactile sensing (☆90, updated Jul 8, 2024)
- [ICLR 2025] BID-Robot (☆65, updated Oct 19, 2025)
- Cross-Embodiment Robot Learning Codebase (☆52, updated Apr 20, 2024)
- Official implementation for VIOLA (☆120, updated Jun 18, 2023)
- Sirius-Fleet: Multi-Task Interactive Robot Fleet Learning with Visual World Models (☆17, updated Mar 12, 2025)
- Code for the ICCV 2023 paper "Multi-Object Navigation with dynamically learned neural implicit representations" (☆13, updated Mar 20, 2024)
- ☆16 (updated Jan 13, 2023)
- Spatial Aptitude Training for Multimodal Language Models (☆24, updated Feb 8, 2026)
- From Imitation to Refinement: Residual RL for Precise Assembly (☆213, updated Dec 2, 2025)
- Code for Few-View Object Reconstruction with Unknown Categories and Camera Poses, 3DV 2024 (oral) (☆93, updated Jan 23, 2024)
- InterPreT: Interactive Predicate Learning from Language Feedback for Generalizable Task Planning (RSS 2024) (☆30, updated Jun 18, 2024)
- [ECCV 2022, Oral] OPD: Single-view 3D Openable Part Detection (☆34, updated May 18, 2023)
- Official codebase for PRESTO (Planning with Environment Representation, Sampling, and Trajectory Optimization) (☆44, updated Aug 19, 2025)
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization (☆159, updated Apr 6, 2025)
- [CoRL 2024] Official codebase of the paper "ManiWAV: Learning Robot Manipulation from In-the-Wild Audio-Visual Data" (☆64, updated Jul 14, 2024)
- Official implementation of GROOT, CoRL 2023 (☆67, updated Nov 4, 2023)
- Interactive Post-Training for Vision-Language-Action Models (☆160, updated Jun 4, 2025)
- ☆19 (updated May 7, 2025)