☆278 · Aug 26, 2024 · Updated last year
Alternatives and similar repositories for crossformer
Users interested in crossformer are comparing it to the libraries listed below.
- Heterogeneous Pre-trained Transformer (HPT) as Scalable Policy Learner. ☆532 · Dec 6, 2024 · Updated last year
- Cross-Embodiment Robot Learning Codebase ☆52 · Apr 20, 2024 · Updated last year
- Octo is a transformer-based robot policy trained on a diverse mix of 800k robot trajectories. ☆1,575 · Jul 31, 2024 · Updated last year
- Official repository for LeLaN training and inference code ☆131 · Sep 27, 2024 · Updated last year
- Code for the paper "3D Diffuser Actor: Policy Diffusion with 3D Scene Representations" ☆384 · Aug 17, 2024 · Updated last year
- Code release for LiReN: Lifelong Autonomous Improvement of Robot Foundation Models in the Wild ☆11 · Jan 28, 2025 · Updated last year
- Body Transformer: Leveraging Robot Embodiment for Policy Learning ☆185 · Sep 18, 2025 · Updated 6 months ago
- Evaluating and reproducing real-world robot manipulation policies (e.g., RT-1, RT-1-X, Octo) in simulation under common setups (e.g., Goo… ☆1,005 · Dec 20, 2025 · Updated 3 months ago
- "MimicPlay: Long-Horizon Imitation Learning by Watching Human Play" code repository ☆305 · Apr 23, 2024 · Updated last year
- ☆446 · Nov 29, 2025 · Updated 3 months ago
- RDT-1B: a Diffusion Foundation Model for Bimanual Manipulation ☆1,650 · Jan 21, 2026 · Updated 2 months ago
- Demo-Driven Mobile Bi-Manual Manipulation Benchmark. ☆212 · Mar 2, 2026 · Updated 2 weeks ago
- robomimic: A Modular Framework for Robot Learning from Demonstration ☆1,334 · Feb 5, 2026 · Updated last month
- Bimanual Dexterous Teleoperation with Real-Time Retargeting using VisionPro ☆346 · Sep 18, 2024 · Updated last year
- Pytorch implementation of the models RT-1-X and RT-2-X from the paper: "Open X-Embodiment: Robotic Learning Datasets and RT-X Models" ☆239 · Updated this week
- Official codebase for "Any-point Trajectory Modeling for Policy Learning" ☆275 · Jun 19, 2025 · Updated 9 months ago
- This code corresponds to simulation environments used as part of the MimicGen project. ☆556 · Aug 16, 2025 · Updated 7 months ago
- Official implementation of "Data Scaling Laws in Imitation Learning for Robotic Manipulation" ☆204 · Nov 13, 2024 · Updated last year
- Official implementation of Diffusion Policy Policy Optimization, arXiv 2024 ☆775 · Feb 4, 2025 · Updated last year
- ReKep: Spatio-Temporal Reasoning of Relational Keypoint Constraints for Robotic Manipulation ☆921 · Feb 20, 2025 · Updated last year
- ☆148 · Oct 15, 2024 · Updated last year
- Official Code for RVT-2 and RVT ☆400 · Feb 14, 2025 · Updated last year
- Re-implementation of pi0 vision-language-action (VLA) model from Physical Intelligence ☆1,424 · Jan 31, 2025 · Updated last year
- [RSS 2024] Consistency Policy: Accelerated Visuomotor Policies via Consistency Distillation ☆199 · Jul 20, 2024 · Updated last year
- World modeling challenge for humanoid robots ☆556 · Nov 8, 2024 · Updated last year
- [RSS 2024] 3D Diffusion Policy: Generalizable Visuomotor Policy Learning via Simple 3D Representations ☆1,294 · Oct 17, 2025 · Updated 5 months ago
- [IROS 2025] Generalizable Humanoid Manipulation with 3D Diffusion Policies. Part 1: Train & Deploy of iDP3 ☆515 · Jun 16, 2025 · Updated 9 months ago
- Mobile manipulation research tools for roboticists ☆1,193 · Jun 8, 2024 · Updated last year
- [CoRL 2024] HumanPlus: Humanoid Shadowing and Imitation from Humans ☆829 · Jul 1, 2024 · Updated last year
- Official Repo for the paper "Learning Visual Parkour from Generated Images" (CoRL 2024). ☆154 · Nov 15, 2024 · Updated last year
- RoboCasa: Large-Scale Simulation of Everyday Tasks for Generalist Robots ☆1,251 · Mar 12, 2026 · Updated last week
- [ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction ☆116 · Apr 14, 2025 · Updated 11 months ago
- DROID Policy Learning and Evaluation ☆270 · Apr 22, 2025 · Updated 10 months ago
- ACE: A Cross-platform Visual-Exoskeletons for Low-Cost Dexterous Teleoperation ☆130 · Oct 1, 2024 · Updated last year
- Theia: Distilling Diverse Vision Foundation Models for Robot Learning ☆270 · Nov 6, 2025 · Updated 4 months ago
- RialTo Policy Learning Pipeline ☆201 · Sep 17, 2024 · Updated last year
- OpenVLA: An open-source vision-language-action model for robotic manipulation. ☆350 · Mar 19, 2025 · Updated last year
- ☆15 · Sep 4, 2025 · Updated 6 months ago
- Official code and checkpoint release for mobile robot foundation models: GNM, ViNT, and NoMaD. ☆1,158 · Sep 15, 2024 · Updated last year