apple / ml-llarp
☆89 · Updated 2 months ago
Alternatives and similar repositories for ml-llarp
Users interested in ml-llarp are comparing it to the repositories listed below.
- Code for Reinforcement Learning from Vision Language Foundation Model Feedback ☆135 · Updated last year
- [CoRL 2023] REFLECT: Summarizing Robot Experiences for Failure Explanation and Correction ☆101 · Updated last year
- [ICCV 2023] Official code repository for ARNOLD benchmark ☆179 · Updated 10 months ago
- MiniGrid Implementation of BEHAVIOR Tasks ☆57 · Updated 3 months ago
- Official repository for "LIV: Language-Image Representations and Rewards for Robotic Control" (ICML 2023) ☆130 · Updated 2 years ago
- Code for subgoal synthesis via image editing ☆144 · Updated 2 years ago
- Embodied Agent Interface (EAI): Benchmarking LLMs for Embodied Decision Making (NeurIPS D&B 2024 Oral) ☆278 · Updated 10 months ago
- PyTorch implementation of the models RT-1-X and RT-2-X from the paper "Open X-Embodiment: Robotic Learning Datasets and RT-X Models" ☆234 · Updated last week
- ☆261 · Updated last year
- Official code for "Behavior Generation with Latent Actions" (ICML 2024 Spotlight) ☆195 · Updated last year
- ☆47 · Updated last year
- [ICML 2024] The official implementation of "DecisionNCE: Embodied Multimodal Representations via Implicit Preference Learning" ☆82 · Updated 7 months ago
- [ICLR'25] LLaRA: Supercharging Robot Learning Data for Vision-Language Policy ☆226 · Updated 9 months ago
- ☆67 · Updated last year
- Code release for paper "Autonomous Improvement of Instruction Following Skills via Foundation Models" | CoRL 2024 ☆76 · Updated 3 months ago
- ☆46 · Updated last year
- Using advances in generative modeling to learn reward functions from unlabeled videos. ☆136 · Updated last year
- NeurIPS 2022 paper "VLMbench: A Compositional Benchmark for Vision-and-Language Manipulation" ☆98 · Updated 8 months ago
- Code for the ICLR 2024 spotlight paper "Learning to Act without Actions" (introducing Latent Action Policies) ☆132 · Updated last year
- ☆36 · Updated 2 years ago
- [NeurIPS 2024] GenRL: Multimodal-foundation world models enable grounding language and video prompts into embodied domains, by turning th… ☆86 · Updated 9 months ago
- Official code for "QueST: Self-Supervised Skill Abstractions for Continuous Control" [NeurIPS 2024] ☆104 · Updated last year
- Official repository of "Learning to Act from Actionless Videos through Dense Correspondences" ☆246 · Updated last year
- Interactive Post-Training for Vision-Language-Action Models ☆157 · Updated 7 months ago
- [ICRA 2025] In-Context Imitation Learning via Next-Token Prediction ☆107 · Updated 10 months ago
- SPOC: Imitating Shortest Paths in Simulation Enables Effective Navigation and Manipulation in the Real World ☆145 · Updated last year
- ☆79 · Updated last year
- [ICLR 2024 Spotlight] Text2Reward: Reward Shaping with Language Models for Reinforcement Learning ☆194 · Updated last year
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆155 · Updated 9 months ago
- ProgPrompt for VirtualHome ☆145 · Updated 2 years ago