allenai / spoc-robot-training
SPOC: Imitating Shortest Paths in Simulation Enables Effective Navigation and Manipulation in the Real World
☆137 · Updated 11 months ago
Alternatives and similar repositories for spoc-robot-training
Users interested in spoc-robot-training are comparing it to the repositories listed below.
- PoliFormer: Scaling On-Policy RL with Transformers Results in Masterful Navigators ☆99 · Updated 10 months ago
- Public release for "Explore until Confident: Efficient Exploration for Embodied Question Answering" ☆67 · Updated last year
- Code repository for the Habitat Synthetic Scenes Dataset (HSSD) paper. ☆105 · Updated last year
- [CoRL 2024] Official repo of `A3VLM: Actionable Articulation-Aware Vision Language Model` ☆120 · Updated last year
- The project repository for the paper "EMOS: Embodiment-aware Heterogeneous Multi-robot Operating System with LLM Agents": https://arxiv.org/abs… ☆51 · Updated 9 months ago
- Find What You Want: Learning Demand-conditioned Object Attribute Space for Demand-driven Navigation ☆61 · Updated 8 months ago
- RL training scripts for learning an agent using ProcTHOR. ☆35 · Updated 7 months ago
- [ICRA 2025] FLaRe: Achieving Masterful and Adaptive Robot Policies with Large-Scale Reinforcement Learning Fine-Tuning ☆36 · Updated 9 months ago
- Code for training embodied agents using IL and RL finetuning at scale for ObjectNav ☆80 · Updated 5 months ago
- [CoRL 2024] RoboEXP: Action-Conditioned Scene Graph via Interactive Exploration for Robotic Manipulation ☆114 · Updated 11 months ago
- A Benchmark for Low-Level Manipulation in Home Rearrangement Tasks ☆146 · Updated last month
- Official GitHub repository for the paper "Bridging Zero-shot Object Navigation and Foundation Models through Pixel-Guided Navigation Skill", … ☆117 · Updated 11 months ago
- ☆169 · Updated 6 months ago
- Official implementation of "OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning" ☆185 · Updated 4 months ago
- A Vision-Language Model for Spatial Affordance Prediction in Robotics ☆193 · Updated 2 months ago
- Code for the ICRA 2024 paper "Think, Act, and Ask: Open-World Interactive Personalized Robot Navigation". Paper: https://arxiv.org/abs/2310.07968 … ☆31 · Updated last year
- Vision-Language Navigation Benchmark in Isaac Lab ☆245 · Updated last month
- ☆54 · Updated 7 months ago
- Code for Reinforcement Learning from Vision Language Foundation Model Feedback ☆124 · Updated last year
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆142 · Updated 6 months ago
- Official repository for SAM2Act ☆173 · Updated last month
- Autoregressive Policy for Robot Learning (RA-L 2025) ☆139 · Updated 6 months ago
- [RSS 2024] Code for "Multimodal Diffusion Transformer: Learning Versatile Behavior from Multimodal Goals" for CALVIN experiments with pre… ☆153 · Updated 11 months ago
- Cross-Embodiment Robot Learning Codebase ☆50 · Updated last year
- [ICRA 2025] In-Context Imitation Learning via Next-Token Prediction ☆93 · Updated 6 months ago
- ☆44 · Updated 2 years ago
- [IROS 2024 Oral] ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models ☆98 · Updated last year
- ☆115 · Updated last year
- Official implementation of "Data Scaling Laws in Imitation Learning for Robotic Manipulation" ☆192 · Updated 10 months ago
- Manipulate-Anything: Automating Real-World Robots using Vision-Language Models [CoRL 2024] ☆46 · Updated 6 months ago