anuragajay / hip
Codebase for HiP
☆88 Updated last year
Alternatives and similar repositories for hip:
Users interested in hip are comparing it to the repositories listed below
- NeurIPS 2022 paper "VLMbench: A Compositional Benchmark for Vision-and-Language Manipulation" ☆90 Updated 2 years ago
- Official repository of Learning to Act from Actionless Videos through Dense Correspondences. ☆204 Updated 10 months ago
- Official repository for "LIV: Language-Image Representations and Rewards for Robotic Control" (ICML 2023) ☆103 Updated last year
- Using advances in generative modeling to learn reward functions from unlabeled videos. ☆124 Updated last year
- Code for subgoal synthesis via image editing ☆130 Updated last year
- [NeurIPS 2024] GenRL: Multimodal-foundation world models enable grounding language and video prompts into embodied domains, by turning th… ☆70 Updated 2 months ago
- MiniGrid Implementation of BEHAVIOR Tasks ☆41 Updated 7 months ago
- Instruction Following Agents with Multimodal Transformers ☆52 Updated 2 years ago
- [ECCV 2024] 💐 Official implementation of the paper "Diffusion Reward: Learning Rewards via Conditional Video Diffusion" ☆94 Updated 8 months ago
- Streaming Diffusion Policy: Fast Policy Synthesis with Variable Noise Diffusion Models ☆52 Updated 5 months ago
- Official code for "QueST: Self-Supervised Skill Abstractions for Continuous Control" (NeurIPS 2024) ☆72 Updated 4 months ago
- VP2 Benchmark (A Control-Centric Benchmark for Video Prediction, ICLR 2023) ☆27 Updated 2 weeks ago
- Chain-of-Thought Predictive Control ☆56 Updated last year
- Source code for the paper "COMBO: Compositional World Models for Embodied Multi-Agent Cooperation" ☆28 Updated last week
- Codebase for "PRISE: Learning Temporal Action Abstractions as a Sequence Compression Problem" ☆22 Updated 8 months ago
- Repo for Bring Your Own Vision-Language-Action (VLA) model, arXiv 2024 ☆27 Updated 2 months ago
- Code for the paper "Grounding Video Models to Actions through Goal Conditioned Exploration" ☆43 Updated 2 months ago
- A Benchmark for Low-Level Manipulation in Home Rearrangement Tasks ☆93 Updated 2 weeks ago