ykarmesh / stable-control-representations
Code for Stable Control Representations
☆25 · Updated 3 months ago
Alternatives and similar repositories for stable-control-representations
Users interested in stable-control-representations are comparing it to the libraries listed below.
- [ICLR 2025 Spotlight] Grounding Video Models to Actions through Goal Conditioned Exploration ☆49 · Updated 2 months ago
- ☆75 · Updated 10 months ago
- Code release for "Pre-training Contextualized World Models with In-the-wild Videos for Reinforcement Learning" (NeurIPS 2023), https://ar… ☆66 · Updated 9 months ago
- Responsible Robotic Manipulation ☆11 · Updated last month
- ☆76 · Updated last month
- Code for FLIP: Flow-Centric Generative Planning for General-Purpose Manipulation Tasks ☆67 · Updated 7 months ago
- Distributed, scalable benchmarking of generalist robot policies. ☆35 · Updated 3 weeks ago
- Codebase for HiP ☆90 · Updated last year
- Repo for Bring Your Own Vision-Language-Action (VLA) model, arXiv 2024 ☆30 · Updated 5 months ago
- ☆44 · Updated last year
- ☆42 · Updated last year
- ☆105 · Updated last week
- Dreamitate: Real-World Visuomotor Policy Learning via Video Generation (CoRL 2024) ☆48 · Updated last month
- IMProv: Inpainting-based Multimodal Prompting for Computer Vision Tasks ☆57 · Updated 9 months ago
- ☆26 · Updated last year
- ☆26 · Updated 3 weeks ago
- ☆49 · Updated 7 months ago
- [ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction ☆87 · Updated 3 months ago
- Official repository for "iVideoGPT: Interactive VideoGPTs are Scalable World Models" (NeurIPS 2024), https://arxiv.org/abs/2405.15223 ☆136 · Updated last month
- HandsOnVLM: Vision-Language Models for Hand-Object Interaction Prediction ☆32 · Updated 6 months ago
- (CVPR 2025) A Data-Centric Revisit of Pre-Trained Vision Models for Robot Learning ☆16 · Updated 4 months ago
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning ☆68 · Updated 2 months ago
- ☆28 · Updated 2 months ago
- [ECCV 2024] 💐 Official implementation of the paper "Diffusion Reward: Learning Rewards via Conditional Video Diffusion" ☆108 · Updated last year
- Unified Vision-Language-Action Model ☆128 · Updated 2 weeks ago
- [ICML'25] The PyTorch implementation of the paper "AdaWorld: Learning Adaptable World Models with Latent Actions" ☆125 · Updated last month
- The official codebase for running the experiments described in the AVDC paper. ☆17 · Updated 9 months ago
- Implementation of Language-Conditioned Path Planning (Amber Xie, Youngwoon Lee, Pieter Abbeel, Stephen James) ☆23 · Updated last year
- Implementation of Latent Diffusion Planning (Amber Xie, Oleh Rybkin, Dorsa Sadigh, Chelsea Finn) ☆38 · Updated 2 weeks ago
- Visual Representation Learning with Stochastic Frame Prediction (ICML 2024) ☆22 · Updated 7 months ago