ykarmesh / stable-control-representations
Code for Stable Control Representations
☆25 · Updated 4 months ago
Alternatives and similar repositories for stable-control-representations
Users interested in stable-control-representations are comparing it to the repositories listed below:
- ☆77 · Updated 11 months ago
- [ICLR 2025 Spotlight] Grounding Video Models to Actions through Goal Conditioned Exploration ☆50 · Updated 3 months ago
- ☆76 · Updated 2 months ago
- HandsOnVLM: Vision-Language Models for Hand-Object Interaction Prediction ☆34 · Updated 7 months ago
- Code release for "Pre-training Contextualized World Models with In-the-wild Videos for Reinforcement Learning" (NeurIPS 2023), https://ar… ☆66 · Updated 10 months ago
- ☆106 · Updated last month
- Code for FLIP: Flow-Centric Generative Planning for General-Purpose Manipulation Tasks ☆72 · Updated 7 months ago
- ☆26 · Updated last year
- ☆42 · Updated last year
- Codebase for HiP ☆90 · Updated last year
- Official repository for "iVideoGPT: Interactive VideoGPTs are Scalable World Models" (NeurIPS 2024), https://arxiv.org/abs/2405.15223 ☆137 · Updated 2 months ago
- [ECCV 2024] 💐 Official implementation of the paper "Diffusion Reward: Learning Rewards via Conditional Video Diffusion" ☆108 · Updated last year
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning ☆68 · Updated 2 months ago
- ☆53 · Updated 7 months ago
- [ICML 2024] The official implementation of "DecisionNCE: Embodied Multimodal Representations via Implicit Preference Learning" ☆82 · Updated 2 months ago
- [ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction ☆96 · Updated 3 months ago
- [ICML'25] The PyTorch implementation of the paper "AdaWorld: Learning Adaptable World Models with Latent Actions" ☆133 · Updated last month
- [ICML 2024] A Touch, Vision, and Language Dataset for Multimodal Alignment ☆81 · Updated 2 months ago
- Dreamitate: Real-World Visuomotor Policy Learning via Video Generation (CoRL 2024) ☆50 · Updated 2 months ago
- ☆44 · Updated last year
- IMProv: Inpainting-based Multimodal Prompting for Computer Vision Tasks ☆57 · Updated 10 months ago
- Official implementation of the paper "Steering Your Generalists: Improving Robotic Foundation Models via Value Guidance" (CoRL 2024) ☆33 · Updated 3 months ago
- Repo for Bring Your Own Vision-Language-Action (VLA) model, arXiv 2024 ☆30 · Updated 6 months ago
- [CVPR 2025] A Data-Centric Revisit of Pre-Trained Vision Models for Robot Learning ☆16 · Updated 4 months ago
- ☆21 · Updated 9 months ago
- [ICLR 2025] Official implementation and benchmark evaluation repository of "PhysBench: Benchmarking and Enhancing Vision-Language Models …" ☆66 · Updated 2 months ago
- Official implementation of "Self-Improving Video Generation"☆68Updated 3 months ago
- [ICCV2025 Oral] Latent Motion Token as the Bridging Language for Learning Robot Manipulation from Videos☆120Updated 2 months ago
- The official codebase for running the experiments described in the AVDC paper.☆17Updated 10 months ago
- Main augmentation script for a real-world robot dataset. ☆35 · Updated 2 years ago