ykarmesh / stable-control-representations
Code for Stable Control Representations
☆26 · Updated 8 months ago
Alternatives and similar repositories for stable-control-representations
Users interested in stable-control-representations are comparing it to the libraries listed below.
- ☆87 · Updated last year
- [ICLR 2025 Spotlight] Grounding Video Models to Actions through Goal Conditioned Exploration ☆57 · Updated 7 months ago
- ☆78 · Updated 6 months ago
- Subtask-Aware Visual Reward Learning from Segmented Demonstrations (ICLR 2025) ☆18 · Updated 7 months ago
- HandsOnVLM: Vision-Language Models for Hand-Object Interaction Prediction ☆42 · Updated 2 months ago
- ☆46 · Updated last year
- Code release for "Pre-training Contextualized World Models with In-the-wild Videos for Reinforcement Learning" (NeurIPS 2023), https://ar… ☆69 · Updated last year
- ☆60 · Updated 11 months ago
- Code for FLIP: Flow-Centric Generative Planning for General-Purpose Manipulation Tasks ☆80 · Updated 11 months ago
- HD-EPIC Python script to download the entire dataset or parts of it ☆14 · Updated 2 months ago
- ☆41 · Updated 5 months ago
- ☆23 · Updated last month
- ☆33 · Updated last year
- [ECCV 2024] 💐 Official implementation of the paper "Diffusion Reward: Learning Rewards via Conditional Video Diffusion" ☆113 · Updated last year
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning ☆78 · Updated 6 months ago
- VLA-RFT: Vision-Language-Action Models with Reinforcement Fine-Tuning ☆96 · Updated 2 months ago
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" ☆44 · Updated last year
- Visual Representation Learning with Stochastic Frame Prediction (ICML 2024) ☆23 · Updated last year
- ☆13 · Updated 7 months ago
- Codebase for HiP ☆90 · Updated last year
- ☆135 · Updated 5 months ago
- The official codebase for running the experiments described in the AVDC paper ☆18 · Updated last year
- Official implementation of the paper "Steering Your Generalists: Improving Robotic Foundation Models via Value Guidance" (CoRL 2024) ☆39 · Updated 7 months ago
- Official repository for "iVideoGPT: Interactive VideoGPTs are Scalable World Models" (NeurIPS 2024), https://arxiv.org/abs/2405.15223 ☆158 · Updated 2 months ago
- IMProv: Inpainting-based Multimodal Prompting for Computer Vision Tasks ☆58 · Updated last year
- [ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction ☆111 · Updated 7 months ago
- Dreamitate: Real-World Visuomotor Policy Learning via Video Generation (CoRL 2024) ☆57 · Updated 6 months ago
- ☆44 · Updated last year
- [CoRL 2025] UniSkill: Imitating Human Videos via Cross-Embodiment Skill Representations ☆71 · Updated 3 months ago
- (CVPR 2025) A Data-Centric Revisit of Pre-Trained Vision Models for Robot Learning ☆22 · Updated 8 months ago