roboterax / video-prediction-policy
Video Prediction Policy: A Generalist Robot Policy with Predictive Visual Representations https://video-prediction-policy.github.io
☆346 · May 17, 2025 · Updated 9 months ago
Alternatives and similar repositories for video-prediction-policy
Users interested in video-prediction-policy are comparing it to the repositories listed below.
- [RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions · ☆984 · Nov 19, 2025 · Updated 2 months ago
- Official PyTorch Implementation of Unified Video Action Model (RSS 2025) · ☆332 · Jul 23, 2025 · Updated 6 months ago
- [IROS 2025] Generalizable Humanoid Manipulation with 3D Diffusion Policies. Part 1: Train & Deploy of iDP3 · ☆501 · Jun 16, 2025 · Updated 8 months ago
- [RSS 2024] 3D Diffusion Policy: Generalizable Visuomotor Policy Learning via Simple 3D Representations · ☆1,251 · Oct 17, 2025 · Updated 4 months ago
- Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success · ☆1,037 · Sep 9, 2025 · Updated 5 months ago
- [CoRL 2024] Im2Flow2Act: Flow as the Cross-domain Manipulation Interface · ☆150 · Oct 17, 2024 · Updated last year
- Code for the paper "3D Diffuser Actor: Policy Diffusion with 3D Scene Representations" · ☆384 · Aug 17, 2024 · Updated last year
- ICCV 2025 | TesserAct: Learning 4D Embodied World Models · ☆379 · Aug 4, 2025 · Updated 6 months ago
- ☆88 · Sep 23, 2025 · Updated 4 months ago
- Evaluating and reproducing real-world robot manipulation policies (e.g., RT-1, RT-1-X, Octo) in simulation under common setups (e.g., Goo… · ☆974 · Dec 20, 2025 · Updated last month
- RDT-1B: a Diffusion Foundation Model for Bimanual Manipulation · ☆1,615 · Jan 21, 2026 · Updated 3 weeks ago
- [RSS25] Official implementation of DemoGen: Synthetic Demonstration Generation for Data-Efficient Visuomotor Policy Learning · ☆238 · Jul 18, 2025 · Updated 6 months ago
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos · ☆460 · Jan 22, 2025 · Updated last year
- OpenHelix: An Open-source Dual-System VLA Model for Robotic Manipulation · ☆343 · Aug 27, 2025 · Updated 5 months ago
- Unified World Models: Coupling Video and Action Diffusion for Pretraining on Large Robotic Datasets · ☆186 · Oct 8, 2025 · Updated 4 months ago
- [ICLR 2025 Oral] Seer: Predictive Inverse Dynamics Models are Scalable Learners for Robotic Manipulation · ☆279 · Jul 8, 2025 · Updated 7 months ago
- CALVIN - A benchmark for Language-Conditioned Policy Learning for Long-Horizon Robot Manipulation Tasks · ☆829 · Sep 8, 2025 · Updated 5 months ago
- Re-implementation of pi0 vision-language-action (VLA) model from Physical Intelligence · ☆1,389 · Jan 31, 2025 · Updated last year
- [IROS 2025 Best Paper Award Finalist & IEEE TRO 2026] The Large-scale Manipulation Platform for Scalable and Intelligent Embodied Systems · ☆2,775 · Dec 16, 2025 · Updated 2 months ago
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLA, Embodied Agent, and VLMs. · ☆381 · Nov 11, 2025 · Updated 3 months ago
- This is the official implementation of the paper "ConRFT: A Reinforced Fine-tuning Method for VLA Models via Consistency Policy". · ☆320 · Nov 11, 2025 · Updated 3 months ago
- Official codebase for "Any-point Trajectory Modeling for Policy Learning" · ☆271 · Jun 19, 2025 · Updated 7 months ago
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model · ☆338 · Oct 3, 2025 · Updated 4 months ago
- [CVPR 2025] RoboBrain: A Unified Brain Model for Robotic Manipulation from Abstract to Concrete. Official Repository. · ☆364 · Oct 13, 2025 · Updated 4 months ago
- Code for FLIP: Flow-Centric Generative Planning for General-Purpose Manipulation Tasks · ☆79 · Dec 12, 2024 · Updated last year
- ☆245 · May 12, 2025 · Updated 9 months ago
- [CVPR 2024] Hierarchical Diffusion Policy for Multi-Task Robotic Manipulation · ☆224 · Apr 9, 2024 · Updated last year
- ☆10,231 · Dec 27, 2025 · Updated last month
- ☆432 · Nov 29, 2025 · Updated 2 months ago
- ICCV2025 · ☆155 · Dec 10, 2025 · Updated 2 months ago
- Code for Point Policy: Unifying Observations and Actions with Key Points for Robot Manipulation · ☆90 · Jul 21, 2025 · Updated 6 months ago
- [ICLR 2026] SimpleVLA-RL: Scaling VLA Training via Reinforcement Learning · ☆1,380 · Jan 6, 2026 · Updated last month
- OpenVLA: An open-source vision-language-action model for robotic manipulation. · ☆5,251 · Mar 23, 2025 · Updated 10 months ago
- [ICML 2024] 3D-VLA: A 3D Vision-Language-Action Generative World Model · ☆620 · Oct 29, 2024 · Updated last year
- [ICCV2025 Oral] Latent Motion Token as the Bridging Language for Learning Robot Manipulation from Videos · ☆162 · Oct 1, 2025 · Updated 4 months ago
- Code for the paper Predicting Point Tracks from Internet Videos enables Diverse Zero-Shot Manipulation · ☆100 · Jul 31, 2024 · Updated last year
- [CVPR 2025] The official implementation of "Universal Actions for Enhanced Embodied Foundation Models" · ☆229 · Nov 6, 2025 · Updated 3 months ago
- RynnVLA-002: A Unified Vision-Language-Action and World Model · ☆889 · Dec 2, 2025 · Updated 2 months ago
- ☆75 · Jan 8, 2025 · Updated last year