thuml / Vid2World
Official repository for "Vid2World: Crafting Video Diffusion Models to Interactive World Models", https://arxiv.org/abs/2505.14357
☆26 · Updated 3 weeks ago
Alternatives and similar repositories for Vid2World
Users interested in Vid2World are comparing it to the libraries listed below
- VLA-RFT: Vision-Language-Action Models with Reinforcement Fine-Tuning ☆113 · Updated 3 months ago
- The official repo for the paper "VQ-VLA: Improving Vision-Language-Action Models via Scaling Vector-Quantized Action Tokenizers" (ICCV 2025) ☆107 · Updated 2 months ago
- Official implementation of the paper "WMPO: World Model-based Policy Optimization for Vision-Language-Action Models" ☆96 · Updated last week
- Code implementation of the paper "World-in-World: World Models in a Closed-Loop World" ☆121 · Updated 3 weeks ago
- Visual Embodied Brain: Let Multimodal Large Language Models See, Think, and Control in Spaces ☆87 · Updated 7 months ago
- ☆113 · Updated last week
- [NeurIPS 2025] Source code for the paper "MindJourney: Test-Time Scaling with World Models for Spatial Reasoning" ☆121 · Updated 2 months ago
- Official implementation of "RoboTracer: Mastering Spatial Trace with Reasoning in Vision-Language Models for Robotics" ☆43 · Updated last week
- [NeurIPS 2025] OST-Bench: Evaluating the Capabilities of MLLMs in Online Spatio-temporal Scene Understanding ☆69 · Updated 3 months ago
- [ICCV 2025 Oral] Latent Motion Token as the Bridging Language for Learning Robot Manipulation from Videos ☆159 · Updated 3 months ago
- [ICLR 2025 Spotlight] Grounding Video Models to Actions through Goal Conditioned Exploration ☆59 · Updated 8 months ago
- [ICML'25] The PyTorch implementation of the paper "AdaWorld: Learning Adaptable World Models with Latent Actions" ☆190 · Updated 6 months ago
- [NeurIPS 2025] EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation ☆126 · Updated 5 months ago
- Code for FLIP: Flow-Centric Generative Planning for General-Purpose Manipulation Tasks ☆79 · Updated last year
- List of papers on video-centric robot learning ☆22 · Updated last year
- Unified Vision-Language-Action Model ☆257 · Updated 3 months ago
- ☆89 · Updated last year
- [arXiv 2025] MMSI-Bench: A Benchmark for Multi-Image Spatial Intelligence ☆68 · Updated 2 weeks ago
- Geometry-Consistent Video Diffusion for Robotic Visual Policy Transfer ☆26 · Updated 2 months ago
- [CoRL 2025] Robot Learning from Any Images ☆34 · Updated 2 months ago
- ☆138 · Updated 6 months ago
- [CVPR 2025] Tra-MoE: Learning Trajectory Prediction Model from Multiple Domains for Adaptive Policy Conditioning ☆53 · Updated 9 months ago
- ☆116 · Updated 2 months ago
- WoW (World-Omniscient World Model) is a generative world model trained on 2 million robotic interaction trajectories, designed to imagine… ☆130 · Updated last week
- Multi-SpatialMLLM: Multi-Frame Spatial Understanding with Multi-Modal Large Language Models ☆166 · Updated 3 months ago
- Awesome list of papers and repos accompanying the survey "A comprehensive survey of embodied world models" ☆55 · Updated 2 months ago
- Official implementation of Spatial-Forcing: Implicit Spatial Representation Alignment for Vision-Language-Action Model ☆159 · Updated last week
- Official repository for "iVideoGPT: Interactive VideoGPTs are Scalable World Models" (NeurIPS 2024), https://arxiv.org/abs/2405.15223 ☆162 · Updated 3 months ago
- [CVPR 2024] Situational Awareness Matters in 3D Vision Language Reasoning ☆42 · Updated last year
- [NeurIPS 2025] Official implementation of "RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics" ☆220 · Updated 3 weeks ago