ziqihuangg / Awesome-From-Video-Generation-to-World-Model
A list of works on video generation towards world models
☆157 · Updated last week
Alternatives and similar repositories for Awesome-From-Video-Generation-to-World-Model
Users interested in Awesome-From-Video-Generation-to-World-Model are comparing it to the repositories listed below.
- [NeurIPS 2024] Video Diffusion Models are Training-free Motion Interpreter and Controller ☆44 · Updated 3 months ago
- A comprehensive list of papers investigating physical cognition in video generation, including papers, codes, and related websites. ☆135 · Updated this week
- PyTorch implementation of DiffMoE, TC-DiT, EC-DiT and Dense DiT ☆119 · Updated 2 months ago
- Code release for "PISA Experiments: Exploring Physics Post-Training for Video Diffusion Models by Watching Stuff Drop" (ICML 2025) ☆36 · Updated 2 months ago
- Video Generation, Physical Commonsense, Semantic Adherence, VideoCon-Physics ☆127 · Updated 2 months ago
- [ARXIV'25] Learning Video Generation for Robotic Manipulation with Collaborative Trajectory Control ☆71 · Updated last week
- EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation ☆109 · Updated 8 months ago
- Official implementation of the paper "Transfer between Modalities with MetaQueries" ☆139 · Updated this week
- GenDoP: Auto-regressive Camera Trajectory Generation as a Director of Photography ☆65 · Updated last week
- [ICLR 2025] Trajectory Attention for Fine-grained Video Motion Control ☆85 · Updated 2 months ago
- ☆37 · Updated 2 weeks ago
- Official implementation for WorldScore: A Unified Evaluation Benchmark for World Generation ☆114 · Updated last week
- Open-sourced video dataset with dynamic scenes and camera-movement annotations ☆63 · Updated 2 months ago
- [CVPR 2025] A framework named B^2-DiffuRL for RL-based diffusion model fine-tuning. ☆32 · Updated 3 months ago
- ☆50 · Updated 7 months ago
- [CVPR'25 - Rating 555] Official PyTorch implementation of Lumos: Learning Visual Generative Priors without Text ☆51 · Updated 4 months ago
- UniFork: Exploring Modality Alignment for Unified Multimodal Understanding and Generation ☆38 · Updated last week
- Official code repo of the CVPR 2025 paper PhyT2V: LLM-Guided Iterative Self-Refinement for Physics-Grounded Text-to-Video Generation ☆38 · Updated 3 months ago
- [arXiv 2025] WorldMem: Long-term Consistent World Simulation with Memory ☆176 · Updated last month
- VideoREPA: Learning Physics for Video Generation through Relational Alignment with Foundation Models ☆52 · Updated last month
- Diffusion Powers Video Tokenizer for Comprehension and Generation (CVPR 2025) ☆72 · Updated 4 months ago
- Frequency Autoregressive Image Generation with Continuous Tokens ☆79 · Updated last month
- The official repository of "Sekai: A Video Dataset towards World Exploration" ☆98 · Updated this week
- Omni Controllable Video Diffusion ☆24 · Updated 2 months ago
- Multi-SpatialMLLM: Multi-Frame Spatial Understanding with Multi-Modal Large Language Models ☆133 · Updated last month
- ☆46 · Updated 4 months ago
- Official PyTorch implementation of Video Motion Transfer with Diffusion Transformers ☆67 · Updated 2 months ago
- Official PyTorch implementation for LARP: Tokenizing Videos with a Learned Autoregressive Generative Prior (ICLR 2025 Oral) ☆75 · Updated 5 months ago
- Training-free Guidance in Text-to-Video Generation via Multimodal Planning and Structured Noise Initialization ☆21 · Updated 3 months ago
- [ICCV 2025] Code release of "Harmonizing Visual Representations for Unified Multimodal Understanding and Generation" ☆141 · Updated last month