minnie-lin / Awesome-Physics-Cognition-based-Video-Generation
A comprehensive collection of resources on physical cognition in video generation, including papers, code, and related websites.
☆82 · Updated last week
Alternatives and similar repositories for Awesome-Physics-Cognition-based-Video-Generation
Users interested in Awesome-Physics-Cognition-based-Video-Generation are comparing it to the repositories listed below.
- Code release for "PISA Experiments: Exploring Physics Post-Training for Video Diffusion Models by Watching Stuff Drop" (ICML 2025) ☆29 · Updated this week
- EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation ☆103 · Updated 6 months ago
- Video Generation, Physical Commonsense, Semantic Adherence, VideoCon-Physics ☆99 · Updated last week
- A list of works on video generation towards world model ☆58 · Updated this week
- Physical laws underpin all existence, and harnessing them for generative modeling opens boundless possibilities for advancing science and… ☆147 · Updated 3 weeks ago
- [CVPR'24] GraphDreamer: a novel framework for generating compositional 3D scenes from scene graphs. ☆180 · Updated last year
- [ICLR 2025] Trajectory Attention for Fine-grained Video Motion Control ☆75 · Updated last month
- [arXiv 2025] WORLDMEM: Long-term Consistent World Simulation with Memory ☆97 · Updated this week
- ☆126 · Updated 4 months ago
- Official implementation for WorldScore: A Unified Evaluation Benchmark for World Generation ☆96 · Updated 3 weeks ago
- [AAAI 2025] DreamPhysics: Learning Physics-Based 3D Dynamics with Video Diffusion Priors ☆204 · Updated 11 months ago
- Diffusion Powers Video Tokenizer for Comprehension and Generation (CVPR 2025) ☆67 · Updated 2 months ago
- ☆60 · Updated 3 months ago
- An organized list of academic papers focused on the topic of 4D Generation. If you have any additions or suggestions, feel free to contri… ☆56 · Updated last year
- [ICML 2025] Code and data for the paper "Towards World Simulator: Crafting Physical Commonsense-Based Benchmark for Video Generation" ☆102 · Updated 6 months ago
- Code release of our paper "DI-PCG: Diffusion-based Efficient Inverse Procedural Content Generation for High-quality 3D Asset Creation" ☆100 · Updated last month
- Code release for NeurIPS 2023 paper SlotDiffusion: Object-centric Learning with Diffusion Models ☆86 · Updated last year
- [ECCV 2024] Official Implementation of DragAPart: Learning a Part-Level Motion Prior for Articulated Objects ☆80 · Updated 9 months ago
- Open-sourced video dataset with dynamic scenes and camera movement annotations ☆51 · Updated 3 weeks ago
- Official implementation of the paper "Telling Left from Right: Identifying Geometry-Aware Semantic Correspondence" ☆122 · Updated last month
- "Comp4D: Compositional 4D Scene Generation", Dejia Xu*, Hanwen Liang*, Neel P. Bhatt, Hezhen Hu, Hanxue Liang, Konstantinos N. Platanioti…☆78Updated 8 months ago
- [CVPR 2025] Uni4D: Unifying Visual Foundation Models for 4D Modeling from a Single Video☆70Updated this week
- [ICLR 2024] Seer: Language Instructed Video Prediction with Latent Diffusion Models☆31Updated 11 months ago
- Training-free Guidance in Text-to-Video Generation via Multimodal Planning and Structured Noise Initialization☆20Updated last month
- A curated list of awesome autoregressive papers in Generative AI☆57Updated 3 weeks ago
- [ICLR 2025 Spotlight] Grounding Video Models to Actions through Goal Conditioned Exploration☆48Updated last week
- Seeing World Dynamics in a Nutshell☆108Updated last month
- [NeurIPS 2024] Video Diffusion Models are Training-free Motion Interpreter and Controller ☆40 · Updated last month
- Official PyTorch implementation for LARP: Tokenizing Videos with a Learned Autoregressive Generative Prior (ICLR 2025 Oral) ☆68 · Updated 3 months ago
- List of papers on 4D Generation ☆272 · Updated 7 months ago