mayuelala / Awesome-Controllable-Video-Generation
A curated list of papers on controllable video generation.
☆299 · Updated last week
Alternatives and similar repositories for Awesome-Controllable-Video-Generation
Users who are interested in Awesome-Controllable-Video-Generation are comparing it to the libraries listed below.
- [CVPR 2025] Official code of "DiTCtrl: Exploring Attention Control in Multi-Modal Diffusion Transformer for Tuning-Free Multi-Prompt Long… ☆277 · Updated 3 months ago
- [ICLR 2025] Autoregressive Video Generation without Vector Quantization ☆545 · Updated last month
- [ICCV 2025] VideoVAE+: Large Motion Video Autoencoding with Cross-modal Video VAE ☆339 · Updated 5 months ago
- A collection of diffusion models based on FLUX/DiT for image/video generation, editing, reconstruction, inpainting, etc. ☆79 · Updated 3 weeks ago
- VARGPT-v1.1: Improve Visual Autoregressive Large Unified Model via Iterative Instruction Tuning and Reinforcement Learning ☆259 · Updated 3 months ago
- ☆457 · Updated last week
- [ICCV 2025] MagicMotion: Controllable Video Generation with Dense-to-Sparse Trajectory Guidance ☆137 · Updated 3 weeks ago
- Pytorch implementation for the paper titled "SimpleAR: Pushing the Frontier of Autoregressive Visual Generation" ☆385 · Updated 3 weeks ago
- [CVPR'25 Highlight] Official implementation for paper - LeviTor: 3D Trajectory Oriented Image-to-Video Synthesis ☆151 · Updated 3 months ago
- [ICLR 2025] ControlAR: Controllable Image Generation with Autoregressive Models ☆278 · Updated 2 months ago
- ☆84 · Updated last year
- Official Implementation of VideoGen-of-Thought: Step-by-step generating multi-shot video with minimal manual intervention ☆39 · Updated 2 months ago
- 【CVPR 2025 Oral】Official Repo for Paper "AnyEdit: Mastering Unified High-Quality Image Editing for Any Idea" ☆164 · Updated 3 months ago
- [ICLR 2025] VideoGrain: This repo is the official implementation of "VideoGrain: Modulating Space-Time Attention for Multi-Grained Video … ☆139 · Updated 3 months ago
- Let's finetune video generation models! ☆487 · Updated 2 months ago
- Code for: "Long-Context Autoregressive Video Modeling with Next-Frame Prediction" ☆230 · Updated 2 months ago
- Awesome diffusion Video-to-Video (V2V). A collection of papers on diffusion model-based video editing, aka. video-to-video (V2V) translati… ☆235 · Updated last month
- Official implementation of "Perception-as-Control: Fine-grained Controllable Image Animation with 3D-aware Motion Representation" (ICCV 2โฆโ55Updated 3 weeks ago
- A curated list of awesome autoregressive papers in Generative AIโ84Updated this week
- UniWorld: High-Resolution Semantic Encoders for Unified Visual Understanding and Generationโ637Updated 2 weeks ago
- A comprehensive list of papers investigating physical cognition in video generation, including papers, codes, and related websites.โ137Updated last week
- You can easily calculate FVD, PSNR, SSIM, and LPIPS for evaluating the quality of generated or predicted videos (a minimal PSNR/SSIM sketch follows this list). ☆414 · Updated 6 months ago
- The official implementation of the paper titled "StableV2V: Stablizing Shape Consistency in Video-to-Video Editing". ☆158 · Updated 7 months ago
- ☆565 · Updated last year
- A list of works on video generation towards world model ☆157 · Updated this week
- Selftok: Discrete Visual Tokens of Autoregression, by Diffusion, and for Reasoning ☆190 · Updated last month
- Simple Controlnet module for CogvideoX model. ☆164 · Updated 6 months ago
- Video Generation, Physical Commonsense, Semantic Adherence, VideoCon-Physics ☆127 · Updated 2 months ago
- [ICCV 2025] GameFactory: Creating New Games with Generative Interactive Videos ☆325 · Updated 3 months ago
- UniEdit: A Unified Tuning-Free Framework for Video Motion and Appearance Editing ☆108 · Updated 3 months ago
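The video-metrics entry above covers FVD, PSNR, SSIM, and LPIPS. As a rough illustration only (not that repository's API), here is a minimal sketch that averages per-frame PSNR/SSIM over a clip with scikit-image; FVD and LPIPS additionally need pretrained networks (I3D, AlexNet/VGG) and are best taken from the linked code. The function name `video_psnr_ssim` and the uint8 `(T, H, W, C)` layout are assumptions made for this example.

```python
# Minimal sketch (not the linked repo's API): per-frame PSNR/SSIM averaged
# over a video clip, using scikit-image. FVD and LPIPS require pretrained
# feature extractors and are omitted here.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def video_psnr_ssim(pred: np.ndarray, target: np.ndarray) -> tuple[float, float]:
    """pred/target: uint8 arrays of shape (T, H, W, C), values in [0, 255]."""
    psnrs, ssims = [], []
    for p, t in zip(pred, target):
        psnrs.append(peak_signal_noise_ratio(t, p, data_range=255))
        ssims.append(structural_similarity(t, p, channel_axis=-1, data_range=255))
    return float(np.mean(psnrs)), float(np.mean(ssims))

# Example with random frames standing in for generated vs. ground-truth clips.
pred = np.random.randint(0, 256, (16, 64, 64, 3), dtype=np.uint8)
target = np.random.randint(0, 256, (16, 64, 64, 3), dtype=np.uint8)
print(video_psnr_ssim(pred, target))
```

Averaging frame-level scores is the common convention for PSNR/SSIM on video; temporal metrics such as FVD instead compare feature statistics of whole clips.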