showlab / Awesome-Video-Diffusion
A curated list of recent diffusion models for video generation, editing, and various other applications.
☆5,369 · Updated last month
Alternatives and similar repositories for Awesome-Video-Diffusion
Users interested in Awesome-Video-Diffusion are comparing it to the repositories listed below.
- [CSUR] A Survey on Video Diffusion Models ☆2,257 · Updated 6 months ago
- A collection of diffusion model papers categorized by their subareas ☆2,112 · Updated last week
- PixArt-α: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis ☆3,260 · Updated last year
- [TMLR 2025] Latte: Latent Diffusion Transformer for Video Generation ☆1,903 · Updated 2 months ago
- (ෆ`꒳´ෆ) A Survey on Text-to-Image Generation/Synthesis ☆2,417 · Updated 2 months ago
- A collection of resources on controllable generation with text-to-image diffusion models ☆1,104 · Updated last year
- VideoSys: An easy and efficient system for video generation ☆2,016 · Updated 4 months ago
- A collection of awesome video generation studies ☆717 · Updated 3 weeks ago
- [CVPR 2024 Highlight] VBench - We Evaluate Video Generation ☆1,430 · Updated last week
- Official PyTorch Implementation of "Scalable Diffusion Models with Transformers" ☆8,275 · Updated last year
- Implementation of Video Diffusion Models, Jonathan Ho's paper extending DDPMs to video generation, in PyTorch ☆1,373 · Updated last year
- Controllable video and image generation: SVD, Animate Anyone, ControlNet, ControlNeXt, LoRA ☆1,629 · Updated last year
- Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation ☆1,921 · Updated last year
- [CVPR 2025 Oral] Infinity ∞: Scaling Bitwise AutoRegressive Modeling for High-Resolution Image Synthesis ☆1,539 · Updated 2 months ago
- Diffusion Model-Based Image Editing: A Survey (TPAMI 2025) ☆700 · Updated 6 months ago
- Open-Set Grounded Text-to-Image Generation ☆2,188 · Updated last year
- T2I-Adapter ☆3,783 · Updated last year
- Diffusion model papers, survey, and taxonomy ☆3,312 · Updated 3 months ago
- General AI methods for Anything: AnyObject, AnyGeneration, AnyModel, AnyTask, AnyX ☆1,839 · Updated 2 years ago
- FreeU: Free Lunch in Diffusion U-Net (CVPR 2024 Oral) ☆1,894 · Updated last year
- ☆3,428 · Updated last year
- [ECCV 2024, Oral] DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors ☆2,983 · Updated last year
- Lumina-T2X is a unified framework for Text to Any Modality Generation ☆2,249 · Updated 11 months ago
- Fine-Grained Open Domain Image Animation with Motion Guidance ☆957 · Updated last year
- Paint by Example: Exemplar-based Image Editing with Diffusion Models ☆1,241 · Updated 2 years ago
- One-step image-to-image with Stable Diffusion Turbo: sketch2image, day2night, and more ☆2,358 · Updated 5 months ago
- Implementation of GigaGAN, new SOTA GAN out of Adobe; the culmination of nearly a decade of research into GANs ☆1,935 · Updated last year
- VideoCrafter2: Overcoming Data Limitations for High-Quality Video Diffusion Models ☆5,014 · Updated last week
- An image prompt adapter designed to enable a pretrained text-to-image diffusion model to generate images from an image prompt ☆6,411 · Updated last year
- PixArt-Σ: Weak-to-Strong Training of Diffusion Transformer for 4K Text-to-Image Generation ☆1,891 · Updated last year