showlab / Awesome-Video-Diffusion
A curated list of recent diffusion models for video generation, editing, restoration, understanding, etc.
☆4,001 · Updated this week
Alternatives and similar repositories for Awesome-Video-Diffusion:
Users interested in Awesome-Video-Diffusion are comparing it to the libraries listed below:
- [CSUR] A Survey on Video Diffusion Models ☆1,952 · Updated 2 months ago
- VideoSys: An easy and efficient system for video generation ☆1,921 · Updated last month
- A collection of diffusion model papers categorized by their subareas ☆1,507 · Updated this week
- FreeU: Free Lunch in Diffusion U-Net (CVPR2024 Oral) ☆1,813 · Updated last month
- Latte: Latent Diffusion Transformer for Video Generation ☆1,775 · Updated 3 weeks ago
- Lumina-T2X is a unified framework for Text to Any Modality Generation ☆2,156 · Updated this week
- PixArt-α: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis ☆2,971 · Updated 3 months ago
- [ECCV 2024, Oral] DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors ☆2,773 · Updated 5 months ago
- A collection of awesome video generation studies ☆451 · Updated last month
- A collection of resources on controllable generation with text-to-image diffusion models ☆990 · Updated last month
- Controllable video and image generation: SVD, Animate Anyone, ControlNet, ControlNeXt, LoRA ☆1,520 · Updated 4 months ago
- Fine-Grained Open Domain Image Animation with Motion Guidance ☆870 · Updated 4 months ago
- Implementation of Video Diffusion Models, Jonathan Ho's paper extending DDPMs to video generation, in PyTorch ☆1,278 · Updated 9 months ago
- [CVPR2024 Highlight] VBench - We Evaluate Video Generation ☆758 · Updated this week
- Official PyTorch Implementation of "Scalable Diffusion Models with Transformers" ☆6,833 · Updated 8 months ago
- Official repo for VGen: a holistic video generation ecosystem built on diffusion models ☆3,065 · Updated last month
- PixArt-Σ: Weak-to-Strong Training of Diffusion Transformer for 4K Text-to-Image Generation ☆1,761 · Updated 3 months ago
- (ෆ`꒳´ෆ) A Survey on Text-to-Image Generation/Synthesis ☆2,264 · Updated last week
- The image prompt adapter is designed to enable a pretrained text-to-image diffusion model to generate images with an image prompt ☆5,637 · Updated 7 months ago
- [ICML 2024] Mastering Text-to-Image Diffusion: Recaptioning, Planning, and Generating with Multimodal LLMs (RPG) ☆1,755 · Updated 2 weeks ago
- Open-Set Grounded Text-to-Image Generation ☆2,076 · Updated 11 months ago
- Official repo for VideoComposer: Compositional Video Synthesis with Motion Controllability ☆924 · Updated last year
- [ECCV2024] Video Foundation Models & Data for Multimodal Understanding ☆1,676 · Updated last week
- Diffusion Model-Based Image Editing: A Survey (arXiv) ☆557 · Updated this week
- InstaFlow! One-Step Stable Diffusion with Rectified Flow (ICLR 2024) ☆1,259 · Updated 8 months ago
- StreamingT2V: Consistent, Dynamic, and Extendable Long Video Generation from Text ☆1,503 · Updated 2 months ago
- Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation ☆1,576 · Updated 6 months ago
- [CVPR2024, Highlight] Official code for DragDiffusion ☆1,190 · Updated last year
- Official Code for MotionCtrl [SIGGRAPH 2024] ☆1,391 · Updated 5 months ago
- [ICLR 2024] SEINE: Short-to-Long Video Diffusion Model for Generative Transition and Prediction ☆927 · Updated 3 months ago