lucidrains / make-a-video-pytorch
Implementation of Make-A-Video, the new SOTA text-to-video generator from Meta AI, in Pytorch
☆1,971 · Updated last year
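The core building block behind Make-A-Video is a factorized ("pseudo-3D") space-time layer: a 2D spatial operation applied to each frame independently, followed by a 1D temporal operation applied across frames at each spatial location, with the temporal part initialized to the identity so that image-only pretraining is preserved before any video finetuning. The sketch below is a minimal illustration of that idea in plain PyTorch (using einops for reshaping); the class name `Pseudo3dConv` and its arguments are illustrative assumptions, not the make-a-video-pytorch API.

```python
import torch
import torch.nn as nn
from einops import rearrange

class Pseudo3dConv(nn.Module):
    # Illustrative sketch of a factorized space-time convolution; not the
    # make-a-video-pytorch API.
    def __init__(self, dim, kernel_size=3):
        super().__init__()
        # spatial 2D conv, applied to every frame independently
        self.spatial = nn.Conv2d(dim, dim, kernel_size, padding=kernel_size // 2)
        # temporal 1D conv, applied across frames at every spatial location
        self.temporal = nn.Conv1d(dim, dim, kernel_size, padding=kernel_size // 2)
        # identity-initialize the temporal conv so the layer initially behaves
        # like its image-only (spatial) counterpart
        nn.init.dirac_(self.temporal.weight)
        nn.init.zeros_(self.temporal.bias)

    def forward(self, x):
        # x: (batch, channels, frames, height, width)
        b, c, f, h, w = x.shape
        x = rearrange(x, 'b c f h w -> (b f) c h w')
        x = self.spatial(x)
        x = rearrange(x, '(b f) c h w -> (b h w) c f', b=b)
        x = self.temporal(x)
        x = rearrange(x, '(b h w) c f -> b c f h w', h=h, w=w)
        return x

video = torch.randn(1, 64, 8, 32, 32)   # batch, channels, frames, height, width
out = Pseudo3dConv(dim=64)(video)       # output shape matches the input
```

Because the temporal conv starts as an identity, the layer can be dropped into a pretrained text-to-image backbone without changing its outputs, which is the usual motivation for this factorization.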
Alternatives and similar repositories for make-a-video-pytorch
Users interested in make-a-video-pytorch are comparing it to the libraries listed below.
- Implementation of Phenaki Video, which uses MaskGIT to produce text-guided videos of up to 2 minutes in length, in Pytorch ☆776 · Updated 11 months ago
- Implementation of Video Diffusion Models, Jonathan Ho's new paper extending DDPMs to video generation, in Pytorch ☆1,329 · Updated last year
- Versatile Diffusion: Text, Images and Variations All in One Diffusion Model, arXiv 2022 / ICCV 2023 ☆1,333 · Updated last year
- [ICML'23] StyleGAN-T: Unlocking the Power of GANs for Fast Large-Scale Text-to-Image Synthesis ☆1,185 · Updated 2 years ago
- Unofficial implementation of "Prompt-to-Prompt Image Editing with Cross Attention Control" with Stable Diffusion ☆1,335 · Updated 2 years ago
- Official Pytorch Implementation for "Text2LIVE: Text-Driven Layered Image and Video Editing" (ECCV 2022 Oral) ☆889 · Updated 2 years ago
- Custom Diffusion: Multi-Concept Customization of Text-to-Image Diffusion (CVPR 2023) ☆1,952 · Updated last year
- Implementation of GigaGAN, new SOTA GAN out of Adobe. Culmination of nearly a decade of research into GANs ☆1,911 · Updated 6 months ago
- Implementation of Muse: Text-to-Image Generation via Masked Generative Transformers, in Pytorch ☆901 · Updated last year
- Official implementation of "Composer: Creative and Controllable Image Synthesis with Composable Conditions" ☆1,562 · Updated last year
- A large-scale text-to-image prompt gallery dataset based on Stable Diffusion ☆1,288 · Updated last year
- Official code repo for the paper "CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers" ☆952 · Updated 2 years ago
- Deep Learning Examples ☆824 · Updated 9 months ago
- [ICCV 2023 Oral] Text-to-Image Diffusion Models are Zero-Shot Video Generators ☆4,197 · Updated 2 years ago
- Implementation of NÜWA, state-of-the-art attention network for text-to-video synthesis, in Pytorch ☆550 · Updated 2 years ago
- Karras et al. (2022) diffusion models for PyTorch ☆2,483 · Updated 6 months ago
- [ICCV 2023 Oral] "FateZero: Fusing Attentions for Zero-shot Text-based Video Editing" ☆1,148 · Updated last year
- Official implementation of VQ-Diffusion ☆948 · Updated last year
- Pretrained DALL-E 2 from LAION ☆503 · Updated 2 years ago
- Official Pytorch Implementation for "Plug-and-Play Diffusion Features for Text-Driven Image-to-Image Translation" (CVPR 2023) ☆981 · Updated 2 years ago
- Official Pytorch Implementation for "TokenFlow: Consistent Diffusion Features for Consistent Video Editing" presenting "TokenFlow" (ICLR …) ☆1,664 · Updated 5 months ago
- Zero-shot Image-to-Image Translation [SIGGRAPH 2023] ☆1,124 · Updated 9 months ago
- Create 🔥 videos with Stable Diffusion by exploring the latent space and morphing between text prompts ☆4,612 · Updated 9 months ago
- Finetune ModelScope's Text To Video model using Diffusers 🧨 ☆687 · Updated last year