AILab-CVC / VideoCrafter
VideoCrafter2: Overcoming Data Limitations for High-Quality Video Diffusion Models
☆4,923 · Updated last year
Alternatives and similar repositories for VideoCrafter
Users interested in VideoCrafter are comparing it to the repositories listed below.
- [ICCV 2023 Oral] Text-to-Image Diffusion Models are Zero-Shot Video Generators ☆4,199 · Updated 2 years ago
- [ECCV 2024 Oral] DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors ☆2,913 · Updated 10 months ago
- [ICCV 2023] StableVideo: Text-driven Consistency-aware Diffusion Video Editing ☆1,437 · Updated last year
- Official PyTorch implementation of "TokenFlow: Consistent Diffusion Features for Consistent Video Editing" (ICLR …) ☆1,669 · Updated 5 months ago
- [ICCV 2023] Tune-A-Video: One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation ☆4,345 · Updated last year
- Edit anything in images, powered by segment-anything, ControlNet, StableDiffusion, etc. (ACM MM) ☆3,406 · Updated 5 months ago
- Official repo for VGen: a holistic video generation ecosystem built on diffusion models ☆3,122 · Updated 6 months ago
- [ICCV 2023 Oral] "FateZero: Fusing Attentions for Zero-shot Text-based Video Editing" ☆1,148 · Updated last year
- [SIGGRAPH Asia 2023] Rerender A Video: Zero-Shot Text-Guided Video-to-Video Translation ☆2,986 · Updated last year
- MagicEdit: High-Fidelity Temporally Coherent Video Editing ☆1,801 · Updated last year
- Official implementation of AnimateDiff ☆11,620 · Updated last year
- Text To Video Synthesis Colab ☆1,513 · Updated last year
- [ICML 2024] Mastering Text-to-Image Diffusion: Recaptioning, Planning, and Generating with Multimodal LLMs (RPG) ☆1,821 · Updated 6 months ago
- PixArt-α: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis ☆3,136 · Updated 9 months ago
- ☆2,457 · Updated last year
- Official implementation of DreaMoving ☆1,802 · Updated last year
- Create 🔥 videos with Stable Diffusion by exploring the latent space and morphing between text prompts ☆4,611 · Updated 10 months ago
- T2I-Adapter ☆3,720 · Updated last year
- An image prompt adapter that enables a pretrained text-to-image diffusion model to generate images from an image prompt ☆6,151 · Updated last year
- Official implementation of the paper "AnyDoor: Zero-shot Object-level Image Customization" ☆4,170 · Updated last year
- ☆7,840 · Updated last year
- Character Animation (AnimateAnyone, Face Reenactment) ☆3,426 · Updated last year
- Official implementation of the paper "AnyText: Multilingual Visual Text Generation And Editing" ☆4,727 · Updated 4 months ago
- Official repo for VideoComposer: Compositional Video Synthesis with Motion Controllability ☆940 · Updated last year
- Open-Set Grounded Text-to-Image Generation ☆2,139 · Updated last year
- Consistency Distilled Diff VAE ☆2,192 · Updated last year
- Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference ☆4,542 · Updated last year
- Mora: More like Sora for Generalist Video Generation ☆1,565 · Updated 9 months ago
- Official implementation of "DreamPose: Fashion Image-to-Video Synthesis via Stable Diffusion" ☆1,007 · Updated last year
- [CVPR'24 Highlight] Official PyTorch implementation of CoDeF: Content Deformation Fields for Temporally Consistent Video Processing ☆4,871 · Updated last year