ExponentialML / Text-To-Video-Finetuning
Finetune ModelScope's Text To Video model using Diffusers 🧨
☆688 Updated last year
Alternatives and similar repositories for Text-To-Video-Finetuning
Users that are interested in Text-To-Video-Finetuning are comparing it to the libraries listed below
- Official repo for VideoComposer: Compositional Video Synthesis with Motion Controllability ☆936 Updated last year
- [ICLR 2024] Official pytorch implementation of "ControlVideo: Training-free Controllable Text-to-Video Generation" ☆830 Updated last year
- Official Implementation of "Control-A-Video: Controllable Text-to-Video Generation with Diffusion Models" ☆394 Updated last year
- Prompt-Free Diffusion: Taking "Text" out of Text-to-Image Diffusion Models, arxiv 2023 / CVPR 2024 ☆752 Updated last year
- [ICLR 2024] SEINE: Short-to-Long Video Diffusion Model for Generative Transition and Prediction ☆940 Updated 7 months ago
- [IJCV 2024] LaVie: High-Quality Video Generation with Cascaded Latent Diffusion Models ☆931 Updated 7 months ago
- Official Pytorch Implementation for "MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation" presenting "MultiDiffusion" … ☆1,038 Updated last year
- The official implementation for "Gen-L-Video: Multi-Text to Long Video Generation via Temporal Co-Denoising". ☆298 Updated last year
- ICLR 2024 (Spotlight) ☆769 Updated last year
- Transfer the ControlNet with any basemodel in diffusers 🔥 ☆832 Updated 2 years ago
- [ECCV 2024] FreeInit: Bridging Initialization Gap in Video Diffusion Models ☆512 Updated last year
- [ICCV 2023] Consistent Image Synthesis and Editing ☆797 Updated 10 months ago
- ELITE: Encoding Visual Concepts into Textual Embeddings for Customized Text-to-Image Generation (ICCV 2023, Oral) ☆536 Updated last year
- Official implementation of "DreamPose: Fashion Image-to-Video Synthesis via Stable Diffusion" ☆1,007 Updated last year
- ☆469 Updated last week
- LVDM: Latent Video Diffusion Models for High-Fidelity Long Video Generation ☆484 Updated 7 months ago
- [NeurIPS 2023] Uni-ControlNet: All-in-One Control to Text-to-Image Diffusion Models ☆648 Updated 11 months ago
- [CVPR 2024, Highlight] Official code for DragDiffusion ☆1,220 Updated last year
- Official Code for MotionCtrl [SIGGRAPH 2024] ☆1,438 Updated 4 months ago
- ☆448 Updated last month
- Code for Enhancing Detail Preservation for Customized Text-to-Image Generation: A Regularization-Free Approach ☆466 Updated last year
- [IJCV] FastComposer: Tuning-Free Multi-Subject Image Generation with Localized Attention ☆703 Updated 5 months ago
- Video-P2P: Video Editing with Cross-attention Control ☆414 Updated 11 months ago
- MagicAvatar: Multimodal Avatar Generation and Animation ☆624 Updated last year
- ✨ Hotshot-XL: State-of-the-art AI text-to-GIF model trained to work alongside Stable Diffusion XL ☆1,102 Updated last year
- Zero-Shot Video Editing Using Off-The-Shelf Image Diffusion Models ☆353 Updated last year
- ControlLoRA: A Lightweight Neural Network To Control Stable Diffusion Spatial Information ☆601 Updated 10 months ago
- [ICLR 2024] Code for FreeNoise based on VideoCrafter ☆411 Updated 11 months ago
- Fine-Grained Open Domain Image Animation with Motion Guidance ☆928 Updated 8 months ago
- Official PyTorch implementation for the paper "AnimateZero: Video Diffusion Models are Zero-Shot Image Animators" ☆352 Updated last year