ExponentialML / Text-To-Video-Finetuning
Finetune ModelScope's Text-To-Video model using Diffusers 🧨
★690 · Updated last year
Alternatives and similar repositories for Text-To-Video-Finetuning
Users interested in Text-To-Video-Finetuning are comparing it to the libraries listed below.
Sorting:
- [ICLR 2024] Official PyTorch implementation of "ControlVideo: Training-free Controllable Text-to-Video Generation" ★848 · Updated 2 years ago
- Official repo for VideoComposer: Compositional Video Synthesis with Motion Controllability ★949 · Updated 2 years ago
- Official implementation of "Control-A-Video: Controllable Text-to-Video Generation with Diffusion Models" ★399 · Updated 2 years ago
- [ICLR 2024] SEINE: Short-to-Long Video Diffusion Model for Generative Transition and Prediction ★944 · Updated last year
- [IJCV 2024] LaVie: High-Quality Video Generation with Cascaded Latent Diffusion Models ★939 · Updated last year
- [ECCV 2024] FreeInit: Bridging Initialization Gap in Video Diffusion Models ★535 · Updated last year
- [ICCV 2023] Consistent Image Synthesis and Editing ★820 · Updated last year
- ELITE: Encoding Visual Concepts into Textual Embeddings for Customized Text-to-Image Generation (ICCV 2023, Oral) ★542 · Updated last year
- Official PyTorch implementation for "MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation", presenting "MultiDiffusion" … ★1,047 · Updated 2 years ago
- Unofficial implementation of [StyleDrop](https://arxiv.org/abs/2306.00983) ★583 · Updated 2 years ago
- Prompt-Free Diffusion: Taking "Text" out of Text-to-Image Diffusion Models, arXiv 2023 / CVPR 2024 ★757 · Updated 2 years ago
- The official implementation of "Gen-L-Video: Multi-Text to Long Video Generation via Temporal Co-Denoising" ★303 · Updated last month
- Video-P2P: Video Editing with Cross-attention Control ★422 · Updated 4 months ago
- Transfer the ControlNet with any base model in Diffusers 🔥 ★843 · Updated 2 years ago
- Official code for MotionCtrl [SIGGRAPH 2024] ★1,466 · Updated 9 months ago
- [NeurIPS 2023] Uni-ControlNet: All-in-One Control to Text-to-Image Diffusion Models ★660 · Updated last year
- Official PyTorch implementation for "Text2LIVE: Text-Driven Layered Image and Video Editing" (ECCV 2022 Oral) ★891 · Updated 2 years ago
- LVDM: Latent Video Diffusion Models for High-Fidelity Long Video Generation ★496 · Updated last year
- ✨ Hotshot-XL: State-of-the-art AI text-to-GIF model trained to work alongside Stable Diffusion XL ★1,110 · Updated last year
- ControlLoRA: A Lightweight Neural Network to Control Stable Diffusion Spatial Information ★617 · Updated last year
- Fine-Grained Open Domain Image Animation with Motion Guidance ★950 · Updated last year
- Official implementation of "DreamPose: Fashion Image-to-Video Synthesis via Stable Diffusion" ★1,009 · Updated 2 years ago
- [IJCV] FastComposer: Tuning-Free Multi-Subject Image Generation with Localized Attention ★710 · Updated 10 months ago
- ICLR 2024 (Spotlight) ★776 · Updated last year
- Official PyTorch implementation for the paper "AnimateZero: Video Diffusion Models are Zero-Shot Image Animators" ★350 · Updated last year
- [SIGGRAPH Asia 2024 TCS] AnimateLCM: Computation-Efficient Personalized Style Video Generation without Personalized Video Data ★651 · Updated last year
- Make-A-Protagonist: Generic Video Editing with an Ensemble of Experts ★322 · Updated 2 years ago
- [CVPR 2024] PAIR Diffusion: A Comprehensive Multimodal Object-Level Image Editor ★520 · Updated last year
- Code for "Enhancing Detail Preservation for Customized Text-to-Image Generation: A Regularization-Free Approach" ★468 · Updated last year
- Code and data for "AnyV2V: A Tuning-Free Framework For Any Video-to-Video Editing Tasks" [TMLR 2024] ★634 · Updated last year