ExponentialML / Text-To-Video-Finetuning
Finetune ModelScope's Text To Video model using Diffusers 🧨
☆686 · Updated last year
Alternatives and similar repositories for Text-To-Video-Finetuning
Users interested in Text-To-Video-Finetuning are comparing it to the repositories listed below.
- [ICLR 2024] Official PyTorch implementation of "ControlVideo: Training-free Controllable Text-to-Video Generation" — ☆839 · Updated last year
- Official repo for VideoComposer: Compositional Video Synthesis with Motion Controllability — ☆943 · Updated last year
- Official implementation of "Control-A-Video: Controllable Text-to-Video Generation with Diffusion Models" — ☆398 · Updated 2 years ago
- [ICLR 2024] SEINE: Short-to-Long Video Diffusion Model for Generative Transition and Prediction — ☆941 · Updated 9 months ago
- Prompt-Free Diffusion: Taking "Text" out of Text-to-Image Diffusion Models, arXiv 2023 / CVPR 2024 — ☆757 · Updated last year
- [ECCV 2024] FreeInit: Bridging Initialization Gap in Video Diffusion Models — ☆531 · Updated last year
- Unofficial implementation of [StyleDrop](https://arxiv.org/abs/2306.00983) — ☆583 · Updated 2 years ago
- Official implementation of "Gen-L-Video: Multi-Text to Long Video Generation via Temporal Co-Denoising" — ☆301 · Updated last year
- Official PyTorch implementation for "MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation" presenting "MultiDiffusion" …