G-U-N / Gen-L-Video
The official implementation for "Gen-L-Video: Multi-Text to Long Video Generation via Temporal Co-Denoising".
☆297 · Updated last year
Alternatives and similar repositories for Gen-L-Video
Users interested in Gen-L-Video are comparing it to the repositories listed below.
- [ICLR 2024] Code for FreeNoise based on VideoCrafter ☆408 · Updated 10 months ago
- Official implementation for "ControlVideo: Adding Conditional Control for One Shot Text-to-Video Editing" ☆229 · Updated last year
- Official implementation of "Control-A-Video: Controllable Text-to-Video Generation with Diffusion Models" ☆390 · Updated last year
- [IEEE TVCG 2024] Customized Video Generation Using Textual and Structural Guidance ☆191 · Updated last year
- [TMLR 2024] ConsistI2V: Enhancing Visual Consistency for Image-to-Video Generation ☆240 · Updated 11 months ago
- [CVPR 2024] VideoBooth: Diffusion-based Video Generation with Image Prompts ☆295 · Updated 11 months ago
- [NeurIPS 2023] Mix-of-Show: Decentralized Low-Rank Adaptation for Multi-Concept Customization of Diffusion Models ☆416 · Updated last year
- Retrieval-Augmented Video Generation for Telling a Story ☆255 · Updated last year
- I2V-Adapter: A General Image-to-Video Adapter for Video Diffusion Models ☆204 · Updated last year
- LVDM: Latent Video Diffusion Models for High-Fidelity Long Video Generation ☆481 · Updated 6 months ago
- AnimateDiff I2V version ☆185 · Updated last year
- Video-P2P: Video Editing with Cross-attention Control ☆410 · Updated 10 months ago
- [TOG 2024] StyleCrafter: Enhancing Stylized Text-to-Video Generation with Style Adapter ☆235 · Updated last month
- Make-A-Protagonist: Generic Video Editing with An Ensemble of Experts ☆324 · Updated last year
- [ECCV 2024] FreeInit: Bridging Initialization Gap in Video Diffusion Models