AILab-CVC / Make-Your-Video
[IEEE TVCG 2024] Customized Video Generation Using Textual and Structural Guidance
☆194 Updated last year
Alternatives and similar repositories for Make-Your-Video
Users interested in Make-Your-Video are comparing it to the repositories listed below.
- Official Pytorch Implementation for "Space-Time Diffusion Features for Zero-Shot Text-Driven Motion Transfer" ☆184 Updated last month
- [TOG 2024] StyleCrafter: Enhancing Stylized Text-to-Video Generation with Style Adapter ☆251 Updated 4 months ago
- Official implementation of the paper "LivePhoto: Real Image Animation with Text-guided Motion Control" ☆190 Updated last year
- Retrieval-Augmented Video Generation for Telling a Story ☆258 Updated last year
- [CVPR 2024] VideoBooth: Diffusion-based Video Generation with Image Prompts ☆299 Updated last year
- Official Pytorch Implementation for "VideoControlNet: A Motion-Guided Video-to-Video Translation Framework by Using Diffusion Model with …" ☆120 Updated 2 years ago
- The official implementation for "Gen-L-Video: Multi-Text to Long Video Generation via Temporal Co-Denoising" ☆300 Updated last year
- Ground-A-Video: Zero-shot Grounded Video Editing using Text-to-Image Diffusion Models (ICLR 2024) ☆139 Updated last year
- Official Pytorch Implementation for "VidToMe: Video Token Merging for Zero-Shot Video Editing" (CVPR 2024) ☆219 Updated 7 months ago
- VMC: Video Motion Customization using Temporal Attention Adaption for Text-to-Video Diffusion Models (CVPR 2024) ☆194 Updated last year
- ConsistI2V: Enhancing Visual Consistency for Image-to-Video Generation [TMLR 2024] ☆251 Updated last year
- This repository contains the code for the CVPR 2024 paper "AVID: Any-Length Video Inpainting with Diffusion Model" ☆168 Updated last year
- I2V-Adapter: A General Image-to-Video Adapter for Video Diffusion Models ☆205 Updated last year
- Interactive Video Generation via Masked-Diffusion ☆84 Updated last year
- Official implementation for "ControlVideo: Adding Conditional Control for One Shot Text-to-Video Editing" ☆230 Updated 2 years ago
- [ICLR 2024] Code for FreeNoise based on VideoCrafter ☆415 Updated last year
- AnimateDiff I2V version ☆186 Updated last year
- [WACV 2025] Follow-Your-Handle: This repo is the official implementation of "MagicStick: Controllable Video Editing via Control Handle Tr…" ☆95 Updated last year
- [SIGGRAPH Asia 2024] TrailBlazer: Trajectory Control for Diffusion-Based Video Generation ☆100 Updated last year
- ☆119 Updated last year
- ☆152 Updated 2 years ago
- [SIGGRAPH Asia 2023] An interactive story visualization tool that supports multiple characters ☆261 Updated last year
- [SIGGRAPH 2024] Motion-I2V: Consistent and Controllable Image-to-Video Generation with Explicit Motion Modeling ☆174 Updated 10 months ago
- A simple MagicAnimate pipeline, including DensePose inference ☆37 Updated last year
- Official Implementation of "Control-A-Video: Controllable Text-to-Video Generation with Diffusion Models" ☆398 Updated 2 years ago
- [ICLR 2024] Code for FreeNoise based on AnimateDiff ☆107 Updated last year
- RAVE: Randomized Noise Shuffling for Fast and Consistent Video Editing with Diffusion Models [CVPR 2024] ☆310 Updated 6 months ago
- Zero-Shot Video Editing Using Off-The-Shelf Image Diffusion Models ☆355 Updated 2 years ago
- [TMM 2025] StableIdentity: Inserting Anybody into Anywhere at First Sight 🔥 ☆259 Updated 7 months ago
- Official Implementation for "A Neural Space-Time Representation for Text-to-Image Personalization" (SIGGRAPH Asia 2023) ☆179 Updated last year