AILab-CVC / Make-Your-Video
[IEEE TVCG 2024] Customized Video Generation Using Textual and Structural Guidance
☆193 · Updated last year
Alternatives and similar repositories for Make-Your-Video
Users interested in Make-Your-Video are comparing it to the repositories listed below.
- Official PyTorch Implementation for "Space-Time Diffusion Features for Zero-Shot Text-Driven Motion Transfer" ☆182 · Updated 3 weeks ago
- Official implementation for the paper "LivePhoto: Real Image Animation with Text-guided Motion Control" ☆190 · Updated last year
- [TOG 2024] StyleCrafter: Enhancing Stylized Text-to-Video Generation with Style Adapter ☆249 · Updated 3 months ago
- Retrieval-Augmented Video Generation for Telling a Story ☆258 · Updated last year
- Interactive Video Generation via Masked-Diffusion ☆84 · Updated last year
- VMC: Video Motion Customization using Temporal Attention Adaption for Text-to-Video Diffusion Models (CVPR 2024) ☆194 · Updated last year
- [CVPR 2024] VideoBooth: Diffusion-based Video Generation with Image Prompts ☆299 · Updated last year
- ☆118 · Updated last year
- [SIGGRAPH Asia 2024] TrailBlazer: Trajectory Control for Diffusion-Based Video Generation ☆100 · Updated last year
- ConsistI2V: Enhancing Visual Consistency for Image-to-Video Generation [TMLR 2024] ☆252 · Updated last year
- Ground-A-Video: Zero-shot Grounded Video Editing using Text-to-image Diffusion Models (ICLR 2024) ☆139 · Updated last year
- Official implementation for "ControlVideo: Adding Conditional Control for One Shot Text-to-Video Editing" ☆230 · Updated 2 years ago
- The official implementation for "Gen-L-Video: Multi-Text to Long Video Generation via Temporal Co-Denoising". ☆299 · Updated last year
- This repository contains the code for the CVPR 2024 paper AVID: Any-Length Video Inpainting with Diffusion Model. ☆167 · Updated last year
- Official PyTorch Implementation for "VideoControlNet: A Motion-Guided Video-to-Video Translation Framework by Using Diffusion Model with …" ☆120 · Updated 2 years ago
- I2V-Adapter: A General Image-to-Video Adapter for Video Diffusion Models ☆205 · Updated last year
- Official Implementation of "Control-A-Video: Controllable Text-to-Video Generation with Diffusion Models" ☆397 · Updated 2 years ago
- ☆151 · Updated 2 years ago
- [ICLR 2024] Code for FreeNoise based on VideoCrafter ☆414 · Updated last year
- AnimateDiff I2V version. ☆186 · Updated last year
- Official PyTorch Implementation for "VidToMe: Video Token Merging for Zero-Shot Video Editing" (CVPR 2024) ☆220 · Updated 6 months ago
- [ACM MM24] MotionMaster: Training-free Camera Motion Transfer For Video Generation ☆93 · Updated 9 months ago
- A simple MagicAnimate pipeline including DensePose inference. ☆37 · Updated last year
- [SIGGRAPH Asia 2023] An interactive story visualization tool that supports multiple characters ☆261 · Updated last year
- [ICCV 2025] MagicMirror: ID-Preserved Video Generation in Video Diffusion Transformers ☆119 · Updated last month
- RAVE: Randomized Noise Shuffling for Fast and Consistent Video Editing with Diffusion Models [CVPR 2024] ☆310 · Updated 5 months ago
- Official code for VividPose: Advancing Stable Video Diffusion for Realistic Human Image Animation. ☆84 · Updated last year
- Implementation of HyperDreamBooth: HyperNetworks for Fast Personalization of Text-to-Image Models ☆175 · Updated 2 years ago
- [SIGGRAPH 2024] Motion I2V: Consistent and Controllable Image-to-Video Generation with Explicit Motion Modeling ☆172 · Updated 10 months ago
- Text-conditioned image-to-video generation based on diffusion models. ☆53 · Updated last year