AILab-CVC / Make-Your-Video
[IEEE TVCG 2024] Customized Video Generation Using Textual and Structural Guidance
☆193 · Updated last year
Alternatives and similar repositories for Make-Your-Video
Users interested in Make-Your-Video are comparing it to the repositories listed below
- Official Pytorch Implementation for "Space-Time Diffusion Features for Zero-Shot Text-Driven Motion Transfer""☆181Updated this week
- [TOG 2024]StyleCrafter: Enhancing Stylized Text-to-Video Generation with Style Adapter☆242Updated 3 months ago
- Official implementations for paper: LivePhoto: Real Image Animation with Text-guided Motion Control☆189Updated last year
- ConsistI2V: Enhancing Visual Consistency for Image-to-Video Generation [TMLR 2024]☆248Updated last year
- This repository contains the code for the CVPR 2024 paper AVID: Any-Length Video Inpainting with Diffusion Model. ☆167 · Updated last year
- ☆117 · Updated last year
- Ground-A-Video: Zero-shot Grounded Video Editing using Text-to-image Diffusion Models (ICLR 2024) ☆139 · Updated last year
- [CVPR 2024] VideoBooth: Diffusion-based Video Generation with Image Prompts ☆299 · Updated last year
- Official Pytorch Implementation for "VideoControlNet: A Motion-Guided Video-to-Video Translation Framework by Using Diffusion Model with … ☆120 · Updated last year
- AnimateDiff I2V version. ☆186 · Updated last year
- VMC: Video Motion Customization using Temporal Attention Adaption for Text-to-Video Diffusion Models (CVPR 2024) ☆194 · Updated last year
- Official Pytorch Implementation for "VidToMe: Video Token Merging for Zero-Shot Video Editing" (CVPR 2024) ☆220 · Updated 5 months ago
- Official implementation for "ControlVideo: Adding Conditional Control for One Shot Text-to-Video Editing" ☆229 · Updated 2 years ago
- [SIGGRAPH 2024] Motion I2V: Consistent and Controllable Image-to-Video Generation with Explicit Motion Modeling ☆171 · Updated 9 months ago
- I2V-Adapter: A General Image-to-Video Adapter for Video Diffusion Models ☆205 · Updated last year
- Interactive Video Generation via Masked-Diffusion ☆83 · Updated last year
- The official implementation for "Gen-L-Video: Multi-Text to Long Video Generation via Temporal Co-Denoising". ☆298 · Updated last year
- Retrieval-Augmented Video Generation for Telling a Story ☆258 · Updated last year
- [ICLR 2024] Code for FreeNoise based on VideoCrafter ☆413 · Updated last year
- ☆150 · Updated 2 years ago
- [ICLR 2024] Code for FreeNoise based on AnimateDiff ☆107 · Updated last year
- [ICCV 2025] MagicMirror: ID-Preserved Video Generation in Video Diffusion Transformers ☆118 · Updated 2 weeks ago
- Official Implementation for "A Neural Space-Time Representation for Text-to-Image Personalization" (SIGGRAPH Asia 2023) ☆179 · Updated last year
- RAVE: Randomized Noise Shuffling for Fast and Consistent Video Editing with Diffusion Models [CVPR 2024] ☆307 · Updated 5 months ago
- [AAAI 2025] Official pytorch implementation of "VideoElevator: Elevating Video Generation Quality with Versatile Text-to-Image Diffusion … ☆160 · Updated last year
- Official Implementation of "Control-A-Video: Controllable Text-to-Video Generation with Diffusion Models" ☆396 · Updated 2 years ago
- Subject-Diffusion: Open Domain Personalized Text-to-Image Generation without Test-time Fine-tuning ☆303 · Updated last year
- Code for the paper "Pix2Video: Video Editing using Image Diffusion" ☆70 · Updated last year
- A simple magic animate pipeline including densepose inference. ☆37 · Updated last year
- [ACM MM24] MotionMaster: Training-free Camera Motion Transfer For Video Generation ☆92 · Updated 8 months ago