AILab-CVC / Make-Your-Video
[IEEE TVCG 2024] Customized Video Generation Using Textual and Structural Guidance
☆194 · Updated last year
Alternatives and similar repositories for Make-Your-Video
Users interested in Make-Your-Video are comparing it to the repositories listed below.
- Official implementations for paper: LivePhoto: Real Image Animation with Text-guided Motion Control☆192 · Updated last week
- Official Pytorch Implementation for "Space-Time Diffusion Features for Zero-Shot Text-Driven Motion Transfer"☆191 · Updated 4 months ago
- Retrieval-Augmented Video Generation for Telling a Story☆259 · Updated last year
- Official Pytorch Implementation for "VideoControlNet: A Motion-Guided Video-to-Video Translation Framework by Using Diffusion Model with …☆119 · Updated 2 years ago
- [TOG 2024] StyleCrafter: Enhancing Stylized Text-to-Video Generation with Style Adapter☆262 · Updated 7 months ago
- ConsistI2V: Enhancing Visual Consistency for Image-to-Video Generation [TMLR 2024]☆255 · Updated last year
- [CVPR 2024] VideoBooth: Diffusion-based Video Generation with Image Prompts☆306 · Updated last year
- I2V-Adapter: A General Image-to-Video Adapter for Video Diffusion Models☆205 · Updated last year
- The official implementation for "Gen-L-Video: Multi-Text to Long Video Generation via Temporal Co-Denoising".☆304 · Updated last month
- Official implementation for "ControlVideo: Adding Conditional Control for One Shot Text-to-Video Editing"☆231 · Updated 2 years ago
- Official Pytorch Implementation for "VidToMe: Video Token Merging for Zero-Shot Video Editing" (CVPR 2024)☆226 · Updated 10 months ago
- AnimateDiff I2V version.☆186 · Updated last year
- VMC: Video Motion Customization using Temporal Attention Adaption for Text-to-Video Diffusion Models (CVPR 2024)☆197 · Updated last year
- Ground-A-Video: Zero-shot Grounded Video Editing using Text-to-image Diffusion Models (ICLR 2024)☆140 · Updated last year
- This repository contains the code for the CVPR 2024 paper AVID: Any-Length Video Inpainting with Diffusion Model.☆175 · Updated last year
- Interactive Video Generation via Masked-Diffusion☆84 · Updated last year
- [ICLR 2024] Code for FreeNoise based on VideoCrafter☆420 · Updated 3 months ago
- ☆126 · Updated last year
- [SIGGRAPH Asia 2024] TrailBlazer: Trajectory Control for Diffusion-Based Video Generation☆100 · Updated last year
- ☆154 · Updated 2 years ago
- [SIGGRAPH Asia 2023] An interactive story visualization tool that supports multiple characters☆268 · Updated last year
- Code for the paper "Pix2Video: Video Editing using Image Diffusion"☆76 · Updated 2 years ago
- RAVE: Randomized Noise Shuffling for Fast and Consistent Video Editing with Diffusion Models [CVPR 2024]☆313 · Updated 9 months ago
- Official Implementation of "Control-A-Video: Controllable Text-to-Video Generation with Diffusion Models"☆401 · Updated 2 years ago
- [SIGGRAPH 2024] Motion I2V: Consistent and Controllable Image-to-Video Generation with Explicit Motion Modeling☆184 · Updated last year
- [AAAI 2025] Official Pytorch implementation of "VideoElevator: Elevating Video Generation Quality with Versatile Text-to-Image Diffusion …☆161 · Updated last year
- [ICLR 2024] Code for FreeNoise based on AnimateDiff☆108 · Updated last year
- UniEdit: A Unified Tuning-Free Framework for Video Motion and Appearance Editing☆114 · Updated 7 months ago
- ☆143 · Updated last year
- Official code for VividPose: Advancing Stable Video Diffusion for Realistic Human Image Animation.☆85 · Updated last year