RehgLab / RAVE
RAVE: Randomized Noise Shuffling for Fast and Consistent Video Editing with Diffusion Models [CVPR 2024]
☆310 · Updated 7 months ago
Alternatives and similar repositories for RAVE
Users interested in RAVE are comparing it to the repositories listed below.
- [TOG 2024] StyleCrafter: Enhancing Stylized Text-to-Video Generation with Style Adapter ☆256 · Updated 6 months ago
- [CVPR 2024] VideoBooth: Diffusion-based Video Generation with Image Prompts ☆304 · Updated last year
- ☆462 · Updated last year
- ConsistI2V: Enhancing Visual Consistency for Image-to-Video Generation [TMLR 2024] ☆251 · Updated last year
- I2V-Adapter: A General Image-to-Video Adapter for Video Diffusion Models ☆204 · Updated last year
- [CVPR 2025] Consistent and Controllable Image Animation with Motion Diffusion Models ☆288 · Updated 4 months ago
- Official PyTorch implementation for "VidToMe: Video Token Merging for Zero-Shot Video Editing" (CVPR 2024) ☆219 · Updated 8 months ago
- [ICLR 2024] Code for FreeNoise based on VideoCrafter ☆419 · Updated last month
- Official implementation of the paper "LivePhoto: Real Image Animation with Text-guided Motion Control" ☆189 · Updated last year
- Official implementation of Ctrl-Adapter: An Efficient and Versatile Framework for Adapting Diverse Controls to Any Diffusion Model (ICLR … ☆456 · Updated 7 months ago
- Official PyTorch implementation for "Space-Time Diffusion Features for Zero-Shot Text-Driven Motion Transfer" ☆186 · Updated 3 months ago
- ☆285 · Updated last year
- [TMM 2025] StableIdentity: Inserting Anybody into Anywhere at First Sight 🔥 ☆260 · Updated 9 months ago
- CosmicMan: A Text-to-Image Foundation Model for Humans (CVPR 2024) ☆348 · Updated last year
- [ECCV 2024] FreeInit: Bridging Initialization Gap in Video Diffusion Models ☆529 · Updated last year
- [ECCV 2024] DragAnything: Motion Control for Anything using Entity Representation ☆499 · Updated last year
- [IEEE TVCG 2024] Customized Video Generation Using Textual and Structural Guidance ☆194 · Updated last year
- Official implementation of the CVPR 2024 paper "FreeControl: Training-Free Spatial Control of Any Text-to-Image Diffusion Model with Any Con… ☆470 · Updated 11 months ago
- This repository contains the code for the CVPR 2024 paper "AVID: Any-Length Video Inpainting with Diffusion Model". ☆172 · Updated last year
- NeurIPS 2024 ☆391 · Updated last year
- VMC: Video Motion Customization using Temporal Attention Adaption for Text-to-Video Diffusion Models (CVPR 2024) ☆194 · Updated last year
- Official implementation of "Ctrl-X: Controlling Structure and Appearance for Text-To-Image Generation Without Guidance" (NeurIPS 2024) ☆302 · Updated 3 weeks ago
- ☆277 · Updated last year
- Official PyTorch implementation for the paper "AnimateZero: Video Diffusion Models are Zero-Shot Image Animators" ☆351 · Updated last year
- [SIGGRAPH 2024] Motion I2V: Consistent and Controllable Image-to-Video Generation with Explicit Motion Modeling ☆178 · Updated last year
- [ICLR 2025] Official implementation of MotionClone: Training-Free Motion Cloning for Controllable Video Generation ☆504 · Updated 3 months ago
- [SIGGRAPH 2025] Official implementation of "Motion Inversion For Video Customization" ☆151 · Updated 11 months ago
- Official implementation of "Control-A-Video: Controllable Text-to-Video Generation with Diffusion Models" ☆398 · Updated 2 years ago
- A simple MagicAnimate pipeline including DensePose inference ☆37 · Updated last year
- [ICLR 2025] Codebase for "CtrLoRA: An Extensible and Efficient Framework for Controllable Image Generation" ☆252 · Updated 8 months ago