RAVE: Randomized Noise Shuffling for Fast and Consistent Video Editing with Diffusion Models [CVPR 2024]
☆314 · Feb 11, 2025 · Updated last year
Alternatives and similar repositories for RAVE
Users interested in RAVE are comparing it to the libraries listed below.
- Official Pytorch Implementation for "TokenFlow: Consistent Diffusion Features for Consistent Video Editing" presenting "TokenFlow" (ICLR … ☆1,706 · Feb 3, 2025 · Updated last year
- Official Pytorch Implementation for "VidToMe: Video Token Merging for Zero-Shot Video Editing" (CVPR 2024) ☆230 · Jan 22, 2025 · Updated last year
- VMC: Video Motion Customization using Temporal Attention Adaption for Text-to-Video Diffusion Models (CVPR 2024) ☆198 · Mar 29, 2024 · Updated last year
- Code and data for "AnyV2V: A Tuning-Free Framework For Any Video-to-Video Editing Tasks" [TMLR 2024] ☆650 · Oct 29, 2024 · Updated last year
- Ground-A-Video: Zero-shot Grounded Video Editing using Text-to-image Diffusion Models (ICLR 2024) ☆140 · May 21, 2024 · Updated last year
- ☆470 · Feb 12, 2024 · Updated 2 years ago
- [ECCV 2024] FreeInit: Bridging Initialization Gap in Video Diffusion Models ☆545 · Jan 18, 2024 · Updated 2 years ago
- [ECCV 2024 Oral] MotionDirector: Motion Customization of Text-to-Video Diffusion Models ☆1,051 · Aug 21, 2024 · Updated last year
- [ICLR 2024] Code for FreeNoise based on VideoCrafter ☆428 · Aug 25, 2025 · Updated 6 months ago
- CCEdit: Creative and Controllable Video Editing via Diffusion Models ☆114 · Jun 11, 2024 · Updated last year
- [ICCV 2023 Oral] "FateZero: Fusing Attentions for Zero-shot Text-based Video Editing" ☆1,161 · Aug 14, 2023 · Updated 2 years ago
- Pytorch Implementation of FLATTEN: optical FLow-guided ATTENtion for consistent text-to-video editing (ICLR 2024) ☆213 · May 24, 2024 · Updated last year
- [IJCV 2024] LaVie: High-Quality Video Generation with Cascaded Latent Diffusion Models ☆951 · Nov 13, 2024 · Updated last year
- Official implementation of "Slicedit: Zero-Shot Video Editing With Text-to-Image Diffusion Models Using Spatio-Temporal Slices" (ICML 202… ☆58 · Nov 24, 2024 · Updated last year
- [SIGGRAPH ASIA 2024 TCS] AnimateLCM: Computation-Efficient Personalized Style Video Generation without Personalized Video Data ☆661 · Oct 22, 2024 · Updated last year
- Codes for ID-Specific Video Customized Diffusion ☆462 · Feb 22, 2024 · Updated 2 years ago
- [ICCV 2023] StableVideo: Text-driven Consistency-aware Diffusion Video Editing ☆1,445 · Sep 7, 2023 · Updated 2 years ago
- Official Code for MotionCtrl [SIGGRAPH 2024] ☆1,495 · Feb 19, 2025 · Updated last year
- [ICLR 2024] LLM-grounded Video Diffusion Models (LVD): official implementation for the LVD paper ☆168 · May 7, 2024 · Updated last year
- [CVPR 2025] Consistent and Controllable Image Animation with Motion Diffusion Models ☆296 · May 17, 2025 · Updated 10 months ago
- Official Implementation for "ReNoise: Real Image Inversion Through Iterative Noising" ☆262 · Jul 3, 2024 · Updated last year
- ☆144 · Jun 30, 2024 · Updated last year
- [ICLR 2025] Official implementation of MotionClone: Training-Free Motion Cloning for Controllable Video Generation ☆514 · Jun 17, 2025 · Updated 9 months ago
- [CVPR 2024] Official implementation of "Inversion-Free Image Editing with Natural Language" ☆358 · May 28, 2024 · Updated last year
- Stable Video Diffusion Training Code and Extensions ☆734 · Jul 25, 2024 · Updated last year
- Official PyTorch implementation for the paper "AnimateZero: Video Diffusion Models are Zero-Shot Image Animators" ☆359 · Dec 8, 2023 · Updated 2 years ago
- InteractiveVideo: User-Centric Controllable Video Generation with Synergistic Multimodal Instructions ☆132 · Feb 7, 2024 · Updated 2 years ago
- Code repository for T2V-Turbo and T2V-Turbo-v2 ☆314 · Jan 31, 2025 · Updated last year
- [CVPR 2024] LAMP: Learn a Motion Pattern for Few-Shot Based Video Generation ☆283 · Apr 22, 2024 · Updated last year
- [ECCV 2024] Be-Your-Outpainter https://arxiv.org/abs/2403.13745 ☆257 · Apr 19, 2025 · Updated 11 months ago
- Official repo for VGen: a holistic video generation ecosystem for video generation building on diffusion models ☆3,154 · Jan 10, 2025 · Updated last year
- Awesome diffusion Video-to-Video (V2V). A collection of papers on diffusion model-based video editing, aka. video-to-video (V2V) translati… ☆280 · Nov 24, 2025 · Updated 3 months ago
- Code for FreeTraj, a tuning-free method for trajectory-controllable video generation ☆111 · Sep 19, 2025 · Updated 6 months ago
- Training-Free Condition-Guided Text-to-Video Generation ☆63 · Oct 23, 2025 · Updated 4 months ago
- MotionDirector Training For AnimateDiff. Train a MotionLoRA and run it on any compatible AnimateDiff UI. ☆308 · Aug 20, 2024 · Updated last year
- Official implementation for "ControlVideo: Adding Conditional Control for One Shot Text-to-Video Editing" ☆231 · Jun 12, 2023 · Updated 2 years ago
- [ECCV 2024] DragAnything: Motion Control for Anything using Entity Representation ☆505 · Jul 2, 2024 · Updated last year
- [SIGGRAPH 2025] Official implementation of "Motion Inversion For Video Customization" ☆153 · Oct 22, 2024 · Updated last year
- [TOG 2024] StyleCrafter: Enhancing Stylized Text-to-Video Generation with Style Adapter ☆267 · Apr 5, 2025 · Updated 11 months ago