TIGER-AI-Lab / AnyV2V
Code and data for "AnyV2V: A Tuning-Free Framework For Any Video-to-Video Editing Tasks" (TMLR 2024)
☆567 · Updated 4 months ago
Alternatives and similar repositories for AnyV2V:
Users interested in AnyV2V are comparing it to the repositories listed below:
- [ECCV 2024] FreeInit: Bridging Initialization Gap in Video Diffusion Models ☆512 · Updated last year
- [ECCV 2024] MOFA-Video: Controllable Image Animation via Generative Motion Field Adaptions in Frozen Image-to-Video Diffusion Model ☆730 · Updated 3 months ago
- [ICLR 2025] Official implementation of MotionClone: Training-Free Motion Cloning for Controllable Video Generation ☆463 · Updated 2 months ago
- NeurIPS 2024 ☆359 · Updated 5 months ago
- Code for [CVPR 2024] VideoSwap: Customized Video Subject Swapping with Interactive Semantic Point Correspondence ☆382 · Updated 3 months ago
- [ECCV 2024] DragAnything: Motion Control for Anything using Entity Representation ☆485 · Updated 8 months ago
- [ECCV 2024] OMG: Occlusion-friendly Personalized Multi-concept Generation in Diffusion Models ☆682 · Updated 8 months ago
- Official PyTorch implementation of StreamV2V ☆480 · Updated last month
- Official implementation of Ctrl-Adapter: An Efficient and Versatile Framework for Adapting Diverse Controls to Any Diffusion Model (ICLR …) ☆425 · Updated last month
- MotionFollower: Editing Video Motion via Lightweight Score-Guided Diffusion ☆214 · Updated 9 months ago
- [CVPR 2024] FRESCO: Spatial-Temporal Correspondence for Zero-Shot Video Translation ☆755 · Updated 9 months ago
- [ICML 2024] MagicPose (also known as MagicDance): Realistic Human Poses and Facial Expressions Retargeting with Identity-aware Diffusion ☆745 · Updated 8 months ago
- [SIGGRAPH Asia 2024] Follow-Your-Emoji: Official implementation of "Follow-Your-Emoji: Fine-Controllable and Expressive …" ☆373 · Updated 6 months ago
- [ICLR 2024] Code for FreeNoise based on VideoCrafter ☆399 · Updated 8 months ago
- [SIGGRAPH Asia 2024 TCS] AnimateLCM: Computation-Efficient Personalized Style Video Generation without Personalized Video Data ☆636 · Updated 5 months ago
- Official PyTorch implementation for the paper "AnimateZero: Video Diffusion Models are Zero-Shot Image Animators" ☆350 · Updated last year
- [ICLR 2024] SEINE: Short-to-Long Video Diffusion Model for Generative Transition and Prediction ☆933 · Updated 4 months ago
- I2V-Adapter: A General Image-to-Video Adapter for Video Diffusion Models ☆206 · Updated last year
- [CVPR 2024] VideoBooth: Diffusion-based Video Generation with Image Prompts ☆292 · Updated 9 months ago
- 📹 A more flexible CogVideoX that can generate videos at any resolution and create videos from images ☆675 · Updated this week
- StoryMaker: Towards Consistent Characters in Text-to-Image Generation ☆666 · Updated 3 months ago
- Official code of VEnhancer: Generative Space-Time Enhancement for Video Generation ☆513 · Updated 6 months ago
- DesignEdit: Unify Spatial-Aware Image Editing via Training-free Inpainting with a Multi-Layered Latent Diffusion Framework ☆330 · Updated 3 months ago
- [ECCV 2024] HiDiffusion: Increases the resolution and speed of your diffusion model by adding only a single line of code! ☆807 · Updated 3 months ago
- [CVPR 2024] Make-Your-Anchor: A Diffusion-based 2D Avatar Generation Framework ☆346 · Updated last month