gemlab-vt / motionshop
MotionShop: Zero-Shot Motion Transfer in Video Diffusion Models with Mixture of Score Guidance
☆26 · Updated last year
Alternatives and similar repositories for motionshop
Users interested in motionshop are comparing it to the repositories listed below.
- ☆52 · Updated last week
- ☆32 · Updated 9 months ago
- ☆20 · Updated last year
- Official repository for "LatentMan: Generating Consistent Animated Characters using Image Diffusion Models" [CVPRW 2024] ☆22 · Updated last year
- HyperMotion: a pose-guided human image animation framework built on a large-scale video diffusion Transformer ☆130 · Updated 5 months ago
- [AAAI 2025] Anywhere: A Multi-Agent Framework for User-Guided, Reliable, and Diverse Foreground-Conditioned Image Generation ☆44 · Updated last year
- Blending Custom Photos with Video Diffusion Transformers ☆48 · Updated 11 months ago
- ☆79 · Updated 3 months ago
- ☆66 · Updated last year
- Code for the ICLR 2024 paper "Motion Guidance: Diffusion-Based Image Editing with Differentiable Motion Estimators" ☆107 · Updated last week
- [ECCV 2024] Noise Calibration: Plug-and-play Content-Preserving Video Enhancement using Pre-trained Video Diffusion Models ☆88 · Updated last year
- Frame Guidance: Training-Free Guidance for Frame-Level Control in Video Diffusion Model (arXiv 2025) ☆38 · Updated 6 months ago
- [NeurIPS 2024] Official implementation of Attention Interpolation of Text-to-Image Diffusion ☆107 · Updated last year
- [arXiv'25] AnyCharV: Bootstrap Controllable Character Video Generation with Fine-to-Coarse Guidance ☆40 · Updated 10 months ago
- [NeurIPS 2025] Official code for "IllumiCraft: Unified Geometry and Illumination Diffusion for Controllable Video Generation" ☆21 · Updated 7 months ago
- Project for "Any2Caption: Interpreting Any Condition to Caption for Controllable Video Generation" ☆49 · Updated 9 months ago
- Official PyTorch implementation of SingleInsert ☆28 · Updated last year
- ☆29 · Updated 9 months ago
- Code for the NeurIPS 2024 paper "SF-V: Single Forward Video Generation Model" ☆99 · Updated last year
- [ECCV 2024] IDOL: Unified Dual-Modal Latent Diffusion for Human-Centric Joint Video-Depth Generation ☆56 · Updated last year
- [ACM MM 2024] MotionMaster: Training-free Camera Motion Transfer for Video Generation ☆98 · Updated last year
- [AAAI 2026] Official implementation of DreamRunner: Fine-Grained Storytelling Video Generation with Retrieval-Augmented Motion Adaptation ☆76 · Updated 7 months ago
- Code for "TVG: A Training-free Transition Video Generation Method with Diffusion Models" ☆46 · Updated last year
- Eye-for-an-eye: Appearance Transfer with Semantic Correspondence in Diffusion Models ☆31 · Updated last year
- Official implementation of "DreamOmni3: Scribble-based Editing and Generation" ☆32 · Updated 2 weeks ago
- ☆66 · Updated last year
- [AAAI 2025] Official implementation of Image Conductor: Precision Control for Interactive Video Synthesis ☆100 · Updated last year
- Concat-ID: Towards Universal Identity-Preserving Video Synthesis ☆65 · Updated 8 months ago
- [Unofficial implementation] Subject-driven Video Generation via Disentangled Identity and Motion ☆57 · Updated last week
- [ACM MM 2024] Official implementation of "ZePo: Zero-Shot Portrait Stylization with Faster Sampling" ☆43 · Updated last year