gemlab-vt / motionshop
MotionShop: Zero-Shot Motion Transfer in Video Diffusion Models with Mixture of Score Guidance
☆26 · Updated 10 months ago
Alternatives and similar repositories for motionshop
Users interested in motionshop are comparing it to the repositories listed below.
- ☆50 · Updated 3 weeks ago
- ☆32 · Updated 7 months ago
- [ECCV 2024] Noise Calibration: Plug-and-play Content-Preserving Video Enhancement using Pre-trained Video Diffusion Models ☆87 · Updated last year
- ☆20 · Updated last year
- [AAAI 2025] Anywhere: A Multi-Agent Framework for User-Guided, Reliable, and Diverse Foreground-Conditioned Image Generation ☆44 · Updated last year
- ☆66 · Updated last year
- Code for the ICLR 2024 paper "Motion Guidance: Diffusion-Based Image Editing with Differentiable Motion Estimators" ☆105 · Updated last year
- Official PyTorch implementation of SingleInsert ☆27 · Updated last year
- Concat-ID: Towards Universal Identity-Preserving Video Synthesis ☆62 · Updated 5 months ago
- Blending Custom Photos with Video Diffusion Transformers ☆48 · Updated 9 months ago
- [AAAI'25] Official implementation of Image Conductor: Precision Control for Interactive Video Synthesis ☆97 · Updated last year
- Project for Any2Caption: Interpreting Any Condition to Caption for Controllable Video Generation ☆45 · Updated 6 months ago
- ☆29 · Updated 7 months ago
- [ACM MM 2024] MotionMaster: Training-free Camera Motion Transfer for Video Generation ☆95 · Updated last year
- [NeurIPS 2024] Official implementation of Attention Interpolation of Text-to-Image Diffusion ☆107 · Updated 11 months ago
- [ECCV 2024] IDOL: Unified Dual-Modal Latent Diffusion for Human-Centric Joint Video-Depth Generation ☆55 · Updated last year
- Eye-for-an-eye: Appearance Transfer with Semantic Correspondence in Diffusion Models ☆31 · Updated last year
- Code for FreeTraj, a tuning-free method for trajectory-controllable video generation ☆108 · Updated last month
- Implementation code for the paper "MIGE: A Unified Framework for Multimodal Instruction-Based Image Generation and Editing" ☆69 · Updated 3 months ago
- Code for the NeurIPS 2024 paper "SF-V: Single Forward Video Generation Model" ☆99 · Updated 10 months ago
- Official repository for "LatentMan: Generating Consistent Animated Characters using Image Diffusion Models" [CVPRW 2024] ☆21 · Updated last year
- Directed Diffusion: Direct Control of Object Placement through Attention Guidance (AAAI 2024) ☆80 · Updated last year
- [arXiv'25] AnyCharV: Bootstrap Controllable Character Video Generation with Fine-to-Coarse Guidance ☆40 · Updated 8 months ago
- Code for "TVG: A Training-free Transition Video Generation Method with Diffusion Models" ☆43 · Updated last year
- [ICLR 2024] Code for FreeNoise based on LaVie ☆33 · Updated last year
- [ICCV 2025] MagicMirror: ID-Preserved Video Generation in Video Diffusion Transformers ☆125 · Updated 3 months ago
- HyperMotion, a pose-guided human image animation framework built on a large-scale video diffusion Transformer ☆119 · Updated 3 months ago
- [ICCV 2025] FreeFlux: Understanding and Exploiting Layer-Specific Roles in RoPE-Based MMDiT for Versatile Image Editing ☆63 · Updated last month
- [CVPR 2024] Official code for Drag Your Noise: Interactive Point-based Editing via Diffusion Semantic Propagation ☆86 · Updated last year
- [NeurIPS 2024 D&B Track] Implementation of "FiVA: Fine-grained Visual Attribute Dataset for Text-to-Image Diffusion Models" ☆72 · Updated 9 months ago