carpedkm / disentangled-subject-to-vid
[Official Implementation] Subject-driven Video Generation via Disentangled Identity and Motion
☆55 · Updated 3 months ago
Alternatives and similar repositories for disentangled-subject-to-vid
Users interested in disentangled-subject-to-vid are comparing it to the repositories listed below.
- ☆50 · Updated last month
- Implementation of the paper "Flux Already Knows – Activating Subject-Driven Image Generation without Training" ☆57 · Updated 2 months ago
- [Official Implementation] Improving Editability in Image Generation with Layer-wise Memory, CVPR 2025 ☆35 · Updated 2 months ago
- The project for "Any2Caption: Interpreting Any Condition to Caption for Controllable Video Generation" ☆45 · Updated 7 months ago
- Official PyTorch implementation of "Video Motion Transfer with Diffusion Transformers" ☆73 · Updated 3 months ago
- Official repo of the paper "CamI2V: Camera-Controlled Image-to-Video Diffusion Model" ☆153 · Updated last month
- [NeurIPS 2025] Official code for "IllumiCraft: Unified Geometry and Illumination Diffusion for Controllable Video Generation" ☆20 · Updated 5 months ago
- [ACM MM24] MotionMaster: Training-free Camera Motion Transfer For Video Generation ☆97 · Updated last year
- ☆27 · Updated 2 months ago
- CustomDiffusion360: Customizing Text-to-Image Diffusion with Camera Viewpoint Control ☆171 · Updated 11 months ago
- Official implementation of "ControlFace: Harnessing Facial Parametric Control for Face Rigging" ☆40 · Updated 8 months ago
- Official implementation of "Emergent Temporal Correspondences from Video Diffusion Models" ☆85 · Updated 4 months ago
- ✨ PyTorch implementation of "Cora: Correspondence-aware Image Editing Using Few-Step Diffusion", accepted at SIGGRAPH 2025 ☆29 · Updated 5 months ago
- [AAAI'25] Official implementation of "Image Conductor: Precision Control for Interactive Video Synthesis" ☆99 · Updated last year
- [SIGGRAPH Asia 2024] I2VEdit: First-Frame-Guided Video Editing via Image-to-Video Diffusion Models ☆74 · Updated 4 months ago
- [ECCV 2024] Noise Calibration: Plug-and-play Content-Preserving Video Enhancement using Pre-trained Video Diffusion Models ☆87 · Updated last year
- MotionShop: Zero-Shot Motion Transfer in Video Diffusion Models with Mixture of Score Guidance ☆26 · Updated 11 months ago
- [SIGGRAPH 2025] Official implementation of "Motion Inversion For Video Customization" ☆151 · Updated last year
- [ICLR 2025] DreamCatalyst: Fast and High-Quality 3D Editing via Controlling Editability and Identity Preservation ☆95 · Updated 9 months ago
- [CVPRW 2025] Progressive Autoregressive Video Diffusion Models: https://arxiv.org/abs/2410.08151 ☆85 · Updated 6 months ago
- [SIGGRAPH Asia 2025] Official code release of the paper "Shape-for-Motion: Precise and Consistent Video Editing with 3D Proxy" ☆51 · Updated last month
- [ICCV 2025] Balanced Image Stylization with Style Matching Score ☆62 · Updated last month
- Phantom-Data: Towards a General Subject-Consistent Video Generation Dataset ☆93 · Updated 3 weeks ago
- [SIGGRAPH Asia'25] Enabling Reference-based Camera Control via Context without Explicit 3D Estimation ☆92 · Updated 3 weeks ago
- ☆29 · Updated 7 months ago
- ☆33 · Updated 5 months ago
- [ICLR 2025] Trajectory Attention For Fine-grained Video Motion Control ☆95 · Updated 6 months ago
- Official implementation of "A Noise is Worth Diffusion Guidance"; code and weights will be available soon ☆48 · Updated 11 months ago
- ☆53 · Updated 3 weeks ago
- AniCrafter: Customizing Realistic Human-Centric Animation via Avatar-Background Conditioning in Video Diffusion Models ☆125 · Updated 3 months ago