yuanze-lin / IllumiCraft
The official code for "IllumiCraft: Unified Geometry and Illumination Diffusion for Controllable Video Generation"
☆19 · Updated 2 months ago
Alternatives and similar repositories for IllumiCraft
Users interested in IllumiCraft are comparing it to the repositories listed below.
- [Official Implementation] Subject-driven Video Generation via Disentangled Identity and Motion ☆55 · Updated 2 weeks ago
- ☆46 · Updated last month
- ☆30 · Updated 5 months ago
- AniCrafter: Customizing Realistic Human-Centric Animation via Avatar-Background Conditioning in Video Diffusion Models ☆108 · Updated last month
- This is the project for 'Any2Caption', Interpreting Any Condition to Caption for Controllable Video Generation ☆44 · Updated 4 months ago
- MotionShop: Zero-Shot Motion Transfer in Video Diffusion Models with Mixture of Score Guidance ☆26 · Updated 8 months ago
- DanceTogether! Identity-Preserving Multi-Person Interactive Video Generation ☆33 · Updated 3 weeks ago
- ☆82 · Updated 3 weeks ago
- This is the official repository for "LatentMan: Generating Consistent Animated Characters using Image Diffusion Models" [CVPRW 2024] ☆22 · Updated last year
- DreamCinema: Cinematic Transfer with Free Camera and 3D Character ☆96 · Updated 2 months ago
- Official PyTorch implementation - Video Motion Transfer with Diffusion Transformers ☆71 · Updated 3 weeks ago
- HyperMotion is a pose-guided human image animation framework based on a large-scale video diffusion Transformer. ☆108 · Updated last month
- Implementation of paper: Flux Already Knows – Activating Subject-Driven Image Generation without Training ☆49 · Updated 3 months ago
- Phantom-Data: Towards a General Subject-Consistent Video Generation Dataset ☆71 · Updated 2 months ago
- Official code release of our paper "Shape-for-Motion: Precise and Consistent Video Editing with 3D Proxy" ☆45 · Updated last month
- ☆27 · Updated 2 months ago
- Official implementation of "Perception-as-Control: Fine-grained Controllable Image Animation with 3D-aware Motion Representation" (ICCV 2… ☆69 · Updated 3 weeks ago
- ☆43 · Updated last month
- Frame In-N-Out: Unbounded Controllable Image-to-Video Generation ☆22 · Updated last month
- ☆64 · Updated last year
- [ECCV 2024] IDOL: Unified Dual-Modal Latent Diffusion for Human-Centric Joint Video-Depth Generation ☆55 · Updated 11 months ago
- [SIGGRAPH Asia 2024] I2VEdit: First-Frame-Guided Video Editing via Image-to-Video Diffusion Models ☆69 · Updated 2 months ago
- [AAAI-2025] Official implementation of Image Conductor: Precision Control for Interactive Video Synthesis ☆95 · Updated last year
- [CVPR'25] Official PyTorch implementation of AvatarArtist: Open-Domain 4D Avatarization. ☆64 · Updated 2 months ago
- [arXiv'25] AnyCharV: Bootstrap Controllable Character Video Generation with Fine-to-Coarse Guidance ☆39 · Updated 6 months ago
- Official repository for HOComp: Interaction-Aware Human-Object Composition ☆21 · Updated last month
- ☆25 · Updated 3 months ago
- [CVPR 2025] Zero-1-to-A: Zero-Shot One Image to Animatable Head Avatars Using Video Diffusion ☆41 · Updated 5 months ago
- Official PyTorch/Diffusers implementation of "RectifiedHR: Enable Efficient High Resolution Image Generation via Energy Rectification" ☆21 · Updated 5 months ago
- [ICLR '25] AvatarGO: Zero-shot 4D Human-Object Interaction Generation and Animation ☆66 · Updated 5 months ago