muzishen / IMAGGarment-1
🎨 IMAGGarment-1 🎨: Fine-Grained Garment Generation with Controllable Structure, Color, and Logo. It supports precise and customizable garment synthesis guided by multiple conditions (e.g., sketch, color, logo), achieving high realism and controllability for digital fashion design.
☆42 · Updated this week
Alternatives and similar repositories for IMAGGarment-1
Users who are interested in IMAGGarment-1 are comparing it to the libraries listed below.
- ☆43 · Updated 8 months ago
- The code of Edit-Your-Motion ☆13 · Updated last year
- Official PyTorch implementation - Video Motion Transfer with Diffusion Transformers ☆55 · Updated last month
- [CVPR'25 - Rating 555] Official PyTorch implementation of Lumos: Learning Visual Generative Priors without Text ☆50 · Updated 2 months ago
- Implementation code of the paper MIGE: A Unified Framework for Multimodal Instruction-Based Image Generation and Editing ☆63 · Updated 3 months ago
- [CVPR 2024] BIVDiff: A Training-free Framework for General-Purpose Video Synthesis via Bridging Image and Video Diffusion Models ☆73 · Updated 8 months ago
- ☆20 · Updated 10 months ago
- ☆26 · Updated 2 months ago
- UniCombine: Unified Multi-Conditional Combination with Diffusion Transformer ☆88 · Updated this week
- This is the project for 'Any2Caption', Interpreting Any Condition to Caption for Controllable Video Generation ☆41 · Updated 2 months ago
- [ARXIV'24] StyleMaster: Stylize Your Video with Artistic Generation and Translation ☆114 · Updated 2 months ago
- [Arxiv 2024] Edicho: Consistent Image Editing in the Wild ☆118 · Updated 4 months ago
- Magic Mirror: ID-Preserved Video Generation in Video Diffusion Transformers ☆117 · Updated 4 months ago
- [CVPR 2025] Official code for "Synergizing Motion and Appearance: Multi-Scale Compensatory Codebooks for Talking Head Video Generation" ☆50 · Updated this week
- ☆83 · Updated last year
- MotionShop: Zero-Shot Motion Transfer in Video Diffusion Models with Mixture of Score Guidance ☆25 · Updated 5 months ago
- ☆26 · Updated 2 months ago
- Official code for VividPose: Advancing Stable Video Diffusion for Realistic Human Image Animation ☆83 · Updated 11 months ago
- HyperMotion code repository and demo ☆20 · Updated last week
- Analogist: Out-of-the-box Visual In-Context Learning with Image Diffusion Model (SIGGRAPH 2024) ☆37 · Updated 8 months ago
- Code repo for the paper "AvatarVerse: High-quality & Stable 3D Avatar Creation from Text and Pose" (AAAI 2024) ☆59 · Updated last year
- This repo contains the code for the PreciseControl project [ECCV'24] ☆62 · Updated 8 months ago
- [AAAI 2025] Anywhere: A Multi-Agent Framework for User-Guided, Reliable, and Diverse Foreground-Conditioned Image Generation ☆41 · Updated last year
- Official code for CustAny: Customizing Anything from A Single Example. Accepted by CVPR 2025 (Oral) ☆42 · Updated last month
- ☆131 · Updated 2 months ago
- Code for "MVOC: A Training-free Multiple Video Object Composition Method with Diffusion Models" ☆22 · Updated 11 months ago
- [ACM MM24] MotionMaster: Training-free Camera Motion Transfer For Video Generation ☆92 · Updated 7 months ago
- Official PyTorch implementation for SingleInsert ☆27 · Updated last year
- [ECCV 2024] Noise Calibration: Plug-and-play Content-Preserving Video Enhancement using Pre-trained Video Diffusion Models ☆87 · Updated 9 months ago
- Official implementation of "IFAdapter: Instance Feature Control for Grounded Text-to-Image Generation" ☆55 · Updated 8 months ago