muzishen / IMAGDressing
[AAAI 2025] IMAGDressing: Interactive Modular Apparel Generation for Virtual Dressing. It enables customizable human image generation with flexible garment, pose, and scene control, ensuring high fidelity and garment consistency for virtual dressing.
☆1,257 · Updated 3 weeks ago
Alternatives and similar repositories for IMAGDressing
Users interested in IMAGDressing are comparing it to the libraries listed below.
- Official implementation of Magic Clothing: Controllable Garment-Driven Image Synthesis ☆1,510 · Updated 10 months ago
- [ICLR 2025] CatVTON is a simple and efficient virtual try-on diffusion model with 1) Lightweight Network (899.06M parameters totally), 2)… ☆1,423 · Updated 4 months ago
- Official implementation of "FitDiT: Advancing the Authentic Garment Details for High-fidelity Virtual Try-on" ☆556 · Updated 4 months ago
- Official repository of In-Context LoRA for Diffusion Transformers ☆1,919 · Updated 6 months ago
- ViViD: Video Virtual Try-on using Diffusion Models ☆532 · Updated last year
- [CVPR 2024] StableVITON: Learning Semantic Correspondence with Latent Diffusion Model for Virtual Try-On ☆1,180 · Updated 5 months ago
- StoryMaker: Towards consistent characters in text-to-image generation ☆702 · Updated 6 months ago
- [CVPR 2025] Learning Flow Fields in Attention for Controllable Person Image Generation ☆1,561 · Updated 4 months ago
- A minimal and universal controller for FLUX.1. ☆1,649 · Updated 2 weeks ago
- ☆538 · Updated 3 months ago
- [ECCV 2024] MOFA-Video: Controllable Image Animation via Generative Motion Field Adaptions in Frozen Image-to-Video Diffusion Model. ☆748 · Updated 6 months ago
- An End-to-End Solution for High-Resolution and Long Video Generation Based on Transformer Diffusion ☆2,172 · Updated 3 months ago
- [ECCV 2024] OMG: Occlusion-friendly Personalized Multi-concept Generation In Diffusion Models ☆689 · Updated 11 months ago
- ComfyUI adaptation of IDM-VTON for virtual try-on. ☆524 · Updated 10 months ago
- A more flexible framework that can generate videos at any resolution and creates videos from images. ☆1,132 · Updated this week
- [CVPR 2024] FRESCO: Spatial-Temporal Correspondence for Zero-Shot Video Translation ☆767 · Updated last year
- PyTorch implementation of "Stable-Hair: Real-World Hair Transfer via Diffusion Model" (AAAI 2025) ☆475 · Updated 3 months ago
- ☆555 · Updated last week
- ☆994 · Updated last month
- ☆728 · Updated 7 months ago
- [ACM MM 2024] This is the official code for "AniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion … ☆1,582 · Updated 10 months ago
- A repository for organizing papers, code, and other resources related to Virtual Try-on Models ☆244 · Updated last week
- High-Quality Human Motion Video Generation with Confidence-aware Pose Guidance ☆2,382 · Updated 9 months ago
- Official implementation of "MIMO: Controllable Character Video Synthesis with Spatial Decomposed Modeling" ☆1,513 · Updated last week
- ☆1,190 · Updated 2 months ago
- UNO: A Universal Customization Method for Both Single and Multi-Subject Conditioning ☆1,128 · Updated 2 months ago
- Controllable video and image generation: SVD, Animate Anyone, ControlNet, ControlNeXt, LoRA ☆1,578 · Updated 9 months ago
- ☆423 · Updated 9 months ago
- ☆793 · Updated 7 months ago
- MusePose: a Pose-Driven Image-to-Video Framework for Virtual Human Generation ☆2,553 · Updated 3 months ago