HumanAIGC / OutfitAnyone
Outfit Anyone: Ultra-high quality virtual try-on for Any Clothing and Any Person
☆5,967 · Updated last year
Alternatives and similar repositories for OutfitAnyone
Users who are interested in OutfitAnyone are comparing it to the libraries listed below.
- [AAAI 2025] Official implementation of "OOTDiffusion: Outfitting Fusion based Latent Diffusion for Controllable Virtual Try-on" ☆6,481 · Updated last year
- Official implementations for paper: Anydoor: zero-shot object-level image customization ☆4,196 · Updated last year
- [ECCV2024] IDM-VTON: Improving Diffusion Models for Authentic Virtual Try-on in the Wild ☆4,772 · Updated 9 months ago
- ☆2,459 · Updated last year
- Animate Anyone: Consistent and Controllable Image-to-Video Synthesis for Character Animation ☆14,793 · Updated 2 months ago
- Official implementation of DreaMoving ☆1,801 · Updated last year
- Character Animation (AnimateAnyone, Face Reenactment) ☆3,460 · Updated last year
- Official implementation code of the paper <AnyText: Multilingual Visual Text Generation And Editing> ☆4,813 · Updated 9 months ago
- Official implementation of Magic Clothing: Controllable Garment-Driven Image Synthesis ☆1,536 · Updated last year
- Official repo for VGen: a holistic video generation ecosystem building on diffusion models ☆3,147 · Updated 11 months ago
- [CVPR 2024] Official repository for "MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model" ☆10,878 · Updated 3 months ago
- AniPortrait: Audio-Driven Synthesis of Photorealistic Portrait Animation ☆5,023 · Updated last year
- [ECCV 2024, Oral] DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors ☆2,970 · Updated last year
- MusePose: a Pose-Driven Image-to-Video Framework for Virtual Human Generation ☆2,633 · Updated 9 months ago
- The image prompt adapter is designed to enable a pretrained text-to-image diffusion model to generate images with an image prompt. ☆6,343 · Updated last year
- Unofficial Implementation of Animate Anyone ☆2,934 · Updated last year
- [CVPR2024] StableVITON: Learning Semantic Correspondence with Latent Diffusion Model for Virtual Try-On ☆1,241 · Updated last month
- Emote Portrait Alive: Generating Expressive Portrait Videos with Audio2Video Diffusion Model under Weak Conditions ☆7,652 · Updated last year
- Let us democratise high-resolution generation! (CVPR 2024) ☆2,033 · Updated 2 months ago
- Accepted as [NeurIPS 2024] Spotlight Presentation Paper ☆6,365 · Updated last year
- [WIP] Layer Diffusion for WebUI (via Forge) ☆4,100 · Updated last year
- Convert your videos to densepose and use it on MagicAnimate ☆1,102 · Updated 2 years ago
- MagicEdit: High-Fidelity Temporally Coherent Video Editing ☆1,806 · Updated 2 years ago
- [AAAI 2025] 👔IMAGDressing👔: Interactive Modular Apparel Generation for Virtual Dressing. It enables customizable human image generation … ☆1,312 · Updated 2 months ago
- An intuitive GUI for GLIGEN that uses ComfyUI in the backend ☆2,050 · Updated last year
- Official implementation of AnimateDiff. ☆11,919 · Updated last year
- PhotoMaker [CVPR 2024] ☆10,102 · Updated last year
- MuseV: Infinite-length and High Fidelity Virtual Human Video Generation with Visual Conditioned Parallel Denoising ☆2,805 · Updated last year
- [ICLR 2025] CatVTON is a simple and efficient virtual try-on diffusion model with 1) Lightweight Network (899.06M parameters in total), 2)… ☆1,552 · Updated 9 months ago
- Official implementations for paper: DreamTalk: When Expressive Talking Head Generation Meets Diffusion Probabilistic Models ☆1,779 · Updated last year