alibaba-yuanjing-aigclab / ViViD
ViViD: Video Virtual Try-on using Diffusion Models
☆542 · Updated last year
Alternatives and similar repositories for ViViD
Users interested in ViViD are comparing it to the repositories listed below.
- A repository for organizing papers, code, and other resources related to virtual try-on models ☆277 · Updated last week
- ☆427 · Updated 10 months ago
- Official implementation of "FitDiT: Advancing the Authentic Garment Details for High-fidelity Virtual Try-on" ☆578 · Updated 6 months ago
- Official PyTorch implementation of StreamV2V ☆504 · Updated 5 months ago
- Code and data for "AnyV2V: A Tuning-Free Framework For Any Video-to-Video Editing Tasks" [TMLR 2024] ☆601 · Updated 9 months ago
- [ECCV 2024] MOFA-Video: Controllable Image Animation via Generative Motion Field Adaptions in Frozen Image-to-Video Diffusion Model ☆751 · Updated 8 months ago
- [ECCV 2024] OMG: Occlusion-friendly Personalized Multi-concept Generation in Diffusion Models ☆693 · Updated last year
- PyTorch implementation of "Stable-Hair: Real-World Hair Transfer via Diffusion Model" (AAAI 2025) ☆496 · Updated 4 months ago
- ☆142 · Updated last year
- StoryMaker: Towards consistent characters in text-to-image generation ☆705 · Updated 8 months ago
- [AAAI 2025] MV-VTON: Multi-View Virtual Try-On with Diffusion Models ☆243 · Updated 7 months ago
- [SIGGRAPH Asia 2024 TCS] AnimateLCM: Computation-Efficient Personalized Style Video Generation without Personalized Video Data ☆645 · Updated 9 months ago
- [CVPR 2024] FRESCO: Spatial-Temporal Correspondence for Zero-Shot Video Translation