deepcs233 / VividFace
[NeurIPS 2025] VividFace: A Diffusion-Based Hybrid Framework for High-Fidelity Video Face Swapping
☆54 · Updated last month
Alternatives and similar repositories for VividFace
Users interested in VividFace are comparing it to the libraries listed below.
- ☆90 · Updated 4 months ago
- Concat-ID: Towards Universal Identity-Preserving Video Synthesis ☆63 · Updated 6 months ago
- This repo contains the code for the PreciseControl project [ECCV'24] ☆69 · Updated last year
- [CVPR 2025] Official code for "Synergizing Motion and Appearance: Multi-Scale Compensatory Codebooks for Talking Head Video Generation" ☆63 · Updated 5 months ago
- Blending Custom Photos with Video Diffusion Transformers ☆48 · Updated 9 months ago
- [ICCV 2025] MagicMirror: ID-Preserved Video Generation in Video Diffusion Transformers ☆126 · Updated 4 months ago
- PyTorch implementation of "SSR-Encoder: Encoding Selective Subject Representation for Subject-Driven Generation" (CVPR 2024) ☆125 · Updated last year
- ☆66 · Updated last year
- MotionShop: Zero-Shot Motion Transfer in Video Diffusion Models with Mixture of Score Guidance ☆26 · Updated 11 months ago
- [ICCV 2025] FreeFlux: Understanding and Exploiting Layer-Specific Roles in RoPE-Based MMDiT for Versatile Image Editing ☆65 · Updated 2 months ago
- [arXiv'25] AnyCharV: Bootstrap Controllable Character Video Generation with Fine-to-Coarse Guidance ☆40 · Updated 8 months ago
- One-Shot Learning for Pose-Guided Person Image Synthesis in the Wild ☆21 · Updated 7 months ago
- FantasyID: Face Knowledge Enhanced ID-Preserving Video Generation ☆74 · Updated 2 months ago
- ☆31 · Updated last year
- [CVPR'25] StyleMaster: Stylize Your Video with Artistic Generation and Translation ☆146 · Updated 3 months ago
- [ACM MM24] MotionMaster: Training-free Camera Motion Transfer For Video Generation ☆97 · Updated last year
- [AAAI 2025] Anywhere: A Multi-Agent Framework for User-Guided, Reliable, and Diverse Foreground-Conditioned Image Generation ☆44 · Updated last year
- This repository contains the code for the NeurIPS 2024 paper "SF-V: Single Forward Video Generation Model" ☆99 · Updated 11 months ago
- ☆55 · Updated last year
- Official repository for the paper "TryOn-Adapter: Efficient Fine-Grained Clothing Identity Adaptation for High-Fidelity Virtual Try-On" ☆48 · Updated 5 months ago
- ☆20 · Updated last year
- Official code of "Edit Transfer: Learning Image Editing via Vision In-Context Relations" ☆84 · Updated 5 months ago
- MasterWeaver: Taming Editability and Face Identity for Personalized Text-to-Image Generation (ECCV 2024) ☆133 · Updated last year
- Implementation code of the paper "MIGE: A Unified Framework for Multimodal Instruction-Based Image Generation and Editing" ☆70 · Updated 4 months ago
- ☆39 · Updated last year
- [ECCV 2024] Noise Calibration: Plug-and-play Content-Preserving Video Enhancement using Pre-trained Video Diffusion Models ☆87 · Updated last year
- Towards Localized Fine-Grained Control for Facial Expression Generation ☆82 · Updated 10 months ago
- [CVPR 2024] Official code for "Drag Your Noise: Interactive Point-based Editing via Diffusion Semantic Propagation" ☆87 · Updated last year
- [ICLR 2025] Official implementation of SPM-Diff: Incorporating Visual Correspondence into Diffusion Model for Virtual Try-On ☆46 · Updated 8 months ago
- ☆26 · Updated last year