Wangbenzhi / RealisHuman
Code for RealisHuman: A Two-Stage Approach for Refining Malformed Human Parts in Generated Images
☆78 · Updated 4 months ago
Alternatives and similar repositories for RealisHuman:
Users interested in RealisHuman are comparing it to the repositories listed below.
- Official code for VividPose: Advancing Stable Video Diffusion for Realistic Human Image Animation. ☆80 · Updated 8 months ago
- Magic Mirror: ID-Preserved Video Generation in Video Diffusion Transformers ☆109 · Updated 2 months ago
- LinguaLinker: Audio-Driven Portraits Animation with Implicit Facial Control Enhancement ☆69 · Updated 7 months ago
- [arXiv 2024] Edicho: Consistent Image Editing in the Wild ☆113 · Updated 2 months ago
- Official code for the paper "PICTURE: PhotorealistIC virtual Try-on from UnconstRained dEsigns" ☆54 · Updated 5 months ago
- [AAAI 2025] Official implementation of "Follow-Your-Canvas: Higher-Resolution Video Outpainting with… ☆122 · Updated 4 months ago
- [SIGGRAPH 2024] Motion I2V: Consistent and Controllable Image-to-Video Generation with Explicit Motion Modeling ☆152 · Updated 5 months ago
- Official implementation of Image Conductor: Precision Control for Interactive Video Synthesis ☆90 · Updated 7 months ago
- [ECCV 2024] IDOL: Unified Dual-Modal Latent Diffusion for Human-Centric Joint Video-Depth Generation ☆53 · Updated 5 months ago
- Official code for CustAny: Customizing Anything from A Single Example ☆40 · Updated 3 months ago
- Official repo for Tuning-Free Noise Rectification for High Fidelity Image-to-Video Generation ☆29 · Updated 11 months ago
- Blending Custom Photos with Video Diffusion Transformers ☆46 · Updated last month
- ☆35 · Updated 3 months ago
- A Large-Scale High-Quality Dataset for Enhancing Human-Centric Video Generation ☆82 · Updated 2 weeks ago
- ☆76 · Updated 9 months ago
- Implementation code: Advancing Pose-Guided Image Synthesis with Progressive Conditional Diffusion Models ☆167 · Updated 8 months ago
- Consistency Distillation with Target Timestep Selection and Decoupled Guidance ☆71 · Updated 2 months ago
- ☆40 · Updated 2 months ago
- ☆34 · Updated 3 weeks ago
- Towards Localized Fine-Grained Control for Facial Expression Generation ☆68 · Updated 2 months ago
- ControlNet extension of AnimateDiff. ☆52 · Updated last year
- [CVPR 2024] Official code for Drag Your Noise: Interactive Point-based Editing via Diffusion Semantic Propagation ☆85 · Updated 10 months ago
- [arXiv 2024] StyleMaster: Stylize Your Video with Artistic Generation and Translation ☆86 · Updated 3 months ago
- Code for the ICLR 2024 paper "Motion Guidance: Diffusion-Based Image Editing with Differentiable Motion Estimators" ☆95 · Updated last year
- [ACM MM 2024] MotionMaster: Training-free Camera Motion Transfer for Video Generation ☆88 · Updated 4 months ago
- [SIGGRAPH Asia 2024] I2VEdit: First-Frame-Guided Video Editing via Image-to-Video Diffusion Models ☆52 · Updated 2 months ago
- [ECCV 2024] AnyControl, a multi-control image synthesis model that supports any combination of user-provided control signals. ☆123 · Updated 8 months ago