Fantasy-AMAP / fantasy-portrait
FantasyPortrait: Enhancing Multi-Character Portrait Animation with Expression-Augmented Diffusion Transformers
☆398 · Updated last week
Alternatives and similar repositories for fantasy-portrait
Users that are interested in fantasy-portrait are comparing it to the libraries listed below
- Stand-In is a lightweight, plug-and-play framework for identity-preserving video generation. ☆527 · Updated last week
- ICCV 2025 ACTalker: an end-to-end video diffusion framework for talking head synthesis that supports both single and multi-signal control… ☆381 · Updated last week
- [ICCV 2025] Official PyTorch implementation of FLOAT: Generative Motion Latent Flow Matching for Audio-driven Talking Portrait. ☆375 · Updated 2 months ago
- Achieves high-quality first-frame-guided video editing given a reference image, while maintaining flexibility for incorporating additi… ☆299 · Updated 2 weeks ago
- Calligrapher: Freestyle Text Image Customization ☆279 · Updated last month
- [CVPR 2025] The official code of HunyuanPortrait: Implicit Condition Control for Enhanced Portrait Animation ☆288 · Updated 2 months ago
- [ICLR 2025] Animate-X - PyTorch Implementation ☆305 · Updated 7 months ago
- Official code for AccVideo: Accelerating Video Diffusion Model with Synthetic Dataset ☆257 · Updated 2 months ago
- The official code implementation of the paper "OmniConsistency: Learning Style-Agnostic Consistency from Paired Stylization Data." ☆393 · Updated 2 months ago
- Unlimited-length talking video generation that supports image-to-video and video-to-video generation ☆717 · Updated this week
- Streamlining Cartoon Production with Generative Post-Keyframing ☆384 · Updated last week
- [AAAI 2025] DreamFit: Garment-Centric Human Generation via a Lightweight Anything-Dressing Encoder ☆129 · Updated 3 months ago
- [Official] Voost: A Unified and Scalable Diffusion Transformer for Bidirectional Virtual Try-On and Try-Off ☆279 · Updated last week
- Official implementation of MAGREF: Masked Guidance for Any-Reference Video Generation ☆252 · Updated last month
- DICE-Talk is a diffusion-based emotional talking head generation method that can generate vivid and diverse emotions for speaking portrai… ☆247 · Updated 3 weeks ago
- Mobius: Text to Seamless Looping Video Generation via Latent Shift ☆164 · Updated 3 months ago
- In-context subject-driven image generation while preserving foreground fidelity ☆348 · Updated 2 months ago
- KeySync: A Robust Approach for Leakage-free Lip Synchronization in High Resolution ☆354 · Updated 3 weeks ago
- HunyuanVideo Keyframe Control LoRA is an adapter for the HunyuanVideo T2V model for keyframe-based video generation ☆160 · Updated 5 months ago
- Pusa: Thousands Timesteps Video Diffusion Model ☆597 · Updated this week
- A set of nodes to edit videos using the Hunyuan Video model ☆488 · Updated 6 months ago
- A novel approach to Hunyuan image-to-video sampling ☆305 · Updated 6 months ago
- [SIGGRAPH 2025] Official code of the paper "Cobra: Efficient Line Art COlorization with BRoAder References" ☆209 · Updated 4 months ago
- ☆512 · Updated last month
- [ICLR 2025] Animate-X: Universal Character Image Animation with Enhanced Motion Representation ☆353 · Updated 6 months ago
- [SIGGRAPH 2025] Official code of the paper "FlexiAct: Towards Flexible Action Control in Heterogeneous Scenarios" ☆327 · Updated last week
- ComfyUI nodes to edit videos using Genmo Mochi ☆295 · Updated 9 months ago
- Official implementation of DRA-Ctrl (Dimension-Reduction Attack! Video Generative Models are Experts on Controllable Image Synthesis) ☆118 · Updated 2 weeks ago
- A set of ComfyUI nodes providing additional control for the LTX Video model ☆501 · Updated 5 months ago
- [CVPR 2025 Highlight🌟] Official ComfyUI implementation of "HyperLoRA: Parameter-Efficient Adaptive Generation for Portrait Synthesis" ☆416 · Updated 2 months ago