harlanhong / ACTalker
ACTalker: an end-to-end video diffusion framework for talking head synthesis that supports both single and multi-signal control (e.g., audio, expression).
☆285 · Updated 2 months ago
Alternatives and similar repositories for ACTalker
Users interested in ACTalker are comparing it to the repositories listed below:
- [SIGGRAPH 2025] Official code of the paper "FlexiAct: Towards Flexible Action Control in Heterogeneous Scenarios" ☆284 · Updated last month
- [CVPR 2025] The official code of HunyuanPortrait: Implicit Condition Control for Enhanced Portrait Animation ☆256 · Updated last week
- Official code for AccVideo: Accelerating Video Diffusion Model with Synthetic Dataset ☆236 · Updated last week
- [CVPR 2025] HunyuanPortrait: Implicit Condition Control for Enhanced Portrait Animation ☆249 · Updated 2 weeks ago
- Mobius: Text to Seamless Looping Video Generation via Latent Shift ☆154 · Updated last month
- Light-A-Video: Training-free Video Relighting via Progressive Light Fusion ☆425 · Updated last month
- KeySync: A Robust Approach for Leakage-free Lip Synchronization in High Resolution ☆318 · Updated last month
- Let Them Talk: Audio-Driven Multi-Person Conversational Video Generation ☆300 · Updated this week
- Official PyTorch implementation of FLOAT: Generative Motion Latent Flow Matching for Audio-driven Talking Portrait ☆252 · Updated 4 months ago
- DICE-Talk is a diffusion-based emotional talking head generation method that can generate vivid and diverse emotions for speaking portraits… ☆203 · Updated last month
- [ICLR 2025] DisPose: Disentangling Pose Guidance for Controllable Human Image Animation ☆367 · Updated 4 months ago
- The official code implementation of the paper "OmniConsistency: Learning Style-Agnostic Consistency from Paired Stylization Data" ☆346 · Updated last week
- Official implementation of Dynamic Frame Avatar with Non-autoregressive Diffusion Framework for Talking Head Video Generation ☆229 · Updated 2 months ago
- In-context subject-driven image generation while preserving foreground fidelity ☆289 · Updated last week
- SeedVR2: One-Step Video Restoration via Diffusion Adversarial Post-Training ☆264 · Updated this week
- Uni3C: Unifying Precisely 3D-Enhanced Camera and Human Motion Controls for Video Generation ☆285 · Updated 3 weeks ago
- Official repository for the paper "CAP4D: Creating Animatable 4D Portrait Avatars with Morphable Multi-View Diffusion Models" ☆172 · Updated 6 months ago
- Source code for the SIGGRAPH 2024 paper "X-Portrait: Expressive Portrait Animation with Hierarchical Motion Attention" ☆511 · Updated 10 months ago
- SynCD: Generating Multi-Image Synthetic Data for Text-to-Image Customization ☆137 · Updated last month
- ☆187 · Updated 5 months ago
- [CVPR 2025 Highlight] X-Dyna: Expressive Dynamic Human Image Animation ☆249 · Updated 4 months ago
- [ICLR 2025] Animate-X - PyTorch Implementation ☆304 · Updated 4 months ago
- [NeurIPS 2024] Generalizable and Animatable Gaussian Head Avatar ☆488 · Updated 3 months ago
- Project Page for Animate Anyone 2 ☆62 · Updated 4 months ago
- [SIGGRAPH 2025] Official code of the paper "Cobra: Efficient Line Art COlorization with BRoAder References" ☆192 · Updated 2 months ago
- All-round Creator and Editor ☆222 · Updated 5 months ago
- [arXiv 2025] Official PyTorch implementation of "FramePainter: Endowing Interactive Image Editing with Video Diffusion Priors" ☆379 · Updated 3 months ago
- [ICLR 2025] Animate-X: Universal Character Image Animation with Enhanced Motion Representation ☆312 · Updated 4 months ago
- ☆80 · Updated 4 months ago
- [CVPR 2025] AnimateAnything ☆173 · Updated 2 weeks ago