harlanhong / ACTalker
ACTalker is an end-to-end video diffusion framework for talking-head synthesis that supports both single-signal and multi-signal control (e.g., audio, expression).
☆246 · Updated 2 weeks ago
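The snippet below is only an illustrative sketch of what "multi-signal control" can look like in a diffusion denoiser: audio and expression features are projected into a shared token space and concatenated into one conditioning sequence for cross-attention. All module names, dimensions, and the fusion scheme are hypothetical and are not ACTalker's actual implementation.

```python
# Hypothetical sketch of multi-signal conditioning -- NOT ACTalker's API.
import torch
import torch.nn as nn

class MultiSignalFusion(nn.Module):
    """Project audio and expression features into one conditioning sequence."""
    def __init__(self, audio_dim=768, expr_dim=128, cond_dim=1024):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, cond_dim)
        self.expr_proj = nn.Linear(expr_dim, cond_dim)

    def forward(self, audio_feats, expr_feats, use_audio=True, use_expr=True):
        # audio_feats: (B, T_a, audio_dim), expr_feats: (B, T_e, expr_dim)
        tokens = []
        if use_audio:
            tokens.append(self.audio_proj(audio_feats))
        if use_expr:
            tokens.append(self.expr_proj(expr_feats))
        # The denoiser would attend to this sequence via cross-attention
        # (not shown here); each signal can be dropped independently.
        return torch.cat(tokens, dim=1)

fusion = MultiSignalFusion()
audio = torch.randn(1, 50, 768)   # e.g. wav2vec-style audio features
expr = torch.randn(1, 16, 128)    # e.g. per-frame expression coefficients
cond = fusion(audio, expr)        # (1, 66, 1024) conditioning tokens
```

Concatenating per-signal tokens is one simple way to allow either single-signal or joint control at inference time, which matches the behavior described above; the real model may combine signals very differently.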
Alternatives and similar repositories for ACTalker:
Users interested in ACTalker are comparing it to the libraries listed below.
- Mobius: Text to Seamless Looping Video Generation via Latent Shift ☆140 · Updated last month
- Official code for AccVideo: Accelerating Video Diffusion Model with Synthetic Dataset ☆169 · Updated last month
- SynCD: Generating Multi-Image Synthetic Data for Text-to-Image Customization ☆132 · Updated last week
- [CVPR 2025] The official code of HunyuanPortrait: Implicit Condition Control for Enhanced Portrait Animation ☆187 · Updated 2 weeks ago
- Light-A-Video: Training-free Video Relighting via Progressive Light Fusion ☆412 · Updated 2 weeks ago
- Project Page for Animate Anyone 2 ☆62 · Updated 2 months ago
- [ICLR 2025] Animate-X: Universal Character Image Animation with Enhanced Motion Representation ☆285 · Updated 2 months ago
- Source code for the SIGGRAPH 2024 paper "X-Portrait: Expressive Portrait Animation with Hierarchical Motion Attention" ☆506 · Updated 9 months ago
- [CVPR 2025 Highlight] X-Dyna: Expressive Dynamic Human Image Animation ☆237 · Updated 3 months ago
- ☆115 · Updated last week
- Official implementation of Dynamic Frame Avatar with Non-autoregressive Diffusion Framework for Talking Head Video Generation ☆221 · Updated last month
- [ICLR 2025] DisPose: Disentangling Pose Guidance for Controllable Human Image Animation ☆366 · Updated 3 months ago
- [CVPR 2025] AnimateAnything ☆169 · Updated last month
- [SIGGRAPH 2025] Official code of the paper "Cobra: Efficient Line Art COlorization with BRoAder References" ☆176 · Updated 3 weeks ago
- All-round Creator and Editor ☆215 · Updated 4 months ago
- ☆78 · Updated 3 months ago
- The official implementation of "RepVideo: Rethinking Cross-Layer Representation for Video Generation" ☆117 · Updated 3 months ago
- CatV2TON is a lightweight DiT-based visual virtual try-on model, capable of supporting try-on for both images and videos. ☆127 · Updated 2 months ago
- ☆308 · Updated last month
- [arXiv 2025] Diffusion as Shader: 3D-aware Video Diffusion for Versatile Video Generation Control ☆607 · Updated this week
- [arXiv 2025] Official PyTorch implementation of "FramePainter: Endowing Interactive Image Editing with Video Diffusion Priors" ☆369 · Updated last month
- ☆405 · Updated 6 months ago
- MotionFollower: Editing Video Motion via Lightweight Score-Guided Diffusion ☆215 · Updated 2 weeks ago
- SkyReels-A1: Expressive Portrait Animation in Video Diffusion Transformers ☆492 · Updated last week
- [ICLR 2025] Animate-X - PyTorch Implementation ☆303 · Updated 3 months ago
- Uni3C: Unifying Precisely 3D-Enhanced Camera and Human Motion Controls for Video Generation ☆139 · Updated 2 weeks ago
- ☆158 · Updated 2 weeks ago
- LayerAnimate: Layer-specific Control for Animation ☆154 · Updated last month
- [SIGGRAPH 2025] Official repo for the paper "Any-length Video Inpainting and Editing with Plug-and-Play Context Control" ☆353 · Updated last month
- HunyuanVideo Keyframe Control LoRA is an adapter for the HunyuanVideo T2V model that enables keyframe-based video generation ☆130 · Updated last month