toto222 / DICE-Talk
DICE-Talk is a diffusion-based emotional talking head generation method that can generate vivid and diverse emotions for speaking portraits.
☆230 · Updated 2 months ago
Alternatives and similar repositories for DICE-Talk
Users interested in DICE-Talk are comparing it to the repositories listed below
- [ICCV 2025] Official Pytorch Implementation of FLOAT: Generative Motion Latent Flow Matching for Audio-driven Talking Portrait. ☆351 · Updated last month
- KeySync: A Robust Approach for Leakage-free Lip Synchronization in High Resolution ☆344 · Updated last month
- FantasyPortrait: Enhancing Multi-Character Portrait Animation with Expression-Augmented Diffusion Transformers ☆169 · Updated last week
- [CVPR-2025] The official code of HunyuanPortrait: Implicit Condition Control for Enhanced Portrait Animation ☆280 · Updated last month
- [ACM MM 2025] Ditto: Motion-Space Diffusion for Controllable Realtime Talking Head Synthesis ☆403 · Updated 3 weeks ago
- SkyReels-A1: Expressive Portrait Animation in Video Diffusion Transformers ☆554 · Updated last month
- [ICLR2025] DisPose: Disentangling Pose Guidance for Controllable Human Image Animation ☆369 · Updated 6 months ago
- [CVPR 2025 Highlight] X-Dyna: Expressive Dynamic Human Image Animation ☆255 · Updated 6 months ago
- [ICCV 2025] ACTalker: an end-to-end video diffusion framework for talking head synthesis that supports both single and multi-signal control… ☆356 · Updated this week
- Open-source LstmSync digital human generalization model, focused only on building the best generalization model! ☆51 · Updated 2 weeks ago
- [CVPR'25-Demo] Official repository of "TryOffDiff: Virtual-Try-Off via High-Fidelity Garment Reconstruction using Diffusion Models". ☆118 · Updated last week
- [ICLR 2025] Animate-X: Universal Character Image Animation with Enhanced Motion Representation ☆335 · Updated 5 months ago
- Official implementation of Dynamic Frame Avatar with Non-autoregressive Diffusion Framework for Talking Head Video Generation ☆230 · Updated 4 months ago
- Official implementation of MAGREF: Masked Guidance for Any-Reference Video Generation ☆240 · Updated 2 weeks ago
- [ICLR 2025] Animate-X - PyTorch Implementation ☆304 · Updated 6 months ago
- [ECCV 2024 Oral] EDTalk - Official PyTorch Implementation ☆424 · Updated last week
- Fast running Live Portrait with TensorRT and ONNX models ☆167 · Updated last year
- ☆186 · Updated 3 months ago
- MagicTryOn is a video virtual try-on framework based on a large-scale video diffusion Transformer. ☆399 · Updated 2 weeks ago
- [AAAI2025] DreamFit: Garment-Centric Human Generation via a Lightweight Anything-Dressing Encoder ☆119 · Updated 2 months ago
- Generate ARKit expressions from audio in real time ☆125 · Updated last month
- Full version of wav2lip-onnx including face alignment and face enhancement and more... ☆131 · Updated last month
- Project page for ChatAnyone ☆111 · Updated 4 months ago
- LLIA - Enabling Low-Latency Interactive Avatars: Real-Time Audio-Driven Portrait Video Generation with Diffusion Models ☆104 · Updated last month
- ☆72 · Updated last week
- ☆52 · Updated 6 months ago
- [ICCV 2025] MotionFollower: Editing Video Motion via Lightweight Score-Guided Diffusion ☆228 · Updated last month
- [NeurIPS 2024] SHMT: Self-supervised Hierarchical Makeup Transfer via Latent Diffusion Models ☆193 · Updated 6 months ago
- ☆325 · Updated last month