JOY-MM / JoyGen
talking-face video editing
☆383 · Updated 7 months ago
Alternatives and similar repositories for JoyGen
Users interested in JoyGen are comparing it to the repositories listed below.
- JoyHallo: Digital human model for Mandarin ☆508 · Updated 2 weeks ago
- ☆370 · Updated 3 months ago
- EchoMimicV3: 1.3B Parameters are All You Need for Unified Multi-Modal and Multi-Task Human Animation ☆548 · Updated last month
- MimicTalk: Mimicking a personalized and expressive 3D talking face in minutes; NeurIPS 2024; Official code ☆781 · Updated 11 months ago
- [SIGGRAPH 2025] LAM: Large Avatar Model for One-shot Animatable Gaussian Head ☆793 · Updated last month
- [NeurIPS 2025] OmniTalker: Real-Time Text-Driven Talking Head Generation with In-Context Audio-Visual Style Replication ☆382 · Updated 3 weeks ago
- ☆623 · Updated 2 months ago
- The fastest digital human algorithm, now on your desktop. ☆552 · Updated last week
- Diffusion-based Portrait and Animal Animation ☆839 · Updated 2 weeks ago
- DICE-Talk is a diffusion-based emotional talking head generation method that can generate vivid and diverse emotions for speaking portraits… ☆258 · Updated 2 months ago
- project page for ChatAnyone ☆113 · Updated 6 months ago
- [ECCV 2024 Oral] EDTalk - Official PyTorch Implementation ☆436 · Updated last week
- Open-source LstmSync digital human generalization model; committed to building only the best generalization model! ☆103 · Updated last week
- A Docker-free offline version of HeyGem; Python and Linux are all you need! ☆345 · Updated 2 months ago
- KeySync: A Robust Approach for Leakage-free Lip Synchronization in High Resolution ☆362 · Updated 2 months ago
- Real-time streaming talking head ☆482 · Updated last year
- SkyReels-A1: Expressive Portrait Animation in Video Diffusion Transformers ☆561 · Updated 4 months ago
- wav2lip384 generator grid weights, from 不蠢不蠢 ☆131 · Updated 7 months ago
- MagicTryOn is a video virtual try-on framework based on a large-scale video diffusion Transformer. ☆464 · Updated last month
- [ECCV'24] TalkingGaussian: Structure-Persistent 3D Talking Head Synthesis via Gaussian Splatting ☆355 · Updated 6 months ago
- [ACM MM 2025] Ditto: Motion-Space Diffusion for Controllable Realtime Talking Head Synthesis ☆501 · Updated 3 months ago
- [ICLR2025] DisPose: Disentangling Pose Guidance for Controllable Human Image Animation ☆373 · Updated 8 months ago
- [ICCV 2025] Official PyTorch Implementation of FLOAT: Generative Motion Latent Flow Matching for Audio-driven Talking Portrait. ☆406 · Updated 3 months ago
- [CVPR 2024] Make-Your-Anchor: A Diffusion-based 2D Avatar Generation Framework. ☆353 · Updated 8 months ago
- Generate ARKit expression from audio in realtime ☆149 · Updated 2 weeks ago
- ☆237 · Updated last year
- Optimized wav2lip pipeline: separates the process into three distinct steps (head/face separation, mouth-shape replacement, background compositing), adds GFPGAN face enhancement, extracts frames ahead of time, processes in a streaming loop, and integrates with OBS ☆79 · Updated 9 months ago
- Memory-Guided Diffusion for Expressive Talking Video Generation ☆1,066 · Updated 2 months ago
- ☆77 · Updated 2 months ago
- A 2D customized lip-sync model for high-fidelity real-time driving. ☆95 · Updated 3 months ago