JOY-MM / JoyGen
talking-face video editing
☆399 · Updated 8 months ago
Alternatives and similar repositories for JoyGen
Users interested in JoyGen are comparing it to the libraries listed below.
- JoyHallo: Digital human model for Mandarin ☆509 · Updated last month
- ☆380 · Updated 4 months ago
- MimicTalk: Mimicking a personalized and expressive 3D talking face in minutes; NeurIPS 2024; Official code ☆785 · Updated last year
- EchoMimicV3: 1.3B Parameters are All You Need for Unified Multi-Modal and Multi-Task Human Animation ☆584 · Updated last month
- [NeurIPS 2025] OmniTalker: Real-Time Text-Driven Talking Head Generation with In-Context Audio-Visual Style Replication ☆389 · Updated last month
- ☆628 · Updated 3 months ago
- Diffusion-based Portrait and Animal Animation ☆840 · Updated last month
- The fastest digital human algorithm, now on your desktop. ☆553 · Updated last month
- DICE-Talk is a diffusion-based emotional talking head generation method that can generate vivid and diverse emotions for speaking portraits. ☆269 · Updated 2 months ago
- [SIGGRAPH 2025] LAM: Large Avatar Model for One-shot Animatable Gaussian Head ☆818 · Updated last month
- MagicTryOn is a video virtual try-on framework based on a large-scale video diffusion Transformer. ☆469 · Updated 2 months ago
- Project page for ChatAnyone ☆115 · Updated 7 months ago
- Real time streaming talking head ☆480 · Updated last year
- A Docker-free offline version of HeyGem; Python and Linux are all you need! ☆359 · Updated 3 months ago
- [ECCV 2024 Oral] EDTalk - Official PyTorch Implementation ☆440 · Updated last month
- KeySync: A Robust Approach for Leakage-free Lip Synchronization in High Resolution ☆365 · Updated 2 months ago
- Open-source LstmSync digital human generalization model; committed to building only the best generalization model! ☆117 · Updated last week
- [ICLR2025] DisPose: Disentangling Pose Guidance for Controllable Human Image Animation ☆374 · Updated 9 months ago
- SkyReels-A1: Expressive Portrait Animation in Video Diffusion Transformers ☆564 · Updated 4 months ago
- [ACM MM 2025] Ditto: Motion-Space Diffusion for Controllable Realtime Talking Head Synthesis ☆536 · Updated 3 months ago
- Memory-Guided Diffusion for Expressive Talking Video Generation ☆1,066 · Updated 2 months ago
- Pytorch Implementation of: "Stable-Hair: Real-World Hair Transfer via Diffusion Model" (AAAI 2025) ☆511 · Updated 7 months ago
- wav2lip384 generator mesh weights, from 不蠢不蠢 ☆136 · Updated 7 months ago
- [ICLR 2025 Oral] TANGO: Co-Speech Gesture Video Reenactment with Hierarchical Audio-Motion Embedding and Diffusion Interpolation ☆1,122 · Updated 2 months ago
- StoryMaker: Towards consistent characters in text-to-image generation ☆714 · Updated 10 months ago
- [ECCV'24] TalkingGaussian: Structure-Persistent 3D Talking Head Synthesis via Gaussian Splatting ☆359 · Updated 7 months ago
- [ICCV 2025] Official Pytorch Implementation of FLOAT: Generative Motion Latent Flow Matching for Audio-driven Talking Portrait. ☆415 · Updated 4 months ago
- VividTalk: One-Shot Audio-Driven Talking Head Generation Based on 3D Hybrid Prior ☆796 · Updated last year
- [ECCV 2024] MOFA-Video: Controllable Image Animation via Generative Motion Field Adaptions in Frozen Image-to-Video Diffusion Model. ☆754 · Updated 10 months ago
- ☆238 · Updated last year