jdh-algo / JoyVASA
Diffusion-based Portrait and Animal Animation
☆814 · Updated 4 months ago
Alternatives and similar repositories for JoyVASA
Users interested in JoyVASA are comparing it to the repositories listed below.
- Bring portraits to life in real time, with ONNX/TensorRT support! ☆939 · Updated last month
- [ACM MM 2025] FantasyTalking: Realistic Talking Portrait Generation via Coherent Motion Synthesis ☆1,466 · Updated last week
- [ACM MM 2025] Ditto: Motion-Space Diffusion for Controllable Realtime Talking Head Synthesis ☆403 · Updated 3 weeks ago
- ☆591 · Updated last week
- Source code for the SIGGRAPH 2024 paper "X-Portrait: Expressive Portrait Animation with Hierarchical Motion Attention" ☆517 · Updated last year
- SkyReels-A1: Expressive Portrait Animation in Video Diffusion Transformers ☆554 · Updated last month
- [ICCV 2025] Official PyTorch implementation of FLOAT: Generative Motion Latent Flow Matching for Audio-driven Talking Portrait ☆351 · Updated last month
- KeySync: A Robust Approach for Leakage-Free Lip Synchronization in High Resolution ☆344 · Updated last month
- Talking-face video editing ☆371 · Updated 5 months ago
- JoyHallo: Digital human model for Mandarin ☆503 · Updated 8 months ago
- [ICLR 2025] DisPose: Disentangling Pose Guidance for Controllable Human Image Animation ☆369 · Updated 6 months ago
- [SIGGRAPH 2025] LAM: Large Avatar Model for One-shot Animatable Gaussian Head ☆675 · Updated 2 months ago
- Memory-Guided Diffusion for Expressive Talking Video Generation ☆1,047 · Updated 6 months ago
- Using Claude 3.5 Sonnet to reverse-engineer code from the VASA white paper - WIP - (this is for La Raza 🎷) ☆296 · Updated 8 months ago
- [ICCV 2025] ACTalker: an end-to-end video diffusion framework for talking head synthesis that supports both single and multi-signal control… ☆356 · Updated this week
- Let Them Talk: Audio-Driven Multi-Person Conversational Video Generation ☆1,700 · Updated 2 weeks ago
- [ECCV 2024 Oral] EDTalk - Official PyTorch Implementation ☆424 · Updated last week
- Phantom: Subject-Consistent Video Generation via Cross-Modal Alignment ☆1,328 · Updated last month
- DICE-Talk is a diffusion-based emotional talking head generation method that can generate vivid and diverse emotions for speaking portraits ☆230 · Updated 2 months ago
- StoryMaker: Towards consistent characters in text-to-image generation ☆703 · Updated 7 months ago
- ☆1,741 · Updated last month
- MimicTalk: Mimicking a personalized and expressive 3D talking face in minutes; NeurIPS 2024; official code ☆751 · Updated 9 months ago
- VividTalk: One-Shot Audio-Driven Talking Head Generation Based on 3D Hybrid Prior ☆794 · Updated last year
- ☆1,014 · Updated 2 months ago
- Official implementation of "FitDiT: Advancing the Authentic Garment Details for High-fidelity Virtual Try-on" ☆575 · Updated 5 months ago
- [ECCV 2024] MOFA-Video: Controllable Image Animation via Generative Motion Field Adaptions in Frozen Image-to-Video Diffusion Model ☆751 · Updated 7 months ago
- FoleyCrafter: Bring Silent Videos to Life with Lifelike and Synchronized Sounds 😝 ☆611 · Updated last year
- HunyuanCustom: A Multimodal-Driven Architecture for Customized Video Generation ☆1,137 · Updated last month
- Official implementation of "MIMO: Controllable Character Video Synthesis with Spatial Decomposed Modeling" ☆1,533 · Updated last month
- PyTorch implementation of "Stable-Hair: Real-World Hair Transfer via Diffusion Model" (AAAI 2025) ☆496 · Updated 4 months ago