jdh-algo / JoyVASA
Diffusion-based Portrait and Animal Animation
☆844 · Updated last month
Alternatives and similar repositories for JoyVASA
Users interested in JoyVASA are comparing it to the repositories listed below.
- Bring portraits to life in real time! ONNX/TensorRT support! Real-time portrait animation! ☆995 · Updated 4 months ago
- [ACM MM 2025] Ditto: Motion-Space Diffusion for Controllable Realtime Talking Head Synthesis ☆546 · Updated 4 months ago
- Memory-Guided Diffusion for Expressive Talking Video Generation ☆1,066 · Updated 3 months ago
- Source code for the SIGGRAPH 2024 paper "X-Portrait: Expressive Portrait Animation with Hierarchical Motion Attention" ☆527 · Updated 3 weeks ago
- [ACM MM 2025] FantasyTalking: Realistic Talking Portrait Generation via Coherent Motion Synthesis ☆1,578 · Updated 2 months ago
- [ICCV 2025] Official PyTorch implementation of FLOAT: Generative Motion Latent Flow Matching for Audio-driven Talking Portrait ☆424 · Updated 4 months ago
- ☆631 · Updated 3 months ago
- Talking-face video editing ☆403 · Updated 8 months ago
- SkyReels-A1: Expressive Portrait Animation in Video Diffusion Transformers ☆566 · Updated 5 months ago
- KeySync: A Robust Approach for Leakage-free Lip Synchronization in High Resolution ☆369 · Updated 3 months ago
- MimicTalk: Mimicking a personalized and expressive 3D talking face in minutes; NeurIPS 2024; official code ☆787 · Updated last year
- [ICLR 2025] DisPose: Disentangling Pose Guidance for Controllable Human Image Animation ☆374 · Updated 9 months ago
- JoyHallo: Digital human model for Mandarin ☆511 · Updated last month
- DICE-Talk is a diffusion-based emotional talking head generation method that can generate vivid and diverse emotions for speaking portrai… ☆273 · Updated 3 months ago
- wip - running some training with overfitting - https://wandb.ai/snoozie/vasa-overfitting ☆299 · Updated last week
- EchoMimicV3: 1.3B Parameters are All You Need for Unified Multi-Modal and Multi-Task Human Animation ☆605 · Updated 2 months ago
- [ICCV 2025] ACTalker: an end-to-end video diffusion framework for talking head synthesis that supports both single and multi-signal control… ☆422 · Updated 2 months ago
- StoryMaker: Towards consistent characters in text-to-image generation ☆713 · Updated 11 months ago
- [ECCV 2024 Oral] EDTalk - Official PyTorch Implementation ☆442 · Updated last month
- [IJCV] FoleyCrafter: Bring Silent Videos to Life with Lifelike and Synchronized Sounds. AI Foley master: add vivid, synchronized sound effects to your silent videos 😝 ☆630 · Updated last year
- [SIGGRAPH 2025] LAM: Large Avatar Model for One-shot Animatable Gaussian Head ☆825 · Updated 2 months ago
- ☆1,923 · Updated 3 weeks ago
- [ECCV 2024] MOFA-Video: Controllable Image Animation via Generative Motion Field Adaptions in Frozen Image-to-Video Diffusion Model ☆754 · Updated 11 months ago
- [NeurIPS 2025] OmniTalker: Real-Time Text-Driven Talking Head Generation with In-Context Audio-Visual Style Replication ☆393 · Updated last month
- [SIGGRAPH Asia 2024 & IJCV 2025] Follow-Your-Emoji: This repo is the official implementation of "Follow-Your-Emoji: Fine-Controllable and… ☆424 · Updated 6 months ago
- The official HelloMeme GitHub site ☆625 · Updated 4 months ago
- VividTalk: One-Shot Audio-Driven Talking Head Generation Based on 3D Hybrid Prior ☆799 · Updated last year
- [CVPR 2024] This is the official source for our paper "SyncTalk: The Devil is in the Synchronization for Talking Head Synthesis" ☆1,583 · Updated last month
- Phantom: Subject-Consistent Video Generation via Cross-Modal Alignment ☆1,451 · Updated 2 months ago
- Select a portrait, click to move the head around (please use your own space / GPU!) ☆902 · Updated 2 months ago