jdh-algo / JoyVASA
Diffusion-based Portrait and Animal Animation
☆791 · Updated 3 months ago
Alternatives and similar repositories for JoyVASA
Users interested in JoyVASA are comparing it to the repositories listed below.
- Bring portraits to life in real time! ONNX/TensorRT support! ☆903 · Updated 4 months ago
- FantasyTalking: Realistic Talking Portrait Generation via Coherent Motion Synthesis ☆1,356 · Updated last month
- Memory-Guided Diffusion for Expressive Talking Video Generation ☆1,027 · Updated 4 months ago
- ☆549 · Updated this week
- Ditto: Motion-Space Diffusion for Controllable Realtime Talking Head Synthesis ☆347 · Updated 5 months ago
- SkyReels-A1: Expressive Portrait Animation in Video Diffusion Transformers ☆528 · Updated 2 weeks ago
- [ICLR 2025] DisPose: Disentangling Pose Guidance for Controllable Human Image Animation ☆367 · Updated 4 months ago
- MimicTalk: Mimicking a personalized and expressive 3D talking face in minutes (NeurIPS 2024, official code) ☆731 · Updated 8 months ago
- Source code for the SIGGRAPH 2024 paper "X-Portrait: Expressive Portrait Animation with Hierarchical Motion Attention" ☆511 · Updated 10 months ago
- ☆1,252 · Updated 2 weeks ago
- JoyHallo: Digital human model for Mandarin ☆493 · Updated 7 months ago
- StoryMaker: Towards consistent characters in text-to-image generation ☆700 · Updated 6 months ago
- Official implementation of "FitDiT: Advancing the Authentic Garment Details for High-fidelity Virtual Try-on" ☆556 · Updated 4 months ago
- KeySync: A Robust Approach for Leakage-free Lip Synchronization in High Resolution ☆318 · Updated last month
- [ECCV 2024] MOFA-Video: Controllable Image Animation via Generative Motion Field Adaptions in Frozen Image-to-Video Diffusion Model ☆748 · Updated 6 months ago
- [SIGGRAPH 2025] LAM: Large Avatar Model for One-shot Animatable Gaussian Head ☆605 · Updated last month
- Phantom: Subject-Consistent Video Generation via Cross-Modal Alignment ☆1,196 · Updated 2 weeks ago
- Talking-face video editing ☆352 · Updated 3 months ago
- ☆306 · Updated 2 months ago
- Official PyTorch implementation of FLOAT: Generative Motion Latent Flow Matching for Audio-driven Talking Portrait ☆252 · Updated 4 months ago
- You can use EchoMimic in ComfyUI ☆631 · Updated 2 months ago
- DICE-Talk is a diffusion-based emotional talking head generation method that can generate vivid and diverse emotions for speaking portrai… ☆203 · Updated last month
- Official implementation of "Sonic: Shifting Focus to Global Audio Perception in Portrait Animation" ☆2,832 · Updated last month
- ☆990 · Updated last month
- FoleyCrafter: Bring Silent Videos to Life with Lifelike and Synchronized Sounds ☆598 · Updated 10 months ago
- Project page repo of OmniTalker: Real-Time Text-Driven Talking Head Generation with In-Context Audio-Visual Style Replication ☆350 · Updated 2 months ago
- High-Quality Human Motion Video Generation with Confidence-aware Pose Guidance ☆2,375 · Updated 8 months ago
- 🔥🔥 UNO: A Universal Customization Method for Both Single and Multi-Subject Conditioning ☆1,118 · Updated 2 months ago
- [CVPR 2024] This is the official source for our paper "SyncTalk: The Devil is in the Synchronization for Talking Head Synthesis" ☆1,471 · Updated 9 months ago
- This node provides lip-sync capabilities in ComfyUI using ByteDance's LatentSync model. It allows you to synchronize video lips with audi… ☆817 · Updated last week