jdh-algo / JoyVASA
Diffusion-based Portrait and Animal Animation
☆660 · Updated last month
Alternatives and similar repositories for JoyVASA:
Users interested in JoyVASA are comparing it to the libraries listed below.
- Memory-Guided Diffusion for Expressive Talking Video Generation ☆715 · Updated 3 weeks ago
- Bring portraits to life in real time! ONNX/TensorRT support! (Real-time portrait animation) ☆736 · Updated this week
- ☆475 · Updated 2 months ago
- Source code for the SIGGRAPH 2024 paper "X-Portrait: Expressive Portrait Animation with Hierarchical Motion Attention" ☆473 · Updated 6 months ago
- [ICLR 2025] DisPose: Disentangling Pose Guidance for Controllable Human Image Animation ☆327 · Updated 3 weeks ago
- JoyHallo: Digital human model for Mandarin ☆440 · Updated 3 months ago
- MimicTalk: Mimicking a personalized and expressive 3D talking face in minutes; NeurIPS 2024; official code ☆560 · Updated 4 months ago
- Official implementation of "FitDiT: Advancing the Authentic Garment Details for High-fidelity Virtual Try-on" ☆429 · Updated last week
- StoryMaker: Towards consistent characters in text-to-image generation ☆649 · Updated 2 months ago
- FoleyCrafter: Bring Silent Videos to Life with Lifelike and Synchronized Sounds. (AI foley master: adds vivid, synchronized sound effects to your silent videos 😝) ☆519 · Updated 6 months ago
- [arXiv 2024] Taming Multimodal Joint Training for High-Quality Video-to-Audio Synthesis ☆1,096 · Updated this week
- [ECCV 2024] MOFA-Video: Controllable Image Animation via Generative Motion Field Adaptions in Frozen Image-to-Video Diffusion Model ☆714 · Updated 2 months ago
- [ECCV 2024 Oral] EDTalk - Official PyTorch Implementation ☆400 · Updated last month
- Taming Stable Diffusion for Lip Sync! ☆2,538 · Updated last month
- [ICLR 2025] Animate-X - PyTorch Implementation ☆301 · Updated 3 weeks ago
- This node provides lip-sync capabilities in ComfyUI using ByteDance's LatentSync model. It allows you to synchronize video lips with audio. ☆430 · Updated 2 weeks ago
- [SIGGRAPH Asia 2024] Follow-Your-Emoji: This repo is the official implementation of "Follow-Your-Emoji: Fine-Controllable and Expressive …" ☆362 · Updated 5 months ago
- Talking-face video editing ☆206 · Updated last month
- Official implementation of EMOPortraits: Emotion-enhanced Multimodal One-shot Head Avatars ☆356 · Updated 4 months ago
- You can use EchoMimic in ComfyUI ☆540 · Updated last month
- Stable-Hair: Real-World Hair Transfer via Diffusion Model (AAAI 2025) ☆416 · Updated 3 months ago
- EchoMimicV2: Towards Striking, Simplified, and Semi-Body Human Animation ☆2,783 · Updated 3 weeks ago
- [CVPR 2024] This is the official source for our paper "SyncTalk: The Devil is in the Synchronization for Talking Head Synthesis" ☆1,407 · Updated 5 months ago
- High-Quality Human Motion Video Generation with Confidence-aware Pose Guidance ☆2,198 · Updated 4 months ago
- Hallo3: Highly Dynamic and Realistic Portrait Image Animation with Diffusion Transformer Networks ☆1,056 · Updated 3 weeks ago
- TangoFlux: Super Fast and Faithful Text to Audio Generation with Flow Matching ☆647 · Updated 3 weeks ago
- MotionFollower: Editing Video Motion via Lightweight Score-Guided Diffusion ☆211 · Updated 8 months ago
- Official implementation of the paper "AniDoc: Animation Creation Made Easier" ☆475 · Updated last month