Tencent-Hunyuan / HunyuanVideo-Avatar
☆1,981 · Updated 3 weeks ago
Alternatives and similar repositories for HunyuanVideo-Avatar
Users interested in HunyuanVideo-Avatar are comparing it to the repositories listed below.
- [ACM MM 2025] FantasyTalking: Realistic Talking Portrait Generation via Coherent Motion Synthesis ☆1,609 · Updated 4 months ago
- HunyuanCustom: A Multimodal-Driven Architecture for Customized Video Generation ☆1,196 · Updated 2 months ago
- [NeurIPS 2025] Let Them Talk: Audio-Driven Multi-Person Conversational Video Generation ☆2,762 · Updated 3 weeks ago
- Phantom: Subject-Consistent Video Generation via Cross-Modal Alignment ☆1,472 · Updated 4 months ago
- Implementation of "Live Avatar: Streaming Real-time Audio-Driven Avatar Generation with Infinite Length" ☆1,359 · Updated last week
- [ICCV 2025] 🔥🔥 UNO: A Universal Customization Method for Both Single and Multi-Subject Conditioning ☆1,343 · Updated 3 months ago
- Diffusion-based Portrait and Animal Animation ☆852 · Updated last month
- ☆1,044 · Updated 7 months ago
- SkyReels-A1: Expressive Portrait Animation in Video Diffusion Transformers ☆575 · Updated 7 months ago
- HuMo: Human-Centric Video Generation via Collaborative Multi-Modal Conditioning ☆1,061 · Updated 2 weeks ago
- HunyuanVideo-I2V: A Customizable Image-to-Video Model based on HunyuanVideo ☆1,762 · Updated 7 months ago
- PersonaLive!: Expressive Portrait Image Animation for Live Streaming ☆1,235 · Updated last week
- ☆752 · Updated 10 months ago
- [AAAI 2026] EchoMimicV3: 1.3B Parameters are All You Need for Unified Multi-Modal and Multi-Task Human Animation ☆691 · Updated last month
- [ICCV 2025] Official implementation of the paper "VACE: All-in-One Video Creation and Editing" ☆3,550 · Updated 2 months ago
- ☆1,874 · Updated 3 weeks ago
- SkyReels V1: The first and most advanced open-source human-centric video foundation model ☆2,620 · Updated 10 months ago
- [ACM MM 2025] Ditto: Motion-Space Diffusion for Controllable Realtime Talking Head Synthesis ☆653 · Updated last month
- [NeurIPS 2025] Image editing is worth a single LoRA! 0.1% training data for fantastic image editing! Surpasses GPT-4o in ID persistence~ … ☆2,058 · Updated 3 weeks ago
- ☆647 · Updated last month
- Stand-In is a lightweight, plug-and-play framework for identity-preserving video generation. ☆709 · Updated 3 weeks ago
- ☆784 · Updated 5 months ago
- Bring portraits to life in real time! ONNX/TensorRT support! Real-time portrait driving! ☆1,035 · Updated 6 months ago
- [ICCV 2025] ACTalker: an end-to-end video diffusion framework for talking head synthesis that supports both single and multi-signal control… ☆440 · Updated 4 months ago
- [NeurIPS 2025] OmniTalker: Real-Time Text-Driven Talking Head Generation with In-Context Audio-Visual Style Replication ☆411 · Updated 3 months ago
- This node provides lip-sync capabilities in ComfyUI using ByteDance's LatentSync model. It allows you to synchronize video lips with audi… ☆924 · Updated 4 months ago
- ☆2,489 · Updated 5 months ago
- [ICCV 2025] Official PyTorch Implementation of FLOAT: Generative Motion Latent Flow Matching for Audio-driven Talking Portrait ☆449 · Updated 2 months ago
- Official Python inference and LoRA trainer package for the LTX-2 audio–video generative model ☆937 · Updated this week
- [SIGGRAPH 2025] LAM: Large Avatar Model for One-shot Animatable Gaussian Head ☆893 · Updated 4 months ago