MeiGen-AI / MultiTalk
[NeurIPS 2025] Let Them Talk: Audio-Driven Multi-Person Conversational Video Generation
☆2,791 · Updated last month
Alternatives and similar repositories for MultiTalk
Users interested in MultiTalk are comparing it to the repositories listed below.
- ☆2,024 · Updated last month
- [ACM MM 2025] FantasyTalking: Realistic Talking Portrait Generation via Coherent Motion Synthesis ☆1,617 · Updated last week
- HunyuanCustom: A Multimodal-Driven Architecture for Customized Video Generation ☆1,204 · Updated 3 months ago
- Phantom: Subject-Consistent Video Generation via Cross-Modal Alignment ☆1,477 · Updated 4 months ago
- Unlimited-length talking video generation that supports image-to-video and video-to-video generation ☆4,669 · Updated last month
- [ICCV 2025] Official implementation of the paper "VACE: All-in-One Video Creation and Editing" ☆3,604 · Updated 3 months ago
- SkyReels V1: The first and most advanced open-source human-centric video foundation model ☆2,643 · Updated 10 months ago
- HuMo: Human-Centric Video Generation via Collaborative Multi-Modal Conditioning ☆1,127 · Updated last week
- Implementation of "Live Avatar: Streaming Real-time Audio-Driven Avatar Generation with Infinite Length" ☆1,750 · Updated last week
- ☆1,046 · Updated 8 months ago
- ☆2,011 · Updated last month
- [AAAI 2026] EchoMimicV3: 1.3B Parameters are All You Need for Unified Multi-Modal and Multi-Task Human Animation ☆739 · Updated last week
- HunyuanVideo-I2V: A Customizable Image-to-Video Model based on HunyuanVideo ☆1,776 · Updated 8 months ago
- Diffusion-based Portrait and Animal Animation ☆854 · Updated last month
- [ICCV 2025] 🔥🔥 UNO: A Universal Customization Method for Both Single and Multi-Subject Conditioning ☆1,350 · Updated 4 months ago
- PersonaLive!: Expressive Portrait Image Animation for Live Streaming ☆1,583 · Updated last month
- SkyReels-A1: Expressive Portrait Animation in Video Diffusion Transformers ☆582 · Updated 8 months ago
- [NeurIPS 2025] Image editing is worth a single LoRA! 0.1% training data for fantastic image editing! Surpasses GPT-4o in ID persistence… ☆2,074 · Updated last month
- ☆2,496 · Updated 6 months ago
- [CVPR 2025] MMAudio: Taming Multimodal Joint Training for High-Quality Video-to-Audio Synthesis ☆2,083 · Updated 2 months ago
- [CVPR 2025 Highlight] Video Generation Foundation Models: https://saiyan-world.github.io/goku/ ☆2,904 · Updated 11 months ago
- [NeurIPS 2025] OmniTalker: Real-Time Text-Driven Talking Head Generation with In-Context Audio-Visual Style Replication ☆417 · Updated 4 months ago
- Sonic is a method for "Shifting Focus to Global Audio Perception in Portrait Animation"; it can be used in ComfyUI ☆1,123 · Updated 4 months ago
- ☆650 · Updated 2 months ago
- A fast AI Video Generator for the GPU Poor. Supports Wan 2.1/2.2, Qwen Image, Hunyuan Video, LTX Video and Flux. ☆4,298 · Updated last week
- LTX-Video Support for ComfyUI ☆3,062 · Updated last week
- This node provides lip-sync capabilities in ComfyUI using ByteDance's LatentSync model. It allows you to synchronize video lips with audio… ☆931 · Updated 5 months ago
- Official Python inference and LoRA trainer package for the LTX-2 audio–video generative model. ☆3,485 · Updated last week
- ☆1,782 · Updated 6 months ago
- Implementation of "EasyControl: Adding Efficient and Flexible Control for Diffusion Transformer" (ICCV 2025) ☆1,713 · Updated 6 months ago