MeiGen-AI / MultiTalk
Let Them Talk: Audio-Driven Multi-Person Conversational Video Generation
☆2,211 · Updated last week
Alternatives and similar repositories for MultiTalk
Users interested in MultiTalk are comparing it to the repositories listed below.
- ☆1,800 · Updated 2 months ago
- [ACM MM 2025] FantasyTalking: Realistic Talking Portrait Generation via Coherent Motion Synthesis ☆1,504 · Updated this week
- HunyuanCustom: A Multimodal-Driven Architecture for Customized Video Generation ☆1,158 · Updated 2 months ago
- Phantom: Subject-Consistent Video Generation via Cross-Modal Alignment ☆1,372 · Updated last month
- HunyuanVideo-I2V: A Customizable Image-to-Video Model based on HunyuanVideo ☆1,645 · Updated 3 months ago
- SkyReels V1: The first and most advanced open-source human-centric video foundation model ☆2,250 · Updated 5 months ago
- Official implementation of the paper "VACE: All-in-One Video Creation and Editing" ☆3,144 · Updated 3 months ago
- [ICCV 2025] 🔥🔥 UNO: A Universal Customization Method for Both Single and Multi-Subject Conditioning ☆1,207 · Updated last week
- SkyReels-A1: Expressive Portrait Animation in Video Diffusion Transformers ☆559 · Updated 2 months ago
- ☆1,023 · Updated 3 months ago
- Diffusion-based Portrait and Animal Animation ☆822 · Updated 5 months ago
- Unlimited-length talking video generation that supports image-to-video and video-to-video generation ☆185 · Updated this week
- Project page repo for OmniTalker: Real-Time Text-Driven Talking Head Generation with In-Context Audio-Visual Style Replication ☆376 · Updated 3 weeks ago
- [CVPR 2025] MMAudio: Taming Multimodal Joint Training for High-Quality Video-to-Audio Synthesis ☆1,801 · Updated last week
- Image editing is worth a single LoRA! 0.1% training data for fantastic image editing! Training released! Surpasses GPT-4o in ID persistence. ☆1,898 · Updated this week
- Official implementation of "Sonic: Shifting Focus to Global Audio Perception in Portrait Animation" ☆3,004 · Updated last month
- ☆751 · Updated 6 months ago
- ☆610 · Updated last month
- A fast AI video generator for the GPU Poor. Supports Wan 2.1/2.2, Hunyuan Video, LTX Video, and Flux. ☆2,397 · Updated this week
- Sonic is a method for "Shifting Focus to Global Audio Perception in Portrait Animation"; you can use it in ComfyUI ☆1,086 · Updated 3 months ago
- ☆2,392 · Updated last month
- SkyReels-V2: Infinite-length Film Generative Model ☆4,240 · Updated 2 weeks ago
- Bring portraits to life in real time, with ONNX/TensorRT support! ☆957 · Updated last month
- [CVPR 2025 Highlight] Video Generation Foundation Models: https://saiyan-world.github.io/goku/ ☆2,878 · Updated 6 months ago
- This node provides lip-sync capabilities in ComfyUI using ByteDance's LatentSync model. It allows you to synchronize video lips with audio. ☆884 · Updated 2 months ago
- Official PyTorch implementation of "One-Minute Video Generation with Test-Time Training" ☆2,051 · Updated 2 months ago
- ☆765 · Updated last month
- Implementation of "EasyControl: Adding Efficient and Flexible Control for Diffusion Transformer" (ICCV 2025) ☆1,646 · Updated last month
- [ACM MM 2025] Ditto: Motion-Space Diffusion for Controllable Realtime Talking Head Synthesis ☆443 · Updated last month
- LTX-Video Support for ComfyUI ☆2,323 · Updated last month