MeiGen-AI / InfiniteTalk
Unlimited-length talking video generation that supports image-to-video and video-to-video generation
☆4,303 · Updated 3 weeks ago
Alternatives and similar repositories for InfiniteTalk
Users interested in InfiniteTalk are comparing it to the repositories listed below.
- [NeurIPS 2025] Let Them Talk: Audio-Driven Multi-Person Conversational Video Generation ☆2,762 · Updated 3 weeks ago
- SkyReels V1: The first and most advanced open-source human-centric video foundation model ☆2,620 · Updated 10 months ago
- [ACM MM 2025] FantasyTalking: Realistic Talking Portrait Generation via Coherent Motion Synthesis ☆1,609 · Updated 4 months ago
- ☆1,981 · Updated 3 weeks ago
- Taming Stable Diffusion for Lip Sync! ☆5,317 · Updated 6 months ago
- SkyReels-V2: Infinite-length Film Generative model ☆5,608 · Updated 5 months ago
- Official implementation of "Sonic: Shifting Focus to Global Audio Perception in Portrait Animation" ☆3,170 · Updated this week
- Implementation of "Live Avatar: Streaming Real-time Audio-Driven Avatar Generation with Infinite Length" ☆1,359 · Updated last week
- VoxCPM: Tokenizer-Free TTS for Context-Aware Speech Generation and True-to-Life Voice Cloning ☆3,273 · Updated last week
- "ViMax: Agentic Video Generation (Director, Screenwriter, Producer, and Video Generator All-in-One)" ☆1,784 · Updated 3 weeks ago
- ☆2,982 · Updated 3 weeks ago
- [NeurIPS 2025] OmniTalker: Real-Time Text-Driven Talking Head Generation with In-Context Audio-Visual Style Replication ☆411 · Updated 3 months ago
- Sonic ("Shifting Focus to Global Audio Perception in Portrait Animation") packaged for use in ComfyUI ☆1,117 · Updated 3 months ago
- [CVPR 2025] EchoMimicV2: Towards Striking, Simplified, and Semi-Body Human Animation ☆4,441 · Updated 5 months ago
- HunyuanCustom: A Multimodal-Driven Architecture for Customized Video Generation ☆1,196 · Updated 2 months ago
- SoulX-Podcast is an inference codebase by the Soul AI team for generating high-fidelity podcasts from text. ☆2,971 · Updated last month
- [AAAI 2026] EchoMimicV3: 1.3B Parameters are All You Need for Unified Multi-Modal and Multi-Task Human Animation ☆691 · Updated last month
- AutoClip: AI-powered video clipping and highlight generation (an intelligent highlight-extraction and re-editing tool) ☆936 · Updated 3 months ago
- Official PyTorch implementation of One-Minute Video Generation with Test-Time Training ☆2,340 · Updated 7 months ago
- PersonaLive!: Expressive Portrait Image Animation for Live Streaming ☆1,235 · Updated last week
- AutoClip: AI-powered video clipping and highlight generation (an intelligent highlight-extraction and re-editing tool) ☆1,306 · Updated 3 months ago
- This node provides lip-sync capabilities in ComfyUI using ByteDance's LatentSync model. It allows you to synchronize video lips with audi… ☆924 · Updated 4 months ago
- ☆1,874 · Updated 3 weeks ago
- [ICCV 2025] Official implementations for paper: VACE: All-in-One Video Creation and Editing ☆3,550 · Updated 2 months ago
- Phantom: Subject-Consistent Video Generation via Cross-Modal Alignment ☆1,472 · Updated 4 months ago
- [NeurIPS 2025] Image editing is worth a single LoRA! 0.1% training data for fantastic image editing! Surpasses GPT-4o in ID persistence~ … ☆2,058 · Updated 3 weeks ago
- An Open-Source Multimodal AIGC Solution based on ComfyUI + MCP + LLM https://pixelle.ai ☆886 · Updated 3 weeks ago
- ☆1,766 · Updated 5 months ago
- Open Cut API. ☆1,495 · Updated this week
- ☆5,820 · Updated last week