MeiGen-AI / InfiniteTalk
Unlimited-length talking video generation that supports image-to-video and video-to-video generation
☆3,954 · Updated this week
Alternatives and similar repositories for InfiniteTalk
Users interested in InfiniteTalk are comparing it to the repositories listed below
- [NeurIPS 2025] Let Them Talk: Audio-Driven Multi-Person Conversational Video Generation ☆2,726 · Updated this week
- SkyReels V1: The first and most advanced open-source human-centric video foundation model ☆2,579 · Updated 9 months ago
- [ACM MM 2025] FantasyTalking: Realistic Talking Portrait Generation via Coherent Motion Synthesis ☆1,601 · Updated 4 months ago
- ☆1,964 · Updated this week
- SkyReels-V2: Infinite-length Film Generative model ☆5,209 · Updated 4 months ago
- HunyuanCustom: A Multimodal-Driven Architecture for Customized Video Generation ☆1,196 · Updated 2 months ago
- VoxCPM: Tokenizer-Free TTS for Context-Aware Speech Generation and True-to-Life Voice Cloning ☆2,958 · Updated last week
- Official implementation of "Sonic: Shifting Focus to Global Audio Perception in Portrait Animation" ☆3,144 · Updated 5 months ago
- Taming Stable Diffusion for Lip Sync! ☆5,262 · Updated 6 months ago
- [NeurIPS 2025] OmniTalker: Real-Time Text-Driven Talking Head Generation with In-Context Audio-Visual Style Replication ☆405 · Updated 3 months ago
- Sonic ("Shifting Focus to Global Audio Perception in Portrait Animation") adapted for use in ComfyUI ☆1,113 · Updated 2 months ago
- Implementation of "Live Avatar: Streaming Real-time Audio-Driven Avatar Generation with Infinite Length" ☆1,114 · Updated this week
- ☆5,564 · Updated last week
- "ViMax: Agentic Video Generation (Director, Screenwriter, Producer, and Video Generator All-in-One)" ☆1,559 · Updated last week
- SoulX-Podcast is an inference codebase by the Soul AI team for generating high-fidelity podcasts from text. ☆2,710 · Updated last week
- [ICCV 2025] Official implementations for paper: VACE: All-in-One Video Creation and Editing ☆3,498 · Updated 2 months ago
- An Open-Source Multimodal AIGC Solution based on ComfyUI + MCP + LLM https://pixelle.ai ☆855 · Updated this week
- This node provides lip-sync capabilities in ComfyUI using ByteDance's LatentSync model. It allows you to synchronize video lips with audio… ☆924 · Updated 3 months ago
- Phantom: Subject-Consistent Video Generation via Cross-Modal Alignment ☆1,462 · Updated 3 months ago
- ☆3,139 · Updated 9 months ago
- ☆2,893 · Updated 2 weeks ago
- LTX-Video Support for ComfyUI ☆2,451 · Updated 2 weeks ago
- [AAAI 2026] EchoMimicV3: 1.3B Parameters are All You Need for Unified Multi-Modal and Multi-Task Human Animation ☆667 · Updated 3 weeks ago
- AutoClip: AI-powered video clipping and highlight generation · an intelligent highlight extraction and editing tool for derivative content creation ☆930 · Updated 3 months ago
- [NeurIPS 2025] Image editing is worth a single LoRA! 0.1% training data for fantastic image editing! Surpasses GPT-4o in ID persistence~… ☆2,047 · Updated last month
- [CVPR 2025] MMAudio: Taming Multimodal Joint Training for High-Quality Video-to-Audio Synthesis ☆2,013 · Updated 3 weeks ago
- A fast AI Video Generator for the GPU Poor. Supports Wan 2.1/2.2, Qwen Image, Hunyuan Video, LTX Video and Flux. ☆3,452 · Updated last week
- ☆1,749 · Updated 4 months ago
- AutoClip: AI-powered video clipping and highlight generation · an intelligent highlight extraction and editing tool for derivative content creation ☆1,117 · Updated 2 months ago
- Official PyTorch implementation of One-Minute Video Generation with Test-Time Training ☆2,315 · Updated 6 months ago