Fantasy-AMAP / fantasy-talking
[ACM MM 2025] FantasyTalking: Realistic Talking Portrait Generation via Coherent Motion Synthesis
☆1,595 · Updated 3 months ago
Alternatives and similar repositories for fantasy-talking
Users interested in fantasy-talking are comparing it to the repositories listed below.
- ☆1,948 · Updated last month
- Phantom: Subject-Consistent Video Generation via Cross-Modal Alignment ☆1,461 · Updated 2 months ago
- [NeurIPS 2025] Let Them Talk: Audio-Driven Multi-Person Conversational Video Generation ☆2,695 · Updated 2 months ago
- ☆1,043 · Updated 6 months ago
- Diffusion-based Portrait and Animal Animation ☆844 · Updated 2 months ago
- HunyuanCustom: A Multimodal-Driven Architecture for Customized Video Generation ☆1,196 · Updated last month
- [ICCV 2025] 🔥🔥 UNO: A Universal Customization Method for Both Single and Multi-Subject Conditioning ☆1,335 · Updated 2 months ago
- SkyReels-A1: Expressive Portrait Animation in Video Diffusion Transformers ☆572 · Updated 6 months ago
- Bring portraits to life in real time! ONNX/TensorRT support! ☆1,012 · Updated 5 months ago
- [NeurIPS 2025] Image editing is worth a single LoRA! 0.1% training data for fantastic image editing! Surpasses GPT-4o in ID persistence… ☆2,038 · Updated 3 weeks ago
- HunyuanVideo-I2V: A Customizable Image-to-Video Model based on HunyuanVideo ☆1,746 · Updated 6 months ago
- ☆779 · Updated 4 months ago
- [ACM MM 2025] Ditto: Motion-Space Diffusion for Controllable Realtime Talking Head Synthesis ☆597 · Updated 3 weeks ago
- This node provides lip-sync capabilities in ComfyUI using ByteDance's LatentSync model. It allows you to synchronize video lips with audi… ☆924 · Updated 3 months ago
- [ICCV 2025] Official implementations for paper: VACE: All-in-One Video Creation and Editing ☆3,469 · Updated last month
- Memory-Guided Diffusion for Expressive Talking Video Generation ☆1,069 · Updated 4 months ago
- SkyReels V1: The first and most advanced open-source human-centric video foundation model ☆2,505 · Updated 9 months ago
- Sonic is a method for "Shifting Focus to Global Audio Perception in Portrait Animation"; you can use it in ComfyUI ☆1,111 · Updated 2 months ago
- ☆641 · Updated 3 weeks ago
- Stand-In is a lightweight, plug-and-play framework for identity-preserving video generation. ☆687 · Updated 3 months ago
- 📹 A more flexible framework that can generate videos at any resolution and creates videos from images. ☆1,604 · Updated last week
- HuMo: Human-Centric Video Generation via Collaborative Multi-Modal Conditioning ☆963 · Updated last month
- [ICCV 2025] ACTalker: an end-to-end video diffusion framework for talking head synthesis that supports both single and multi-signal control… ☆427 · Updated 3 months ago
- ☆755 · Updated 9 months ago
- [AAAI 2026] EchoMimicV3: 1.3B Parameters are All You Need for Unified Multi-Modal and Multi-Task Human Animation ☆655 · Updated 2 weeks ago
- ☆1,329 · Updated 7 months ago
- 📺 An End-to-End Solution for High-Resolution and Long Video Generation Based on Transformer Diffusion ☆2,236 · Updated 9 months ago
- [ICCV 2025] Official PyTorch Implementation of FLOAT: Generative Motion Latent Flow Matching for Audio-driven Talking Portrait ☆432 · Updated last month
- [CVPR 2025] MatAnyone: Stable Video Matting with Consistent Memory Propagation ☆1,414 · Updated last month
- FantasyPortrait: Enhancing Multi-Character Portrait Animation with Expression-Augmented Diffusion Transformers ☆485 · Updated 3 months ago