Fantasy-AMAP / fantasy-talking
FantasyTalking: Realistic Talking Portrait Generation via Coherent Motion Synthesis
☆1,136 Updated this week
Alternatives and similar repositories for fantasy-talking
Users interested in fantasy-talking are comparing it to the libraries listed below.
- Phantom: Subject-Consistent Video Generation via Cross-Modal Alignment ☆895 Updated this week
- 🔥🔥 UNO: A Universal Customization Method for Both Single and Multi-Subject Conditioning ☆1,030 Updated last month
- Official implementations for paper: VACE: All-in-One Video Creation and Editing ☆1,638 Updated this week
- ☆925 Updated this week
- SkyReels-A1: Expressive Portrait Animation in Video Diffusion Transformers ☆501 Updated 2 weeks ago
- HunyuanCustom: A Multimodal-Driven Architecture for Customized Video Generation ☆823 Updated this week
- This node provides lip-sync capabilities in ComfyUI using ByteDance's LatentSync model. It allows you to synchronize video lips with audi… ☆766 Updated last week
- Image editing is worth a single LoRA! 0.1% training data for fantastic image editing! Training released! Surpasses GPT-4o in ID persisten… ☆1,327 Updated this week
- Diffusion-based Portrait and Animal Animation ☆770 Updated 2 months ago
- Bring portraits to life in real time! ONNX/TensorRT support! Real-time portrait driving! ☆872 Updated 3 months ago
- Memory-Guided Diffusion for Expressive Talking Video Generation ☆823 Updated 3 months ago
- ☆742 Updated 3 months ago
- ☆463 Updated 2 weeks ago
- Source code for the SIGGRAPH 2024 paper "X-Portrait: Expressive Portrait Animation with Hierarchical Motion Attention" ☆506 Updated 9 months ago
- ☆1,102 Updated 3 weeks ago
- ☆539 Updated last month
- 📹 A more flexible framework that can generate videos at any resolution and create videos from images. ☆1,005 Updated this week
- A pipeline parallel training script for diffusion models. ☆1,019 Updated last week
- ACTalker: an end-to-end video diffusion framework for talking head synthesis that supports both single and multi-signal control (e.g., au… ☆262 Updated 3 weeks ago
- Sonic is a method about "Shifting Focus to Global Audio Perception in Portrait Animation"; you can use it in ComfyUI ☆939 Updated 2 months ago
- HunyuanVideo-I2V: A Customizable Image-to-Video Model based on HunyuanVideo ☆1,416 Updated last month
- Official repository of In-Context LoRA for Diffusion Transformers ☆1,850 Updated 4 months ago
- [ICLR2025] DisPose: Disentangling Pose Guidance for Controllable Human Image Animation ☆367 Updated 3 months ago
- Wan 2.1 for the GPU Poor ☆837 Updated last week
- [768 Resolution] [Any "SDXL" Model] [Various Conditions] [Texture Synthesis] Official impl. of "MV-Adapter: Multi-view Consistent Image G… ☆922 Updated this week
- SkyReels V1: The first and most advanced open-source human-centric video foundation model ☆2,148 Updated 2 months ago
- The official implementation of CVPR'25 Oral paper "Go-with-the-Flow: Motion-Controllable Video Diffusion Models Using Real-Time Warped No… ☆882 Updated this week
- ☆781 Updated 6 months ago
- You can use EchoMimic in ComfyUI ☆618 Updated last month
- HunyuanVideo GP: Large Video Generation Model - GPU Poor version ☆412 Updated last month