Tencent-Hunyuan / HunyuanCustom
HunyuanCustom: A Multimodal-Driven Architecture for Customized Video Generation
⭐1,187 · Updated last week
Alternatives and similar repositories for HunyuanCustom
Users interested in HunyuanCustom are comparing it to the repositories listed below.
- Phantom: Subject-Consistent Video Generation via Cross-Modal Alignment ⭐1,440 · Updated last month
- [ICCV 2025] 🔥🔥 UNO: A Universal Customization Method for Both Single and Multi-Subject Conditioning ⭐1,315 · Updated last month
- ⭐1,038 · Updated 5 months ago
- ⭐1,893 · Updated last week
- [ACM MM 2025] FantasyTalking: Realistic Talking Portrait Generation via Coherent Motion Synthesis ⭐1,571 · Updated 2 months ago
- SkyReels-A1: Expressive Portrait Animation in Video Diffusion Transformers ⭐561 · Updated 4 months ago
- HunyuanVideo-I2V: A Customizable Image-to-Video Model based on HunyuanVideo ⭐1,701 · Updated 5 months ago
- ⭐753 · Updated 8 months ago
- HuMo: Human-Centric Video Generation via Collaborative Multi-Modal Conditioning ⭐753 · Updated this week
- A SOTA open-source image editing model, which aims to provide comparable performance against the closed-source models like GPT-4o and Gem… ⭐1,688 · Updated last month
- SkyReels-A2: Compose anything in video diffusion transformers ⭐674 · Updated 4 months ago
- [NeurIPS 2025] Let Them Talk: Audio-Driven Multi-Person Conversational Video Generation ⭐2,578 · Updated last month
- ⭐779 · Updated 3 months ago
- Stand-In is a lightweight, plug-and-play framework for identity-preserving video generation. ⭐653 · Updated last month
- The official implementation of CVPR'25 Oral paper "Go-with-the-Flow: Motion-Controllable Video Diffusion Models Using Real-Time Warped No… ⭐1,034 · Updated last week
- [ICCV 2025] Official implementations for paper: VACE: All-in-One Video Creation and Editing ⭐3,350 · Updated last week
- [NeurIPS 2025] Image editing is worth a single LoRA! 0.1% training data for fantastic image editing! Surpasses GPT-4o in ID persistence~ … ⭐1,998 · Updated last week
- [ICCV'25 Oral] ReCamMaster: Camera-Controlled Generative Rendering from A Single Video ⭐1,520 · Updated 3 months ago
- ⭐625 · Updated 3 months ago
- Implementation of [CVPR 2025] "DiffSensei: Bridging Multi-Modal LLMs and Diffusion Models for Customized Manga Generation" ⭐857 · Updated 8 months ago
- Lumina-Image 2.0: A Unified and Efficient Image Generative Framework ⭐808 · Updated 3 months ago
- HunyuanImage-3.0: A Powerful Native Multimodal Model for Image Generation ⭐2,250 · Updated last week
- SkyReels V1: The first and most advanced open-source human-centric video foundation model ⭐2,418 · Updated 7 months ago
- Pusa: Thousands Timesteps Video Diffusion Model ⭐658 · Updated last month
- [ICCV 2025] ACTalker: an end-to-end video diffusion framework for talking head synthesis that supports both single and multi-signal control… ⭐416 · Updated 2 months ago
- 📹 A more flexible framework that can generate videos at any resolution and creates videos from images. ⭐1,492 · Updated this week
- Diffusion-based Portrait and Animal Animation ⭐841 · Updated last month
- FantasyPortrait: Enhancing Multi-Character Portrait Animation with Expression-Augmented Diffusion Transformers ⭐473 · Updated 2 months ago
- Illumination Drawing Tools for Text-to-Image Diffusion Models ⭐778 · Updated 5 months ago
- [ICCV 2025 Highlight] OminiControl: Minimal and Universal Control for Diffusion Transformer ⭐1,804 · Updated 3 months ago