Kwai-Kolors / Kolors
Kolors Team
☆4,587 · Updated last year
Alternatives and similar repositories for Kolors
Users interested in Kolors are comparing it to the repositories listed below.
- High-Quality Human Motion Video Generation with Confidence-aware Pose Guidance ☆2,495 · Updated last month
- [NeurIPS 2024] Official code for PuLID: Pure and Lightning ID Customization via Contrastive Alignment ☆3,505 · Updated 5 months ago
- [ICLR 2025] Pyramidal Flow Matching for Efficient Video Generative Modeling ☆3,143 · Updated last year
- Hunyuan-DiT: A Powerful Multi-Resolution Diffusion Transformer with Fine-Grained Chinese Understanding ☆4,287 · Updated last month
- 📺 An End-to-End Solution for High-Resolution and Long Video Generation Based on Transformer Diffusion ☆2,242 · Updated 10 months ago
- [CVPR 2025] StreamingT2V: Consistent, Dynamic, and Extendable Long Video Generation from Text ☆1,619 · Updated 9 months ago
- Official repository of In-Context LoRA for Diffusion Transformers ☆2,045 · Updated last year
- [AAAI 2025] EchoMimic: Lifelike Audio-Driven Portrait Animations through Editable Landmark Conditioning ☆4,157 · Updated 5 months ago
- Character Animation (AnimateAnyone, Face Reenactment) ☆3,473 · Updated last year
- MusePose: a Pose-Driven Image-to-Video Framework for Virtual Human Generation ☆2,641 · Updated 10 months ago
- ☆2,229 · Updated last year
- [ECCV 2024, Oral] DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors ☆2,978 · Updated last year
- Transparent Image Layer Diffusion using Latent Transparency ☆2,183 · Updated last year
- The image prompt adapter is designed to enable a pretrained text-to-image diffusion model to generate images with image prompt. ☆6,387 · Updated last year
- Enjoy the magic of Diffusion models! ☆11,297 · Updated last week
- MuseV: Infinite-length and High Fidelity Virtual Human Video Generation with Visual Conditioned Parallel Denoising ☆2,807 · Updated last year
- Accepted as a [NeurIPS 2024] Spotlight Presentation Paper ☆6,374 · Updated last year
- V-Express aims to generate a talking head video under the control of a reference image, an audio clip, and a sequence of V-Kps images. ☆2,360 · Updated 11 months ago
- InstantStyle: Free Lunch towards Style-Preserving in Text-to-Image Generation 🔥 ☆2,002 · Updated last year
- More relighting! ☆8,338 · Updated 10 months ago
- AniPortrait: Audio-Driven Synthesis of Photorealistic Portrait Animation ☆5,022 · Updated last year
- [ICLR 2025] CatVTON is a simple and efficient virtual try-on diffusion model with 1) Lightweight Network (899.06M parameters totally), 2)… ☆1,562 · Updated 3 weeks ago
- A general fine-tuning kit geared toward image/video/audio diffusion models. ☆2,697 · Updated this week
- PixArt-Σ: Weak-to-Strong Training of Diffusion Transformer for 4K Text-to-Image Generation ☆1,884 · Updated last year
- OmniGen: Unified Image Generation. https://arxiv.org/pdf/2409.11340 ☆4,296 · Updated last month
- Official implementations for paper: Zero-shot Image Editing with Reference Imitation ☆1,303 · Updated last year
- Your image is almost there! ☆7,660 · Updated last year
- Official implementation of "MIMO: Controllable Character Video Synthesis with Spatial Decomposed Modeling" ☆1,564 · Updated 6 months ago
- Lumina-T2X is a unified framework for Text to Any Modality Generation ☆2,244 · Updated 10 months ago
- Official implementation of "Sonic: Shifting Focus to Global Audio Perception in Portrait Animation" ☆3,164 · Updated 6 months ago