gen-ai-team / Wan2.1-NABLA
Wan: Open and Advanced Large-Scale Video Generative Models
☆23 · Updated 3 months ago
Alternatives and similar repositories for Wan2.1-NABLA
Users interested in Wan2.1-NABLA are comparing it to the libraries listed below.
- DC-Gen: Post-Training Diffusion Acceleration with Deeply Compressed Latent Space ☆285 · Updated last month
- An open-source implementation of Regional Adaptive Sampling (RAS), a novel diffusion model sampling strategy that introduces regional var… ☆146 · Updated 4 months ago
- Official GitHub repo for the NeurIPS 2024 paper "Immiscible Diffusion: Accelerating Diffusion Training with Noise Assignment" ☆61 · Updated 5 months ago
- Scale-wise Distillation of Diffusion Models ☆113 · Updated 2 months ago
- [NeurIPS 2024] AsyncDiff: Parallelizing Diffusion Models by Asynchronous Denoising ☆207 · Updated last month
- (CVPR 2025) Scaling Down Text Encoders of Text-to-Image Diffusion Models ☆47 · Updated 2 months ago
- Making Flux go brrr on GPUs. ☆154 · Updated 4 months ago
- [NeurIPS 2024] Invertible Consistency Distillation for Text-Guided Image Editing in Around 7 Steps ☆100 · Updated last year
- Official repository for the paper "VQDM: Accurate Compression of Text-to-Image Diffusion Models via Vector Quantization" ☆34 · Updated last year
- (CVPR 2025) Switti: Designing Scale-Wise Transformers for Text-to-Image Synthesis ☆197 · Updated 4 months ago
- [NeurIPS 2025] Official PyTorch implementation of the paper "CLEAR: Conv-Like Linearization Revs Pre-Trained Diffusion Transformers Up" ☆212 · Updated last month
- Score identity Distillation with Long and Short Guidance for One-Step Text-to-Image Generation ☆90 · Updated last week
- PixNerd: Pixel Neural Field Diffusion ☆127 · Updated 2 months ago
- Nitro-T is a family of text-to-image diffusion models focused on highly efficient training. ☆34 · Updated 4 months ago
- [ICML 2025] LoRA fine-tuning directly on quantized models. ☆36 · Updated 11 months ago
- [arXiv 2025] Upsample What Matters: Region-Adaptive Latent Sampling for Accelerated Diffusion Transformers ☆48 · Updated 3 months ago
- [NeurIPS 2024] Learning-to-Cache: Accelerating Diffusion Transformer via Layer Caching ☆116 · Updated last year
- FORA introduces a simple yet effective caching mechanism in the Diffusion Transformer architecture for faster inference sampling. ☆51 · Updated last year
- [WACV 2025] MegaFusion: Extend Diffusion Models towards Higher-resolution Image Generation without Further Tuning ☆96 · Updated 7 months ago
- [ICML 2025] Official PyTorch implementation of the paper "Ultra-Resolution Adaptation with Ease" ☆112 · Updated 6 months ago
- Code for our ICCV 2025 paper "Adaptive Caching for Faster Video Generation with Diffusion Transformers" ☆160 · Updated last year
- ☆49 · Updated 8 months ago
- Transition Models ☆133 · Updated last month
- ☆21 · Updated last year
- Inference-time scaling of diffusion-based image and video generation models. ☆172 · Updated 4 months ago
- Official implementation of "Guide-and-Rescale: Self-Guidance Mechanism for Effective Tuning-Free Real Image Editing" ☆54 · Updated last year
- ☆64 · Updated 3 months ago
- [ICML 2025] Official implementation of the paper "Compressed Image Generation with Denoising Diffusion Codebook Models" ☆71 · Updated 3 months ago
- Official implementation of DiCache: Let Diffusion Model Determine Its Own Cache ☆52 · Updated last month
- Minimal Differentiable Image Reward Functions ☆99 · Updated 3 months ago