gen-ai-team / Wan2.1-NABLA
Wan: Open and Advanced Large-Scale Video Generative Models
☆27 · Updated 5 months ago
Alternatives and similar repositories for Wan2.1-NABLA
Users interested in Wan2.1-NABLA are comparing it to the libraries listed below:
- [NeurIPS'2024] Invertible Consistency Distillation for Text-Guided Image Editing in Around 7 Steps☆101 · Updated last year
- Scale-wise Distillation of Diffusion Models☆113 · Updated 3 months ago
- DC-Gen: Post-Training Diffusion Acceleration with Deeply Compressed Latent Space☆321 · Updated 2 months ago
- (CVPR 2025) Switti: Designing Scale-Wise Transformers for Text-to-Image Synthesis☆200 · Updated 5 months ago
- PixNerd: Pixel Neural Field Diffusion☆139 · Updated 2 weeks ago
- An open-source implementation of Regional Adaptive Sampling (RAS), a novel diffusion model sampling strategy that introduces regional var…☆149 · Updated 6 months ago
- Official GitHub Repo for NeurIPS 2024 Paper Immiscible Diffusion: Accelerating Diffusion Training with Noise Assignment☆61 · Updated 6 months ago
- [NeurIPS 2025] Official PyTorch implementation of paper "CLEAR: Conv-Like Linearization Revs Pre-Trained Diffusion Transformers Up".☆212 · Updated 3 months ago
- [NeurIPS 2024] AsyncDiff: Parallelizing Diffusion Models by Asynchronous Denoising☆210 · Updated 3 months ago
- [WACV 2025] MegaFusion: Extend Diffusion Models towards Higher-resolution Image Generation without Further Tuning☆96 · Updated 8 months ago
- (CVPR 2025) Scaling Down Text Encoders of Text-to-Image Diffusion Models☆50 · Updated 3 months ago
- Transition Models☆139 · Updated 2 months ago
- Making Flux go brrr on GPUs.☆159 · Updated 5 months ago
- [NeurIPS 2025] Training-Free Efficient Video Generation via Dynamic Token Carving☆260 · Updated 4 months ago
- [ICML 2025] LoRA fine-tuning directly on quantized models.☆36 · Updated last year
- Nitro-T is a family of text-to-image diffusion models focused on highly efficient training.☆37 · Updated 5 months ago
- This is the official implementation of "T-LoRA: Single Image Diffusion Model Customization Without Overfitting"☆125 · Updated 5 months ago
- lite attention implemented over flash attention 3☆38 · Updated this week
- Distilling Diversity and Control in Diffusion Models☆49 · Updated 8 months ago
- Diffusion-Sharpening: Fine-tuning Diffusion Models with Denoising Trajectory Sharpening☆68 · Updated 7 months ago
- [arXiv 2025] Upsample What Matters: Region-Adaptive Latent Sampling for Accelerated Diffusion Transformers☆50 · Updated 4 months ago
- Inference-time scaling of diffusion-based image and video generation models.☆172 · Updated last week
- Score identity Distillation with Long and Short Guidance for One-Step Text-to-Image Generation☆94 · Updated 3 weeks ago
- Official implementation of paper "VMoBA: Mixture-of-Block Attention for Video Diffusion Models"☆58 · Updated 5 months ago
- [ICML 2025] Official PyTorch implementation of paper "Ultra-Resolution Adaptation with Ease".☆116 · Updated 7 months ago
- This repository includes the official implementation of our paper "Grouping First, Attending Smartly: Training-Free Acceleration for Diff…☆55 · Updated 7 months ago
- Minimal Differentiable Image Reward Functions☆106 · Updated 4 months ago
- Code for our ICCV 2025 paper "Adaptive Caching for Faster Video Generation with Diffusion Transformers"☆163 · Updated last year
- Adapting Self-Supervised Representations as a Latent Space for Efficient Generation☆33 · Updated 2 months ago
- FORA introduces a simple yet effective caching mechanism in the Diffusion Transformer architecture for faster inference sampling.☆52 · Updated last year