Nota-NetsPresso / BK-SDM
A Compressed Stable Diffusion for Efficient Text-to-Image Generation [ECCV'24]
☆312 · Updated last year
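BK-SDM checkpoints are published on the Hugging Face Hub and load as drop-in replacements for the standard Stable Diffusion pipeline. A minimal sketch, assuming the nota-ai/bk-sdm-small checkpoint (other BK-SDM variants load the same way):

```python
# Minimal sketch: load a BK-SDM checkpoint as a drop-in replacement
# for the standard Stable Diffusion pipeline in diffusers.
# "nota-ai/bk-sdm-small" is one published variant; this assumes a
# CUDA GPU and fp16 weights.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "nota-ai/bk-sdm-small", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of a corgi surfing a wave").images[0]
image.save("corgi.png")
```

Because the compressed U-Net keeps the original Stable Diffusion interface, no other pipeline code needs to change.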
Alternatives and similar repositories for BK-SDM
Users interested in BK-SDM are comparing it to the libraries listed below.
- [NeurIPS 2024] Official implementation of "Faster Diffusion: Rethinking the Role of UNet Encoder in Diffusion Models" ☆350 · Updated 10 months ago
- A working re-implementation of Latent Adversarial Diffusion Distillation by AMD ☆124 · Updated 7 months ago
- [NeurIPS 2023] Structural Pruning for Diffusion Models ☆216 · Updated last year
- [ICCV 2023] Q-Diffusion: Quantizing Diffusion Models ☆370 · Updated last year
- T-GATE: Temporally Gating Attention to Accelerate Diffusion Model for Free! ☆415 · Updated 11 months ago
- Official PyTorch and Diffusers implementation of "LinFusion: 1 GPU, 1 Minute, 16K Image" ☆313 · Updated last year
- [NeurIPS 2025] Official PyTorch implementation of "CLEAR: Conv-Like Linearization Revs Pre-Trained Diffusion Transformers Up" ☆214 · Updated 4 months ago
- [CVPR 2024] DeepCache: Accelerating Diffusion Models for Free (see the usage sketch after this list) ☆952 · Updated last year
- Official implementation of "PTQD: Accurate Post-Training Quantization for Diffusion Models" ☆103 · Updated last year
- Official implementation of "Controlling Text-to-Image Diffusion by Orthogonal Finetuning" ☆298 · Updated 5 months ago
- [CVPR 2023] Implementation of Post-training Quantization on Diffusion Models ☆141 · Updated 2 years ago
- [ICLR 2024 Spotlight] Official PyTorch implementation of "EfficientDM: Efficient Quantization-Aware Fine-Tuning of Low-Bit Diffusion Models" ☆68 · Updated last year
- 🚀 PyTorch implementation of "Progressive Distillation for Fast Sampling of Diffusion Models" (v-diffusion) ☆257 · Updated 3 years ago
- Code for instruction-tuning Stable Diffusion ☆249 · Updated last year
- Open-source implementation and models of One-step Diffusion with Distribution Matching Distillation ☆180 · Updated last year
- SpeeD: A Closer Look at Time Steps is Worthy of Triple Speed-Up for Diffusion Model Training
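Several of the repositories above hook into an existing diffusers pipeline rather than replacing it. As one example, a hedged sketch of applying DeepCache to the BK-SDM pipeline loaded earlier, assuming the pip-installable DeepCache package's DeepCacheSDHelper interface; the parameter values are illustrative, not tuned:

```python
# Hedged sketch: accelerate an existing diffusers pipeline with
# DeepCache (assumes `pip install DeepCache`). The helper caches
# high-level U-Net features and reuses them across nearby steps.
from DeepCache import DeepCacheSDHelper

helper = DeepCacheSDHelper(pipe=pipe)  # reuse the pipeline from the sketch above
helper.set_params(cache_interval=3, cache_branch_id=0)  # illustrative values
helper.enable()

image = pipe("a photo of a corgi surfing a wave").images[0]

helper.disable()  # restore the uncached U-Net forward pass
```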