[ICCV 2023] Q-Diffusion: Quantizing Diffusion Models.
☆370, updated Mar 21, 2024
Alternatives and similar repositories for q-diffusion
Users interested in q-diffusion are comparing it to the repositories listed below.
- The official implementation of PTQD: Accurate Post-Training Quantization for Diffusion Models · ☆103, updated Mar 12, 2024
- Implementation of Post-training Quantization on Diffusion Models (CVPR 2023) · ☆141, updated Apr 1, 2023
- [ICCV 2025] QuEST: Efficient Finetuning for Low-bit Diffusion Models · ☆57, updated Jun 26, 2025
- [CVPR 2024 Highlight & TPAMI 2025] The official PyTorch implementation of "TFMQ-DM: Temporal Feature Maintenance Quantization for… · ☆108, updated Sep 29, 2025
- [ICLR 2024 Spotlight] The official PyTorch implementation of "EfficientDM: Efficient Quantization-Aware Fine-Tuning of Low-Bit Di… · ☆68, updated Jun 4, 2024
- [ICLR'25] ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation · ☆151, updated Mar 21, 2025
- [CVPR 2024] DeepCache: Accelerating Diffusion Models for Free · ☆957, updated Jun 27, 2024
- PyTorch implementation of BRECQ, ICLR 2021 · ☆290, updated Aug 1, 2021
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models · ☆1,612, updated Jul 12, 2024
- Model Compression Toolbox for Large Language Models and Diffusion Models · ☆761, updated Aug 14, 2025
- The official PyTorch implementation of the ICLR 2022 paper QDrop: Randomly Dropping Quantization for Extremely Low-bit Post-Training Quan… · ☆128, updated Sep 23, 2025
- [NeurIPS 2023] Structural Pruning for Diffusion Models · ☆217, updated Jul 8, 2024
- [CVPR 2025] Q-DiT: Accurate Post-Training Quantization for Diffusion Transformers · ☆74, updated Sep 3, 2024
- [CVPR 2024 Highlight] DistriFusion: Distributed Parallel Inference for High-Resolution Diffusion Models · ☆724, updated Dec 2, 2024
- Integer operators on GPUs for PyTorch · ☆237, updated Sep 29, 2023
- [CVPR 2023] PD-Quant: Post-Training Quantization Based on Prediction Difference Metric · ☆60, updated Mar 23, 2023
- [ECCV'24] MixDQ: Memory-Efficient Few-Step Text-to-Image Diffusion Models with Metric-Decoupled Mixed Precision Quantization · ☆49, updated Nov 27, 2024
- A Compressed Stable Diffusion for Efficient Text-to-Image Generation [ECCV'24] · ☆313, updated Jul 6, 2024
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… · ☆817, updated Mar 6, 2025
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers" · ☆2,261, updated Mar 27, 2024
- Tiny optimized Stable Diffusion that can run on GPUs with just 1 GB of VRAM (beta) · ☆182, updated Jul 20, 2023
- Code for the NeurIPS 2024 paper QuaRot, an end-to-end 4-bit inference method for large language models · ☆485, updated Nov 26, 2024
- List of papers on neural network quantization in recent AI conferences and journals · ☆805, updated Mar 27, 2025
- ☆169, updated Mar 9, 2023
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving · ☆336, updated Jul 2, 2024
- Segmind Distilled diffusion · ☆619, updated Oct 18, 2023
- BitPack, a practical tool for efficiently saving ultra-low-precision/mixed-precision quantized models · ☆58, updated Feb 7, 2023
- [ICLR 2024 Spotlight] OmniQuant, a simple and powerful quantization technique for LLMs · ☆890, updated Nov 26, 2025
- Reorder-based post-training quantization for large language models · ☆199, updated May 17, 2023
- ☆36, updated Mar 29, 2023
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring
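Most of the post-training quantization projects listed above build on the same primitive: mapping floating-point weights or activations to low-bit integers with a scale factor. A minimal sketch of symmetric uniform per-tensor quantization in NumPy (function names are illustrative and not taken from any listed repository):

```python
import numpy as np

def quantize_symmetric(x: np.ndarray, num_bits: int = 8):
    """Map floats to signed integers with one per-tensor scale (illustrative)."""
    qmax = 2 ** (num_bits - 1) - 1           # e.g. 127 for 8-bit
    scale = np.abs(x).max() / qmax           # single scale for the whole tensor
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original floats."""
    return q.astype(np.float32) * scale

np.random.seed(0)
x = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_symmetric(x, num_bits=8)
x_hat = dequantize(q, s)
err = np.abs(x - x_hat).max()                # bounded by scale / 2
```

The listed methods (BRECQ, GPTQ, SmoothQuant, Q-Diffusion, etc.) differ mainly in how they choose the scales and correct the resulting rounding error, not in this basic integer mapping.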