[ICCV 2023] Q-Diffusion: Quantizing Diffusion Models.
☆373 · Mar 21, 2024 · Updated 2 years ago
Alternatives and similar repositories for q-diffusion
Users that are interested in q-diffusion are comparing it to the libraries listed below.
- The official implementation of PTQD: Accurate Post-Training Quantization for Diffusion Models ☆103 · Mar 12, 2024 · Updated 2 years ago
- Implementation of Post-training Quantization on Diffusion Models (CVPR 2023) ☆143 · Apr 1, 2023 · Updated 3 years ago
- [ICCV 2025] QuEST: Efficient Finetuning for Low-bit Diffusion Models ☆59 · Jun 26, 2025 · Updated 10 months ago
- [CVPR 2024 Highlight & TPAMI 2025] This is the official PyTorch implementation of "TFMQ-DM: Temporal Feature Maintenance Quantization for… ☆108 · Sep 29, 2025 · Updated 7 months ago
- [ICLR 2024 Spotlight] This is the official PyTorch implementation of "EfficientDM: Efficient Quantization-Aware Fine-Tuning of Low-Bit Di… ☆71 · Jun 4, 2024 · Updated last year
- PyTorch implementation of BRECQ (ICLR 2021) ☆297 · Aug 1, 2021 · Updated 4 years ago
- [ICLR'25] ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation ☆156 · Mar 21, 2025 · Updated last year
- [CVPR 2024] DeepCache: Accelerating Diffusion Models for Free ☆963 · Jun 27, 2024 · Updated last year
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆1,644 · Jul 12, 2024 · Updated last year
- [NeurIPS 2023] Structural Pruning for Diffusion Models ☆219 · Jul 8, 2024 · Updated last year
- The official PyTorch implementation of the ICLR 2022 paper, QDrop: Randomly Dropping Quantization for Extremely Low-bit Post-Training Quan… ☆130 · Sep 23, 2025 · Updated 7 months ago
- Model Compression Toolbox for Large Language Models and Diffusion Models ☆779 · Aug 14, 2025 · Updated 8 months ago
- [CVPR 2025] Q-DiT: Accurate Post-Training Quantization for Diffusion Transformers ☆77 · Sep 3, 2024 · Updated last year
- [ECCV 2024] MixDQ: Memory-Efficient Few-Step Text-to-Image Diffusion Models with Metric-Decoupled Mixed Precision Quantization ☆49 · Nov 27, 2024 · Updated last year
- This repository contains integer operators on GPUs for PyTorch. ☆236 · Sep 29, 2023 · Updated 2 years ago
- [CVPR 2024 Highlight] DistriFusion: Distributed Parallel Inference for High-Resolution Diffusion Models ☆727 · Dec 2, 2024 · Updated last year
- Code for the NeurIPS 2024 paper QuaRot: end-to-end 4-bit inference of large language models. ☆506 · Nov 26, 2024 · Updated last year
- A Compressed Stable Diffusion for Efficient Text-to-Image Generation [ECCV'24] ☆315 · Jul 6, 2024 · Updated last year
- [CVPR 2023] PD-Quant: Post-Training Quantization Based on Prediction Difference Metric ☆61 · Mar 23, 2023 · Updated 3 years ago
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". ☆2,301 · Mar 27, 2024 · Updated 2 years ago
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ☆835 · Mar 6, 2025 · Updated last year
- The official implementation of "EDA-DM: Enhanced Distribution Alignment for Post-Training Quantization of Diffusion Models" ☆21 · Jul 8, 2025 · Updated 9 months ago
- Segmind Distilled diffusion ☆618 · Oct 18, 2023 · Updated 2 years ago
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving ☆338 · Jul 2, 2024 · Updated last year
- ☆172 · Mar 9, 2023 · Updated 3 years ago
- Tiny optimized Stable Diffusion that can run on GPUs with just 1 GB of VRAM. (Beta) ☆182 · Jul 20, 2023 · Updated 2 years ago
- List of papers related to neural network quantization in recent AI conferences and journals. ☆820 · Mar 27, 2025 · Updated last year
- [ICLR 2024 Spotlight] OmniQuant is a simple and powerful quantization technique for LLMs. ☆896 · Nov 26, 2025 · Updated 5 months ago
- BitPack is a practical tool to efficiently save ultra-low-precision and mixed-precision quantized models. ☆58 · Feb 7, 2023 · Updated 3 years ago
- A list of papers, docs, and code about model quantization. This repo aims to provide information for model quantization research; we are co… ☆2,360 · Apr 25, 2026 · Updated last week
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization ☆718 · Aug 13, 2024 · Updated last year
- Official implementation of the ICLR 2024 paper AffineQuant ☆30 · Mar 30, 2024 · Updated 2 years ago
- ☆36 · Mar 29, 2023 · Updated 3 years ago
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring ☆277 · Jul 6, 2025 · Updated 9 months ago
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆3,521 · Jul 17, 2025 · Updated 9 months ago
- SQuant [ICLR 2022] ☆131 · Sep 27, 2022 · Updated 3 years ago
- ☆192 · Jan 14, 2025 · Updated last year
- The official PyTorch implementation for the paper "Towards Accurate Post-training Quantization for Diffusion Models" (CVPR24 Poste… ☆38 · Jun 4, 2024 · Updated last year
- ☆18 · Mar 18, 2024 · Updated 2 years ago