[ICCV 2023] Q-Diffusion: Quantizing Diffusion Models.
☆371 · Updated Mar 21, 2024
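Most repositories on this page implement post-training quantization (PTQ) for diffusion or language models. As background, here is a minimal, hypothetical sketch of symmetric uniform quantization, the basic operation these methods build on; the function names are illustrative and not taken from any listed repo.

```python
def quantize(weights, n_bits=8):
    """Symmetric uniform quantization: map floats to signed n-bit integers."""
    qmax = 2 ** (n_bits - 1) - 1                      # e.g. 127 for 8-bit
    scale = max(abs(w) for w in weights) / qmax or 1.0
    q = [max(-qmax - 1, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the integer codes."""
    return [v * scale for v in q]

w = [0.5, -1.0, 0.25, 0.0]
q, s = quantize(w, n_bits=4)      # 4-bit codes in [-8, 7]
w_hat = dequantize(q, s)          # approximation of w, error bounded by s/2
```

PTQ methods such as those listed below differ mainly in how they choose the scale (and rounding) per layer or per timestep to minimize this reconstruction error without retraining.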
Alternatives and similar repositories for q-diffusion
Users interested in q-diffusion are comparing it to the repositories listed below.
- The official implementation of PTQD: Accurate Post-Training Quantization for Diffusion Models · ☆103 · Updated Mar 12, 2024
- Implementation of Post-training Quantization on Diffusion Models (CVPR 2023) · ☆142 · Updated Apr 1, 2023
- [ICCV 2025] QuEST: Efficient Finetuning for Low-bit Diffusion Models · ☆58 · Updated Jun 26, 2025
- [CVPR 2024 Highlight & TPAMI 2025] This is the official PyTorch implementation of "TFMQ-DM: Temporal Feature Maintenance Quantization for… · ☆108 · Updated Sep 29, 2025
- [ICLR 2024 Spotlight] This is the official PyTorch implementation of "EfficientDM: Efficient Quantization-Aware Fine-Tuning of Low-Bit Di… · ☆69 · Updated Jun 4, 2024
- PyTorch implementation of BRECQ, ICLR 2021 · ☆296 · Updated Aug 1, 2021
- [ICLR'25] ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation · ☆156 · Updated Mar 21, 2025
- [CVPR 2024] DeepCache: Accelerating Diffusion Models for Free · ☆964 · Updated Jun 27, 2024
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models · ☆1,632 · Updated Jul 12, 2024
- [NeurIPS 2023] Structural Pruning for Diffusion Models · ☆218 · Updated Jul 8, 2024
- The official PyTorch implementation of the ICLR 2022 paper, QDrop: Randomly Dropping Quantization for Extremely Low-bit Post-Training Quan… · ☆129 · Updated Sep 23, 2025
- Model Compression Toolbox for Large Language Models and Diffusion Models · ☆774 · Updated Aug 14, 2025
- [CVPR 2025] Q-DiT: Accurate Post-Training Quantization for Diffusion Transformers · ☆77 · Updated Sep 3, 2024
- [CVPR 2024 Highlight] DistriFusion: Distributed Parallel Inference for High-Resolution Diffusion Models · ☆727 · Updated Dec 2, 2024
- [ECCV24] MixDQ: Memory-Efficient Few-Step Text-to-Image Diffusion Models with Metric-Decoupled Mixed Precision Quantization · ☆49 · Updated Nov 27, 2024
- This repository contains integer operators on GPUs for PyTorch. · ☆236 · Updated Sep 29, 2023
- Code for the NeurIPS 2024 paper QuaRot: end-to-end 4-bit inference of large language models. · ☆501 · Updated Nov 26, 2024
- A Compressed Stable Diffusion for Efficient Text-to-Image Generation [ECCV'24] · ☆315 · Updated Jul 6, 2024
- [CVPR 2023] PD-Quant: Post-Training Quantization Based on Prediction Difference Metric · ☆61 · Updated Mar 23, 2023
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". · ☆2,287 · Updated Mar 27, 2024
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… · ☆826 · Updated Mar 6, 2025
- The official implementation of "EDA-DM: Enhanced Distribution Alignment for Post-Training Quantization of Diffusion Models" · ☆21 · Updated Jul 8, 2025
- Segmind Distilled diffusion · ☆618 · Updated Oct 18, 2023
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving · ☆336 · Updated Jul 2, 2024
- ☆171 · Updated Mar 9, 2023
- Tiny optimized Stable-diffusion that can run on GPUs with just 1 GB of VRAM. (Beta) · ☆182 · Updated Jul 20, 2023
- List of papers related to neural network quantization in recent AI conferences and journals. · ☆814 · Updated Mar 27, 2025
- [ICLR 2024 Spotlight] OmniQuant is a simple and powerful quantization technique for LLMs. · ☆892 · Updated Nov 26, 2025
- BitPack is a practical tool to efficiently save ultra-low-precision/mixed-precision quantized models. · ☆58 · Updated Feb 7, 2023
- A list of papers, docs, and code about model quantization. This repo aims to provide info for model quantization research; we are co… · ☆2,343 · Updated Apr 5, 2026
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization · ☆717 · Updated Aug 13, 2024
- Official implementation of the ICLR 2024 paper AffineQuant · ☆29 · Updated Mar 30, 2024
- ☆36 · Updated Mar 29, 2023
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring · ☆276 · Updated Jul 6, 2025
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration · ☆3,498 · Updated Jul 17, 2025
- SQuant [ICLR'22] · ☆131 · Updated Sep 27, 2022
- ☆191 · Updated Jan 14, 2025
- This is the official PyTorch implementation for the paper "Towards Accurate Post-training Quantization for Diffusion Models" (CVPR24 Poste… · ☆38 · Updated Jun 4, 2024
- ☆18 · Updated Mar 18, 2024