[CVPR 2025] Q-DiT: Accurate Post-Training Quantization for Diffusion Transformers
☆74 · Updated Sep 3, 2024
Alternatives and similar repositories for Q-DiT
Users that are interested in Q-DiT are comparing it to the libraries listed below
- [ICLR'25] ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation (☆151, updated Mar 21, 2025)
- [ICCV 2025] QuEST: Efficient Finetuning for Low-bit Diffusion Models (☆57, updated Jun 26, 2025)
- [ICLR 2024 Spotlight] This is the official PyTorch implementation of "EfficientDM: Efficient Quantization-Aware Fine-Tuning of Low-Bit Diffusion Models" (☆68, updated Jun 4, 2024)
- [ICCV 2023] Q-Diffusion: Quantizing Diffusion Models (☆370, updated Mar 21, 2024)
- TerDiT: Ternary Diffusion Models with Transformers (☆74, updated Jun 17, 2024)
- [ICLR 2025] BinaryDM: Accurate Weight Binarization for Efficient Diffusion Models (☆26, updated Oct 4, 2024)
- [CVPR 2024 Highlight & TPAMI 2025] This is the official PyTorch implementation of "TFMQ-DM: Temporal Feature Maintenance Quantization for Diffusion Models" (☆108, updated Sep 29, 2025)
- DiTAS: Quantizing Diffusion Transformers via Enhanced Activation Smoothing (WACV 2025) (☆12, updated Feb 7, 2026)
- Code for the NeurIPS 2024 paper QuaRot: end-to-end 4-bit inference of large language models (☆487, updated Nov 26, 2024)
- [TMLR] Official PyTorch implementation of the paper "Efficient Quantization-aware Training with Adaptive Coreset Selection" (☆37, updated Aug 20, 2024)
- Official code for Dual Grained Quantization: Efficient Fine-Grained Quantization for LLM (☆14, updated Dec 27, 2023)
- High-performance inference engine for diffusion models (☆105, updated Sep 5, 2025)
- PyTorch implementation of PTQ4DiT (https://arxiv.org/abs/2405.16005) (☆45, updated Nov 8, 2024)
- The official implementation of PTQD: Accurate Post-Training Quantization for Diffusion Models (☆103, updated Mar 12, 2024)
- Code for "Everybody Prune Now: Structured Pruning of LLMs with only Forward Passes" (☆30, updated Mar 28, 2024)
- [ICLR 2025] OSTQuant: Refining Large Language Model Quantization with Orthogonal and Scaling Transformations for Better Distribution Fitting (☆87, updated Apr 8, 2025)
- Model Compression Toolbox for Large Language Models and Diffusion Models (☆761, updated Aug 14, 2025)
- [ICML 2025] Official PyTorch implementation of "FlatQuant: Flatness Matters for LLM Quantization" (☆210, updated Nov 25, 2025)
- (NeurIPS 2024) BiDM: Pushing the Limit of Quantization for Diffusion Models (☆22, updated Nov 20, 2024)
- [ICLR 2025 Spotlight] SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models (☆3,716, updated this week)
- PyTorch implementation of our ICML 2024 paper CaM: Cache Merging for Memory-efficient LLMs Inference (☆47, updated Jun 19, 2024)
- Code repo for the paper "LLM-QAT: Data-Free Quantization Aware Training for Large Language Models" (☆323, updated Mar 4, 2025)
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving (☆336, updated Jul 2, 2024)
- Two Stones Hit One Bird: Bilevel Positional Encoding for Better Length Extrapolation, ICML 2024 (☆22, updated Jun 26, 2024)
- Low-Rank Llama Custom Training (☆23, updated Mar 27, 2024)
- [NeurIPS 2021] "MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge", Geng Yuan, Xiaolong Ma, Yanzhi Wang et al. (☆17, updated Mar 16, 2022)
- [ICLR'25] ARB-LLM: Alternating Refined Binarizations for Large Language Models (☆28, updated Aug 5, 2025)
- [CVPR 2025 Highlight] TinyFusion: Diffusion Transformers Learned Shallow (☆162, updated Dec 1, 2025)
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models (☆1,612, updated Jul 12, 2024)
- [NeurIPS'25 Spotlight🔥] Official implementation of RobustMerge: Parameter-Efficient Model Merging for MLLMs with Direction Robustness (☆59, updated Dec 25, 2025)
- Official implementation for the ECCV 2022 paper LIMPQ, "Mixed-Precision Neural Network Quantization via Learned Layer-wise Importance" (☆61, updated Mar 19, 2023)
- 📚 Collection of awesome generation acceleration resources (☆390, updated Jul 7, 2025)
- Fast Hadamard transform in CUDA, with a PyTorch interface (☆285, updated Oct 19, 2025)
- Code for our ICCV 2025 paper "Adaptive Caching for Faster Video Generation with Diffusion Transformers" (☆167, updated Nov 5, 2024)
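Many of the repositories above implement variants of post-training uniform quantization (e.g., per-group or per-channel weight quantization at 4 bits). As a generic point of reference, here is a minimal NumPy sketch of per-group asymmetric weight quantization; the function name, parameters, and grouping scheme are illustrative only and are not taken from any listed project:

```python
import numpy as np

def quantize_per_group(w, n_bits=4, group_size=64):
    """Asymmetric uniform quantize/dequantize, applied independently to
    each group of `group_size` consecutive weights. Generic sketch, not
    any specific repository's scheme."""
    qmax = 2 ** n_bits - 1
    g = w.reshape(-1, group_size)                 # one row per group
    lo = g.min(axis=1, keepdims=True)
    hi = g.max(axis=1, keepdims=True)
    scale = np.maximum(hi - lo, 1e-8) / qmax      # guard zero-range groups
    zero_point = np.round(-lo / scale)            # maps lo -> integer 0
    q = np.clip(np.round(g / scale) + zero_point, 0, qmax)  # integer codes
    deq = (q - zero_point) * scale                # dequantize back to float
    return deq.reshape(w.shape)

# Quantize a random weight matrix and check the reconstruction error,
# which is bounded by roughly one quantization step per group.
np.random.seed(0)
w = np.random.randn(128, 128).astype(np.float32)
w_hat = quantize_per_group(w, n_bits=4, group_size=64)
err = float(np.abs(w - w_hat).max())
```

Smaller `group_size` tracks local weight ranges more tightly (lower error) at the cost of storing more scale/zero-point pairs; the listed projects differ mainly in how they choose these groups and calibrate the scales.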