Juanerx / Q-DiT
[CVPR 2025] Q-DiT: Accurate Post-Training Quantization for Diffusion Transformers
☆74 · Sep 3, 2024 · Updated last year
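Q-DiT targets post-training quantization (PTQ) of diffusion transformers. As background for readers new to the area, the sketch below shows generic symmetric per-channel int8 weight quantization, the baseline idea that PTQ methods refine; it is not Q-DiT's algorithm, and all function names here are illustrative.

```python
import numpy as np

def quantize_per_channel_int8(w: np.ndarray):
    """Symmetric per-output-channel int8 weight quantization.

    Returns the int8 tensor and one float32 scale per output channel (row).
    """
    # One scale per row, so a single outlier channel does not
    # inflate the quantization step size of every other channel.
    scales = np.abs(w).max(axis=1, keepdims=True) / 127.0
    scales = np.maximum(scales, 1e-8)  # guard against all-zero rows
    q = np.clip(np.round(w / scales), -127, 127).astype(np.int8)
    return q, scales.astype(np.float32)

def dequantize(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    # Reconstruct an approximation of the original weights.
    return q.astype(np.float32) * scales

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 16)).astype(np.float32)
q, s = quantize_per_channel_int8(w)
# Rounding error is bounded by half a quantization step per channel.
err = np.abs(dequantize(q, s) - w).max()
print(q.dtype, float(err))
```

Per-channel (rather than per-tensor) scales are the usual starting point for transformer weights because channel magnitudes vary widely; methods like Q-DiT go further with finer-grained grouping and activation handling.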
Alternatives and similar repositories for Q-DiT
Users interested in Q-DiT are comparing it to the libraries listed below.
- [ICLR'25] ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation ☆149 · Mar 21, 2025 · Updated 10 months ago
- [ICCV 2025] QuEST: Efficient Finetuning for Low-bit Diffusion Models ☆55 · Jun 26, 2025 · Updated 7 months ago
- [ICLR 2024 Spotlight] This is the official PyTorch implementation of "EfficientDM: Efficient Quantization-Aware Fine-Tuning of Low-Bit Di… ☆68 · Jun 4, 2024 · Updated last year
- ☆190 · Jan 14, 2025 · Updated last year
- [ICLR 2025] Linear Combination of Saved Checkpoints Makes Consistency and Diffusion Models Better ☆16 · Feb 15, 2025 · Updated last year
- ☆15 · Mar 21, 2025 · Updated 10 months ago
- [ICCV 2023] Q-Diffusion: Quantizing Diffusion Models ☆370 · Mar 21, 2024 · Updated last year
- TerDiT: Ternary Diffusion Models with Transformers ☆74 · Jun 17, 2024 · Updated last year
- (ICLR 2025) BinaryDM: Accurate Weight Binarization for Efficient Diffusion Models ☆26 · Oct 4, 2024 · Updated last year
- AFPQ code implementation ☆23 · Nov 6, 2023 · Updated 2 years ago
- [CVPR 2024 Highlight & TPAMI 2025] This is the official PyTorch implementation of "TFMQ-DM: Temporal Feature Maintenance Quantization for… ☆108 · Sep 29, 2025 · Updated 4 months ago
- DiTAS: Quantizing Diffusion Transformers via Enhanced Activation Smoothing (WACV 2025) ☆12 · Feb 7, 2026 · Updated last week
- Code for the NeurIPS 2024 paper "QuaRot", an end-to-end 4-bit inference of large language models ☆482 · Nov 26, 2024 · Updated last year
- Official code for Dual Grained Quantization: Efficient Fine-Grained Quantization for LLM ☆14 · Dec 27, 2023 · Updated 2 years ago
- [TMLR] Official PyTorch implementation of the paper "Efficient Quantization-aware Training with Adaptive Coreset Selection" ☆37 · Aug 20, 2024 · Updated last year
- High-performance inference engine for diffusion models ☆105 · Sep 5, 2025 · Updated 5 months ago
- PyTorch implementation of PTQ4DiT https://arxiv.org/abs/2405.16005 ☆45 · Nov 8, 2024 · Updated last year
- The official implementation of PTQD: Accurate Post-Training Quantization for Diffusion Models ☆103 · Mar 12, 2024 · Updated last year
- BESA is a differentiable weight pruning technique for large language models ☆17 · Mar 4, 2024 · Updated last year
- Code for "Everybody Prune Now: Structured Pruning of LLMs with only Forward Passes" ☆30 · Mar 28, 2024 · Updated last year
- [ICLR 2025] OSTQuant: Refining Large Language Model Quantization with Orthogonal and Scaling Transformations for Better Distribution Fitt… ☆88 · Apr 8, 2025 · Updated 10 months ago
- [ICCV 2025] QuantCache: Adaptive Importance-Guided Quantization with Hierarchical Latent and Layer Caching for Video Generation ☆15 · Sep 26, 2025 · Updated 4 months ago
- Model Compression Toolbox for Large Language Models and Diffusion Models ☆753 · Aug 14, 2025 · Updated 6 months ago
- [ICML 2025] Official PyTorch implementation of "FlatQuant: Flatness Matters for LLM Quantization" ☆211 · Nov 25, 2025 · Updated 2 months ago
- (NeurIPS 2024) BiDM: Pushing the Limit of Quantization for Diffusion Models ☆22 · Nov 20, 2024 · Updated last year
- The official implementation of "EDA-DM: Enhanced Distribution Alignment for Post-Training Quantization of Diffusion Models" ☆21 · Jul 8, 2025 · Updated 7 months ago
- PyTorch implementation of our ICML 2024 paper, CaM: Cache Merging for Memory-efficient LLMs Inference ☆49 · Jun 19, 2024 · Updated last year
- Code repo for the paper "LLM-QAT: Data-Free Quantization Aware Training for Large Language Models" ☆322 · Mar 4, 2025 · Updated 11 months ago
- ☆23 · Nov 26, 2024 · Updated last year
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving ☆336 · Jul 2, 2024 · Updated last year
- Study of CUTLASS ☆22 · Nov 10, 2024 · Updated last year
- Low-Rank Llama Custom Training ☆23 · Mar 27, 2024 · Updated last year
- [NeurIPS 2021] "MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge", Geng Yuan, Xiaolong Ma, Yanzhi Wang et al… ☆18 · Mar 16, 2022 · Updated 3 years ago
- The official repository of the paper "ScaleLong: Towards More Stable Training of Diffusion Model via Scaling Network Long Skip Connection" (N… ☆50 · Oct 23, 2023 · Updated 2 years ago
- [ICLR'25] ARB-LLM: Alternating Refined Binarizations for Large Language Models ☆28 · Aug 5, 2025 · Updated 6 months ago
- [ICLR 2026] Official implementation of DiCache: Let Diffusion Model Determine Its Own Cache ☆55 · Jan 26, 2026 · Updated 2 weeks ago
- [CVPR 2025 Highlight] TinyFusion: Diffusion Transformers Learned Shallow ☆160 · Dec 1, 2025 · Updated 2 months ago
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆1,607 · Jul 12, 2024 · Updated last year
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization ☆172 · Nov 26, 2025 · Updated 2 months ago