Official implementation of the ICLR 2024 paper AffineQuant
☆30 · Mar 30, 2024 · Updated 2 years ago
Alternatives and similar repositories for AffineQuant
Users interested in AffineQuant are comparing it to the libraries listed below.
- Code for the NeurIPS 2024 paper QuaRot, an end-to-end 4-bit inference of large language models. ☆506 · Nov 26, 2024 · Updated last year
- [ICML 2025] Official PyTorch implementation of "FlatQuant: Flatness Matters for LLM Quantization". ☆214 · Nov 25, 2025 · Updated 5 months ago
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization. ☆171 · Nov 26, 2025 · Updated 5 months ago
- ☆25 · Oct 31, 2024 · Updated last year
- Code repo for the paper "SpinQuant: LLM quantization with learned rotations". ☆390 · Feb 14, 2025 · Updated last year
- [ICLR 2024 Spotlight] OmniQuant is a simple and powerful quantization technique for LLMs. ☆896 · Nov 26, 2025 · Updated 5 months ago
- IndexCache: Accelerating Sparse Attention via Cross-Layer Index Reuse. ☆96 · Mar 14, 2026 · Updated last month
- ☆36 · Mar 29, 2023 · Updated 3 years ago
- ☆12 · Aug 26, 2022 · Updated 3 years ago
- [EMNLP 2024] RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization. ☆39 · Sep 24, 2024 · Updated last year
- Reorder-based post-training quantization for large language models. ☆199 · May 17, 2023 · Updated 2 years ago
- ☆34 · Mar 28, 2025 · Updated last year
- ☆28 · Nov 5, 2021 · Updated 4 years ago
- [ICML 2024 Oral] Official implementation of our paper "Accurate LoRA-Finetuning Quantization of LLMs via Information Retenti…". ☆65 · Apr 15, 2024 · Updated 2 years ago
- [CVPR 2023] PD-Quant: Post-Training Quantization Based on Prediction Difference Metric. ☆61 · Mar 23, 2023 · Updated 3 years ago
- ☆30 · Jul 22, 2024 · Updated last year
- Official PyTorch implementation of the paper "Towards Accurate Post-training Quantization for Diffusion Models" (CVPR24 Poste…). ☆38 · Jun 4, 2024 · Updated last year
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration. ☆3,521 · Jul 17, 2025 · Updated 9 months ago
- [EMNLP 2024 & AAAI 2026] A powerful toolkit for compressing large models, including LLMs, VLMs, and video generative models. ☆711 · Apr 1, 2026 · Updated last month
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving. ☆338 · Jul 2, 2024 · Updated last year
- Official implementation of the EMNLP 2023 paper "Outlier Suppression+: Accurate quantization of large language models by equivalent and opti…". ☆51 · Oct 21, 2023 · Updated 2 years ago
- [ICLR 2025, IEEE TPAMI 2026] Mixture Compressor & MC#. ☆73 · Feb 12, 2025 · Updated last year
- [ICCV 2023] Q-Diffusion: Quantizing Diffusion Models. ☆373 · Mar 21, 2024 · Updated 2 years ago
- [ICLR 2025] OSTQuant: Refining Large Language Model Quantization with Orthogonal and Scaling Transformations for Better Distribution Fitt… ☆92 · Apr 8, 2025 · Updated last year
- [ACL 2025 Main] EfficientQAT: Efficient Quantization-Aware Training for Large Language Models. ☆339 · Apr 10, 2026 · Updated 3 weeks ago
- Code for the AAAI 2024 Oral paper "OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Model…". ☆69 · Mar 7, 2024 · Updated 2 years ago
- Scaling Sparse Fine-Tuning to Large Language Models. ☆19 · Jan 31, 2024 · Updated 2 years ago
- 🎓 Automatically updates circult-eda-mlsys-tinyml papers daily using GitHub Actions (updates every 8 hours). ☆10 · Updated this week
- ☆14 · Mar 5, 2024 · Updated 2 years ago
- KV cache compression via sparse coding. ☆17 · Oct 26, 2025 · Updated 6 months ago
- [NeurIPS 2024 Oral🔥] DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs. ☆180 · Apr 24, 2026 · Updated last week
- For releasing code related to compression methods for transformers, accompanying our publications. ☆461 · Jan 16, 2025 · Updated last year
- Instruction Following Eval. ☆17 · Jan 16, 2025 · Updated last year
- [ICML 2025] SliM-LLM: Salience-Driven Mixed-Precision Quantization for Large Language Models. ☆59 · Aug 9, 2024 · Updated last year
- [ICCAD 2025] Squant. ☆15 · Jul 3, 2025 · Updated 10 months ago
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models. ☆23 · Mar 15, 2024 · Updated 2 years ago
- [ICML 2025] Fast and Low-Cost Genomic Foundation Models via Outlier Removal. ☆18 · Jun 19, 2025 · Updated 10 months ago
- ACL 2023. ☆39 · Jun 6, 2023 · Updated 2 years ago
- [ICLR'25] ARB-LLM: Alternating Refined Binarizations for Large Language Models. ☆29 · Aug 5, 2025 · Updated 9 months ago