Official implementation of the ICLR 2024 paper AffineQuant
☆28, updated Mar 30, 2024
Alternatives and similar repositories for AffineQuant
Users interested in AffineQuant are comparing it to the repositories listed below.
- Code for the NeurIPS 2024 paper: QuaRot, end-to-end 4-bit inference for large language models. (☆492, updated Nov 26, 2024)
- [TMLR] Official PyTorch implementation of the paper "Efficient Quantization-aware Training with Adaptive Coreset Selection" (☆38, updated Aug 20, 2024)
- [ICML 2025] Official PyTorch implementation of "FlatQuant: Flatness Matters for LLM Quantization" (☆211, updated Nov 25, 2025)
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization (☆172, updated Nov 26, 2025)
- ☆25, updated Oct 31, 2024
- Code repo for the paper "SpinQuant: LLM quantization with learned rotations" (☆380, updated Feb 14, 2025)
- [ICLR 2024 Spotlight] OmniQuant is a simple and powerful quantization technique for LLMs. (☆892, updated Nov 26, 2025)
- ☆36, updated Mar 29, 2023
- ☆12, updated Aug 26, 2022
- [EMNLP 2024] RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization (☆38, updated Sep 24, 2024)
- Reorder-based post-training quantization for large language models (☆199, updated May 17, 2023)
- ☆34, updated Mar 28, 2025
- ☆28, updated Nov 5, 2021
- [CVPR 2023] PD-Quant: Post-Training Quantization Based on Prediction Difference Metric (☆60, updated Mar 23, 2023)
- [ICML 2024 Oral] Official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retenti… (☆67, updated Apr 15, 2024)
- ☆30, updated Jul 22, 2024
- Official PyTorch implementation for the paper "Towards Accurate Post-training Quantization for Diffusion Models" (CVPR24 Poste… (☆38, updated Jun 4, 2024)
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration (☆3,469, updated Jul 17, 2025)
- [EMNLP 2024 & AAAI 2026] A powerful toolkit for compressing large models, including LLMs, VLMs, and video generative models. (☆691, updated Mar 11, 2026)
- ☆15, updated Sep 24, 2023
- [ICLR 2025] OSTQuant: Refining Large Language Model Quantization with Orthogonal and Scaling Transformations for Better Distribution Fitt… (☆88, updated Apr 8, 2025)
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving (☆336, updated Jul 2, 2024)
- [ICLR 2025, IEEE TPAMI 2026] Mixture Compressor & MC# (☆69, updated Feb 12, 2025)
- Official implementation of the EMNLP 2023 paper "Outlier Suppression+: Accurate quantization of large language models by equivalent and opti… (☆51, updated Oct 21, 2023)
- [ICCV 2023] Q-Diffusion: Quantizing Diffusion Models (☆371, updated Mar 21, 2024)
- [ACL 2025 Main] EfficientQAT: Efficient Quantization-Aware Training for Large Language Models (☆333, updated Nov 26, 2025)
- Code for the AAAI 2024 Oral paper "OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Model… (☆69, updated Mar 7, 2024)
- Scaling Sparse Fine-Tuning to Large Language Models (☆18, updated Jan 31, 2024)
- [ICML 2025] Fast and Low-Cost Genomic Foundation Models via Outlier Removal (☆18, updated Jun 19, 2025)
- ☆14, updated Mar 5, 2024
- 🎓 Automatically updated list of circult-eda-mlsys-tinyml papers, refreshed every 8 hours via GitHub Actions (☆10, updated this week)
- [NeurIPS 2024 Oral🔥] DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs (☆179, updated Oct 3, 2024)
- KV cache compression via sparse coding (☆17, updated Oct 26, 2025)
- Code related to compression methods for transformers, accompanying our publications (☆455, updated Jan 16, 2025)
- Code repo for the paper "Say More with Less: Understanding Prompt Learning Behaviors through Gist Compression" (☆12, updated Feb 27, 2024)
- [ICML 2025] SliM-LLM: Salience-Driven Mixed-Precision Quantization for Large Language Models (☆55, updated Aug 9, 2024)
- [ICCAD 2025] Squant (☆15, updated Jul 3, 2025)
- Opportunity detector and automated trading in a crypto exchange market (☆11, updated Oct 18, 2017)
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models (☆23, updated Mar 15, 2024)
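Many of the repositories above target low-bit weight quantization of LLMs (e.g. the W4A4/W4A8 settings mentioned for QuaRot and Atom). As a minimal sketch of the shared underlying idea, and not the method of any particular project listed here, the snippet below shows naive symmetric per-channel round-to-nearest INT4 weight quantization in NumPy; the function names are hypothetical.

```python
import numpy as np

def quantize_int4(w: np.ndarray):
    """Naive symmetric per-output-channel round-to-nearest INT4 quantization.

    Each row (output channel) gets one FP scale; the signed 4-bit
    integer range is [-8, 7], so scales map the row max-magnitude to 7.
    """
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0
    scale = np.where(scale == 0.0, 1.0, scale)  # avoid divide-by-zero on all-zero rows
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)  # int8 storage for int4 values
    return q, scale

def dequantize_int4(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Recover an FP32 approximation of the original weights."""
    return q.astype(np.float32) * scale
```

Methods in the list above (rotation-based approaches like QuaRot/SpinQuant/DuQuant, scaling-based ones like SmoothQuant/OmniQuant) largely differ in how they transform weights and activations *before* a rounding step of this kind, so that outliers do not inflate the per-channel scales.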