An easy-to-use package for implementing SmoothQuant for LLMs
☆111 · Updated Apr 7, 2025
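At its core, SmoothQuant migrates activation outliers into the weights through a per-input-channel scale so that both activations and weights quantize well. Below is a minimal sketch of that smoothing transform, assuming a calibrated per-channel activation abs-max; the function name and shapes are illustrative, not AutoSmoothQuant's actual API:

```python
import torch

def smooth_linear(x_absmax: torch.Tensor, weight: torch.Tensor, alpha: float = 0.5):
    """Sketch of SmoothQuant's smoothing (ICML 2023), not AutoSmoothQuant's API.

    Per input channel j, pick s_j = max|X_j|^alpha / max|W_j|^(1 - alpha),
    then scale activations down and weights up by s; X @ W.T is unchanged,
    but activation outliers shrink, making INT8 activation quantization easier.

    x_absmax: per-channel activation abs-max from calibration, shape [in_features]
    weight:   weight of a torch.nn.Linear, shape [out_features, in_features]
    """
    w_absmax = weight.abs().amax(dim=0).clamp(min=1e-5)   # max |W| per input channel
    s = x_absmax.clamp(min=1e-5).pow(alpha) / w_absmax.pow(1.0 - alpha)
    smoothed_weight = weight * s                          # fold s into the weight
    # At inference time, the division of activations by s is typically fused
    # into the preceding LayerNorm's scale rather than applied explicitly.
    return s, smoothed_weight
```

The migration strength `alpha` trades off difficulty between the two sides: 0.5 splits it evenly, while models with harsher activation outliers tend to need a larger value.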
Alternatives and similar repositories for AutoSmoothQuant
Users interested in AutoSmoothQuant are comparing it to the libraries listed below.
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs. ☆155 · Updated Aug 21, 2025
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆23 · Updated Mar 15, 2024
- FP16×INT4 LLM inference kernel achieving near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆1,061 · Updated Sep 4, 2024
- Official repository for Flash Local Linear Attention ☆23 · Updated Apr 23, 2026
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving ☆337 · Updated Jul 2, 2024
- [COLM 2025] DFRot: Achieving Outlier-Free and Massive Activation-Free for Rotated LLMs with Refined Rotation; Zhihu: https://zhuanlan.zhihu.c… ☆30 · Updated Mar 5, 2025
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ☆834 · Updated Mar 6, 2025
- [ICLR 2024 spotlight] OmniQuant is a simple and powerful quantization technique for LLMs. ☆891 · Updated Nov 26, 2025
- ☆66 · Updated Apr 26, 2025
- Boosting 4-bit inference kernels with 2:4 sparsity ☆95 · Updated Sep 4, 2024
- Code implementation of GPTAQ (https://arxiv.org/abs/2504.02692) ☆89 · Updated Jul 28, 2025
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆1,641 · Updated Jul 12, 2024
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization ☆171 · Updated Nov 26, 2025
- Code for the NeurIPS 2024 paper: QuaRot, end-to-end 4-bit inference for large language models. ☆505 · Updated Nov 26, 2024
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆11 · Updated Dec 13, 2023
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5). ☆280 · Updated Jul 16, 2025
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2-bit Quantization for KV Cache (a generic asymmetric-quantization sketch follows this list) ☆387 · Updated Nov 20, 2025
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆266 · Updated Dec 4, 2025
- vLLM performance dashboard ☆46 · Updated Apr 26, 2024
- High-performance FP8 GEMM kernels for SM89 and later GPUs. ☆21 · Updated Jan 24, 2025
- Quantized Attention on GPU ☆44 · Updated Nov 22, 2024
- ☆85 · Updated Apr 18, 2025
- GPU operators for sparse tensor operations ☆36 · Updated Mar 11, 2024
- A quantization algorithm for LLMs ☆149 · Updated Jun 21, 2024
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆182 · Updated Jul 12, 2024
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization ☆420 · Updated Aug 13, 2024
- Benchmark tests supporting the TiledCUDA library. ☆18 · Updated Nov 19, 2024
- [COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding ☆279 · Updated Aug 31, 2024
- [EMNLP 2024] Repository for the QUIK project, enabling the use of 4-bit kernels for generative inference ☆186 · Updated Apr 16, 2024
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆3,512 · Updated Jul 17, 2025
- Model Compression Toolbox for Large Language Models and Diffusion Models ☆777 · Updated Aug 14, 2025
- ☆210 · Updated May 5, 2025
- FFPA: extends FlashAttention-2 with Split-D, achieving ~O(1) SRAM complexity for large head dims and 1.8x-3x speedups over SDPA 🎉 ☆276 · Updated this week
- GPTQ inference TVM kernel ☆40 · Updated Apr 25, 2024
- FastCache: Fast Caching for Diffusion Transformer Through Learnable Linear Approximation [Efficient ML Model] ☆49 · Updated this week
- Fast and memory-efficient exact attention ☆21 · Updated Apr 10, 2026
- Multiple GEMM operators built with CUTLASS to support LLM inference. ☆20 · Updated Aug 3, 2025
- ☆33 · Updated Feb 3, 2025
- Standalone Flash Attention v2 kernel without libtorch dependency ☆113 · Updated Sep 10, 2024
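As referenced in the KIVI entry above, low-bit KV-cache methods build on asymmetric (zero-point) quantization: each group of values is mapped to integers via a scale and a zero point so that skewed distributions use the full code range. A generic sketch of group-wise asymmetric quantization to 2 bits follows; the function names are illustrative and this is not any listed repo's exact code:

```python
import torch

def asym_quantize(x: torch.Tensor, bits: int = 2, group_dim: int = -1):
    """Generic asymmetric quantization sketch (not KIVI's implementation).

    Maps each group of values along group_dim to integers in [0, 2^bits - 1]
    using a per-group scale and zero point, and returns the codes plus the
    parameters needed to dequantize.
    """
    qmax = 2 ** bits - 1
    xmin = x.amin(dim=group_dim, keepdim=True)
    xmax = x.amax(dim=group_dim, keepdim=True)
    scale = (xmax - xmin).clamp(min=1e-8) / qmax      # per-group step size
    zero = (-xmin / scale).round()                    # per-group zero point
    q = (x / scale + zero).round().clamp(0, qmax).to(torch.uint8)
    return q, scale, zero

def asym_dequantize(q: torch.Tensor, scale: torch.Tensor, zero: torch.Tensor):
    return (q.float() - zero) * scale                 # approximate reconstruction

# Example: quantize a toy KV-cache tile per token (groups along the last dim)
kv = torch.randn(4, 64)
q, scale, zero = asym_quantize(kv, bits=2)
err = (asym_dequantize(q, scale, zero) - kv).abs().mean()
```

At 2 bits the rounding error per element is large, which is why methods in this space add grouping, channel/token-wise parameter choices, or outlier handling on top of this basic scheme.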