Adlik / smoothquantplus
[ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models
☆23 · Updated last year
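SmoothQuant makes post-training INT8 quantization tractable by migrating activation outliers into the weights through a per-input-channel scale that is folded into the preceding weight matrix. The snippet below is a minimal, hedged sketch of that smoothing step under the usual formulation; it is not this repository's actual API, and the names (`smooth_linear`, `alpha`, the calibration statistic) are illustrative assumptions.

```python
import torch

def smooth_linear(act_abs_max: torch.Tensor, weight: torch.Tensor, alpha: float = 0.5):
    """Per-input-channel smoothing in the style of SmoothQuant (sketch, not the repo's code).

    act_abs_max: [in_features] calibration statistic, max |activation| per channel
    weight:      [out_features, in_features] Linear weight
    Returns (scales, smoothed_weight); activations must later be divided by `scales`.
    """
    w_abs_max = weight.abs().amax(dim=0).clamp(min=1e-5)                 # per input channel
    scales = (act_abs_max.clamp(min=1e-5) ** alpha) / (w_abs_max ** (1 - alpha))
    smoothed_weight = weight * scales                                     # fold scales into W
    return scales, smoothed_weight

# Usage sketch: (x / s) @ (W * s).T equals x @ W.T mathematically, but the scaled
# activations have smaller outliers and therefore quantize better to INT8.
x = torch.randn(4, 16)
w = torch.randn(8, 16)
s, w_s = smooth_linear(x.abs().amax(dim=0), w, alpha=0.5)
assert torch.allclose((x / s) @ w_s.T, x @ w.T, atol=1e-4)
```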
Alternatives and similar repositories for smoothquantplus
Users interested in smoothquantplus are comparing it to the libraries listed below
- An easy-to-use package for implementing SmoothQuant for LLMs ☆100 · Updated 2 months ago
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs. ☆128 · Updated 2 months ago
- ☆75 · Updated 5 months ago
- Official implementation of the ICLR 2024 paper AffineQuant ☆26 · Updated last year
- [ICML 2025] Official PyTorch implementation of "FlatQuant: Flatness Matters for LLM Quantization" ☆137 · Updated last month
- Summary of system papers/frameworks/codes/tools on training or serving large models ☆57 · Updated last year
- [ACL 2024] A novel QAT with Self-Distillation framework to enhance ultra low-bit LLMs. ☆115 · Updated last year
- Code implementation of GPTAQ (https://arxiv.org/abs/2504.02692) ☆47 · Updated 3 weeks ago
- QAQ: Quality Adaptive Quantization for LLM KV Cache ☆51 · Updated last year
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization ☆136 · Updated last month
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆118 · Updated last year
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆79 · Updated 9 months ago
- Odysseus: Playground of LLM Sequence Parallelism ☆70 · Updated last year
- AFPQ code implementation ☆21 · Updated last year
- An innovative method expediting LLMs via streamlined semi-autoregressive generation and draft verification. ☆26 · Updated 2 months ago
- ☆77 · Updated 2 months ago
- Official implementation of the EMNLP23 paper: Outlier Suppression+: Accurate quantization of large language models by equivalent and opti… ☆46 · Updated last year
- [NeurIPS 2024] The official implementation of "Kangaroo: Lossless Self-Speculative Decoding for Accelerating LLMs via Double Early Exitin… ☆57 · Updated 11 months ago
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆38 · Updated 3 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆99 · Updated 3 weeks ago
- ☆86 · Updated 2 months ago
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving ☆310 · Updated 11 months ago
- ☆96 · Updated 9 months ago
- A quantization algorithm for LLMs ☆141 · Updated last year
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆42 · Updated last year
- This repository contains integer operators on GPUs for PyTorch. ☆205 · Updated last year
- Reorder-based post-training quantization for large language models ☆191 · Updated 2 years ago
- Decoding Attention is specially optimized for MHA, MQA, GQA and MLA, using CUDA cores for the decoding stage of LLM inference. ☆38 · Updated 2 weeks ago
- ☆194 · Updated last month
- ☆20 · Updated last year