[ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models
☆23 · Mar 15, 2024 · Updated 2 years ago
Alternatives and similar repositories for smoothquantplus
Users interested in smoothquantplus are comparing it to the libraries listed below.
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆12 · Nov 14, 2025 · Updated 4 months ago
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆11 · Dec 13, 2023 · Updated 2 years ago
- ☆30 · Jul 22, 2024 · Updated last year
- An easy-to-use package for implementing SmoothQuant for LLMs ☆111 · Apr 7, 2025 · Updated 11 months ago
- ☆87 · Jan 23, 2025 · Updated last year
- QAQ: Quality Adaptive Quantization for LLM KV Cache ☆53 · Mar 27, 2024 · Updated last year
- Zen-NAS, a lightning-fast, training-free Neural Architecture Search algorithm ☆11 · Nov 12, 2021 · Updated 4 years ago
- Model optimizer used in Adlik ☆42 · May 23, 2023 · Updated 2 years ago
- MUA-RL: Multi-Turn User-Interacting Agent Reinforcement Learning for Agentic Tool Use ☆58 · Nov 5, 2025 · Updated 4 months ago
- ☆21 · Feb 5, 2024 · Updated 2 years ago
- ☆11 · Dec 26, 2025 · Updated 2 months ago
- ☆38 · Jun 14, 2023 · Updated 2 years ago
- [ICLRW'26] EoRA: Fine-tuning-free Compensation for Compressed LLM with Eigenspace Low-Rank Approximation ☆29 · Mar 16, 2026 · Updated last week
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving ☆336 · Jul 2, 2024 · Updated last year
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization ☆172 · Nov 26, 2025 · Updated 3 months ago
- [ICLR 2024] The official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Mod… ☆39 · Mar 11, 2024 · Updated 2 years ago
- The official implementation of the EMNLP 2023 paper LLM-FP4 ☆222 · Dec 15, 2023 · Updated 2 years ago
- Incredible acceleration with pruning and other compression techniques ☆13 · Jul 7, 2021 · Updated 4 years ago
- A simple thread pool implemented in C++, supporting a task queue; actual tasks inherit from taskbase ☆12 · Apr 15, 2015 · Updated 10 years ago
- Inferflow is an efficient and highly configurable inference engine for large language models (LLMs) ☆250 · Mar 15, 2024 · Updated 2 years ago
- ☆27 · Jul 30, 2024 · Updated last year
- Open-source evaluation toolkit for large vision-language models (LVLMs), supporting ~100 VLMs and 30+ benchmarks ☆15 · Feb 17, 2025 · Updated last year
- Implemented a script that automatically adjusts Qwen3's inference and non-inference capabilities, based on an OpenAI-like API. The infere… ☆21 · May 9, 2025 · Updated 10 months ago
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees", adapted for Llama models ☆41 · Aug 4, 2023 · Updated 2 years ago
- A reproduction of the paper "Aligner: Achieving Efficient Alignment through Weak-to-Strong Correction" ☆22 · May 29, 2024 · Updated last year
- LLM quantization toolkit ☆19 · Jul 4, 2025 · Updated 8 months ago
- Official implementation of the ICLR 2024 paper AffineQuant ☆28 · Mar 30, 2024 · Updated last year
- ☆19 · Feb 18, 2025 · Updated last year
- ☆33 · Jan 30, 2026 · Updated last month
- ☆16 · Jan 14, 2025 · Updated last year
- Patches for Hugging Face Transformers to save memory ☆35 · Jun 2, 2025 · Updated 9 months ago
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆120 · Mar 6, 2024 · Updated 2 years ago
- A general 2-8-bit quantization toolbox with GPTQ/AWQ/HQQ/VPTQ, with easy export to ONNX/ONNX Runtime ☆187 · Updated this week
- A lightweight event bus for C++ ☆16 · Sep 14, 2018 · Updated 7 years ago
- The code of the paper "HAM: Hidden Anchor Mechanism for Scene Text Detection" ☆11 · Sep 22, 2020 · Updated 5 years ago
- Hands-on LLM deployment: TensorRT-LLM, Triton Inference Server, vLLM ☆27 · Feb 26, 2024 · Updated 2 years ago
- ☆29 · Feb 3, 2026 · Updated last month
- [COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding ☆277 · Aug 31, 2024 · Updated last year
- A process runner for Procfile-based applications ☆16 · Jul 13, 2024 · Updated last year