casper-hansen / AutoAWQ_kernels
☆52 · Updated this week
Related projects
Alternatives and complementary repositories for AutoAWQ_kernels
- A general 2-8 bit quantization toolbox with GPTQ/AWQ/HQQ and easy export to ONNX/ONNX Runtime. ☆150 · Updated last month
- An algorithm for static activation quantization of LLMs ☆77 · Updated 2 weeks ago
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5). ☆209 · Updated 3 weeks ago
- An easy-to-use package for implementing SmoothQuant for LLMs ☆83 · Updated 6 months ago
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆51 · Updated 2 months ago
- ☆47 · Updated 2 months ago
- ☆55 · Updated 5 months ago
- ☆114 · Updated 7 months ago
- ☆158 · Updated last month
- Materials for learning SGLang ☆105 · Updated this week
- Odysseus: Playground of LLM Sequence Parallelism ☆57 · Updated 5 months ago
- [ACL 2024] A novel quantization-aware training (QAT) framework with self-distillation to enhance ultra-low-bit LLMs. ☆84 · Updated 6 months ago
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs. ☆89 · Updated last month
- ☆79 · Updated 2 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆53 · Updated 3 weeks ago
- Production-ready LLM compression/quantization toolkit with accelerated inference support for both CPU and GPU via HF, vLLM, and SGLang. ☆125 · Updated this week
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving ☆278 · Updated 4 months ago
- ☆64 · Updated 3 months ago
- Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆79 · Updated this week
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization ☆305 · Updated 3 months ago
- ☆188 · Updated 6 months ago
- KV cache compression for high-throughput LLM inference ☆87 · Updated this week
- ☆88 · Updated 2 months ago
- Summary of system papers/frameworks/code/tools for training or serving large models ☆56 · Updated 11 months ago
- Official PyTorch implementation of FlatQuant: Flatness Matters for LLM Quantization ☆63 · Updated last week
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆34 · Updated 8 months ago
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache (see the asymmetric quantization sketch after this list) ☆241 · Updated last month
- EfficientQAT: Efficient Quantization-Aware Training for Large Language Models ☆226 · Updated last month
- Implementation of Speculative Sampling as described in "Accelerating Large Language Model Decoding with Speculative Sampling" by DeepMind (see the speculative sampling sketch after this list) ☆82 · Updated 8 months ago
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆147 · Updated 4 months ago
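
Several of the KV-cache entries above (KIVI in particular) rely on asymmetric low-bit quantization. The snippet below is a minimal NumPy sketch of generic asymmetric quantize/dequantize along one axis under toy assumptions; the function names and shapes are illustrative and are not the KIVI repository's API, which quantizes keys per channel and values per token with fused CUDA kernels.

```python
import numpy as np

def quantize_asym(x, bits=2, axis=-1):
    """Asymmetric uniform quantization: map [min, max] along `axis` to [0, 2^bits - 1]."""
    qmax = 2 ** bits - 1
    xmin = x.min(axis=axis, keepdims=True)
    xmax = x.max(axis=axis, keepdims=True)
    scale = (xmax - xmin) / qmax
    scale = np.where(scale == 0, 1.0, scale)  # guard against constant slices
    q = np.clip(np.round((x - xmin) / scale), 0, qmax).astype(np.uint8)
    return q, scale, xmin

def dequantize_asym(q, scale, xmin):
    """Reconstruct an approximation of the original tensor."""
    return q.astype(np.float32) * scale + xmin

# Toy usage: a fake key cache of shape (tokens, head_dim), quantized per channel.
keys = np.random.randn(8, 16).astype(np.float32)
q, scale, xmin = quantize_asym(keys, bits=2, axis=0)
print("mean abs error:", np.abs(keys - dequantize_asym(q, scale, xmin)).mean())
```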
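
The speculative sampling entry follows DeepMind's accept/reject scheme: a cheap draft model proposes a token, the target model's probability for that token decides acceptance, and rejections are resampled from the residual distribution so the output still matches the target model exactly. Below is a minimal single-step NumPy sketch assuming toy categorical distributions in place of real model outputs; it is not the listed repository's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(probs):
    """Draw a token index from a categorical distribution."""
    return rng.choice(len(probs), p=probs)

def speculative_step(p_draft, p_target):
    """Accept or reject one draft-model proposal against the target distribution."""
    x = sample(p_draft)
    if rng.random() < min(1.0, p_target[x] / p_draft[x]):
        return x, True
    # Rejected: resample from the normalized residual max(0, p_target - p_draft),
    # which keeps the overall output distribution equal to p_target.
    residual = np.clip(p_target - p_draft, 0.0, None)
    return sample(residual / residual.sum()), False

# Toy usage with a 4-token vocabulary standing in for model outputs.
p_draft = np.array([0.7, 0.1, 0.1, 0.1])
p_target = np.array([0.4, 0.3, 0.2, 0.1])
print(speculative_step(p_draft, p_target))
```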