microsoft / VPTQ
VPTQ, a flexible and extreme low-bit quantization algorithm
☆674 · Updated 9 months ago
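Like many of the repositories listed below, VPTQ compresses model weights by representing groups of values with entries from a small learned codebook. As a rough illustration only (VPTQ's actual algorithm optimizes its codebooks far more carefully than this), here is a minimal generic vector-quantization sketch in NumPy; the function names and parameters are hypothetical and are not VPTQ's API:

```python
import numpy as np

def vector_quantize(W, vec_len=4, n_centroids=256, iters=10, seed=0):
    """Generic vector quantization of a weight matrix (illustrative only).

    Flattens W into length-`vec_len` vectors, fits a codebook of
    `n_centroids` entries with a few Lloyd (k-means) iterations, and
    returns per-vector codebook indices plus the codebook itself.
    """
    rng = np.random.default_rng(seed)
    vecs = W.reshape(-1, vec_len)                       # (N, vec_len)
    # Initialize centroids from randomly chosen weight vectors.
    codebook = vecs[rng.choice(len(vecs), n_centroids, replace=False)]
    for _ in range(iters):
        # Assign each vector to its nearest centroid (squared L2 distance).
        d = ((vecs[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        idx = d.argmin(1)
        # Move each centroid to the mean of its assigned vectors.
        for c in range(n_centroids):
            members = vecs[idx == c]
            if len(members):
                codebook[c] = members.mean(0)
    dtype = np.uint8 if n_centroids <= 256 else np.uint16
    return idx.astype(dtype), codebook

def dequantize(idx, codebook, shape):
    """Reconstruct an approximate weight matrix from indices + codebook."""
    return codebook[idx].reshape(shape)

W = np.random.randn(256, 256).astype(np.float32)
idx, cb = vector_quantize(W)
W_hat = dequantize(idx, cb, W.shape)
print("reconstruction MSE:", float(((W - W_hat) ** 2).mean()))
```

With a 256-entry codebook over length-4 vectors, each weight costs 2 bits of index storage plus the shared codebook, which is the sense in which codebook-based methods reach "extreme low-bit" regimes.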
Alternatives and similar repositories for VPTQ
Users interested in VPTQ are comparing it to the libraries listed below.
- Official implementation of Half-Quadratic Quantization (HQQ) ☆912 · Updated last month
- 🎯 An accuracy-first, highly efficient quantization toolkit for LLMs, designed to minimize quality degradation across Weight-Only Quantiza… ☆839 · Updated last week
- ☆577 · Updated last year
- Efficient LLM Inference over Long Sequences ☆394 · Updated 7 months ago
- [ACL 2025 Main] EfficientQAT: Efficient Quantization-Aware Training for Large Language Models ☆327 · Updated 2 months ago
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ☆810 · Updated 11 months ago
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens ☆1,005 · Updated last year
- For releasing code related to compression methods for transformers, accompanying our publications ☆455 · Updated last year
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLMs' inference, approximate and dynamic sparse calculation of the attention… ☆1,180 · Updated 4 months ago
- A throughput-oriented high-performance serving framework for LLMs ☆945 · Updated 3 months ago
- [ICLR 2024 Spotlight] OmniQuant is a simple and powerful quantization technique for LLMs ☆888 · Updated 2 months ago
- LLM model quantization (compression) toolkit with hardware acceleration support for Nvidia CUDA, AMD ROCm, Intel XPU and Intel/AMD/Apple CPU vi… ☆1,007 · Updated this week
- An innovative library for efficient LLM inference via low-bit quantization ☆352 · Updated last year
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment ☆751 · Updated 6 months ago
- The homepage of the OneBit model quantization framework ☆200 · Updated last year
- A family of compressed models obtained via pruning and knowledge distillation ☆364 · Updated 3 months ago
- ☆163 · Updated 7 months ago
- DFloat11 [NeurIPS'25]: Lossless Compression of LLMs and DiTs for Efficient GPU Inference ☆600 · Updated 2 months ago
- Fast Matrix Multiplications for Lookup Table-Quantized LLMs ☆387 · Updated 9 months ago
- A scalable and robust tree-based speculative decoding algorithm ☆366 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆267 · Updated 2 months ago
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆238 · Updated this week
- Automated identification of redundant layer blocks for pruning in large language models ☆260 · Updated last year
- Code for the NeurIPS'24 paper: QuaRot, an end-to-end 4-bit inference of large language models ☆480 · Updated last year
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization ☆404 · Updated last year
- ☆592 · Updated last year
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization ☆713 · Updated last year
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆524 · Updated 11 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients ☆201 · Updated last year
- Code repo for the paper "SpinQuant: LLM quantization with learned rotations" ☆372 · Updated 11 months ago
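Most of the weight-only quantizers above (HQQ, OmniQuant, SqueezeLLM, QuaRot, SpinQuant, and others) can be read as improvements over the same naive baseline: per-group round-to-nearest quantization to a few bits. Below is a minimal sketch of that baseline in NumPy; the function names are hypothetical and this is not taken from any of the listed libraries:

```python
import numpy as np

def quantize_rtn_int4(W, group_size=128):
    """Round-to-nearest weight-only INT4 quantization with per-group scales.

    This is the naive baseline the toolkits above improve on: split each
    row into groups, pick a symmetric scale per group, and round. A real
    kernel would also pack two 4-bit values per byte; we keep int8 here
    for clarity.
    """
    out_features, in_features = W.shape
    assert in_features % group_size == 0
    Wg = W.reshape(out_features, in_features // group_size, group_size)
    # Symmetric scale per group: map the max magnitude to the INT4 extreme (7).
    scale = np.abs(Wg).max(-1, keepdims=True) / 7.0
    scale = np.maximum(scale, 1e-8)  # guard against all-zero groups
    q = np.clip(np.round(Wg / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_rtn(q, scale):
    """Expand grouped INT4 codes back to an approximate FP32 matrix."""
    return (q.astype(np.float32) * scale).reshape(q.shape[0], -1)

W = np.random.randn(64, 256).astype(np.float32)
q, s = quantize_rtn_int4(W)
print("max abs error:", float(np.abs(W - dequantize_rtn(q, s)).max()))
```

The methods in the list differ mainly in how they go beyond this: learned or optimized scales and clipping (OmniQuant, HQQ), non-uniform or sparse-outlier codebooks (SqueezeLLM), or rotations that reshape the weight distribution before rounding (QuaRot, SpinQuant).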