huggingface / optimum-quanto
A PyTorch quantization backend for Optimum
☆883 · Updated last month
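As a quick orientation, the snippet below is a minimal sketch of optimum-quanto's documented `quantize`/`freeze` workflow; the checkpoint name is a placeholder and any `torch.nn.Module` works the same way.

```python
# Minimal sketch of optimum-quanto's quantize/freeze workflow.
# The checkpoint name below is a placeholder, not part of this listing.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from optimum.quanto import quantize, freeze, qint8

model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")

# Swap supported modules (e.g. nn.Linear) for quantization-aware versions.
quantize(model, weights=qint8)

# Materialize the int8 weights in place; the model is then inference-ready.
freeze(model)

inputs = tokenizer("Hello", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
```

Weight-only int8 is the simplest path; the same `quantize` call also accepts an `activations=` argument when activation quantization (with calibration) is wanted.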
Alternatives and similar repositories for optimum-quanto:
Users interested in optimum-quanto are comparing it to the libraries listed below.
- Official implementation of Half-Quadratic Quantization (HQQ) ☆748 · Updated this week
- TensorRT Model Optimizer is a unified library of state-of-the-art model optimization techniques such as quantization, pruning, distillation… ☆715 · Updated this week
- For releasing code related to compression methods for transformers, accompanying our publications ☆408 · Updated last month
- Advanced Quantization Algorithm for LLMs/VLMs. ☆372 · Updated this week
- ☆527 · Updated 3 months ago
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆729 · Updated 5 months ago
- PyTorch native quantization and sparsity for training and inference ☆1,848 · Updated this week
- Microsoft Automatic Mixed Precision Library ☆567 · Updated 4 months ago
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆1,341 · Updated 7 months ago
- [ICLR 2024 spotlight] OmniQuant is a simple and powerful quantization technique for LLMs. ☆767 · Updated 4 months ago
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of Optimum… ☆288 · Updated 3 weeks ago
- Pipeline Parallelism for PyTorch ☆748 · Updated 6 months ago
- Minimalistic large language model 3D-parallelism training ☆1,483 · Updated this week
- LLM KV cache compression made easy ☆397 · Updated this week
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization ☆676 · Updated 6 months ago
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆514 · Updated this week
- Production-ready LLM model compression/quantization toolkit with accelerated inference support for both CPU/GPU via HF, vLLM, and SGLang. ☆290 · Updated this week
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,194 · Updated 4 months ago
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs… ☆2,190 · Updated this week
- A simple and effective LLM pruning approach. ☆712 · Updated 6 months ago
- Code for paper: "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" ☆361 · Updated 11 months ago
- Official Implementation of EAGLE-1 (ICML'24) and EAGLE-2 (EMNLP'24) ☆968 · Updated this week
- Code for NeurIPS 2024 paper: QuaRot, end-to-end 4-bit inference of large language models. ☆342 · Updated 2 months ago
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Serving… ☆512 · Updated this week
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. ☆1,957 · Updated last month
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆982 · Updated this week
- A family of compressed models obtained via pruning and knowledge distillation ☆324 · Updated 3 months ago
- Code repo for the paper "LLM-QAT: Data-Free Quantization Aware Training for Large Language Models" ☆269 · Updated 5 months ago
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment. ☆523 · Updated last week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆259 · Updated 4 months ago