A safetensors extension to efficiently store sparse quantized tensors on disk
☆268 · Apr 3, 2026 · Updated this week
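compressed-tensors extends the safetensors file format so that sparse and quantized tensors can be stored compactly on disk. As a rough, hedged illustration of the underlying idea (not the library's actual API or on-disk schema, which define their own compression formats and config metadata), a quantized weight can be kept as a packed integer tensor plus its scale using plain safetensors:

```python
# Minimal sketch only: stores an int8-quantized weight and its scale with the
# plain safetensors API. The real compressed-tensors format adds its own
# metadata and compression schemes; the tensor names below are illustrative.
import torch
from safetensors.torch import save_file, load_file

weight = torch.randn(4096, 4096, dtype=torch.float16)

# Simple symmetric per-tensor int8 quantization (for illustration only).
scale = weight.abs().max() / 127.0
weight_q = torch.clamp((weight / scale).round(), -128, 127).to(torch.int8)

# Save the packed weight and its scale as separate named tensors on disk.
save_file(
    {"layer.weight_packed": weight_q, "layer.weight_scale": scale.reshape(1)},
    "model.safetensors",
)

# Reload and dequantize.
tensors = load_file("model.safetensors")
weight_dq = tensors["layer.weight_packed"].to(torch.float16) * tensors["layer.weight_scale"]
```

Stored this way, the int8 payload occupies roughly half the bytes of the fp16 original; the library's own formats go further by also encoding sparsity patterns and quantization configuration alongside the tensors.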
Alternatives and similar repositories for compressed-tensors
Users that are interested in compressed-tensors are comparing it to the libraries listed below.
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM · ☆2,960 · Mar 31, 2026 · Updated last week
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. · ☆1,048 · Sep 4, 2024 · Updated last year
- ☆209 · May 5, 2025 · Updated 11 months ago
- A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM · ☆327 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs · ☆266 · Dec 4, 2025 · Updated 4 months ago
- Boosting 4-bit inference kernels with 2:4 Sparsity · ☆94 · Sep 4, 2024 · Updated last year
- ☆87 · Jan 23, 2025 · Updated last year
- Fast Matrix Multiplications for Lookup Table-Quantized LLMs · ☆390 · Apr 13, 2025 · Updated 11 months ago
- Fast low-bit matmul kernels in Triton · ☆443 · Updated this week
- DeeperGEMM: crazy optimized version · ☆86 · May 5, 2025 · Updated 11 months ago
- A PyTorch quantization backend for Optimum · ☆1,035 · Apr 2, 2026 · Updated last week
- Triton-based implementation of Sparse Mixture of Experts. · ☆272 · Oct 3, 2025 · Updated 6 months ago
- ☆22 · May 5, 2025 · Updated 11 months ago
- PyTorch native quantization and sparsity for training and inference · ☆2,756 · Updated this week
- Quantized Attention on GPU · ☆44 · Nov 22, 2024 · Updated last year
- LLM model quantization (compression) toolkit with hardware acceleration support for NVIDIA CUDA, AMD ROCm, Intel XPU, and Intel/AMD/Apple CPU vi… · ☆1,085 · Updated this week
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference. · ☆46 · Jun 11, 2025 · Updated 9 months ago
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry · ☆43 · Jan 15, 2024 · Updated 2 years ago
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. · ☆2,322 · May 11, 2025 · Updated 10 months ago
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization · ☆419 · Aug 13, 2024 · Updated last year
- FlexAttention with FlashAttention3 support · ☆27 · Oct 5, 2024 · Updated last year
- Applied AI experiments and examples for PyTorch · ☆320 · Aug 22, 2025 · Updated 7 months ago
- FlashInfer: Kernel Library for LLM Serving · ☆5,273 · Updated this week
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache · ☆381 · Nov 20, 2025 · Updated 4 months ago
- Code for the NeurIPS 2024 paper QuaRot: end-to-end 4-bit inference of large language models. · ☆498 · Nov 26, 2024 · Updated last year
- ☆14 · Jul 13, 2025 · Updated 8 months ago
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving · ☆336 · Jul 2, 2024 · Updated last year
- ICML2017 MEC: Memory-efficient Convolution for Deep Neural Network, C++ implementation (unofficial) · ☆17 · Apr 9, 2019 · Updated 7 years ago
- ☆113 · Apr 19, 2024 · Updated last year
- A unified library of SOTA model optimization techniques such as quantization, pruning, distillation, and speculative decoding. It compresse… · ☆2,358 · Updated this week
- QQQ is a hardware-optimized W4A8 quantization solution for LLMs. · ☆154 · Aug 21, 2025 · Updated 7 months ago
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning · ☆174 · Nov 11, 2025 · Updated 4 months ago
- SOTA rounding-based quantization for high-accuracy low-bit LLM inference, seamlessly optimized for CPU/XPU/CUDA, with multi-datatype supp… · ☆938 · Apr 2, 2026 · Updated last week
- Official implementation of Half-Quadratic Quantization (HQQ) · ☆925 · Feb 26, 2026 · Updated last month
- ☆120 · Mar 18, 2026 · Updated 3 weeks ago
- Model Compression Toolbox for Large Language Models and Diffusion Models · ☆773 · Aug 14, 2025 · Updated 7 months ago
- Extensible collectives library in Triton · ☆98 · Mar 31, 2025 · Updated last year
- Fast Hadamard transform in CUDA, with a PyTorch interface · ☆304 · Mar 10, 2026 · Updated 3 weeks ago
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… · ☆822 · Mar 6, 2025 · Updated last year