A high-throughput and memory-efficient inference and serving engine for LLMs
☆267 · Updated Dec 4, 2025 (3 months ago)
Alternatives and similar repositories for nm-vllm
Users interested in nm-vllm are comparing it to the libraries listed below.
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆94 · Updated Sep 4, 2024 (last year)
- ☆207 · Updated May 5, 2025 (10 months ago)
- FP16xINT4 LLM inference kernel that achieves near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆1,041 · Updated Sep 4, 2024 (last year)
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆2,891 · Updated this week
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆266 · Updated this week
- ML model optimization product to accelerate inference. ☆326 · Updated Jun 2, 2025 (9 months ago)
- ☆105 · Updated Sep 9, 2024 (last year)
- A throughput-oriented high-performance serving framework for LLMs ☆949 · Updated Oct 29, 2025 (4 months ago)
- KV cache compression for high-throughput LLM inference ☆153 · Updated Feb 5, 2025 (last year)
- Sparsity-aware deep learning inference runtime for CPUs ☆3,163 · Updated Jun 2, 2025 (9 months ago)
- ☆40 · Updated Nov 22, 2025 (4 months ago)
- An easy-to-use package for implementing SmoothQuant for LLMs (see the sketch after this list) ☆111 · Updated Apr 7, 2025 (11 months ago)
- Quantized Attention on GPU ☆44 · Updated Nov 22, 2024 (last year)
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆120 · Updated Mar 6, 2024 (2 years ago)
- [COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding ☆277 · Updated Aug 31, 2024 (last year)
- vLLM performance dashboard ☆43 · Updated Apr 26, 2024 (last year)
- A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM ☆285 · Updated this week
- Easy and Efficient Quantization for Transformers ☆206 · Updated Jan 28, 2026 (last month)
- Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models ☆2,142 · Updated Jun 2, 2025 (9 months ago)
- FlashInfer: Kernel Library for LLM Serving ☆5,194 · Updated this week
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,322 · Updated Mar 6, 2025 (last year)
- Neural network model repository for highly sparse and sparse-quantized models with matching sparsification recipes ☆387 · Updated Jun 2, 2025 (9 months ago)
- Serving multiple LoRA fine-tuned LLMs as one ☆1,148 · Updated May 8, 2024 (last year)
- [ICLR 2025] Breaking the Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆144 · Updated Dec 4, 2024 (last year)
- Official implementation of Half-Quadratic Quantization (HQQ) ☆919 · Updated Feb 26, 2026 (3 weeks ago)
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs. ☆155 · Updated Aug 21, 2025 (7 months ago)
- BERT score for text generation ☆12 · Updated Jan 15, 2025 (last year)
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆180 · Updated Jul 12, 2024 (last year)
- Supercharge huggingface transformers with model parallelism. ☆78 · Updated Jul 23, 2025 (7 months ago)
- Fast Matrix Multiplications for Lookup Table-Quantized LLMs ☆389 · Updated Apr 13, 2025 (11 months ago)
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆16 · Updated this week
- Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs ☆935 · Updated this week
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆281 · Updated Nov 3, 2023 (2 years ago)
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving ☆336 · Updated Jul 2, 2024 (last year)
- Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3 (NeurIPS'25). ☆2,229 · Updated Feb 20, 2026 (last month)
- Top-level directory for documentation and general content ☆120 · Updated Jun 2, 2025 (9 months ago)
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆1,070 · Updated this week
- Beyond LM: How can language models go forward in the future? ☆15 · Updated Apr 30, 2023 (2 years ago)
- ☆51 · Updated May 31, 2024 (last year)
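Many of the entries above revolve around weight/activation quantization. As a concrete illustration of the idea behind the SmoothQuant entry, here is a minimal NumPy sketch of its per-channel scale migration, assuming the standard formulation from the SmoothQuant paper (s_j = max|X_j|^α / max|W_j|^(1−α)); the function name and code below are illustrative, not the listed package's actual API.

```python
import numpy as np

def smoothquant_scales(X, W, alpha=0.5, eps=1e-8):
    """Per-input-channel scales that migrate activation outliers into weights."""
    act_max = np.abs(X).max(axis=0)  # per-channel activation magnitude, shape [in_features]
    w_max = np.abs(W).max(axis=1)    # per-channel weight magnitude, shape [in_features]
    # SmoothQuant scale: s_j = act_max_j**alpha / w_max_j**(1 - alpha)
    return (act_max + eps) ** alpha / (w_max + eps) ** (1.0 - alpha)

# Toy demo: channel 2 is an activation outlier, the usual hard case for INT8.
X = np.random.randn(4, 8) * np.array([1, 1, 50, 1, 1, 1, 1, 1])
W = np.random.randn(8, 16)
s = smoothquant_scales(X, W)
X_smooth = X / s            # activations become easier to quantize
W_smooth = W * s[:, None]   # weights absorb the scale
# The linear layer's output is mathematically unchanged.
assert np.allclose(X @ X.shape[1] * 0 + X @ W, X_smooth @ W_smooth) or np.allclose(X @ W, X_smooth @ W_smooth)
```

Because X @ W equals (X / s) @ (diag(s) W) exactly, the transform trades activation range for weight range before quantization, which is why it pairs naturally with the W8A8 and W4A8 kernels listed above.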