A high-throughput and memory-efficient inference and serving engine for LLMs
☆266 · Updated Dec 4, 2025
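nm-vllm is Neural Magic's distribution of vLLM, so the quickest way to see what it does is the upstream vLLM offline-inference API. A minimal sketch, assuming nm-vllm keeps the standard `LLM`/`SamplingParams` entry points, with the vLLM quickstart model as a placeholder:

```python
from vllm import LLM, SamplingParams  # nm-vllm is assumed to ship under the same `vllm` namespace

# Minimal offline-inference sketch mirroring the upstream vLLM quickstart.
prompts = ["High-throughput LLM serving works by"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

llm = LLM(model="facebook/opt-125m")  # placeholder model from the vLLM quickstart
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.outputs[0].text)
```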
Alternatives and similar repositories for nm-vllm
Users interested in nm-vllm are comparing it to the libraries listed below.
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆95 · Updated Sep 4, 2024
- FP16×INT4 LLM inference kernel that achieves near-ideal ~4x speedups at medium batch sizes of 16-32 tokens ☆1,065 · Updated Sep 4, 2024
- ☆210 · Updated May 5, 2025
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM (see the quantization sketch after this list) ☆3,169 · Updated this week
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆274 · Updated Apr 24, 2026
- ML model optimization product to accelerate inference ☆325 · Updated Jun 2, 2025
- ☆105 · Updated Sep 9, 2024
- KV cache compression for high-throughput LLM inference ☆157 · Updated Feb 5, 2025
- Sparsity-aware deep learning inference runtime for CPUs ☆3,162 · Updated Jun 2, 2025
- A throughput-oriented high-performance serving framework for LLMs ☆954 · Updated Mar 29, 2026
- ☆41 · Updated Nov 22, 2025
- An easy-to-use package for implementing SmoothQuant for LLMs ☆111 · Updated Apr 7, 2025
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆121 · Updated Mar 6, 2024
- Quantized Attention on GPU ☆44 · Updated Nov 22, 2024
- [COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding ☆279 · Updated Aug 31, 2024
- Easy and Efficient Quantization for Transformers ☆206 · Updated Mar 25, 2026
- Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models ☆2,143 · Updated Jun 2, 2025
- FlashInfer: Kernel Library for LLM Serving ☆5,498 · Updated Apr 25, 2026
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,333 · Updated Mar 6, 2025
- Neural network model repository for highly sparse and sparse-quantized models with matching sparsification recipes ☆388 · Updated Jun 2, 2025
- Serving multiple LoRA-finetuned LLMs as one ☆1,155 · Updated May 8, 2024
- [ICLR 2025] Breaking the Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆146 · Updated Dec 4, 2024
- A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM ☆365 · Updated Apr 25, 2026
- QQQ: a hardware-optimized W4A8 quantization solution for LLMs ☆155 · Updated Aug 21, 2025
- BERTScore for text generation ☆12 · Updated Jan 15, 2025
- Official implementation of Half-Quadratic Quantization (HQQ) ☆931 · Updated Feb 26, 2026
- Supercharge Hugging Face Transformers with model parallelism ☆78 · Updated Jul 23, 2025
- Fast Matrix Multiplications for Lookup Table-Quantized LLMs ☆391 · Updated Apr 13, 2025
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆17 · Updated this week
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models" ☆280 · Updated Nov 3, 2023
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆182 · Updated Jul 12, 2024
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving ☆338 · Updated Jul 2, 2024
- Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs ☆1,086 · Updated this week
- Top-level directory for documentation and general content ☆120 · Updated Jun 2, 2025
- Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3 (NeurIPS'25) ☆2,299 · Updated Feb 20, 2026
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications ☆1,106 · Updated this week
- Beyond LM: How can language models move forward in the future? ☆15 · Updated Apr 30, 2023
- ☆51 · Updated May 31, 2024
- 🕹️ Performance Comparison of MLOps Engines, Frameworks, and Languages on Mainstream AI Models ☆140 · Updated Jul 25, 2024
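The Transformers-compatible compression library flagged above pairs naturally with an engine like nm-vllm: quantize once, then point the engine at the output directory. A minimal one-shot W4A16 sketch, assuming the listed library is vllm-project/llm-compressor (its tagline matches) and noting that import paths and argument names have shifted across its releases; the model and dataset names are illustrative placeholders:

```python
# One-shot weight quantization sketch; all names below follow an
# llm-compressor quickstart and should be treated as assumptions.
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import GPTQModifier

oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # illustrative small model
    dataset="open_platypus",                     # illustrative calibration set
    recipe=GPTQModifier(targets="Linear", scheme="W4A16", ignore=["lm_head"]),
    output_dir="TinyLlama-1.1B-W4A16",
    max_seq_length=2048,
    num_calibration_samples=512,
)
```

The saved directory can then be passed straight to `LLM(model="TinyLlama-1.1B-W4A16")` in the sketch near the top of this page.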