A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM
☆335 · Apr 10, 2026 · Updated this week
Alternatives and similar repositories for speculators
Users interested in speculators are comparing it to the libraries listed below.
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆270 · Updated this week
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆2,996 · Apr 9, 2026 · Updated last week
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving. ☆777 · Apr 2, 2026 · Updated 2 weeks ago
- vLLM adapter for a TGIS-compatible gRPC server. ☆55 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆266 · Dec 4, 2025 · Updated 4 months ago
- 3x Faster Inference; Unofficial implementation of EAGLE Speculative Decoding ☆82 · Jul 3, 2025 · Updated 9 months ago
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆421 · Mar 28, 2026 · Updated 2 weeks ago
- Bagua tutorials. ☆13 · Sep 4, 2022 · Updated 3 years ago
- Achieve state-of-the-art inference performance with modern accelerators on Kubernetes ☆2,957 · Updated this week
- ☆46 · Nov 10, 2023 · Updated 2 years ago
- ☆12 · Mar 8, 2022 · Updated 4 years ago
- Memory-optimized Mixture of Experts ☆75 · Jul 25, 2025 · Updated 8 months ago
- A high-performance and lightweight router for large-scale vLLM deployment ☆188 · Updated this week
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆291 · Apr 2, 2026 · Updated 2 weeks ago
- Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3 (NeurIPS'25). ☆2,273 · Feb 20, 2026 · Updated last month
- KV Cache & LoRA for minGPT ☆62 · Mar 4, 2026 · Updated last month
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs. ☆154 · Aug 21, 2025 · Updated 7 months ago
- Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs ☆987 · Updated this week
- Distributed SDDMM Kernel ☆12 · Jul 8, 2022 · Updated 3 years ago
- Benchmark and optimize LLM inference across frameworks with ease ☆177 · Sep 12, 2025 · Updated 7 months ago
- NVIDIA Inference Xfer Library (NIXL) ☆970 · Updated this week
- Efficient LLM Inference over Long Sequences ☆393 · Jun 25, 2025 · Updated 9 months ago
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16–32 tokens. ☆1,051 · Sep 4, 2024 · Updated last year
- Longitudinal Evaluation of LLMs via Data Compression ☆33 · May 29, 2024 · Updated last year
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆43 · Jan 15, 2024 · Updated 2 years ago
- ☆18 · Aug 27, 2023 · Updated 2 years ago
- Common recipes to run vLLM ☆669 · Updated this week
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving ☆336 · Jul 2, 2024 · Updated last year
- Tile-based language built for AI computation across all scales ☆141 · Mar 27, 2026 · Updated 2 weeks ago
- Cloud Native Benchmarking of Foundation Models ☆45 · Jul 31, 2025 · Updated 8 months ago
- A throughput-oriented high-performance serving framework for LLMs ☆952 · Mar 29, 2026 · Updated 2 weeks ago
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ☆826 · Mar 6, 2025 · Updated last year
- Structured, temporal memory for AI agents. ☆66 · Apr 8, 2026 · Updated last week
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆343 · Updated this week
- Efficient, Flexible, and Highly Fault-Tolerant Model Service Management Based on SGLang ☆61 · Nov 8, 2024 · Updated last year
- ☆158 · Oct 9, 2024 · Updated last year
- FlashInfer: Kernel Library for LLM Serving ☆5,372 · Updated this week
- Multi-Level Triton Runner supporting Python, IR, PTX, and cubin. ☆84 · Mar 29, 2026 · Updated 2 weeks ago
- An Envoy-inspired, ultimate LLM-first gateway for LLM serving and downstream application developers and enterprises ☆26 · Apr 24, 2025 · Updated 11 months ago