A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM
☆285 · Mar 20, 2026 · Updated last week
Alternatives and similar repositories for speculators
Users interested in speculators are comparing it to the libraries listed below.
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆266 · Updated this week
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆2,928 · Updated this week
- vLLM adapter for a TGIS-compatible gRPC server. ☆55 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆267 · Dec 4, 2025 · Updated 3 months ago
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆413 · Mar 3, 2026 · Updated 3 weeks ago
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving. ☆736 · Updated this week
- Bagua tutorials. ☆13 · Sep 4, 2022 · Updated 3 years ago
- The Soft Cosine Measure system developed for the ARQMath-3 shared task evaluation of math information retrieval systems ☆13 · Sep 8, 2022 · Updated 3 years ago
- ☆12 · Mar 8, 2022 · Updated 4 years ago
- Achieve state-of-the-art inference performance with modern accelerators on Kubernetes ☆2,657 · Updated this week
- KV Cache & LoRA for minGPT ☆60 · Mar 4, 2026 · Updated 3 weeks ago
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs. ☆155 · Aug 21, 2025 · Updated 7 months ago
- Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs ☆935 · Updated this week
- Common recipes to run vLLM ☆511 · Mar 16, 2026 · Updated last week
- Distributed SDDMM Kernel ☆12 · Jul 8, 2022 · Updated 3 years ago
- NVIDIA Inference Xfer Library (NIXL) ☆945 · Mar 20, 2026 · Updated last week
- Benchmark and optimize LLM inference across frameworks with ease ☆174 · Sep 12, 2025 · Updated 6 months ago
- Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3 (NeurIPS'25). ☆2,229 · Feb 20, 2026 · Updated last month
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆1,041 · Sep 4, 2024 · Updated last year
- Longitudinal Evaluation of LLMs via Data Compression ☆33 · May 29, 2024 · Updated last year
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆42 · Jan 15, 2024 · Updated 2 years ago
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving ☆336 · Jul 2, 2024 · Updated last year
- Cloud Native Benchmarking of Foundation Models ☆45 · Jul 31, 2025 · Updated 7 months ago
- A throughput-oriented high-performance serving framework for LLMs ☆950 · Oct 29, 2025 · Updated 4 months ago
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ☆821 · Mar 6, 2025 · Updated last year
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆332 · Updated this week
- ☆52 · Feb 19, 2024 · Updated 2 years ago
- ☆154 · Oct 9, 2024 · Updated last year
- Model Express is a Rust-based component meant to be placed next to existing model inference systems to speed up their startup times and i… ☆40 · Mar 20, 2026 · Updated last week
- Multi-Level Triton Runner supporting Python, IR, PTX, and cubin. ☆84 · Updated this week
- FlashInfer: Kernel Library for LLM Serving ☆5,194 · Updated this week
- Autonomous GPU Kernel Generation & Optimization via Deep Agents ☆328 · Mar 18, 2026 · Updated last week
- A Rust reimplementation of genai-bench for benchmarking LLM serving systems at high concurrency with accurate timing and industry-standar… ☆284 · Updated this week
- An Envoy-inspired, ultimate LLM-first gateway for LLM serving and downstream application developers and enterprises ☆26 · Apr 24, 2025 · Updated 11 months ago
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization ☆416 · Aug 13, 2024 · Updated last year
- Materials for learning SGLang ☆785 · Jan 5, 2026 · Updated 2 months ago
- Official implementation of the paper: "A deeper look at depth pruning of LLMs" ☆15 · Jul 24, 2024 · Updated last year
- ☆11 · Dec 2, 2024 · Updated last year
- An innovative method for expediting LLMs via streamlined semi-autoregressive generation and draft verification. ☆28 · Apr 15, 2025 · Updated 11 months ago