Fast inference from large language models via speculative decoding
★911 · Aug 22, 2024 · Updated last year
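For orientation, the draft-then-verify loop at the heart of speculative decoding (as described in the Leviathan et al. and DeepMind papers listed below) can be sketched in NumPy with toy stand-ins for the two models. This is a minimal sketch, not this repository's API: `target_probs`, `draft_probs`, and `gamma` are illustrative names, and the callables stand in for real model forward passes.

```python
import numpy as np

def speculative_step(target_probs, draft_probs, rng, gamma=4):
    """One step of speculative sampling (a sketch; real implementations
    run the draft and target passes on GPU-resident transformer models).

    target_probs(tokens) -> next-token distribution from the large model
    draft_probs(tokens)  -> next-token distribution from the small model
    Returns 1 to gamma + 1 tokens distributed as if sampled from the
    large model alone."""
    # 1. Draft: sample gamma tokens autoregressively from the small model.
    drafted, q = [], []
    for _ in range(gamma):
        dist = draft_probs(drafted)
        drafted.append(int(rng.choice(len(dist), p=dist)))
        q.append(dist)
    # 2. Verify: the large model scores every drafted prefix
    #    (a single parallel forward pass in a real implementation).
    p = [target_probs(drafted[:i]) for i in range(gamma + 1)]
    # 3. Accept token x_i with probability min(1, p_i(x)/q_i(x)); on the
    #    first rejection, resample from the normalized residual
    #    max(0, p_i - q_i) and stop.
    out = []
    for i, tok in enumerate(drafted):
        if rng.random() < min(1.0, p[i][tok] / q[i][tok]):
            out.append(tok)
        else:
            residual = np.maximum(p[i] - q[i], 0.0)
            out.append(int(rng.choice(len(residual), p=residual / residual.sum())))
            return out
    # 4. All gamma drafts accepted: take one bonus token from the large model.
    out.append(int(rng.choice(len(p[gamma]), p=p[gamma])))
    return out
```

The acceptance rule is what makes the speedup lossless: accepted and resampled tokens together follow exactly the large model's distribution, so quality is unchanged while several tokens can be emitted per large-model pass.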
Alternatives and similar repositories for LLMSpeculativeSampling
Users that are interested in LLMSpeculativeSampling are comparing it to the libraries listed below.
- Explorations into some recent techniques surrounding speculative decoding · ★300 · Dec 22, 2024 · Updated last year
- Must-read papers and blogs on Speculative Decoding · ★1,180 · Mar 31, 2026 · Updated last week
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads · ★2,720 · Jun 25, 2024 · Updated last year
- REST: Retrieval-Based Speculative Decoding, NAACL 2024 · ★216 · Mar 5, 2026 · Updated last month
- Spec-Bench: A Comprehensive Benchmark and Unified Evaluation Platform for Speculative Decoding (ACL 2024 Findings) · ★381 · Apr 22, 2025 · Updated 11 months ago
- Code associated with the paper **Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding** · ★223 · Feb 13, 2025 · Updated last year
- Multi-Candidate Speculative Decoding · ★40 · Apr 22, 2024 · Updated last year
- Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3 (NeurIPS'25). · ★2,253 · Feb 20, 2026 · Updated last month
- Implementation of Speculative Sampling as described in "Accelerating Large Language Model Decoding with Speculative Sampling" by DeepMind · ★110 · Feb 29, 2024 · Updated 2 years ago
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding · ★1,327 · Mar 6, 2025 · Updated last year
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training · ★1,870 · Mar 25, 2026 · Updated 2 weeks ago
- [COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding · ★279 · Aug 31, 2024 · Updated last year
- [ICLR 2025] PEARL: Parallel Speculative Decoding with Adaptive Draft Length · ★155 · Dec 23, 2025 · Updated 3 months ago
- [NeurIPS'23] Speculative Decoding with Big Little Decoder · ★97 · Feb 6, 2024 · Updated 2 years ago
- [ICLR 2025] SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration · ★65 · Feb 21, 2025 · Updated last year
- A curated list of Awesome LLM/VLM Inference Papers with Codes: Flash-Attention, Paged-Attention, WINT8/4, Parallelism, etc. · ★5,130 · Updated this week
- FlashInfer: Kernel Library for LLM Serving · ★5,273 · Apr 4, 2026 · Updated last week
- ★28 · May 24, 2025 · Updated 10 months ago
- [NeurIPS'23] H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models. · ★510 · Aug 1, 2024 · Updated last year
- Simple implementation of Speculative Sampling in NumPy for GPT-2. · ★99 · Aug 20, 2023 · Updated 2 years ago
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… · ★822 · Mar 6, 2025 · Updated last year
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration · ★3,488 · Jul 17, 2025 · Updated 8 months ago
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. · ★5,039 · Apr 3, 2026 · Updated last week
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalabili… · ★3,995 · Apr 3, 2026 · Updated last week
- ★26 · Mar 14, 2024 · Updated 2 years ago
- ★15 · Aug 19, 2024 · Updated last year
- ★64 · Dec 3, 2024 · Updated last year
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference · ★660 · Jan 15, 2026 · Updated 2 months ago
- ★310 · Jul 10, 2025 · Updated 9 months ago
- Transformer related optimization, including BERT, GPT · ★6,410 · Mar 27, 2024 · Updated 2 years ago
- ★155 · Mar 4, 2025 · Updated last year
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads · ★535 · Feb 10, 2025 · Updated last year
- Disaggregated serving system for Large Language Models (LLMs). · ★798 · Apr 6, 2025 · Updated last year
- Scalable and robust tree-based speculative decoding algorithm · ★376 · Jan 28, 2025 · Updated last year
- Implementation of the paper Fast Inference from Transformers via Speculative Decoding, Leviathan et al. 2023. · ★105 · Dec 2, 2024 · Updated last year
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving · ★336 · Jul 2, 2024 · Updated last year
- Must-read papers on KV Cache Compression (constantly updating). · ★679 · Feb 24, 2026 · Updated last month
- ★354 · Apr 2, 2024 · Updated 2 years ago
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache · ★381 · Nov 20, 2025 · Updated 4 months ago