Fast inference from large language models via speculative decoding
⭐ 914 · Aug 22, 2024 · Updated last year
Alternatives and similar repositories for LLMSpeculativeSampling
Users interested in LLMSpeculativeSampling are comparing it to the libraries listed below.
- Explorations into some recent techniques surrounding speculative decoding (⭐ 300 · Dec 22, 2024 · Updated last year)
- Must-read papers and blogs on Speculative Decoding (⭐ 1,204 · Apr 18, 2026 · Updated last week)
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads (⭐ 2,727 · Jun 25, 2024 · Updated last year)
- REST: Retrieval-Based Speculative Decoding, NAACL 2024 (⭐ 218 · Mar 5, 2026 · Updated last month)
- Spec-Bench: A Comprehensive Benchmark and Unified Evaluation Platform for Speculative Decoding (ACL 2024 Findings) (⭐ 389 · Apr 22, 2025 · Updated last year)
- Code associated with the paper **Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding** (⭐ 226 · Feb 13, 2025 · Updated last year)
- Multi-Candidate Speculative Decoding (⭐ 40 · Apr 22, 2024 · Updated 2 years ago)
- Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3 (NeurIPS'25) (⭐ 2,299 · Feb 20, 2026 · Updated 2 months ago)
- Implementation of Speculative Sampling as described in "Accelerating Large Language Model Decoding with Speculative Sampling" by DeepMind (⭐ 110 · Feb 29, 2024 · Updated 2 years ago)
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding (⭐ 1,333 · Mar 6, 2025 · Updated last year)
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training (⭐ 1,875 · Updated this week)
- [COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding (⭐ 279 · Aug 31, 2024 · Updated last year)
- [NeurIPS'23] Speculative Decoding with Big Little Decoder (⭐ 97 · Feb 6, 2024 · Updated 2 years ago)
- [ICLR 2025] PEARL: Parallel Speculative Decoding with Adaptive Draft Length (⭐ 160 · Dec 23, 2025 · Updated 4 months ago)
- [ICLR 2025] SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration (⭐ 66 · Feb 21, 2025 · Updated last year)
- A curated list of Awesome LLM/VLM Inference Papers with Codes: Flash-Attention, Paged-Attention, WINT8/4, Parallelism, etc. (⭐ 5,185 · Apr 20, 2026 · Updated last week)
- FlashInfer: Kernel Library for LLM Serving (⭐ 5,498 · Updated this week)
- ⭐ 29 · May 24, 2025 · Updated 11 months ago
- [NeurIPS'23] H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models (⭐ 513 · Aug 1, 2024 · Updated last year)
- Simple implementation of Speculative Sampling in NumPy for GPT-2 (⭐ 99 · Aug 20, 2023 · Updated 2 years ago)
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… (⭐ 834 · Mar 6, 2025 · Updated last year)
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration (⭐ 3,512 · Jul 17, 2025 · Updated 9 months ago)
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design and easy scalabili… (⭐ 4,036 · Updated this week)
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI (⭐ 5,242 · Updated this week)
- ⭐ 26 · Mar 14, 2024 · Updated 2 years ago
- ⭐ 16 · Aug 19, 2024 · Updated last year
- ⭐ 66 · Dec 3, 2024 · Updated last year
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference (⭐ 666 · Jan 15, 2026 · Updated 3 months ago)
- ⭐ 311 · Jul 10, 2025 · Updated 9 months ago
- Transformer-related optimization, including BERT, GPT (⭐ 6,412 · Mar 27, 2024 · Updated 2 years ago)
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads (⭐ 540 · Feb 10, 2025 · Updated last year)
- Disaggregated serving system for Large Language Models (LLMs) (⭐ 804 · Apr 6, 2025 · Updated last year)
- ⭐ 157 · Mar 4, 2025 · Updated last year
- Scalable and robust tree-based speculative decoding algorithm (⭐ 377 · Jan 28, 2025 · Updated last year)
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving (⭐ 338 · Jul 2, 2024 · Updated last year)
- Must-read papers on KV Cache Compression (constantly updating) (⭐ 696 · Apr 15, 2026 · Updated 2 weeks ago)
- Implementation of the paper Fast Inference from Transformers via Speculative Decoding, Leviathan et al. 2023 (⭐ 106 · Dec 2, 2024 · Updated last year)
- ⭐ 355 · Apr 2, 2024 · Updated 2 years ago
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache (⭐ 387 · Nov 20, 2025 · Updated 5 months ago)
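Several of the repositories above (the DeepMind-style and NumPy implementations in particular) center on the same accept/reject rule from speculative sampling. As a point of reference, here is a minimal NumPy sketch of that rule over toy precomputed per-position distributions; it is not taken from any listed repo, the function name and shapes are illustrative, and it assumes each drafted token was actually sampled from the draft model (so its draft probability is nonzero).

```python
import numpy as np

def speculative_sample(target_probs, draft_probs, draft_tokens, rng):
    """Accept/reject loop of speculative sampling (Leviathan et al., 2023).

    target_probs, draft_probs: (k, vocab) arrays of per-step distributions
    from the large target model and the small draft model.
    draft_tokens: the k tokens proposed by the draft model.
    Returns the accepted prefix plus one corrected or bonus token.
    """
    accepted = []
    for i, tok in enumerate(draft_tokens):
        p = target_probs[i, tok]  # target probability of the drafted token
        q = draft_probs[i, tok]   # draft probability of the same token
        if rng.random() < min(1.0, p / q):
            accepted.append(tok)  # accept: token is distributed as the target
        else:
            # Reject: resample from the renormalized residual max(p - q, 0),
            # which keeps the overall output distribution exactly the target's.
            residual = np.maximum(target_probs[i] - draft_probs[i], 0.0)
            residual /= residual.sum()
            accepted.append(int(rng.choice(len(residual), p=residual)))
            return accepted
    # All k drafts accepted: emit one bonus token from the target model.
    # (The paper uses the target's distribution at position k + 1; this toy
    # sketch reuses the last row for illustration.)
    accepted.append(int(rng.choice(target_probs.shape[1], p=target_probs[-1])))
    return accepted
```

For example, when the draft and target distributions are identical, every drafted token is accepted and one bonus token is appended, which is why speculative decoding is lossless: the accept/reject correction guarantees samples follow the target distribution regardless of draft quality.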