Spec-Bench: A Comprehensive Benchmark and Unified Evaluation Platform for Speculative Decoding (ACL 2024 Findings)
★389 · Apr 22, 2025 · Updated last year
Alternatives and similar repositories for Spec-Bench
Users interested in Spec-Bench are comparing it to the repositories listed below.
- 📰 Must-read papers and blogs on Speculative Decoding ⚡️ · ★1,204 · Apr 18, 2026 · Updated 2 weeks ago
- Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3 (NeurIPS'25) · ★2,299 · Feb 20, 2026 · Updated 2 months ago
- [ICLR 2025] SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration · ★66 · Feb 21, 2025 · Updated last year
- Official Implementation of SAM-Decoding: Speculative Decoding via Suffix Automaton · ★47 · Feb 13, 2025 · Updated last year
- Fast inference from large language models via speculative decoding · ★914 · Aug 22, 2024 · Updated last year
- Code associated with the paper **Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding** · ★226 · Feb 13, 2025 · Updated last year
- [NeurIPS 2024] The official implementation of "Kangaroo: Lossless Self-Speculative Decoding for Accelerating LLMs via Double Early Exitin…" · ★68 · Jun 26, 2024 · Updated last year
- Multi-Candidate Speculative Decoding · ★40 · Apr 22, 2024 · Updated 2 years ago
- REST: Retrieval-Based Speculative Decoding, NAACL 2024 · ★218 · Mar 5, 2026 · Updated last month
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads · ★2,727 · Jun 25, 2024 · Updated last year
- [ICLR 2025] PEARL: Parallel Speculative Decoding with Adaptive Draft Length · ★160 · Dec 23, 2025 · Updated 4 months ago
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding · ★1,333 · Mar 6, 2025 · Updated last year
- ★53 · Feb 19, 2024 · Updated 2 years ago
- [ACL 2025 Oral 🔥] Turning Trash into Treasure: Accelerating Inference of Large Language Models with Token Recycling · ★28 · Nov 11, 2025 · Updated 5 months ago
- Official Implementation of "Learning Harmonized Representations for Speculative Sampling" (HASS) · ★56 · Mar 14, 2025 · Updated last year
- ★68 · Nov 4, 2024 · Updated last year
- [ACL 2025 main] FR-Spec: Frequency-Ranked Speculative Sampling · ★54 · Jul 15, 2025 · Updated 9 months ago
- Scalable and robust tree-based speculative decoding algorithm · ★377 · Jan 28, 2025 · Updated last year
- Ouroboros: Speculative Decoding with Large Model Enhanced Drafting (EMNLP 2024 main) · ★116 · Mar 20, 2025 · Updated last year
- Explorations into some recent techniques surrounding speculative decoding · ★300 · Dec 22, 2024 · Updated last year
- Codes for our paper "Enhancing Continual Relation Extraction via Classifier Decomposition" (Findings of ACL 2023) · ★10 · Nov 29, 2023 · Updated 2 years ago
- [COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding · ★279 · Aug 31, 2024 · Updated last year
- ★606 · Aug 23, 2024 · Updated last year
- [ACL 2026 (Main)] LongSpec: Long-Context Lossless Speculative Decoding with Efficient Drafting and Verification · ★82 · Jul 14, 2025 · Updated 9 months ago
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding · ★146 · Dec 4, 2024 · Updated last year
- Official Implementation of DART: Diffusion-Inspired Speculative Decoding for Fast LLM Inference · ★53 · Feb 8, 2026 · Updated 2 months ago
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving · ★801 · Apr 2, 2026 · Updated last month
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLM inference, approximate and dynamic sparse calculation of the attention… · ★1,207 · Apr 8, 2026 · Updated 3 weeks ago
- ★29 · May 24, 2025 · Updated 11 months ago
- Codes for our paper "Speculative Decoding: Exploiting Speculative Execution for Accelerating Seq2seq Generation" (EMNLP 2023 Findings) · ★47 · Dec 9, 2023 · Updated 2 years ago
- ★223 · Jan 23, 2025 · Updated last year
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads · ★540 · Feb 10, 2025 · Updated last year
- A method for expediting LLM inference via streamlined semi-autoregressive generation and draft verification · ★28 · Apr 15, 2025 · Updated last year
- [NeurIPS'23] H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models · ★513 · Aug 1, 2024 · Updated last year
- ★66 · Dec 3, 2024 · Updated last year
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache · ★387 · Nov 20, 2025 · Updated 5 months ago
- ★355 · Apr 2, 2024 · Updated 2 years ago
- Accelerating Large-Scale Reasoning Model Inference with Sparse Self-Speculative Decoding · ★98 · Dec 2, 2025 · Updated 5 months ago
- ★20 · Dec 24, 2024 · Updated last year