Sequoia: a scalable and robust tree-based speculative decoding algorithm
☆370 · Updated Jan 28, 2025
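Sequoia builds on the standard draft-and-verify loop of speculative decoding. As a point of reference, here is a minimal, self-contained sketch of that loop in plain PyTorch. Everything in it is an illustrative assumption, not Sequoia's actual code: `toy_logits` is a deterministic stand-in for a real model, and the vocabulary size, temperatures, and draft length `gamma` are arbitrary.

```python
# Minimal draft-and-verify speculative decoding sketch (NOT Sequoia's code).
# A cheap draft model proposes gamma tokens; the target model verifies them,
# accepting token x with probability min(1, p_target(x) / p_draft(x)).
import torch

VOCAB = 16  # hypothetical toy vocabulary size

def toy_logits(prefix: list[int], temperature: float) -> torch.Tensor:
    # Deterministic stand-in for a language model: logits depend on the prefix.
    g = torch.Generator().manual_seed(sum(prefix) + len(prefix))
    return torch.randn(VOCAB, generator=g) / temperature

def probs(prefix: list[int], temperature: float) -> torch.Tensor:
    return torch.softmax(toy_logits(prefix, temperature), dim=-1)

def speculative_step(prefix: list[int], gamma: int = 4) -> list[int]:
    # Draft phase: sample gamma tokens from the weaker (higher-temperature) model.
    draft_prefix, drafts, q_dists = list(prefix), [], []
    for _ in range(gamma):
        q = probs(draft_prefix, temperature=2.0)
        tok = int(torch.multinomial(q, 1))
        drafts.append(tok)
        q_dists.append(q)
        draft_prefix.append(tok)
    # Verify phase: accept/reject each draft token against the target model.
    out = list(prefix)
    for tok, q in zip(drafts, q_dists):
        p = probs(out, temperature=1.0)  # target distribution at this position
        if torch.rand(()) < min(1.0, (p[tok] / q[tok]).item()):
            out.append(tok)              # accepted: keep the draft token
        else:
            residual = torch.clamp(p - q, min=0.0)  # resample from max(p - q, 0)
            out.append(int(torch.multinomial(residual / residual.sum(), 1)))
            break                        # rejected: discard remaining drafts
    else:
        # All drafts accepted: the target pass also yields one bonus token.
        out.append(int(torch.multinomial(probs(out, temperature=1.0), 1)))
    return out

print(speculative_step([1, 2, 3]))
```

The accept/resample rule makes the output distribution match the target model exactly, which is the "lossless" property many repositories below advertise; tree-based methods like Sequoia extend it by verifying many draft branches in a single target forward pass.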
Alternatives and similar repositories for Sequoia
Users interested in Sequoia are comparing it to the libraries listed below.
- [COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding ☆277 · Updated Aug 31, 2024
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆144 · Updated Dec 4, 2024
- Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3 (NeurIPS'25). ☆2,188 · Updated Feb 20, 2026
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,708 · Updated Jun 25, 2024
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,315 · Updated Mar 6, 2025
- 📰 Must-read papers and blogs on Speculative Decoding ⚡️ ☆1,126 · Updated Jan 24, 2026
- Ouroboros: Speculative Decoding with Large Model Enhanced Drafting (EMNLP 2024 main) ☆114 · Updated Mar 20, 2025
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ☆816 · Updated Mar 6, 2025
- Accelerates LLM inference via streamlined semi-autoregressive generation and draft verification. ☆26 · Updated Apr 15, 2025
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training ☆1,861 · Updated Feb 20, 2026
- Code associated with the paper **Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding** ☆214 · Updated Feb 13, 2025
- Spec-Bench: A Comprehensive Benchmark and Unified Evaluation Platform for Speculative Decoding (ACL 2024 Findings) ☆369 · Updated Apr 22, 2025
- REST: Retrieval-Based Speculative Decoding (NAACL 2024) ☆214 · Updated Sep 11, 2025
- [ICML 2024] CLLMs: Consistency Large Language Models ☆412 · Updated Nov 16, 2024
- Reaching LLaMA2 Performance with 0.1M Dollars ☆988 · Updated Jul 23, 2024
- FlashInfer: Kernel Library for LLM Serving ☆5,009 · Updated this week
- Fast inference from large language models via speculative decoding ☆888 · Updated Aug 22, 2024
- Layer-Condensed KV cache with 10× larger batch size, fewer parameters, and less computation. Dramatic speed-up with better task performance… ☆156 · Updated Apr 7, 2025
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] Speeds up long-context LLM inference by computing attention with approximate, dynamic sparsity… ☆1,188 · Updated Sep 30, 2025
- [ICLR 2025] SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration ☆62 · Updated Feb 21, 2025
- Serving multiple LoRA-finetuned LLMs as one ☆1,145 · Updated May 8, 2024
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,897 · Updated Jan 21, 2024
- [NeurIPS'23] H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models. ☆504 · Updated Aug 1, 2024
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆753 · Updated Sep 27, 2024
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆665 · Updated Jun 1, 2024
- Keyformer proposes KV cache reduction through key-token identification, without the need for fine-tuning ☆57 · Updated Mar 26, 2024
- Mirage Persistent Kernel: Compiling LLMs into a MegaKernel ☆2,141 · Updated Feb 19, 2026
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆177 · Updated Jul 12, 2024
- A throughput-oriented high-performance serving framework for LLMs ☆946 · Updated Oct 29, 2025
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆248 · Updated Jun 6, 2025
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆372 · Updated Jul 10, 2025
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆3,441 · Updated Jul 17, 2025
- Multi-Candidate Speculative Decoding ☆39 · Updated Apr 22, 2024
- Triton-based implementation of Sparse Mixture of Experts. ☆266 · Updated Oct 3, 2025
- [NeurIPS'23] Speculative Decoding with Big Little Decoder ☆97 · Updated Feb 6, 2024
- Simple and efficient PyTorch-native transformer text generation in <1000 LOC of Python. ☆6,182 · Updated Aug 22, 2025
- 🚀 Efficient implementations of state-of-the-art linear attention models ☆4,428 · Updated this week
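Several of the tree-based entries above (Sequoia, Medusa, EAGLE) share one mechanism: a "tree attention" mask that lets the target model score every branch of a draft token tree in a single forward pass, since each drafted token may attend only to its ancestors. A hedged sketch follows, using a parent-pointer tree encoding of my own invention rather than any repository's actual API:

```python
# Sketch of a tree-attention mask for tree-based draft verification
# (illustrative; not taken from Sequoia, Medusa, or EAGLE).
import torch

def tree_attention_mask(parents: list[int]) -> torch.Tensor:
    """parents[i] is the index of node i's parent in the draft tree, or -1
    for a node that branches directly off the committed prefix. Returns a
    boolean (n, n) mask where mask[i, j] is True iff token i may attend to
    token j, i.e. j is i itself or one of its ancestors."""
    n = len(parents)
    mask = torch.zeros(n, n, dtype=torch.bool)
    for i in range(n):
        j = i
        while j != -1:          # walk up to the root, marking each ancestor
            mask[i, j] = True
            j = parents[j]
    return mask

# A small draft tree: nodes 0 and 1 branch off the prefix,
# nodes 2 and 3 extend node 0, node 4 extends node 1.
print(tree_attention_mask([-1, -1, 0, 0, 1]).int())
```

With this mask appended to the usual causal mask over the committed prefix, one target forward pass yields per-branch acceptance decisions, which is what makes tree-shaped drafts cheaper to verify than running each branch sequentially.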