thunlp / Ouroboros
Ouroboros: Speculative Decoding with Large Model Enhanced Drafting (EMNLP 2024 main)
☆114 · Updated Mar 20, 2025
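Ouroboros builds on the draft-then-verify pattern shared by most of the repositories listed below: a cheap drafter proposes several tokens ahead, and the target model checks them in parallel, keeping the longest correct prefix. The sketch below is a minimal illustration of that generic loop only, not Ouroboros's actual algorithm or API; `draft_model`, `target_model`, the `gamma` draft length, and the greedy acceptance rule are toy assumptions for the demo.

```python
# Minimal sketch of the generic draft-then-verify loop behind speculative
# decoding. Both "models" are toy next-token functions, not real LLMs, and
# greedy acceptance is a simplification of the usual rejection-sampling rule.

def draft_model(prefix: str) -> str:
    """Cheap drafter: a fixed character table (assumption for the demo)."""
    table = {"h": "e", "e": "l", "l": "l", "o": " "}
    return table.get(prefix[-1], "l")

def target_model(prefix: str) -> str:
    """Expensive target: spells out 'hello' then stops (assumption)."""
    goal = "hello"
    return goal[len(prefix)] if len(prefix) < len(goal) else "<eos>"

def speculative_decode(prompt: str, gamma: int = 4, max_len: int = 16) -> str:
    out = prompt
    while len(out) < max_len:
        # 1) Draft gamma tokens autoregressively with the cheap model.
        draft, ctx = [], out
        for _ in range(gamma):
            tok = draft_model(ctx)
            draft.append(tok)
            ctx += tok
        # 2) Verify the draft against the target model (simulated token by
        #    token here): keep the longest agreeing prefix.
        accepted = 0
        for tok in draft:
            expected = target_model(out)
            if expected == "<eos>":
                return out
            if tok != expected:
                break
            out += tok
            accepted += 1
        # 3) On a mismatch, take one corrected "bonus" token from the target.
        if accepted < len(draft):
            fix = target_model(out)
            if fix == "<eos>":
                return out
            out += fix
    return out

print(speculative_decode("h"))  # -> "hello"
```

In a real system, step 2 is a single batched forward pass of the target model over all drafted positions, so several tokens are emitted for roughly the cost of one target call; per the paper title, Ouroboros's contribution is enhancing the drafting side with the large model itself.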
Alternatives and similar repositories for Ouroboros
Users interested in Ouroboros are comparing it to the libraries listed below.
- An innovative method expediting LLMs via streamlined semi-autoregressive generation and draft verification. ☆26 · Updated Apr 15, 2025
- REST: Retrieval-Based Speculative Decoding, NAACL 2024 ☆214 · Updated Sep 11, 2025
- [COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding ☆276 · Updated Aug 31, 2024
- Multi-Candidate Speculative Decoding ☆39 · Updated Apr 22, 2024
- 📰 Must-read papers and blogs on Speculative Decoding ⚡️ ☆1,121 · Updated Jan 24, 2026
- [ICLR 2025] PEARL: Parallel Speculative Decoding with Adaptive Draft Length ☆147 · Updated Dec 23, 2025
- Cascade Speculative Drafting ☆32 · Updated Apr 2, 2024
- ☆13 · Updated Oct 3, 2024
- ☆12 · Updated Feb 5, 2026
- Spec-Bench: A Comprehensive Benchmark and Unified Evaluation Platform for Speculative Decoding (ACL 2024 Findings) ☆367 · Updated Apr 22, 2025
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,316 · Updated Mar 6, 2025
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆142 · Updated Dec 4, 2024
- scalable and robust tree-based speculative decoding algorithm ☆368 · Updated Jan 28, 2025
- (ACL 2025 oral) SCOPE: Optimizing KV Cache Compression in Long-context Generation ☆34 · Updated May 28, 2025
- [NeurIPS 2023] ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer ☆30 · Updated Dec 6, 2023
- ☆303 · Updated Jul 10, 2025
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference ☆283 · Updated May 1, 2025
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ☆812 · Updated Mar 6, 2025
- Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3 (NeurIPS'25). ☆2,180 · Updated Jan 27, 2026
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache ☆356 · Updated Nov 20, 2025
- ☆52 · Updated Nov 5, 2024
- Accelerate inference without tears ☆372 · Updated Jan 23, 2026
- ☆28 · Updated May 24, 2025
- Code associated with the paper **Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding** ☆214 · Updated Feb 13, 2025
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,705 · Updated Jun 25, 2024
- sigma-MoE layer ☆21 · Updated Jan 5, 2024
- The official implementation of the EMNLP 2023 paper LLM-FP4 ☆220 · Updated Dec 15, 2023
- Fork of the Flame repo for training some new work in development ☆19 · Updated Jan 5, 2026
- 🌟Official code of our AAAI26 paper 🔍WebFilter ☆35 · Updated Nov 9, 2025
- 📰 Must-read papers on KV Cache Compression (constantly updating 🤗). ☆658 · Updated Sep 30, 2025
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs ☆123 · Updated Jul 4, 2025
- Code for the NeurIPS 2024 paper QuaRot: end-to-end 4-bit inference of large language models. ☆482 · Updated Nov 26, 2024
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆524 · Updated Feb 10, 2025
- The official implementation of the paper SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction. ☆52 · Updated Oct 18, 2024
- A simple calculation for LLM MFU (Model FLOPs Utilization). ☆66 · Updated Sep 10, 2025
- Official Implementation of SAM-Decoding: Speculative Decoding via Suffix Automaton ☆40 · Updated Feb 13, 2025
- ☆23 · Updated Jan 27, 2025
- Manages the vllm-nccl dependency ☆17 · Updated Jun 3, 2024
- [ICLR 2023] NTK-SAP: Improving neural network pruning by aligning training dynamics ☆20 · Updated May 1, 2023