apple / ml-recurrent-drafter
☆219, updated Jan 23, 2025
Alternatives and similar repositories for ml-recurrent-drafter
Users interested in ml-recurrent-drafter are comparing it to the libraries listed below. Many of them implement speculative (draft-and-verify) decoding or efficient LLM serving; a minimal sketch of the shared draft-and-verify loop appears after the list.
- JAX Scalify: end-to-end scaled arithmetics (☆18, updated Oct 30, 2024)
- Spec-Bench: A Comprehensive Benchmark and Unified Evaluation Platform for Speculative Decoding (ACL 2024 Findings) (☆367, updated Apr 22, 2025)
- [COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding (☆276, updated Aug 31, 2024)
- Triton-based implementation of Sparse Mixture of Experts (☆265, updated Oct 3, 2025)
- Official repo of the dataset-decomposition paper [NeurIPS 2024] (☆21, updated Jan 8, 2025)
- Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3 (NeurIPS'25) (☆2,180, updated Jan 27, 2026)
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… (☆812, updated Mar 6, 2025)
- A throughput-oriented, high-performance serving framework for LLMs (☆945, updated Oct 29, 2025)
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads (☆524, updated Feb 10, 2025)
- Odysseus: Playground of LLM Sequence Parallelism (☆79, updated Jun 17, 2024)
- An Extensible Deep Learning Library (☆2,319, updated Feb 6, 2026)
- An innovative method expediting LLMs via streamlined semi-autoregressive generation and draft verification (☆26, updated Apr 15, 2025)
- LLM Serving Performance Evaluation Harness (☆83, updated Feb 25, 2025)
- Code for the paper: https://arxiv.org/pdf/2309.06979.pdf (☆21, updated Jul 29, 2024)
- Multilingual Knowledge Graph Enhancement (EMNLP 2023) (☆24, updated Nov 28, 2023)
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] Speeds up long-context LLM inference via approximate and dynamic sparse attention computation… (☆1,183, updated Sep 30, 2025)
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads (☆2,705, updated Jun 25, 2024)
- Rust bindings for CTranslate2 (☆14, updated Jun 21, 2023)
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference (☆92, updated Jul 17, 2025)
- Tile primitives for speedy kernels (☆3,139, updated this week)
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 (☆357, updated Feb 5, 2026)
- REST: Retrieval-Based Speculative Decoding (NAACL 2024) (☆214, updated Sep 11, 2025)
- [ACL 2025 main] FR-Spec: Frequency-Ranked Speculative Sampling (☆49, updated Jul 15, 2025)
- A low-latency & high-throughput serving engine for LLMs (☆470, updated Jan 8, 2026)
- [EMNLP 2023] Official implementation of ETSC (Exact Toeplitz-to-SSM Conversion) from the paper Accelerating Toeplitz… (☆14, updated Oct 17, 2023)
- PyTorch native quantization and sparsity for training and inference (☆2,668, updated this week)
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM (☆2,705, updated Feb 6, 2026)
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference (☆643, updated Jan 15, 2026)
- Hydragen: High-Throughput LLM Inference with Shared Prefixes (☆48, updated May 10, 2024)
- FlashInfer: Kernel Library for LLM Serving (☆4,935, updated this week)
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference (☆283, updated May 1, 2025)
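
Many of the speculative-decoding entries above (Spec-Bench, TriForce, EAGLE, Medusa, REST, FR-Spec, LayerSkip, and ml-recurrent-drafter itself) build on the same draft-and-verify core. The sketch below shows the generic accept/reject rule of speculative sampling (as in Leviathan et al., 2023) using toy stand-in distributions; every name and distribution in it is illustrative, and it is not the API of any repository listed here.

```python
"""Toy sketch of the draft-and-verify loop (speculative sampling).

Assumptions: the two "models" are fixed toy categorical distributions,
and K tokens are drafted per verification step. Not a real repo's API.
"""
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 8   # toy vocabulary size
K = 4       # tokens drafted per verification step

def _toy_dist(seed):
    # Deterministic softmax over VOCAB logits, seeded by the context.
    g = np.random.default_rng(seed)
    logits = g.normal(size=VOCAB)
    e = np.exp(logits - logits.max())
    return e / e.sum()

def draft_probs(ctx):
    # Cheap proposal q(. | ctx) -- stand-in for the small draft model.
    return _toy_dist(hash(tuple(ctx)) % (2**32))

def target_probs(ctx):
    # Expensive p(. | ctx) -- stand-in for the large target model.
    return _toy_dist((hash(tuple(ctx)) + 1) % (2**32))

def speculative_step(prefix):
    # 1) Draft K tokens autoregressively with the cheap model.
    drafts, qs, ctx = [], [], list(prefix)
    for _ in range(K):
        q = draft_probs(ctx)
        tok = int(rng.choice(VOCAB, p=q))
        drafts.append(tok); qs.append(q); ctx.append(tok)
    # 2) Score every drafted position with the target model. A real
    #    system does this in ONE batched forward pass; a loop for clarity.
    ps = [target_probs(list(prefix) + drafts[:i]) for i in range(K)]
    # 3) Accept token i with probability min(1, p/q); on the first
    #    rejection, resample from the residual max(p - q, 0) and stop.
    out = list(prefix)
    for tok, q, p in zip(drafts, qs, ps):
        if rng.random() < min(1.0, p[tok] / q[tok]):
            out.append(tok)
        else:
            residual = np.maximum(p - q, 0.0)
            out.append(int(rng.choice(VOCAB, p=residual / residual.sum())))
            return out
    # 4) All K drafts accepted: take one "bonus" token from the target.
    out.append(int(rng.choice(VOCAB, p=target_probs(out))))
    return out

seq = [0]
for _ in range(5):
    seq = speculative_step(seq)
print(seq)  # distributed exactly as if sampled from the target alone
```

The key property of this rule is that the residual-resampling step makes the output distribution match sampling from the target model exactly, no matter how weak the draft model is; draft quality only affects how many tokens are accepted per target pass. The repositories above differ mainly in how the drafts are produced (a recurrent draft head, extra decoding heads, retrieval, early exit, and so on).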