hemingkx / SpeculativeDecodingPapers
📰 Must-read papers and blogs on Speculative Decoding ⚡️
⭐ 471 · Updated last week
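For readers new to the topic, here is a minimal, self-contained sketch of the draft-then-verify loop that the papers below study. Everything in it is an illustrative assumption: the toy `draft_model` / `target_model` distributions and the `speculative_step` helper are hypothetical stand-ins, not code from this repository or any project listed here.

```python
import random

# Toy four-token vocabulary. Both "models" below are hypothetical:
# each maps a context to a probability distribution over the vocabulary.
VOCAB = ["a", "b", "c", "d"]

def draft_model(context):
    # Stand-in for a small, cheap draft LM (context is ignored in this toy).
    return {"a": 0.4, "b": 0.3, "c": 0.2, "d": 0.1}

def target_model(context):
    # Stand-in for the large target LM we want to sample from exactly.
    return {"a": 0.25, "b": 0.25, "c": 0.25, "d": 0.25}

def sample(dist):
    tokens, probs = zip(*dist.items())
    return random.choices(tokens, weights=probs, k=1)[0]

def speculative_step(context, gamma=4):
    """One speculative decoding step (rejection scheme of Leviathan et al., 2023):
    draft gamma tokens cheaply, verify them against the target model, and
    return the accepted prefix plus one corrected or bonus token."""
    # 1. The draft model proposes gamma tokens autoregressively.
    drafted, q_dists, ctx = [], [], list(context)
    for _ in range(gamma):
        q = draft_model(tuple(ctx))
        tok = sample(q)
        drafted.append(tok)
        q_dists.append(q)
        ctx.append(tok)

    # 2. The target model scores each drafted position. (A real system does
    #    this in ONE parallel forward pass; here we call it per position.)
    accepted = []
    for q, tok in zip(q_dists, drafted):
        p = target_model(tuple(context) + tuple(accepted))
        # Accept the drafted token with probability min(1, p(tok)/q(tok)).
        if random.random() < min(1.0, p[tok] / q[tok]):
            accepted.append(tok)
            continue
        # Rejected: resample from the residual distribution norm(max(0, p - q)),
        # then stop. This keeps the overall output distribution exactly equal
        # to the target model's, so the acceleration is lossless.
        residual = {t: max(0.0, p[t] - q[t]) for t in VOCAB}
        z = sum(residual.values())
        accepted.append(sample({t: r / z for t, r in residual.items()}))
        return accepted

    # 3. All gamma drafts accepted: sample one free bonus token from the target.
    accepted.append(sample(target_model(tuple(context) + tuple(accepted))))
    return accepted

print(speculative_step(("a",)))  # e.g. ['b', 'd', 'a', 'c', 'a']
```

In production systems (e.g., the EAGLE implementation listed below), the verification step is a single batched forward pass of the target model over all drafted positions, which is where the speedup over token-by-token decoding comes from.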
Related projects
Alternatives and complementary repositories for SpeculativeDecodingPapers
- Fast inference from large language models via speculative decoding ⭐ 569 · Updated 2 months ago
- Spec-Bench: A Comprehensive Benchmark and Unified Evaluation Platform for Speculative Decoding (ACL 2024 Findings) ⭐ 188 · Updated 3 weeks ago
- ⭐ 289 · Updated 7 months ago
- [NeurIPS'23] H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models. ⭐ 391 · Updated 3 months ago
- Official Implementation of EAGLE-1 (ICML'24) and EAGLE-2 (EMNLP'24) ⭐ 826 · Updated this week
- Code associated with the paper **Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding** ⭐ 138 · Updated 5 months ago
- Explorations into some recent techniques surrounding speculative decoding ⭐ 211 · Updated last year
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long-Context Transformer Model Training and Inference ⭐ 357 · Updated this week
- 📰 Must-read papers on KV Cache Compression (constantly updating 🤗). ⭐ 136 · Updated this week
- ⭐ 502 · Updated 2 months ago
- Analyze the inference of Large Language Models (LLMs). Analyze aspects like computation, storage, transmission, and hardware roofline mod… ⭐ 311 · Updated 2 months ago
- REST: Retrieval-Based Speculative Decoding, NAACL 2024 ⭐ 176 · Updated last month
- Large Language Model (LLM) Systems Paper List ⭐ 645 · Updated this week
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving ⭐ 278 · Updated 4 months ago
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ⭐ 202 · Updated 2 weeks ago
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization ⭐ 305 · Updated 3 months ago
- Awesome list for LLM pruning. ⭐ 167 · Updated this week
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache ⭐ 241 · Updated last month
- ⭐ 188 · Updated 6 months ago
- Awesome LLM compression research papers and tools. ⭐ 1,202 · Updated this week
- QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving ⭐ 443 · Updated last week
- Latency and Memory Analysis of Transformer Models for Training and Inference ⭐ 355 · Updated last week
- Ring attention implementation with flash attention ⭐ 585 · Updated last week
- Disaggregated serving system for Large Language Models (LLMs). ⭐ 359 · Updated 3 months ago
- A curated list for Efficient Large Language Models ⭐ 1,270 · Updated this week
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ⭐ 558 · Updated 8 months ago
- [TMLR 2024] Efficient Large Language Models: A Survey ⭐ 1,025 · Updated last week
- Awesome-LLM-KV-Cache: A curated list of Awesome LLM KV Cache Papers with Codes. ⭐ 106 · Updated last week
- Survey Paper List - Efficient LLM and Foundation Models ⭐ 220 · Updated last month
- A large-scale simulation framework for LLM inference ⭐ 277 · Updated last month