Geralt-Targaryen / Awesome-Speculative-Decoding
Reading notes on Speculative Decoding papers
☆13 · Updated 2 weeks ago
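All of the speculative decoding papers collected here build on the same core loop: a cheap draft model proposes several tokens, and the expensive target model verifies them in one pass. As a refresher, below is a minimal greedy sketch of that loop. The `draft_next`/`target_next` functions and the toy arithmetic inside them are purely illustrative stand-ins, not code from this repository or any listed one.

```python
# Minimal sketch of greedy speculative decoding. Two toy next-token
# functions stand in for a small draft model and a large target model.

def draft_next(tokens):
    # Hypothetical cheap draft model.
    return (tokens[-1] * 31 + 7) % 100

def target_next(tokens):
    # Hypothetical expensive target model; diverges from the draft
    # whenever the last token is divisible by 3.
    if tokens[-1] % 3 == 0:
        return (tokens[-1] + 1) % 100
    return (tokens[-1] * 31 + 7) % 100

def speculative_decode(prompt, num_tokens, gamma=4):
    """Generate num_tokens tokens; the draft proposes gamma tokens per
    step and the target keeps the longest matching prefix."""
    tokens = list(prompt)
    while len(tokens) - len(prompt) < num_tokens:
        # 1. Draft proposes gamma tokens autoregressively (cheap).
        proposal, ctx = [], list(tokens)
        for _ in range(gamma):
            t = draft_next(ctx)
            proposal.append(t)
            ctx.append(t)
        # 2. Target checks every proposed position (a single batched
        #    forward pass in a real system; sequential here for clarity).
        accepted, ctx = [], list(tokens)
        for t in proposal:
            if target_next(ctx) != t:
                break
            accepted.append(t)
            ctx.append(t)
        # 3. Keep the accepted prefix, then take one token from the
        #    target so every step makes progress.
        tokens += accepted
        tokens.append(target_next(tokens))
    return tokens[len(prompt):][:num_tokens]

print(speculative_decode([1, 2, 3], 10))
```

The speedup in real systems comes from step 2: the target model scores all gamma draft positions in one parallel forward pass instead of gamma sequential ones, so accepted tokens cost roughly one target pass per batch rather than one per token.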
Alternatives and similar repositories for Awesome-Speculative-Decoding
Users interested in Awesome-Speculative-Decoding are comparing it to the libraries listed below.
- Curated collection of papers on MoE model inference ☆213 · Updated 5 months ago
- This repository serves as a comprehensive survey of LLM development, featuring numerous research papers along with their corresponding co… ☆165 · Updated this week
- 📰 Must-read papers on KV Cache Compression (constantly updating 🤗). ☆490 · Updated last month
- [ICLR 2025] PEARL: Parallel Speculative Decoding with Adaptive Draft Length ☆96 · Updated 3 months ago
- Awesome-LLM-KV-Cache: a curated list of 📙 awesome LLM KV cache papers with code. ☆334 · Updated 4 months ago
- A summary of awesome work on optimizing LLM inference ☆85 · Updated last month
- InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management (OSDI'24) ☆146 · Updated last year
- Spec-Bench: A Comprehensive Benchmark and Unified Evaluation Platform for Speculative Decoding (ACL 2024 Findings) ☆289 · Updated 3 months ago
- Since the emergence of ChatGPT in 2022, the acceleration of Large Language Models has become increasingly important. Here is a list of pap… ☆260 · Updated 4 months ago
- ☆139 · Updated 3 weeks ago
- [COLM 2024] SKVQ: Sliding-window Key and Value Cache Quantization for Large Language Models ☆21 · Updated 9 months ago
- ☆71 · Updated 9 months ago
- A curated list of high-quality papers on resource-efficient LLMs 🌱 ☆131 · Updated 4 months ago
- Code repository for the NeurIPS 2024 paper "Toward Efficient Inference for Mixture of Experts" ☆19 · Updated 8 months ago
- Awesome list for LLM pruning ☆245 · Updated 7 months ago
- ☆23 · Updated 4 months ago
- This repository stores personal notes and annotated papers from daily research. ☆135 · Updated this week
- Implementations of several LLM KV cache sparsity methods ☆34 · Updated last year
- Code repository of "Evaluating Quantized Large Language Models" ☆129 · Updated 10 months ago
- ☆54 · Updated last year
- Adaptive Attention Sparsity with Hierarchical Top-p Pruning ☆19 · Updated 5 months ago
- Awesome list for LLM quantization ☆253 · Updated last month
- [SIGMOD 2025] PQCache: Product Quantization-based KVCache for Long Context LLM Inference ☆62 · Updated last month
- [ICML 2025] Official PyTorch implementation of "FlatQuant: Flatness Matters for LLM Quantization" ☆148 · Updated this week
- ☆41 · Updated 11 months ago
- ☆108 · Updated 8 months ago
- The official implementation of Ada-KV: Optimizing KV Cache Eviction by Adaptive Budget Allocation for Efficient LLM Inference ☆86 · Updated last month
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of … ☆237 · Updated last month
- ☆145 · Updated 4 months ago
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆306 · Updated 2 weeks ago