DRSY/EasyKV
Easy control for Key-Value Constrained Generative LLM Inference (https://arxiv.org/abs/2402.06262)
☆63 · Updated last year
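EasyKV's theme, per its description, is decoding under a constrained KV cache. For orientation only, below is a minimal sketch of the general idea behind budget-constrained KV caching: evict the cached tokens with the lowest accumulated attention once a fixed budget is exceeded. This is not EasyKV's actual API or eviction policy; all names here are hypothetical.

```python
# Hypothetical sketch (NOT EasyKV's code): keep a KV cache under a fixed
# token budget by dropping the entries with the lowest accumulated
# attention scores, a common KV-compression heuristic.
import torch

def evict_to_budget(keys, values, scores, budget):
    """keys/values: [seq_len, num_heads, head_dim];
    scores: [seq_len], accumulated attention each cached token received."""
    seq_len = keys.shape[0]
    if seq_len <= budget:
        return keys, values, scores
    # Keep the `budget` highest-scoring tokens; re-sort indices so the
    # surviving entries stay in their original temporal order.
    keep = torch.topk(scores, k=budget).indices.sort().values
    return keys[keep], values[keep], scores[keep]

if __name__ == "__main__":
    L, H, D, B = 16, 4, 8, 8
    k, v, s = torch.randn(L, H, D), torch.randn(L, H, D), torch.rand(L)
    k2, v2, s2 = evict_to_budget(k, v, s, B)
    print(k2.shape)  # torch.Size([8, 4, 8])
```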
Alternatives and similar repositories for EasyKV
Users interested in EasyKV are comparing it to the libraries listed below.
- This is the official repo of "QuickLLaMA: Query-aware Inference Acceleration for Large Language Models" ☆53 · Updated last year
- ☆80 · Updated 6 months ago
- Ouroboros: Speculative Decoding with Large Model Enhanced Drafting (EMNLP 2024 main) ☆107 · Updated 3 months ago
- The official implementation of "Ada-LEval: Evaluating long-context LLMs with length-adaptable benchmarks" ☆54 · Updated last month
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speed up with better task performance… ☆150 · Updated 3 months ago
- LongRecipe: Recipe for Efficient Long Context Generalization in Large Language Models ☆76 · Updated 8 months ago
- Implementation of NAACL 2024 Outstanding Paper "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆146 · Updated 4 months ago
- The official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction" ☆46 · Updated 8 months ago
- Long Context Extension and Generalization in LLMs ☆57 · Updated 9 months ago
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆117 · Updated last year
- [NeurIPS 2024] Fast Best-of-N Decoding via Speculative Rejection ☆47 · Updated 8 months ago
- Cascade Speculative Drafting ☆29 · Updated last year
- [ACL 2024] Not All Experts are Equal: Efficient Expert Pruning and Skipping for Mixture-of-Experts Large Language Models ☆94 · Updated last year
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ☆211 · Updated 4 months ago
- [ICLR 2024 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ☆86 · Updated 3 weeks ago
- [ICLR'25] Data and code for our paper "Why Does the Effective Context Length of LLMs Fall Short?" ☆77 · Updated 7 months ago
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆78 · Updated last year
- The source code of "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts" (EMNLP 2023) ☆38 · Updated last year
- [ACL 2025] An inference-time decoding strategy with adaptive foresight sampling ☆99 · Updated last month
- ☆109 · Updated last month
- RL Scaling and Test-Time Scaling (ICML'25) ☆108 · Updated 5 months ago
- LongSpec: Long-Context Lossless Speculative Decoding with Efficient Drafting and Verification ☆57 · Updated 4 months ago
- ☆47 · Updated last month
- Co-LLM: Learning to Decode Collaboratively with Multiple Language Models ☆116 · Updated last year
- [ICLR 2025] SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration