DRSY / EasyKV
Easy control for Key-Value Constrained Generative LLM Inference (https://arxiv.org/abs/2402.06262)
☆63 · Updated last year
Alternatives and similar repositories for EasyKV
Users who are interested in EasyKV are comparing it to the libraries listed below
- Implementation of NAACL 2024 Outstanding Paper "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆149 · Updated 6 months ago
- Ouroboros: Speculative Decoding with Large Model Enhanced Drafting (EMNLP 2024 main) ☆109 · Updated 5 months ago
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speed up with better task performance… ☆155 · Updated 5 months ago
- LongRecipe: Recipe for Efficient Long Context Generalization in Large Language Models ☆76 · Updated 10 months ago
- The official implementation of "Ada-LEval: Evaluating long-context LLMs with length-adaptable benchmarks" ☆54 · Updated 3 months ago
- Long Context Extension and Generalization in LLMs ☆60 · Updated 11 months ago
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆116 · Updated last year
- ☆86 · Updated 8 months ago
- [ACL 2024] Not All Experts are Equal: Efficient Expert Pruning and Skipping for Mixture-of-Experts Large Language Models ☆101 · Updated last year
- Cascade Speculative Drafting ☆29 · Updated last year
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆141 · Updated 11 months ago
- [ICLR 2024 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ☆93 · Updated 2 months ago
- This is the official repo of "QuickLLaMA: Query-aware Inference Acceleration for Large Language Models" ☆54 · Updated last year
- Homepage for ProLong (Princeton long-context language models) and paper "How to Train Long-Context Language Models (Effectively)" ☆221 · Updated 6 months ago
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆78 · Updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆206 · Updated last year
- Code and Data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] ☆107 · Updated 6 months ago
- [NeurIPS 2024] Fast Best-of-N Decoding via Speculative Rejection ☆51 · Updated 10 months ago
- ☆128 · Updated last year
- The HELMET Benchmark ☆171 · Updated 3 weeks ago
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆123 · Updated 8 months ago
- Official implementation for 'Extending LLMs’ Context Window with 100 Samples' ☆80 · Updated last year
- PyTorch implementation for "Compressed Context Memory For Online Language Model Interaction" (ICLR'24) ☆62 · Updated last year
- Co-LLM: Learning to Decode Collaboratively with Multiple Language Models ☆118 · Updated last year
- [EMNLP'24] LongHeads: Multi-Head Attention is Secretly a Long Context Processor ☆30 · Updated last year
- The official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction" ☆48 · Updated 10 months ago
- Official repository for the ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆75 · Updated 3 months ago
- [ICLR 2025] SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration ☆53 · Updated 6 months ago
- RL Scaling and Test-Time Scaling (ICML'25) ☆113 · Updated 7 months ago
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆34 · Updated 2 weeks ago