LLMkvsys / rethink-kv-compression
☆22 · Updated Mar 7, 2025
Alternatives and similar repositories for rethink-kv-compression
Users interested in rethink-kv-compression are comparing it to the repositories listed below.
- [NeurIPS 2024] An Efficient Recipe for Long Context Extension via Middle-Focused Positional Encoding (☆21, updated Oct 10, 2024)
- LongSpec: Long-Context Lossless Speculative Decoding with Efficient Drafting and Verification (☆73, updated Jul 14, 2025)
- Suri: Multi-constraint instruction following for long-form text generation (EMNLP’24) (☆27, updated Oct 3, 2025)
- Official Implementation of FastKV: Decoupling of Context Reduction and KV Cache Compression for Prefill-Decoding Acceleration (☆29, updated Nov 22, 2025)
- ☆14 (updated Jan 24, 2025)
- [ICML 2025] LaCache: Ladder-Shaped KV Caching for Efficient Long-Context Modeling of Large Language Models (☆17, updated Nov 4, 2025)
- ☆60 (updated Jan 12, 2026)
- ☆28 (updated May 24, 2025)
- ☆15 (updated Apr 11, 2024)
- Official Implementation of our paper "THOR: Tool-Integrated Hierarchical Optimization via RL for Mathematical Reasoning" (☆29, updated Sep 19, 2025)
- KV cache compression via sparse coding (☆17, updated Oct 26, 2025)
- RAG-RewardBench: Benchmarking Reward Models in Retrieval Augmented Generation for Preference Alignment (☆16, updated Dec 19, 2024)
- An Attention Superoptimizer (☆22, updated Jan 20, 2025)
- ☆23 (updated May 21, 2025)
- Emergent Hierarchical Reasoning in LLMs/VLMs through Reinforcement Learning (☆60, updated Oct 24, 2025)
- Official Implementation of SAM-Decoding: Speculative Decoding via Suffix Automaton (☆40, updated Feb 13, 2025)
- [ACL 2025 Oral] SCOPE: Optimizing KV Cache Compression in Long-context Generation (☆34, updated May 28, 2025)
- A comprehensive and efficient long-context model evaluation framework (☆30, updated this week)
- A simple PyTorch implementation of high-performance Multi-Query Attention (☆16, updated Aug 23, 2023)
- ☆15 (updated Jun 4, 2024)
- ☆53 (updated May 19, 2025)
- ☆27 (updated Nov 25, 2025)
- The official implementation of the ICLR 2025 paper "Polynomial Composition Activations: Unleashing the Dynamics of Large Language Models" (☆18, updated Apr 25, 2025)
- ☆16 (updated Jul 23, 2024)
- Official repository of the paper "Context-DPO: Aligning Language Models for Context-Faithfulness" (☆21, updated Feb 17, 2025)
- Code for the paper "Long cOntext aliGnment via efficient preference Optimization" (☆24, updated Oct 10, 2025)
- ☆19 (updated Sep 24, 2025)
- [ACM MM 2025] LongWriter-V: Enabling Ultra-Long and High-Fidelity Generation in Vision-Language Models (☆23, updated Mar 29, 2025)
- AdaSkip: Adaptive Sublayer Skipping for Accelerating Long-Context LLM Inference (☆20, updated Jan 24, 2025)
- Implementation of "Decoding-time Realignment of Language Models" (ICML 2024) (☆21, updated Jun 17, 2024)
- ☆49 (updated Nov 25, 2024)
- Official implementation of the ECCV 2024 paper "POA" (☆24, updated Aug 8, 2024)
- ☆33 (updated Jun 3, 2025)
- Official code and resources for the paper "EXIT: Context-Aware Extractive Compression for Enhancing Retrieval-Augmented Generation" (☆22, updated Dec 23, 2024)
- ☆19 (updated Nov 5, 2024)
- ☆18 (updated Sep 5, 2024)
- Evaluating the faithfulness of long-context language models (☆30, updated Oct 21, 2024)
- The official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction" (☆52, updated Oct 18, 2024)
- Getting Started with NIMBUS-CORE (☆10, updated Dec 16, 2023)