☆23 · Updated Mar 7, 2025
Alternatives and similar repositories for rethink-kv-compression
Users interested in rethink-kv-compression are comparing it to the repositories listed below.
- [ACL 2026 (Main)] LongSpec: Long-Context Lossless Speculative Decoding with Efficient Drafting and Verification ☆82 · Updated Jul 14, 2025
- [NeurIPS 2024] An Efficient Recipe for Long Context Extension via Middle-Focused Positional Encoding ☆22 · Updated Oct 10, 2024
- ☆14 · Updated Jun 4, 2024
- Suri: Multi-constraint instruction following for long-form text generation (EMNLP’24) ☆27 · Updated Oct 3, 2025
- [ICML 2025] LaCache: Ladder-Shaped KV Caching for Efficient Long-Context Modeling of Large Language Models ☆18 · Updated Nov 4, 2025
- KV cache compression via sparse coding ☆17 · Updated Oct 26, 2025
- From Scratch to Submission: A Complete Guide to Academic Conference Paper Writing ☆31 · Updated Sep 26, 2025
- An Attention Superoptimizer ☆22 · Updated Jan 20, 2025
- [ACL Findings 2026] Official Implementation of "FastKV: Decoupling of Context Reduction and KV Cache Compression for Prefill-Decoding Acc… ☆31 · Updated Apr 14, 2026
- ☆20 · Updated Nov 5, 2024
- Source code for Activated LoRA ☆25 · Updated Nov 22, 2025
- ☆64 · Updated Mar 30, 2026
- ☆18 · Updated Jan 27, 2025
- RAG-RewardBench: Benchmarking Reward Models in Retrieval Augmented Generation for Preference Alignment ☆17 · Updated Dec 19, 2024
- ☆27 · Updated Nov 25, 2025
- ☆29 · Updated May 24, 2025
- Code for paper: Long cOntext aliGnment via efficient preference Optimization ☆24 · Updated Oct 10, 2025
- ☆36 · Updated Jun 3, 2025
- [ACL 2025 Oral] SCOPE: Optimizing KV Cache Compression in Long-context Generation ☆35 · Updated May 28, 2025
- [ICLR 2026] Official Implementation of our paper "THOR: Tool-Integrated Hierarchical Optimization via RL for Mathematical Reasoning" ☆32 · Updated Feb 26, 2026
- ☆13 · Updated Jan 28, 2026
- Official Implementation of SAM-Decoding: Speculative Decoding via Suffix Automaton ☆48 · Updated Feb 13, 2025
- [COLM 2024] SKVQ: Sliding-window Key and Value Cache Quantization for Large Language Models ☆24 · Updated Oct 5, 2024
- Explorations into the proposed SDFT (Self-Distillation Enables Continual Learning) from Shenfeld et al. of MIT ☆32 · Updated Feb 6, 2026
- [ICLR 2026] Emergent Hierarchical Reasoning in LLMs/VLMs through Reinforcement Learning ☆64 · Updated Apr 11, 2026
- ☆59 · Updated May 19, 2025
- A simple PyTorch implementation of high-performance Multi-Query Attention ☆16 · Updated Aug 23, 2023
- Implementations of several LLM KV cache sparsity methods ☆40 · Updated Jun 6, 2024
- ☆50 · Updated Mar 20, 2026
- QJL: 1-Bit Quantized JL Transform for KV Cache Quantization with Zero Overhead ☆97 · Updated Jan 27, 2025
- [NeurIPS 2025] Scaling Speculative Decoding with Lookahead Reasoning ☆68 · Updated Oct 31, 2025
- ☆16 · Updated Jul 23, 2024
- Building a "Graduate Student Simulator" mini-game from scratch with AI ☆43 · Updated this week
- Official code and resources for the paper "EXIT: Context-Aware Extractive Compression for Enhancing Retrieval-Augmented Generation" ☆23 · Updated Dec 23, 2024
- ☆47 · Updated Nov 25, 2024
- OpenBA-V2: a 3B large language model with a T5 architecture, built via model pruning and continued pretraining from OpenBA-1… ☆25 · Updated May 10, 2024
- Implementation of the Hierarchical Reasoning Model (HRM), applied to a pathfinding task, plus a performance study ☆32 · Updated Sep 9, 2025
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs ☆125 · Updated Jul 4, 2025
- A curated list of the latest models, datasets, and benchmarks for streaming/online video understanding ☆27 · Updated Oct 19, 2025