Zefan-Cai / KVCache-Factory
Unified KV Cache Compression Methods for Auto-Regressive Models
⭐1,306 · Updated last year
Alternatives and similar repositories for KVCache-Factory
Users interested in KVCache-Factory are comparing it to the libraries listed below.
- [NeurIPS 2025] R-KV: Redundancy-aware KV Cache Compression for Reasoning Models ⭐1,174 · Updated 3 months ago
- [ICLR 2025🔥] SVD-LLM & [NAACL 2025🔥] SVD-LLM V2 ⭐281 · Updated 5 months ago
- ⭐1,127 · Updated 3 weeks ago
- SDAR (Synergy of Diffusion and AutoRegression), a large diffusion language model (1.7B, 4B, 8B, 30B) ⭐331 · Updated last month
- Adds Sequence Parallelism into LLaMA-Factory ⭐604 · Updated last week
- An acceleration library that supports arbitrary bit-width combinatorial quantization operations ⭐241 · Updated last year
- [ICML 2025] "SepLLM: Accelerate Large Language Models by Compressing One Segment into One Separator" ⭐563 · Updated 6 months ago
- [arXiv] Discrete Diffusion in Large Language and Multimodal Models: A Survey ⭐361 · Updated 3 months ago
- [NeurIPS'25 Spotlight] Mulberry, an o1-like Reasoning and Reflection MLLM Implemented via Collective MCTS ⭐1,240 · Updated 3 weeks ago
- ⭐334 · Updated 5 months ago
- Codebase for Iterative DPO Using Rule-based Rewards ⭐267 · Updated 10 months ago
- [NeurIPS 2024] Twin-Merging: Dynamic Integration of Modular Expertise in Model Merging ⭐140 · Updated 10 months ago
- A simple, unified multimodal model training engine. Lean, flexible, and built for hacking at scale. ⭐708 · Updated 3 weeks ago
- ✨ A synthetic dataset generation framework that produces diverse coding questions and verifiable solutions, all in one framework ⭐312 · Updated 5 months ago
- UCCL is an efficient communication library for GPUs, covering collectives, P2P (e.g., KV cache transfer, RL weight transfer), and EP (e.g. …) ⭐1,208 · Updated this week
- [ICLR 2025] BitStack: Any-Size Compression of Large Language Models in Variable Memory Environments ⭐39 · Updated 11 months ago
- A scalable, end-to-end training pipeline for general-purpose agents ⭐366 · Updated 7 months ago
- [ICLR'26] Official code of the paper "d2Cache: Accelerating Diffusion-based LLMs via Dual Adaptive Caching" ⭐85 · Updated this week
- [NeurIPS 2025🔥] Main source code of the SRPO framework ⭐186 · Updated 2 months ago
- Official repository of DARE: dLLM Alignment and Reinforcement Executor ⭐160 · Updated last week
- ⭐128 · Updated 4 months ago
- Official code implementation of Context Cascade Compression: Exploring the Upper Limits of Text Compression ⭐284 · Updated 2 weeks ago
- [ICML 2025] Make LoRA Great Again: Boosting LoRA with Adaptive Singular Values and Mixture-of-Experts Optimization Alignment ⭐140 · Updated 3 months ago
- Awesome-Efficient-Inference-for-LRMs is a collection of state-of-the-art, novel, exciting, token-efficient methods for Large Reasoning Models ⭐235 · Updated 7 months ago
- ⭐46 · Updated 10 months ago
- [NeurIPS 2025] Official repository of RiOSWorld: Benchmarking the Risk of Multimodal Computer-Use Agents ⭐110 · Updated 2 months ago
- Open-source code for the ICLR 2026 paper: Evaluating Memory in LLM Agents via Incremental Multi-Turn Interactions ⭐223 · Updated 2 weeks ago
- The official implementation of MARS: Unleashing the Power of Variance Reduction for Training Large Models ⭐714 · Updated last week
- A highly optimized LLM inference acceleration engine for Llama and its variants ⭐906 · Updated this week
- A Comprehensive Benchmark for Routing LLMs to Explore Model-level Scaling Up in Large Language Models ⭐106 · Updated 3 months ago