Zefan-Cai / KVCache-Factory
Unified KV Cache Compression Methods for Auto-Regressive Models
⭐1,295 · Updated last year
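For context on what ties this list together: a minimal, generic sketch of the token-eviction pattern that many KV cache compression methods share (score each cached token by the attention it receives, keep a fixed budget of top-scoring tokens, evict the rest). This is an illustrative assumption about the common technique, not KVCache-Factory's actual API; the function name and shapes are hypothetical.

```python
# Generic KV cache eviction sketch (illustrative only; not KVCache-Factory's API).
import torch

def evict_kv_cache(keys, values, attn_scores, budget):
    """keys/values: [batch, heads, seq_len, head_dim];
    attn_scores: [batch, heads, seq_len], cumulative attention each
    cached token has received; budget: tokens to keep per head."""
    # Pick the `budget` highest-scoring tokens per batch and head,
    # then re-sort indices so tokens stay in positional order.
    keep = attn_scores.topk(budget, dim=-1).indices.sort(dim=-1).values
    idx = keep.unsqueeze(-1).expand(-1, -1, -1, keys.size(-1))
    return keys.gather(2, idx), values.gather(2, idx)

# Toy usage: compress a 128-token cache down to 32 tokens.
k = torch.randn(1, 8, 128, 64)
v = torch.randn(1, 8, 128, 64)
scores = torch.rand(1, 8, 128)
k_small, v_small = evict_kv_cache(k, v, scores, budget=32)
print(k_small.shape)  # torch.Size([1, 8, 32, 64])
```

Methods differ mainly in how `attn_scores` is computed and how the budget is allocated across layers and heads; the eviction step itself is largely shared.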
Alternatives and similar repositories for KVCache-Factory
Users interested in KVCache-Factory are comparing it to the repositories listed below.
- [NeurIPS 2025] R-KV: Redundancy-aware KV Cache Compression for Reasoning Models ⭐1,165 · Updated 2 months ago
- [ICLR 2025🔥] SVD-LLM & [NAACL 2025🔥] SVD-LLM V2 ⭐271 · Updated 4 months ago
- ⭐981 · Updated this week
- SDAR (Synergy of Diffusion and AutoRegression), a large diffusion language model (1.7B, 4B, 8B, 30B) ⭐315 · Updated 3 weeks ago
- [ICML 2025] "SepLLM: Accelerate Large Language Models by Compressing One Segment into One Separator" ⭐559 · Updated 5 months ago
- Adds Sequence Parallelism to LLaMA-Factory ⭐600 · Updated 2 months ago
- [NeurIPS'25 Spotlight] Mulberry, an o1-like Reasoning and Reflection MLLM Implemented via Collective MCTS ⭐1,235 · Updated 3 months ago
- An acceleration library that supports arbitrary bit-width combinatorial quantization operations ⭐238 · Updated last year
- Codebase for Iterative DPO Using Rule-based Rewards ⭐267 · Updated 9 months ago
- ✨ A synthetic dataset generation framework that produces diverse coding questions and verifiable solutions - all in one framework ⭐301 · Updated 4 months ago
- [NeurIPS 2024] Twin-Merging: Dynamic Integration of Modular Expertise in Model Merging ⭐138 · Updated 9 months ago
- ⭐330 · Updated 4 months ago
- [arXiv] Discrete Diffusion in Large Language and Multimodal Models: A Survey ⭐348 · Updated 2 months ago
- [NeurIPS 2025🔥] Main source code of the SRPO framework ⭐185 · Updated last month
- A simple, unified multimodal model training engine. Lean, flexible, and built for hacking at scale. ⭐693 · Updated this week
- A scalable, end-to-end training pipeline for general-purpose agents ⭐363 · Updated 6 months ago
- ⭐46 · Updated 9 months ago
- UCCL is an efficient communication library for GPUs, covering collectives, P2P (e.g., KV cache transfer, RL weight transfer), and EP (e.g… ⭐1,157 · Updated this week
- Awesome-Efficient-Inference-for-LRMs is a collection of state-of-the-art, novel, exciting, token-efficient methods for Large Reasoning Mo… ⭐234 · Updated 6 months ago
- Official implementation of the paper "d2Cache: Accelerating Diffusion-based LLMs via Dual Adaptive Caching" ⭐80 · Updated last week
- [ICLR 2025] BitStack: Any-Size Compression of Large Language Models in Variable Memory Environments ⭐39 · Updated 10 months ago
- [ICML 2025] Make LoRA Great Again: Boosting LoRA with Adaptive Singular Values and Mixture-of-Experts Optimization Alignment ⭐138 · Updated 2 months ago
- The official GitHub repository of the paper "Recent advances in large language model benchmarks against data contamination: From static t… ⭐73 · Updated 3 months ago
- ⭐127 · Updated 3 months ago
- Official repository of DARE: dLLM Alignment and Reinforcement Executor ⭐149 · Updated this week
- A highly optimized LLM inference acceleration engine for Llama and its variants ⭐906 · Updated 6 months ago
- [NeurIPS 2025] Official repository of RiOSWorld: Benchmarking the Risk of Multimodal Computer-Use Agents ⭐105 · Updated last month
- The official implementation of MARS: Unleashing the Power of Variance Reduction for Training Large Models ⭐715 · Updated 2 months ago
- Official code implementation of Context Cascade Compression: Exploring the Upper Limits of Text Compression ⭐273 · Updated last month
- Open-source code for the paper "Evaluating Memory in LLM Agents via Incremental Multi-Turn Interactions" ⭐196 · Updated last month