Zefan-Cai / KVCache-Factory
Unified KV Cache Compression Methods for Auto-Regressive Models
⭐ 1,301 · Updated last year
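To make the topic concrete, below is a minimal, generic sketch of score-based KV cache eviction, the basic idea behind many of the compression methods collected on this page. It is not KVCache-Factory's actual interface; the function and parameter names (`compress_kv`, `keep_budget`) are hypothetical, and the scoring rule (accumulated attention mass) is just one common choice.

```python
import torch

def compress_kv(keys: torch.Tensor,
                values: torch.Tensor,
                attn_scores: torch.Tensor,
                keep_budget: int):
    """Keep only the `keep_budget` cached positions with the highest
    accumulated attention mass; drop the rest.

    keys, values: [seq_len, num_heads, head_dim]
    attn_scores:  [seq_len] attention mass each cached token has received
    """
    seq_len = keys.shape[0]
    if seq_len <= keep_budget:
        return keys, values
    # Indices of the most-attended cached tokens, restored to original order
    # so positional structure is preserved.
    top = torch.topk(attn_scores, keep_budget).indices.sort().values
    return keys[top], values[top]

# Toy usage: a 16-token cache compressed down to 8 entries.
L, H, D = 16, 4, 64
k, v = torch.randn(L, H, D), torch.randn(L, H, D)
scores = torch.rand(L)
k_small, v_small = compress_kv(k, v, scores, keep_budget=8)
print(k_small.shape)  # torch.Size([8, 4, 64])
```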
Alternatives and similar repositories for KVCache-Factory
Users interested in KVCache-Factory are comparing it to the libraries listed below.
- [NeurIPS 2025] R-KV: Redundancy-aware KV Cache Compression for Reasoning Models ⭐ 1,171 · Updated 3 months ago
- [ICLR 2025🔥] SVD-LLM & [NAACL 2025🔥] SVD-LLM V2 ⭐ 277 · Updated 5 months ago
- ⭐ 1,115 · Updated 2 weeks ago
- SDAR (Synergy of Diffusion and AutoRegression), a large diffusion language model (1.7B, 4B, 8B, 30B) ⭐ 329 · Updated last month
- An acceleration library that supports arbitrary bit-width combinatorial quantization operations ⭐ 240 · Updated last year
- [ICML 2025] "SepLLM: Accelerate Large Language Models by Compressing One Segment into One Separator" ⭐ 561 · Updated 6 months ago
- Codebase for Iterative DPO Using Rule-based Rewards ⭐ 267 · Updated 9 months ago
- [NeurIPS 2024] Twin-Merging: Dynamic Integration of Modular Expertise in Model Merging ⭐ 140 · Updated 10 months ago
- [NeurIPS'25 Spotlight] Mulberry, an o1-like Reasoning and Reflection MLLM Implemented via Collective MCTS ⭐ 1,238 · Updated 2 weeks ago
- [arXiv] Discrete Diffusion in Large Language and Multimodal Models: A Survey ⭐ 359 · Updated 3 months ago
- Adds Sequence Parallelism to LLaMA-Factory ⭐ 603 · Updated 3 months ago
- ✨ A synthetic dataset generation framework that produces diverse coding questions and verifiable solutions - all in one framework ⭐ 310 · Updated 4 months ago
- A simple, unified multimodal model training engine. Lean, flexible, and built for hacking at scale. ⭐ 706 · Updated 2 weeks ago
- A scalable, end-to-end training pipeline for general-purpose agents ⭐ 365 · Updated 7 months ago
- ⭐ 333 · Updated 5 months ago
- [NeurIPS 2025🔥] Main source code of the SRPO framework. ⭐ 186 · Updated 2 months ago
- [ICML 2025] Make LoRA Great Again: Boosting LoRA with Adaptive Singular Values and Mixture-of-Experts Optimization Alignment ⭐ 139 · Updated 2 months ago
- [ICLR 2025] BitStack: Any-Size Compression of Large Language Models in Variable Memory Environments ⭐ 39 · Updated 11 months ago
- Official repository of DARE: dLLM Alignment and Reinforcement Executor ⭐ 159 · Updated this week
- Official code implementation of Context Cascade Compression: Exploring the Upper Limits of Text Compression ⭐ 283 · Updated last week
- [NeurIPS'25] KVCOMM: Online Cross-context KV-cache Communication for Efficient LLM-based Multi-agent Systems ⭐ 125 · Updated 3 months ago
- Awesome-Efficient-Inference-for-LRMs is a collection of state-of-the-art, novel, exciting, token-efficient methods for Large Reasoning Models… ⭐ 235 · Updated 7 months ago
- A highly optimized LLM inference acceleration engine for Llama and its variants. ⭐ 905 · Updated 6 months ago
- UCCL is an efficient communication library for GPUs, covering collectives, P2P (e.g., KV cache transfer, RL weight transfer), and EP (e.g.… ⭐ 1,195 · Updated this week
- [NeurIPS 2025] Official repository of RiOSWorld: Benchmarking the Risk of Multimodal Computer-Use Agents ⭐ 109 · Updated 2 months ago
- ⭐ 46 · Updated 10 months ago
- The official implementation of MARS: Unleashing the Power of Variance Reduction for Training Large Models ⭐ 714 · Updated this week
- ⭐ 128 · Updated 4 months ago
- Open-source code for the ICLR 2026 paper: Evaluating Memory in LLM Agents via Incremental Multi-Turn Interactions ⭐ 216 · Updated last week
- [NeurIPS'24] Training-Free Adaptive Diffusion with Bounded Difference Approximation Strategy ⭐ 73 · Updated last year