Zefan-Cai / KVCache-Factory
Unified KV Cache Compression Methods for Auto-Regressive Models
⭐ 1,288 · Updated 11 months ago
Alternatives and similar repositories for KVCache-Factory
Users who are interested in KVCache-Factory are comparing it to the libraries listed below.
- [NeurIPS 2025] R-KV: Redundancy-aware KV Cache Compression for Reasoning Models ⭐ 1,159 · Updated 2 months ago
- [ICLR 2025 🔥] SVD-LLM & [NAACL 2025 🔥] SVD-LLM V2 ⭐ 270 · Updated 3 months ago
- ⭐ 956 · Updated this week
- SDAR (Synergy of Diffusion and AutoRegression), a large diffusion language model (1.7B, 4B, 8B, 30B) ⭐ 306 · Updated this week
- An acceleration library that supports arbitrary bit-width combinatorial quantization operations ⭐ 238 · Updated last year
- [NeurIPS'25 Spotlight] Mulberry, an o1-like Reasoning and Reflection MLLM Implemented via Collective MCTS ⭐ 1,233 · Updated 3 months ago
- [ICML 2025] "SepLLM: Accelerate Large Language Models by Compressing One Segment into One Separator" ⭐ 556 · Updated 4 months ago
- Codebase for Iterative DPO Using Rule-based Rewards ⭐ 263 · Updated 8 months ago
- [arXiv] Discrete Diffusion in Large Language and Multimodal Models: A Survey ⭐ 343 · Updated last month
- Adds Sequence Parallelism into LLaMA-Factory ⭐ 599 · Updated 2 months ago
- UCCL is an efficient communication library for GPUs, covering collectives, P2P (e.g., KV cache transfer, RL weight transfer), and EP (e.g.… ⭐ 1,123 · Updated this week
- A simple, unified multimodal model training engine. Lean, flexible, and built for hacking at scale. ⭐ 678 · Updated last week
- [NeurIPS 2024] Twin-Merging: Dynamic Integration of Modular Expertise in Model Merging ⭐ 138 · Updated 9 months ago
- [NeurIPS 2025 🔥] Main source code of the SRPO framework ⭐ 183 · Updated 3 weeks ago
- ✨ A synthetic dataset generation framework that produces diverse coding questions and verifiable solutions, all in one framework ⭐ 297 · Updated 3 months ago
- ⭐ 332 · Updated 3 months ago
- Official implementation of the paper "d2Cache: Accelerating Diffusion-based LLMs via Dual Adaptive Caching" ⭐ 73 · Updated last week
- [ICML 2025] Make LoRA Great Again: Boosting LoRA with Adaptive Singular Values and Mixture-of-Experts Optimization Alignment ⭐ 133 · Updated last month
- A scalable, end-to-end training pipeline for general-purpose agents ⭐ 362 · Updated 5 months ago
- Awesome-Efficient-Inference-for-LRMs is a collection of state-of-the-art, novel, exciting, token-efficient methods for Large Reasoning Mo… ⭐ 232 · Updated 6 months ago
- ⭐ 127 · Updated 3 months ago
- A highly optimized LLM inference acceleration engine for Llama and its variants ⭐ 906 · Updated 5 months ago
- ⭐ 46 · Updated 8 months ago
- The official implementation of MARS: Unleashing the Power of Variance Reduction for Training Large Models ⭐ 712 · Updated last month
- [ICLR 2025] BitStack: Any-Size Compression of Large Language Models in Variable Memory Environments ⭐ 39 · Updated 10 months ago
- [COLM'25] DeepRetrieval – 🔥 Training a Search Agent via RLVR with Retrieval Outcomes ⭐ 684 · Updated 2 months ago
- [NeurIPS'24] Training-Free Adaptive Diffusion with Bounded Difference Approximation Strategy ⭐ 73 · Updated 10 months ago
- Tree Search for LLM Agent Reinforcement Learning ⭐ 256 · Updated 2 months ago
- The official implementation of Self-Play Preference Optimization (SPPO) ⭐ 583 · Updated 10 months ago
- Open-source code for the paper "Evaluating Memory in LLM Agents via Incremental Multi-Turn Interactions" ⭐ 183 · Updated 3 weeks ago