Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI.
☆4,880 · Mar 7, 2026 · Updated this week
Alternatives and similar repositories for Mooncake
Users interested in Mooncake are comparing it to the libraries listed below.
- FlashInfer: Kernel Library for LLM Serving ☆5,101 · Updated this week
- SGLang is a high-performance serving framework for large language models and multimodal models. ☆24,216 · Updated this week
- Disaggregated serving system for Large Language Models (LLMs). ☆778 · Apr 6, 2025 · Updated 11 months ago
- A Datacenter Scale Distributed Inference Serving Framework ☆6,193 · Updated this week
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalability… ☆3,931 · Updated this week
- 📚A curated list of Awesome LLM/VLM Inference Papers with Codes: Flash-Attention, Paged-Attention, WINT8/4, Parallelism, etc.🎉 ☆5,040 · Feb 27, 2026 · Updated last week
- A throughput-oriented high-performance serving framework for LLMs ☆947 · Oct 29, 2025 · Updated 4 months ago
- NVIDIA Inference Xfer Library (NIXL) ☆910 · Updated this week
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ☆1,264 · Aug 28, 2025 · Updated 6 months ago
- DeepEP: an efficient expert-parallel communication library ☆9,023 · Feb 9, 2026 · Updated last month
- TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizations… ☆12,993 · Updated this week
- Efficient and easy multi-instance LLM serving ☆528 · Sep 3, 2025 · Updated 6 months ago
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆1,059 · Updated this week
- FlashMLA: Efficient Multi-head Latent Attention Kernels ☆12,511 · Feb 6, 2026 · Updated last month
- KV cache store for distributed LLM inference ☆396 · Nov 13, 2025 · Updated 3 months ago
- My learning notes for ML SYS. ☆5,580 · Mar 2, 2026 · Updated last week
- Supercharge Your LLM with the Fastest KV Cache Layer ☆7,272 · Updated this week
- Distributed Compiler based on Triton for Parallel Systems ☆1,380 · Feb 13, 2026 · Updated 3 weeks ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆71,883 · Updated this week
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ☆7,645 · Mar 3, 2026 · Updated last week
- Ongoing research training transformer models at scale ☆15,535 · Updated this week
- Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels ☆5,330 · Updated this week
- Fast and memory-efficient exact attention ☆22,460 · Updated this week
- Materials for learning SGLang ☆766 · Jan 5, 2026 · Updated 2 months ago
- Production-tested AI infrastructure tools for efficient AGI development and community-driven innovation ☆7,969 · May 15, 2025 · Updated 9 months ago
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆6,230 · Feb 27, 2026 · Updated last week
- Transformer-related optimizations, including BERT and GPT ☆6,398 · Mar 27, 2024 · Updated last year
- A low-latency & high-throughput serving engine for LLMs ☆482 · Jan 8, 2026 · Updated 2 months ago
- A Flexible Framework for Experiencing Heterogeneous LLM Inference/Fine-tune Optimizations ☆16,716 · Updated this week
- verl: Volcano Engine Reinforcement Learning for LLMs ☆19,739 · Updated this week
- Development repository for the Triton language and compiler ☆18,573 · Updated this week
- 📚LeetCUDA: Modern CUDA Learn Notes with PyTorch for Beginners🐑, 200+ CUDA Kernels, Tensor Cores, HGEMM, FA-2 MMA.🎉 ☆9,815 · Feb 25, 2026 · Updated last week
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆465 · May 30, 2025 · Updated 9 months ago
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLMs' inference, approximate and dynamic sparse calculation of the attention… ☆1,191 · Sep 30, 2025 · Updated 5 months ago
- A high-performance distributed file system designed to address the challenges of AI training and inference workloads. ☆9,749 · Updated this week
- CUDA Templates and Python DSLs for High-Performance Linear Algebra ☆9,348 · Mar 2, 2026 · Updated last week
- How to optimize some algorithms in CUDA. ☆2,841 · Feb 28, 2026 · Updated last week
- xDiT: A Scalable Inference Engine for Diffusion Transformers (DiTs) with Massive Parallelism ☆2,560 · Updated this week
- An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO & REINFORCE++ & TIS & vLLM & Ray & Async RL) ☆9,084 · Updated this week