Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI.
☆4,953 · Mar 20, 2026 · Updated this week
Alternatives and similar repositories for Mooncake
Users that are interested in Mooncake are comparing it to the libraries listed below.
- FlashInfer: Kernel Library for LLM Serving ☆5,194 · Updated this week
- SGLang is a high-performance serving framework for large language models and multimodal models. ☆24,829 · Updated this week
- Disaggregated serving system for Large Language Models (LLMs). ☆785 · Apr 6, 2025 · Updated 11 months ago
- A Datacenter Scale Distributed Inference Serving Framework ☆6,347 · Updated this week
- A throughput-oriented high-performance serving framework for LLMs ☆949 · Oct 29, 2025 · Updated 4 months ago
- 📚A curated list of Awesome LLM/VLM Inference Papers with Codes: Flash-Attention, Paged-Attention, WINT8/4, Parallelism, etc.🎉 ☆5,062 · Updated this week
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalability, and high-speed performance. ☆3,958 · Updated this week
- NVIDIA Inference Xfer Library (NIXL) ☆945 · Updated this week
- Efficient and easy multi-instance LLM serving ☆532 · Mar 12, 2026 · Updated last week
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ☆1,273 · Aug 28, 2025 · Updated 6 months ago
- KV cache store for distributed LLM inference ☆399 · Nov 13, 2025 · Updated 4 months ago
- DeepEP: an efficient expert-parallel communication library ☆9,053 · Feb 9, 2026 · Updated last month
- My learning notes for ML SYS. ☆5,737 · Updated this week
- TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizations for efficient inference on NVIDIA GPUs. ☆13,120 · Updated this week
- Supercharge Your LLM with the Fastest KV Cache Layer ☆7,693 · Mar 16, 2026 · Updated last week
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆1,070 · Updated this week
- FlashMLA: Efficient Multi-head Latent Attention Kernels ☆12,521 · Feb 6, 2026 · Updated last month
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆73,479 · Updated this week
- Distributed Compiler based on Triton for Parallel Systems ☆1,394 · Mar 11, 2026 · Updated last week
- Materials for learning SGLang ☆775 · Jan 5, 2026 · Updated 2 months ago
- Ongoing research training transformer models at scale ☆15,744 · Updated this week
- Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels ☆5,403 · Updated this week
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ☆7,694 · Mar 13, 2026 · Updated last week
- A low-latency & high-throughput serving engine for LLMs ☆484 · Jan 8, 2026 · Updated 2 months ago
- Production-tested AI infrastructure tools for efficient AGI development and community-driven innovation ☆7,972 · May 15, 2025 · Updated 10 months ago
- Fast and memory-efficient exact attention ☆22,832 · Updated this week
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆466 · May 30, 2025 · Updated 9 months ago
- 📚LeetCUDA: Modern CUDA Learn Notes with PyTorch for Beginners🐑, 200+ CUDA Kernels, Tensor Cores, HGEMM, FA-2 MMA.🎉 ☆9,932 · Updated this week
- Transformer-related optimization, including BERT and GPT ☆6,397 · Mar 27, 2024 · Updated last year
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆6,268 · Feb 27, 2026 · Updated 3 weeks ago
- Development repository for the Triton language and compiler ☆18,708 · Updated this week
- xDiT: A Scalable Inference Engine for Diffusion Transformers (DiTs) with Massive Parallelism ☆2,572 · Updated this week
- A Flexible Framework for Experiencing Heterogeneous LLM Inference/Fine-tune Optimizations ☆16,804 · Updated this week
- A high-performance distributed file system designed to address the challenges of AI training and inference workloads. ☆9,770 · Mar 9, 2026 · Updated 2 weeks ago
- verl: Volcano Engine Reinforcement Learning for LLMs ☆20,097 · Updated this week
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] Speeds up long-context LLM inference by computing attention with approximate, dynamic sparsity ☆1,198 · Mar 9, 2026 · Updated 2 weeks ago
- CUDA Templates and Python DSLs for High-Performance Linear Algebra ☆9,442 · Updated this week
- How to optimize various algorithms in CUDA. ☆2,872 · Updated this week
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long-Context Transformer Model Training and Inference ☆649 · Jan 15, 2026 · Updated 2 months ago