kvcache-ai / Mooncake
Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI.
☆1,110 · Updated 3 months ago
Related projects
Alternatives and complementary repositories for Mooncake
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆541 · Updated 3 weeks ago
- LLM Inference benchmark ☆349 · Updated 3 months ago
- Chinese version of llm-numbers ☆105 · Updated 10 months ago
- A PyTorch Native LLM Training Framework ☆661 · Updated 2 months ago
- FlashInfer: Kernel Library for LLM Serving ☆1,395 · Updated this week
- The official repo of Pai-Megatron-Patch for LLM & VLM large scale training developed by Alibaba Cloud. ☆711 · Updated last week
- Best practice for training LLaMA models in Megatron-LM ☆627 · Updated 10 months ago
- GLake: optimizing GPU memory management and IO transmission. ☆375 · Updated 3 months ago
- Disaggregated serving system for Large Language Models (LLMs). ☆348 · Updated 2 months ago
- FlagScale is a large model toolkit based on open-sourced projects. ☆167 · Updated this week
- Efficient AI Inference & Serving ☆456 · Updated 10 months ago
- [EMNLP 2024 Industry Track] The official PyTorch implementation of "LLMC: Benchmarking Large Language Model Quantization with a V… ☆315 · Updated this week
- Fast inference from large language models via speculative decoding ☆562 · Updated 2 months ago
- Inferflow is an efficient and highly configurable inference engine for large language models (LLMs). ☆236 · Updated 7 months ago
- veRL: Volcano Engine Reinforcement Learning for LLM ☆279 · Updated this week
- [NeurIPS'24 Spotlight] To speed up long-context LLM inference, computes attention with approximate, dynamic sparsity, which reduces in… ☆776 · Updated this week
- Chinese Mixtral-8x7B (Chinese-Mixtral-8x7B) ☆641 · Updated 2 months ago
- A throughput-oriented high-performance serving framework for LLMs ☆629 · Updated last month
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference ☆352 · Updated last week
- Optimized BERT transformer inference on NVIDIA GPU. https://arxiv.org/abs/2210.03052 ☆457 · Updated 7 months ago
- Analyze the inference of Large Language Models (LLMs). Analyze aspects like computation, storage, transmission, and hardware roofline mod… ☆310 · Updated last month
- A Flexible Framework for Experiencing Cutting-edge LLM Inference Optimizations ☆730 · Updated this week
- A streamlined and customizable framework for efficient large model evaluation and performance benchmarking ☆231 · Updated this week
- xDiT: A Scalable Inference Engine for Diffusion Transformers (DiTs) with Massive Parallelism ☆668 · Updated this week
- The road to hack SysML and become a systems expert ☆432 · Updated last month
- DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models ☆1,000 · Updated 9 months ago
- DLRover: An Automatic Distributed Deep Learning System ☆1,262 · Updated this week