High-performance inference framework for large language models, focusing on efficiency, flexibility, and availability.
☆3,334 · Feb 28, 2026 · Updated last week
Alternatives and similar repositories for chitu
Users interested in chitu are comparing it to the libraries listed below.
- A Flexible Framework for Experiencing Heterogeneous LLM Inference/Fine-tuning Optimizations ☆16,649 · Updated this week
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. ☆4,843 · Updated this week
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆1,059 · Updated this week
- FlashInfer: Kernel Library for LLM Serving ☆5,057 · Updated this week
- SGLang is a high-performance serving framework for large language models and multimodal models (see the first sketch after this list). ☆23,905 · Updated this week
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs (see the second sketch after this list). ☆7,645 · Updated this week
- ☆527 · Feb 10, 2026 · Updated 3 weeks ago
- A throughput-oriented high-performance serving framework for LLMs ☆947 · Oct 29, 2025 · Updated 4 months ago
- A Datacenter Scale Distributed Inference Serving Framework ☆6,154 · Feb 28, 2026 · Updated last week
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆1,025 · Sep 4, 2024 · Updated last year
- FlashMLA: Efficient Multi-head Latent Attention Kernels ☆12,505 · Feb 6, 2026 · Updated last month
- DeeperGEMM: crazy optimized version ☆74 · May 5, 2025 · Updated 10 months ago
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ☆1,264 · Aug 28, 2025 · Updated 6 months ago
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalabili… ☆3,919 · Feb 28, 2026 · Updated last week
- Disaggregated serving system for Large Language Models (LLMs). ☆778 · Apr 6, 2025 · Updated 11 months ago
- DeepEP: an efficient expert-parallel communication library ☆9,005 · Feb 9, 2026 · Updated 3 weeks ago
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆6,206 · Feb 27, 2026 · Updated last week
- KV cache store for distributed LLM inference ☆396 · Nov 13, 2025 · Updated 3 months ago
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆123 · Dec 25, 2025 · Updated 2 months ago
- A lightweight design for computation-communication overlap. ☆223 · Jan 20, 2026 · Updated last month
- A bidirectional pipeline parallelism algorithm for computation-communication overlap in DeepSeek V3/R1 training. ☆2,926 · Jan 14, 2026 · Updated last month
- Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels ☆5,284 · Feb 28, 2026 · Updated last week
- Distributed Compiler based on Triton for Parallel Systems ☆1,371 · Feb 13, 2026 · Updated 3 weeks ago
- Community-maintained hardware plugin for vLLM on Ascend ☆1,711 · Feb 28, 2026 · Updated last week
- Swap GPT for any LLM by changing a single line of code. Xinference lets you run open-source, speech, and multimodal models on cloud, on-p… ☆9,089 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs (see the third sketch after this list) ☆71,883 · Updated this week
- 🤖FFPA: Extend FlashAttention-2 with Split-D, ~O(1) SRAM complexity for large headdim, 1.8x~3x↑🎉 vs SDPA EA. ☆255 · Feb 13, 2026 · Updated 3 weeks ago
- TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizat… ☆12,993 · Updated this week
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆273 · Aug 6, 2025 · Updated 7 months ago
- Supercharge Your LLM with the Fastest KV Cache Layer ☆7,272 · Updated this week
- Analyze computation-communication overlap in V3/R1. ☆1,143 · Mar 21, 2025 · Updated 11 months ago
- Production-tested AI infrastructure tools for efficient AGI development and community-driven innovation ☆7,970 · May 15, 2025 · Updated 9 months ago
- Lightning-Fast RL for LLM Reasoning and Agents. Made Simple & Flexible. ☆3,586 · Feb 28, 2026 · Updated last week
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] Speeds up long-context LLM inference via approximate, dynamic sparse attention calculation… ☆1,190 · Sep 30, 2025 · Updated 5 months ago
- Performance-optimized AI inference on your GPUs. Unlock superior throughput by selecting and tuning engines like vLLM or SGLang. ☆4,573 · Updated this week
- Expert Parallelism Load Balancer ☆1,351 · Mar 24, 2025 · Updated 11 months ago
- 📚A curated list of Awesome LLM/VLM Inference Papers with Codes: Flash-Attention, Paged-Attention, WINT8/4, Parallelism, etc.🎉 ☆5,040 · Feb 27, 2026 · Updated last week
- GLake: optimizing GPU memory management and IO transmission. ☆498 · Mar 24, 2025 · Updated 11 months ago
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs to achieve peak⚡️ performance. ☆150 · May 10, 2025 · Updated 9 months ago
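
For readers comparing programmer-facing APIs, below are minimal usage sketches for three of the serving engines above. First, SGLang's documented frontend DSL: structured generation is written as a decorated Python function and run against a running SGLang server. The model path, server port, and prompt here are illustrative assumptions, not details from this list.

```python
import sglang as sgl

# Assumes an SGLang server is already running, e.g. launched with:
#   python -m sglang.launch_server --model-path meta-llama/Llama-3.1-8B-Instruct
# The port below is the documented default, but it is an assumption here.
sgl.set_default_backend(sgl.RuntimeEndpoint("http://localhost:30000"))

@sgl.function
def qa(s, question):
    # Append the prompt, then ask the runtime to generate a named span.
    s += "Q: " + question + "\n"
    s += "A: " + sgl.gen("answer", max_tokens=64)

state = qa.run(question="What is a KV cache?")
print(state["answer"])
```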
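Second, a sketch of LMDeploy's `pipeline` entry point for offline batched inference; the model id is an illustrative assumption, and any model LMDeploy supports can be substituted.

```python
from lmdeploy import pipeline

# Model id is illustrative; LMDeploy resolves it from the Hugging Face Hub.
pipe = pipeline("internlm/internlm2-chat-7b")

# Batched inference: one Response object comes back per prompt.
responses = pipe(["Hi, please introduce yourself.", "What is paged attention?"])
for r in responses:
    print(r.text)
```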
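Third, vLLM's offline `LLM` API, following its quickstart; the model id and sampling settings are illustrative.

```python
from vllm import LLM, SamplingParams

# Small model id for illustration; vLLM downloads weights from the Hugging Face Hub.
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

outputs = llm.generate(["The capital of France is"], params)
for out in outputs:
    # Each RequestOutput holds one or more completions; print the first.
    print(out.outputs[0].text)
```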