KV cache store for distributed LLM inference
☆392 · Updated Nov 13, 2025
Alternatives and similar repositories for InfiniStore
Users interested in InfiniStore are comparing it to the libraries listed below.
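The repositories below cluster around one idea: decoupling the KV cache from a single GPU process so that prefill and decode workers can share it. As a purely illustrative sketch (not InfiniStore's actual API; every name here is hypothetical), the core interface of such a store reduces to content-addressed put/get of serialized KV blocks:

```python
# Minimal sketch, NOT InfiniStore's real API: illustrates the content-addressed
# put/get pattern that distributed KV cache stores build on. All names are
# hypothetical.
import hashlib
from typing import Optional


def block_key(model: str, token_ids: tuple[int, ...]) -> str:
    """Content-addressed key for one KV block, derived from its token prefix."""
    digest = hashlib.sha256(f"{model}:{token_ids}".encode()).hexdigest()
    return f"kv/{model}/{digest[:16]}"


class ToyKVCacheStore:
    """In-process stand-in for a remote KV cache service.

    A real store would move serialized KV tensors over RDMA or NVLink into a
    shared memory pool; here blocks are just bytes in a dict.
    """

    def __init__(self) -> None:
        self._blocks: dict[str, bytes] = {}

    def put(self, key: str, kv_block: bytes) -> None:
        self._blocks[key] = kv_block

    def get(self, key: str) -> Optional[bytes]:
        # A hit lets a decode worker reuse another worker's prefill output
        # instead of recomputing attention over the shared prompt prefix.
        return self._blocks.get(key)


if __name__ == "__main__":
    store = ToyKVCacheStore()
    key = block_key("llama-3-8b", (1, 42, 7, 99))
    store.put(key, b"\x00" * 4096)  # pretend this is a serialized KV block
    assert store.get(key) is not None
```

Keying blocks by a hash of the token prefix is what makes cross-instance reuse possible: any worker that sees the same prompt prefix derives the same key, regardless of which worker originally produced the block.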
- Efficient and easy multi-instance LLM serving ☆527 · Updated Sep 3, 2025
- NVIDIA Inference Xfer Library (NIXL) ☆898 · Updated this week
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. ☆4,795 · Updated Feb 22, 2026
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ☆1,261 · Updated Aug 28, 2025
- A Datacenter Scale Distributed Inference Serving Framework ☆6,154 · Updated this week
- A throughput-oriented high-performance serving framework for LLMs ☆946 · Updated Oct 29, 2025
- Disaggregated serving system for Large Language Models (LLMs). ☆777 · Updated Apr 6, 2025
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆463 · Updated May 30, 2025
- Perplexity GPU Kernels ☆564 · Updated Nov 7, 2025
- High performance Transformer implementation in C++. ☆152 · Updated Jan 18, 2025
- Supercharge Your LLM with the Fastest KV Cache Layer ☆6,923 · Updated this week
- High Performance KV Cache Store for LLM ☆47 · Updated this week
- A lightweight design for computation-communication overlap. ☆221 · Updated Jan 20, 2026
- DeepSeek-V3/R1 inference performance simulator ☆179 · Updated Mar 27, 2025
- Distributed Compiler based on Triton for Parallel Systems ☆1,361 · Updated Feb 13, 2026
- FlashInfer: Kernel Library for LLM Serving ☆5,009 · Updated this week
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆469 · Updated Feb 21, 2026
- GLake: optimizing GPU memory management and IO transmission. ☆498 · Updated Mar 24, 2025
- UCCL is an efficient communication library for GPUs, covering collectives, P2P (e.g., KV cache transfer, RL weight transfer), and EP (e.g… ☆1,224 · Updated this week
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆123 · Updated Dec 25, 2025
- ☆52 · Updated May 19, 2025
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆1,051 · Updated this week
- A benchmarking tool for comparing different LLM API providers' DeepSeek model deployments. ☆30 · Updated Mar 28, 2025
- Open Model Engine (OME): Kubernetes operator for LLM serving, GPU scheduling, and model lifecycle management. Works with SGLang, vLLM, T… ☆380 · Updated this week
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference ☆283 · Updated May 1, 2025
- ☆104 · Updated Sep 9, 2024
- Materials for learning SGLang ☆753 · Updated Jan 5, 2026
- A low-latency & high-throughput serving engine for LLMs ☆480 · Updated Jan 8, 2026
- A large-scale simulation framework for LLM inference ☆539 · Updated Jul 25, 2025
- ☆31 · Updated Apr 19, 2025
- Virtualized Elastic KV Cache for Dynamic GPU Sharing and Beyond ☆796 · Updated this week
- Cost-efficient and pluggable Infrastructure components for GenAI inference ☆4,650 · Updated this week
- ☆131 · Updated Nov 11, 2024
- ☆34 · Updated Feb 3, 2025
- ☆526 · Updated Feb 10, 2026
- ☆26 · Updated Feb 17, 2025
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆129 · Updated Jun 24, 2025
- DeeperGEMM: crazy optimized version ☆74 · Updated May 5, 2025
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆271 · Updated Feb 20, 2026