InfiniStore: KV cache store for distributed LLM inference
☆399 · updated Nov 13, 2025
Alternatives and similar repositories for InfiniStore
Users interested in InfiniStore are comparing it to the libraries listed below.
- Efficient and easy multi-instance LLM serving (☆532 · updated Mar 12, 2026)
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. (☆4,953 · updated this week)
- NVIDIA Inference Xfer Library (NIXL) (☆945 · updated this week)
- A Datacenter Scale Distributed Inference Serving Framework (☆6,347 · updated this week)
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. (☆1,273 · updated Aug 28, 2025)
- Disaggregated serving system for Large Language Models (LLMs). (☆785 · updated Apr 6, 2025)
- A throughput-oriented high-performance serving framework for LLMs (☆949 · updated Oct 29, 2025)
- Dynamic Memory Management for Serving LLMs without PagedAttention (☆466 · updated May 30, 2025)
- Supercharge Your LLM with the Fastest KV Cache Layer (☆7,693 · updated this week)
- High-performance Transformer implementation in C++. (☆153 · updated Jan 18, 2025)
- Perplexity GPU Kernels (☆564 · updated Nov 7, 2025)
- High Performance KV Cache Store for LLM (☆51 · updated this week)
- A lightweight design for computation-communication overlap. (☆225 · updated Jan 20, 2026)
- FlashInfer: Kernel Library for LLM Serving (☆5,145 · updated this week)
- Distributed Compiler based on Triton for Parallel Systems (☆1,386 · updated Mar 11, 2026)
- GLake: optimizing GPU memory management and IO transmission. (☆498 · updated Mar 24, 2025)
- DeepSeek-V3/R1 inference performance simulator (☆189 · updated Mar 27, 2025)
- ☆52 · updated May 19, 2025
- MSCCL++: A GPU-driven communication stack for scalable AI applications (☆490 · updated this week)
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. (☆1,070 · updated this week)
- ☆530 · updated Feb 10, 2026
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. (☆123 · updated Dec 25, 2025)
- Materials for learning SGLang (☆775 · updated Jan 5, 2026)
- High-performance RMSNorm implementation using SM core storage (registers and shared memory) (☆29 · updated Jan 22, 2026)
- A low-latency & high-throughput serving engine for LLMs (☆484 · updated Jan 8, 2026)
- A large-scale simulation framework for LLM inference (☆556 · updated Jul 25, 2025)
- ☆131 · updated Nov 11, 2024
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference (☆286 · updated May 1, 2025)
- A benchmarking tool for comparing different LLM API providers' DeepSeek model deployments. (☆30 · updated Mar 28, 2025)
- UCCL is an efficient communication library for GPUs, covering collectives, P2P (e.g., KV cache transfer, RL weight transfer), and EP (e.g… (☆1,240 · updated this week)
- Cost-efficient and pluggable infrastructure components for GenAI inference (☆4,682 · updated this week)
- Open Model Engine (OME) — Kubernetes operator for LLM serving, GPU scheduling, and model lifecycle management. Works with SGLang, vLLM, T… (☆397 · updated this week)
- NVIDIA cuTile learn (☆164 · updated Dec 9, 2025)
- ☆105 · updated Sep 9, 2024
- Analyzes the inference of Large Language Models (LLMs), covering computation, storage, transmission, and hardware roofline mod… (☆628 · updated Sep 11, 2024)
- Fast and memory-efficient exact attention (☆20 · updated Mar 13, 2026)
- Virtualized Elastic KV Cache for Dynamic GPU Sharing and Beyond (☆813 · updated this week)
- Perplexity open source garden for inference technology (☆376 · updated Dec 25, 2025)
- DeeperGEMM: crazy optimized version (☆75 · updated May 5, 2025)