KV cache store for distributed LLM inference
☆ 410 · Nov 13, 2025 · Updated 5 months ago
Alternatives and similar repositories for InfiniStore
Users interested in InfiniStore are comparing it to the libraries listed below.
- Efficient and easy multi-instance LLM serving ☆ 547 · Mar 12, 2026 · Updated last month
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. ☆ 5,186 · Updated this week
- NVIDIA Inference Xfer Library (NIXL) ☆ 1,011 · Updated this week
- A Datacenter Scale Distributed Inference Serving Framework ☆ 6,701 · Updated this week
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ☆ 1,295 · Aug 28, 2025 · Updated 8 months ago
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆ 480 · May 30, 2025 · Updated 11 months ago
- Disaggregated serving system for Large Language Models (LLMs). ☆ 804 · Apr 6, 2025 · Updated last year
- A throughput-oriented high-performance serving framework for LLMs ☆ 954 · Mar 29, 2026 · Updated last month
- Supercharge Your LLM with the Fastest KV Cache Layer ☆ 8,132 · Updated this week
- High performance Transformer implementation in C++. ☆ 154 · Jan 18, 2025 · Updated last year
- Perplexity GPU Kernels ☆ 569 · Nov 7, 2025 · Updated 5 months ago
- High Performance KV Cache Store for LLM ☆ 53 · Apr 6, 2026 · Updated 3 weeks ago
- A lightweight design for computation-communication overlap. ☆ 227 · Jan 20, 2026 · Updated 3 months ago
- FlashInfer: Kernel Library for LLM Serving ☆ 5,498 · Updated this week
- Distributed Compiler based on Triton for Parallel Systems ☆ 1,414 · Apr 22, 2026 · Updated last week
- GLake: optimizing GPU memory management and IO transmission. ☆ 502 · Mar 24, 2025 · Updated last year
- DeepSeek-V3/R1 inference performance simulator ☆ 195 · Mar 27, 2025 · Updated last year
- ☆ 52 · May 19, 2025 · Updated 11 months ago
- High-performance KV cache storage for LLM inference — GPU offloading, SSD caching, and cross-node sharing via RDMA. Works with vLLM and S… ☆ 46 · Updated this week
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆ 505 · Apr 24, 2026 · Updated last week
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆ 1,106 · Updated this week
- ☆ 539 · Apr 1, 2026 · Updated 3 weeks ago
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆ 125 · Dec 25, 2025 · Updated 4 months ago
- High-performance RMSNorm implementation using SM core storage (registers and shared memory) ☆ 30 · Jan 22, 2026 · Updated 3 months ago
- Materials for learning SGLang ☆ 806 · Jan 5, 2026 · Updated 3 months ago
- A low-latency & high-throughput serving engine for LLMs ☆ 496 · Jan 8, 2026 · Updated 3 months ago
- Accurate, large-scale, and extensible simulator for LLM inference systems ☆ 595 · Jul 25, 2025 · Updated 9 months ago
- A benchmarking tool for comparing different LLM API providers' DeepSeek model deployments. ☆ 31 · Mar 28, 2025 · Updated last year
- UCCL is an efficient communication library for GPUs, covering collectives, P2P (e.g., KV cache transfer, RL weight transfer), and EP (e.g… ☆ 1,338 · Updated this week
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference ☆ 296 · May 1, 2025 · Updated 11 months ago
- ☆ 132 · Nov 11, 2024 · Updated last year
- Virtualized Elastic KV Cache for Dynamic GPU Sharing and Beyond ☆ 900 · Updated this week
- Cost-efficient and pluggable Infrastructure components for GenAI inference ☆ 4,765 · Updated this week
- NVIDIA cuTile learn ☆ 167 · Dec 9, 2025 · Updated 4 months ago
- Open Model Engine (OME) — Kubernetes operator for LLM serving, GPU scheduling, and model lifecycle management. Works with SGLang, vLLM, T… ☆ 435 · Updated this week
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆ 108 · Jun 28, 2025 · Updated 10 months ago
- ☆ 105 · Sep 9, 2024 · Updated last year
- Fast and memory-efficient exact attention ☆ 21 · Apr 10, 2026 · Updated 2 weeks ago
- Analyze the inference of Large Language Models (LLMs). Analyze aspects like computation, storage, transmission, and hardware roofline mod… ☆ 636 · Sep 11, 2024 · Updated last year