bytedance / InfiniStore
KV cache store for distributed LLM inference
☆378 · Updated last month
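For context, a KV cache store in disaggregated LLM serving typically lets a prefill instance publish serialized attention key/value blocks keyed by a token-prefix hash, so a decode instance (or a later request with the same prefix) can fetch and reuse them instead of recomputing. The sketch below is a purely illustrative, in-process toy of that put/get pattern; all names (`KVCacheStore`, `put_kv`, `get_kv`, `prefix_key`) are hypothetical and do not reflect InfiniStore's actual API.

```python
# Conceptual sketch only: a toy in-process KV cache store illustrating the
# put/get-by-prefix-hash pattern used in disaggregated LLM serving.
# All names here are hypothetical and are NOT InfiniStore's real API.
import hashlib
from typing import Dict, List, Optional


def prefix_key(token_ids: List[int]) -> str:
    """Derive a cache key from a token prefix."""
    return hashlib.sha256(str(token_ids).encode("utf-8")).hexdigest()


class KVCacheStore:
    def __init__(self) -> None:
        # Real systems would back this with RDMA-reachable remote memory
        # or NVMe; a dict is enough to show the interface.
        self._blocks: Dict[str, bytes] = {}

    def put_kv(self, token_ids: List[int], kv_block: bytes) -> None:
        """Prefill side: publish serialized KV tensors for a token prefix."""
        self._blocks[prefix_key(token_ids)] = kv_block

    def get_kv(self, token_ids: List[int]) -> Optional[bytes]:
        """Decode side: fetch the KV block if another instance produced it."""
        return self._blocks.get(prefix_key(token_ids))


if __name__ == "__main__":
    store = KVCacheStore()
    prompt = [101, 2023, 2003, 1037, 3231]
    store.put_kv(prompt, b"serialized-kv-tensors")  # prefill worker publishes
    print(store.get_kv(prompt) is not None)         # decode worker hits -> True
```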
Alternatives and similar repositories for InfiniStore
Users interested in InfiniStore are comparing it to the libraries listed below.
- Efficient and easy multi-instance LLM serving ☆519 · Updated 4 months ago
- NVIDIA Inference Xfer Library (NIXL) ☆788 · Updated this week
- ☆337 · Updated this week
- Virtualized Elastic KV Cache for Dynamic GPU Sharing and Beyond ☆735 · Updated last month
- Fast OS-level support for GPU checkpoint and restore ☆266 · Updated 3 months ago
- Perplexity GPU Kernels ☆548 · Updated 2 months ago
- GLake: optimizing GPU memory management and IO transmission. ☆494 · Updated 9 months ago
- High performance Transformer implementation in C++. ☆147 · Updated 11 months ago
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆123 · Updated last week
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆449 · Updated this week
- Venus Collective Communication Library, supported by SII and Infrawaves. ☆129 · Updated last week
- Offline optimization of your disaggregated Dynamo graph ☆137 · Updated this week
- torchcomms: a modern PyTorch communications API ☆315 · Updated this week
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆250 · Updated 3 weeks ago
- Open Model Engine (OME): Kubernetes operator for LLM serving, GPU scheduling, and model lifecycle management. Works with SGLang, vLLM, T… ☆352 · Updated this week
- NVIDIA NVSHMEM is a parallel programming interface for NVIDIA GPUs based on OpenSHMEM. NVSHMEM can significantly reduce multi-process com… ☆433 · Updated this week
- DeepXTrace is a lightweight tool for precisely diagnosing slow ranks in DeepEP-based environments. ☆83 · Updated 2 weeks ago
- DeepSeek-V3/R1 inference performance simulator ☆175 · Updated 9 months ago
- Materials for learning SGLang ☆709 · Updated 3 weeks ago
- A low-latency & high-throughput serving engine for LLMs ☆462 · Updated 2 months ago
- The driver for LMCache core to run in vLLM ☆59 · Updated 11 months ago
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆453 · Updated 7 months ago
- Disaggregated serving system for Large Language Models (LLMs). ☆758 · Updated 9 months ago
- Stateful LLM Serving ☆92 · Updated 9 months ago
- ☆73 · Updated last year
- A lightweight design for computation-communication overlap. ☆207 · Updated last week
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving. ☆597 · Updated last week
- Perplexity open source garden for inference technology ☆321 · Updated last week
- NCCL Profiling Kit ☆150 · Updated last year
- FlagCX is a scalable and adaptive cross-chip communication library. ☆138 · Updated this week