bytedance / InfiniStore
KV cache store for distributed LLM inference
☆276 · Updated 3 weeks ago
Alternatives and similar repositories for InfiniStore
Users interested in InfiniStore are comparing it to the libraries listed below.
- Efficient and easy multi-instance LLM serving ☆440 · Updated this week
- NVIDIA Inference Xfer Library (NIXL) ☆422 · Updated this week
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆87 · Updated last month
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆397 · Updated 3 weeks ago
- Perplexity GPU Kernels ☆375 · Updated 2 weeks ago
- Fast OS-level support for GPU checkpoint and restore ☆199 · Updated last week
- High performance Transformer implementation in C++. ☆125 · Updated 5 months ago
- A low-latency & high-throughput serving engine for LLMs ☆380 · Updated 3 weeks ago
- DeepSeek-V3/R1 inference performance simulator ☆149 · Updated 3 months ago
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆379 · Updated this week
- Ultra and Unified CCL ☆165 · Updated this week
- Materials for learning SGLang ☆457 · Updated this week
- GLake: optimizing GPU memory management and IO transmission. ☆467 · Updated 3 months ago
- Disaggregated serving system for Large Language Models (LLMs). ☆629 · Updated 2 months ago
- ☆26 · Updated 3 months ago
- The driver for LMCache core to run in vLLM ☆42 · Updated 4 months ago
- A lightweight design for computation-communication overlap. ☆143 · Updated last week
- Fast and memory-efficient exact attention ☆76 · Updated this week
- Zero Bubble Pipeline Parallelism ☆399 · Updated last month
- A tiny yet powerful LLM inference system tailored for researching purpose. vLLM-equivalent performance with only 2k lines of code (2% of … ☆224 · Updated 2 weeks ago
- NCCL Profiling Kit ☆138 · Updated 11 months ago
- PyTorch distributed training acceleration framework ☆49 · Updated 4 months ago
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable ☆164 · Updated 9 months ago
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ☆986 · Updated 3 weeks ago
- ☆73 · Updated 2 months ago
- A large-scale simulation framework for LLM inference ☆387 · Updated 7 months ago
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆112 · Updated this week
- Distributed Compiler Based on Triton for Parallel Systems ☆846 · Updated last week
- Stateful LLM Serving ☆73 · Updated 3 months ago
- ByteCheckpoint: An Unified Checkpointing Library for LFMs ☆219 · Updated 2 months ago