bytedance / InfiniStore
KV cache store for distributed LLM inference
☆254 · Updated last week
Alternatives and similar repositories for InfiniStore
Users interested in InfiniStore are comparing it to the libraries listed below.
- Efficient and easy multi-instance LLM serving ☆423 · Updated this week
- NVIDIA Inference Xfer Library (NIXL) ☆387 · Updated this week
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆81 · Updated 3 weeks ago
- Perplexity GPU Kernels ☆331 · Updated this week
- A low-latency & high-throughput serving engine for LLMs ☆374 · Updated last week
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆389 · Updated last week
- Fast OS-level support for GPU checkpoint and restore ☆197 · Updated this week
- High-performance Transformer implementation in C++. ☆124 · Updated 4 months ago
- ☆25 · Updated 3 months ago
- DeepSeek-V3/R1 inference performance simulator ☆135 · Updated 2 months ago
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆374 · Updated this week
- Disaggregated serving system for Large Language Models (LLMs). ☆605 · Updated 2 months ago
- The driver for LMCache core to run in vLLM ☆41 · Updated 4 months ago
- Materials for learning SGLang ☆426 · Updated this week
- A lightweight design for computation-communication overlap. ☆136 · Updated this week
- GLake: optimizing GPU memory management and IO transmission. ☆463 · Updated 2 months ago
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of … ☆202 · Updated 2 weeks ago
- PyTorch distributed training acceleration framework ☆49 · Updated 3 months ago
- ☆85 · Updated 2 months ago
- NCCL Profiling Kit ☆135 · Updated 11 months ago
- Ultra | Ultimate | Unified CCL ☆102 · Updated this week
- Distributed Triton for Parallel Systems ☆775 · Updated last week
- Zero Bubble Pipeline Parallelism ☆396 · Updated last month
- Stateful LLM Serving ☆70 · Updated 2 months ago
- Artifact of OSDI '24 paper, "Llumnix: Dynamic Scheduling for Large Language Model Serving" ☆61 · Updated last year
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable ☆158 · Updated 8 months ago
- Here are my personal paper reading notes (including cloud computing, resource management, systems, machine learning, deep learning, and o… ☆105 · Updated this week
- ☆119 · Updated 5 months ago
- Fast and memory-efficient exact attention ☆72 · Updated last month
- ☆67 · Updated 2 months ago