bytedance / InfiniStore
KV cache store for distributed LLM inference
☆151 · Updated 3 weeks ago
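As a rough mental model of what a KV cache store does in distributed inference: one worker publishes per-layer attention KV blocks into a shared pool, and another worker (possibly on a different machine) fetches them back instead of recomputing them. Below is a minimal in-process sketch of that kind of put/get interface; the class and method names are illustrative assumptions, not InfiniStore's actual API.

```python
# A toy stand-in for a remote KV-cache pool. Names here are illustrative
# assumptions, NOT InfiniStore's actual API; a real store would keep the
# pool in remote memory and move bytes over the network (e.g. RDMA)
# instead of a local dict.
from dataclasses import dataclass, field
from typing import Dict, Optional


@dataclass
class KVCacheStore:
    _pool: Dict[str, bytes] = field(default_factory=dict)

    @staticmethod
    def _key(request_id: str, layer: int, block: int) -> str:
        # Entries are addressed per request, layer, and block so that a
        # prefill worker can publish blocks a decode worker later reads.
        return f"{request_id}/{layer}/{block}"

    def put(self, request_id: str, layer: int, block: int, kv: bytes) -> None:
        self._pool[self._key(request_id, layer, block)] = kv

    def get(self, request_id: str, layer: int, block: int) -> Optional[bytes]:
        # Returns None on a cache miss, so callers can fall back to
        # recomputing the block.
        return self._pool.get(self._key(request_id, layer, block))


# Usage: a prefill worker writes a block; a decode worker fetches it.
store = KVCacheStore()
store.put("req-42", layer=0, block=0, kv=b"\x00" * 4096)
assert store.get("req-42", layer=0, block=0) is not None
```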
Alternatives and similar repositories for InfiniStore:
Users interested in InfiniStore are comparing it to the libraries listed below
- Fast OS-level support for GPU checkpoint and restore ☆185 · Updated 2 weeks ago
- NVIDIA Inference Xfer Library (NIXL) ☆282 · Updated this week
- Efficient and easy multi-instance LLM serving ☆383 · Updated this week
- High performance Transformer implementation in C++. ☆119 · Updated 3 months ago
- The driver for LMCache core to run in vLLM ☆38 · Updated 2 months ago
- A low-latency & high-throughput serving engine for LLMs ☆346 · Updated last week
- Perplexity GPU Kernels ☆251 · Updated this week
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆354 · Updated last week
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling ☆58 · Updated 11 months ago
- DeepSeek-V3/R1 inference performance simulator ☆113 · Updated last month
- NCCL Profiling Kit ☆132 · Updated 9 months ago
- Stateful LLM Serving ☆63 · Updated last month
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆342 · Updated this week
- Artifact of the OSDI '24 paper "Llumnix: Dynamic Scheduling for Large Language Model Serving" ☆60 · Updated 10 months ago
- Ultra | Ultimate | Unified CCL ☆59 · Updated 2 months ago
- PyTorch distributed training acceleration framework ☆48 · Updated 2 months ago
- GLake: optimizing GPU memory management and IO transmission. ☆456 · Updated last month
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable ☆154 · Updated 7 months ago
- Disaggregated serving system for Large Language Models (LLMs). ☆562 · Updated 2 weeks ago
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of … ☆162 · Updated 9 months ago
- Fast and memory-efficient exact attention ☆65 · Updated this week
- Compare different hardware platforms via the Roofline Model for LLM inference tasks (see the roofline sketch after this list). ☆97 · Updated last year
- Here are my personal paper reading notes (including cloud computing, resource management, systems, machine learning, deep learning, and o… ☆89 · Updated 2 weeks ago
- NVIDIA NCCL Tests for Distributed Training ☆88 · Updated this week
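For the roofline comparison mentioned above: attainable throughput is min(peak compute, memory bandwidth × arithmetic intensity), so a workload's FLOPs-per-byte ratio decides whether it is compute-bound or bandwidth-bound. A small sketch follows, using assumed hardware numbers (roughly H100-class, dense FP16) chosen only to make the arithmetic concrete.

```python
# Minimal roofline calculation. The hardware figures below are
# assumptions for illustration, not vendor-verified specs.
def roofline_tflops(peak_tflops: float, bandwidth_tbps: float,
                    intensity_flops_per_byte: float) -> float:
    """Attainable performance = min(compute roof, bandwidth * intensity)."""
    return min(peak_tflops, bandwidth_tbps * intensity_flops_per_byte)


PEAK_TFLOPS = 989.0    # assumed dense FP16 tensor-core peak
BANDWIDTH_TBPS = 3.35  # assumed HBM bandwidth, TB/s

# LLM decode is dominated by GEMV-like ops at ~1 FLOP/byte, so it sits
# far below the compute roof (bandwidth-bound); prefill GEMMs at
# hundreds of FLOPs/byte can reach the roof (compute-bound).
for name, intensity in [("decode (~GEMV)", 1.0), ("prefill (~GEMM)", 600.0)]:
    tflops = roofline_tflops(PEAK_TFLOPS, BANDWIDTH_TBPS, intensity)
    bound = "bandwidth-bound" if tflops < PEAK_TFLOPS else "compute-bound"
    print(f"{name}: {tflops:.1f} attainable TFLOP/s ({bound})")
```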