taco-project / FlexKV
☆145 · Updated this week
Alternatives and similar repositories for FlexKV
Users interested in FlexKV are comparing it to the libraries listed below.
- KV cache store for distributed LLM inference ☆389 · Updated 2 months ago
- Efficient and easy multi-instance LLM serving ☆523 · Updated 4 months ago
- GLake: optimizing GPU memory management and IO transmission. ☆497 · Updated 10 months ago
- NVIDIA Inference Xfer Library (NIXL) ☆864 · Updated this week
- Virtualized Elastic KV Cache for Dynamic GPU Sharing and Beyond ☆765 · Updated 3 weeks ago
- Disaggregated serving system for Large Language Models (LLMs). ☆772 · Updated 9 months ago
- ☆342 · Updated this week
- Fast OS-level support for GPU checkpoint and restore ☆270 · Updated 4 months ago
- Persist and reuse KV Cache to speed up your LLM. ☆244 · Updated this week
- DeepXTrace is a lightweight tool for precisely diagnosing slow ranks in DeepEP-based environments. ☆91 · Updated 2 weeks ago
- High performance Transformer implementation in C++. ☆148 · Updated last year
- Offline optimization of your disaggregated Dynamo graph ☆168 · Updated last week
- DeepSeek-V3/R1 inference performance simulator ☆176 · Updated 10 months ago
- A low-latency & high-throughput serving engine for LLMs ☆471 · Updated 3 weeks ago
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆123 · Updated last month
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆457 · Updated 8 months ago
- TePDist (TEnsor Program DISTributed) is an HLO-level automatic distributed system for DL models. ☆99 · Updated 2 years ago
- Stateful LLM Serving ☆95 · Updated 10 months ago
- Open Model Engine (OME) — Kubernetes operator for LLM serving, GPU scheduling, and model lifecycle management. Works with SGLang, vLLM, T… ☆365 · Updated this week
- NVIDIA NCCL Tests for Distributed Training ☆133 · Updated this week
- Research prototype of PRISM — a cost-efficient multi-LLM serving system with flexible time- and space-based GPU sharing. ☆54 · Updated 5 months ago
- AI Accelerator Benchmark focuses on evaluating AI Accelerators from a practical production perspective, including the ease of use and ver… ☆298 · Updated 2 weeks ago
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆455 · Updated last week
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆260 · Updated this week
- FlagCX is a scalable and adaptive cross-chip communication library. ☆170 · Updated this week
- SGLang kernel library for NPU ☆96 · Updated this week
- Fast and memory-efficient exact attention ☆110 · Updated last week
- ☆523 · Updated last week
- Perplexity GPU Kernels ☆554 · Updated 2 months ago
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆1,037 · Updated this week