LMCache / LMBenchmark
Systematic and comprehensive benchmarks for LLM systems.
☆50 · Updated last week
Alternatives and similar repositories for LMBenchmark
Users interested in LMBenchmark are comparing it to the libraries listed below.
- Offline optimization of your disaggregated Dynamo graph ☆177 · Updated last week
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆135 · Updated last year
- Microsoft Collective Communication Library ☆66 · Updated last year
- NCCL Profiling Kit ☆150 · Updated last year
- Stateful LLM Serving ☆95 · Updated 10 months ago
- Artifact of OSDI '24 paper, "Llumnix: Dynamic Scheduling for Large Language Model Serving" ☆64 · Updated last year
- A tool to detect infrastructure issues on cloud native AI systems ☆52 · Updated 4 months ago
- An interference-aware scheduler for fine-grained GPU sharing ☆159 · Updated 2 months ago
- An I/O benchmark for deep learning applications ☆102 · Updated last month
- NVIDIA NCCL Tests for Distributed Training ☆134 · Updated last week
- GeminiFS: A Companion File System for GPUs ☆72 · Updated 11 months ago
- Fast OS-level support for GPU checkpoint and restore ☆271 · Updated 4 months ago
- ☆24 · Updated 2 years ago
- Magnum IO community repo ☆109 · Updated 2 months ago
- ☆47 · Updated last year
- SHADE: Enable Fundamental Cacheability for Distributed Deep Learning Training ☆36 · Updated 2 years ago
- NEO is an LLM inference engine built to ease the GPU memory crisis through CPU offloading ☆84 · Updated 7 months ago
- ☆84 · Updated 3 months ago
- Thunder Research Group's Collective Communication Library ☆47 · Updated 7 months ago
- [ACM EuroSys 2023] Fast and Efficient Model Serving Using Multi-GPUs with Direct-Host-Access ☆56 · Updated 6 months ago
- REEF is a GPU-accelerated DNN inference serving system that enables instant kernel preemption and biased concurrent execution in GPU sche… ☆104 · Updated 3 years ago
- KV cache store for distributed LLM inference ☆390 · Updated 2 months ago
- ☆56 · Updated 5 years ago
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆123 · Updated last month
- Research prototype of PRISM, a cost-efficient multi-LLM serving system with flexible time- and space-based GPU sharing. ☆57 · Updated 5 months ago
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling ☆67 · Updated last year
- ☆38 · Updated 5 years ago
- ☆77 · Updated last year
- Accepted to MLSys 2026 ☆70 · Updated last week
- Virtualized Elastic KV Cache for Dynamic GPU Sharing and Beyond ☆773 · Updated this week
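
For orientation, below is a minimal sketch of the kind of measurement benchmarks in this space typically automate: time-to-first-token (TTFT) and streaming decode rate against an OpenAI-compatible completions endpoint. This is not LMBenchmark's actual code; the endpoint URL, model name, and payload are placeholder assumptions.

```python
# Hypothetical TTFT/throughput probe against an OpenAI-compatible streaming
# endpoint. ENDPOINT and the model name are placeholders, not real defaults.
import time

import requests

ENDPOINT = "http://localhost:8000/v1/completions"  # placeholder URL
PAYLOAD = {
    "model": "my-model",  # placeholder model name
    "prompt": "Explain KV caching in one sentence.",
    "max_tokens": 128,
    "stream": True,
}

start = time.perf_counter()
first_token_at = None
n_chunks = 0

with requests.post(ENDPOINT, json=PAYLOAD, stream=True, timeout=60) as resp:
    resp.raise_for_status()
    # OpenAI-compatible servers stream Server-Sent Events: "data: {...}" lines.
    for line in resp.iter_lines():
        if not line or not line.startswith(b"data: "):
            continue
        if line[len(b"data: "):] == b"[DONE]":
            break
        if first_token_at is None:
            first_token_at = time.perf_counter()  # first streamed chunk
        n_chunks += 1

end = time.perf_counter()
if first_token_at is not None:
    print(f"TTFT: {(first_token_at - start) * 1e3:.1f} ms")
    decode_s = end - first_token_at
    if decode_s > 0:
        print(f"Stream rate: {n_chunks / decode_s:.1f} chunks/s")
```

A full benchmark would sweep request rates and prompt/output lengths and aggregate percentiles over many concurrent clients; this sketch only shows the per-request timing that such harnesses collect.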