LMCache / LMBenchmark
Systematic and comprehensive benchmarks for LLM systems.
☆45 · Updated last month
Alternatives and similar repositories for LMBenchmark
Users interested in LMBenchmark are comparing it to the repositories listed below.
- Offline optimization of your disaggregated Dynamo graph ☆136 · Updated last week
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆134 · Updated last year
- Fast OS-level support for GPU checkpoint and restore ☆266 · Updated 3 months ago
- Stateful LLM Serving ☆91 · Updated 9 months ago
- NCCL Profiling Kit ☆149 · Updated last year
- Artifact of the OSDI '24 paper, “Llumnix: Dynamic Scheduling for Large Language Model Serving” ☆64 · Updated last year
- Microsoft Collective Communication Library ☆66 · Updated last year
- REEF is a GPU-accelerated DNN inference serving system that enables instant kernel preemption and biased concurrent execution in GPU sche… ☆103 · Updated 3 years ago
- Artifacts for our NSDI '23 paper TGS ☆95 · Updated last year
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling ☆66 · Updated last year
- An interference-aware scheduler for fine-grained GPU sharing ☆155 · Updated last month
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆121 · Updated last week
- ☆56 · Updated 4 years ago
- NVIDIA NCCL Tests for Distributed Training ☆130 · Updated last week
- DeepSeek-V3/R1 inference performance simulator ☆175 · Updated 9 months ago
- ☆144 · Updated last year
- NEO is an LLM inference engine that eases the GPU memory crisis through CPU offloading ☆76 · Updated 6 months ago
- [ACM EuroSys 2023] Fast and Efficient Model Serving Using Multi-GPUs with Direct-Host-Access ☆56 · Updated 4 months ago
- ☆38 · Updated 4 years ago
- Research prototype of PRISM, a cost-efficient multi-LLM serving system with flexible time- and space-based GPU sharing. ☆50 · Updated 4 months ago
- ☆80 · Updated 2 months ago
- SHADE: Enable Fundamental Cacheability for Distributed Deep Learning Training ☆35 · Updated 2 years ago
- A tool to detect infrastructure issues on cloud-native AI systems ☆52 · Updated 3 months ago
- PipeSwitch: Fast Pipelined Context Switching for Deep Learning Applications ☆127 · Updated 3 years ago
- ☆47 · Updated last year
- ☆70 · Updated 2 months ago
- A ChatGPT (GPT-3.5) & GPT-4 Workload Trace to Optimize LLM Serving Systems ☆226 · Updated 5 months ago
- Bamboo is a system for running large pipeline-parallel DNNs affordably, reliably, and efficiently using spot instances. ☆55 · Updated 3 years ago
- ☆71 · Updated 3 months ago
- ☆28 · Updated last year