LMCache / LMBenchmark
Systematic and comprehensive benchmarks for LLM systems.
☆40, updated last month
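LMBenchmark's own harness is more elaborate, but the core measurement loop of an LLM serving benchmark — time-to-first-token (TTFT) and output throughput per request — can be sketched as follows. The `benchmark` harness and the `fake_llm` stub below are illustrative assumptions, not LMBenchmark's actual API:

```python
import time

def benchmark(generate, prompts):
    """Measure TTFT and token throughput for a token-generator callable.

    `generate(prompt)` is expected to yield output tokens one at a time,
    standing in for a streaming response from a serving endpoint.
    """
    results = []
    for prompt in prompts:
        start = time.perf_counter()
        ttft = None
        n_tokens = 0
        for _ in generate(prompt):
            n_tokens += 1
            if ttft is None:
                # Time from request start until the first token arrives.
                ttft = time.perf_counter() - start
        total = time.perf_counter() - start
        results.append({
            "ttft_s": ttft,
            "tokens": n_tokens,
            "tok_per_s": n_tokens / total if total > 0 else 0.0,
        })
    return results

# Stub generator standing in for a real model server.
def fake_llm(prompt):
    for tok in prompt.split():
        yield tok

stats = benchmark(fake_llm, ["hello world from a benchmark"])
print(stats[0]["tokens"])  # 5
```

In a real run, `generate` would wrap a streaming client call to the serving system under test, and the per-request dicts would be aggregated into latency percentiles and throughput curves.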
Alternatives and similar repositories for LMBenchmark
Users interested in LMBenchmark are comparing it to the libraries listed below.
- Offline optimization of your disaggregated Dynamo graph (☆106, updated this week)
- SpotServe: Serving Generative Large Language Models on Preemptible Instances (☆132, updated last year)
- NCCL Profiling Kit (☆147, updated last year)
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling (☆65, updated last year)
- Artifact of the OSDI '24 paper "Llumnix: Dynamic Scheduling for Large Language Model Serving" (☆63, updated last year)
- Stateful LLM Serving (☆88, updated 8 months ago)
- NEO is an LLM inference engine that mitigates the GPU memory crisis by offloading to the CPU (☆69, updated 5 months ago)
- SHADE: Enable Fundamental Cacheability for Distributed Deep Learning Training (☆35, updated 2 years ago)
- A tool to detect infrastructure issues on cloud native AI systems (☆51, updated 2 months ago)
- ☆57, updated 4 years ago
- My personal paper reading notes, covering cloud computing, resource management, systems, machine learning, deep learning, and o… (☆133, updated 2 weeks ago)
- An interference-aware scheduler for fine-grained GPU sharing (☆152, updated 9 months ago)
- ☆57, updated 3 weeks ago
- High-performance RDMA-based distributed feature collection component for training GNN models on extremely large graphs (☆55, updated 3 years ago)
- REEF is a GPU-accelerated DNN inference serving system that enables instant kernel preemption and biased concurrent execution in GPU sche… (☆103, updated 2 years ago)
- ☆79, updated last month
- Microsoft Collective Communication Library (☆66, updated 11 months ago)
- ☆38, updated 4 years ago
- [ACM EuroSys 2023] Fast and Efficient Model Serving Using Multi-GPUs with Direct-Host-Access (☆57, updated 3 months ago)
- ☆67, updated 2 months ago
- ☆53, updated 10 months ago
- DISB is a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces (☆56, updated last year)
- Fine-grained GPU sharing primitives (☆147, updated 3 months ago)
- An I/O benchmark for deep learning applications (☆94, updated 2 weeks ago)
- Research prototype of PRISM, a cost-efficient multi-LLM serving system with flexible time- and space-based GPU sharing (☆46, updated 3 months ago)
- Fast OS-level support for GPU checkpoint and restore (☆257, updated last month)
- ☆71, updated 10 months ago
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation (☆114, updated 6 months ago)
- Bamboo is a system for running large pipeline-parallel DNNs affordably, reliably, and efficiently using spot instances (☆53, updated 2 years ago)
- PipeSwitch: Fast Pipelined Context Switching for Deep Learning Applications (☆126, updated 3 years ago)