LMCache / LMBenchmark
Systematic and comprehensive benchmarks for LLM systems.
☆35 · Updated last week
Alternatives and similar repositories for LMBenchmark
Users interested in LMBenchmark are comparing it to the libraries listed below.
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆129 · Updated last year
- NCCL Profiling Kit ☆145 · Updated last year
- NVIDIA NCCL Tests for Distributed Training ☆112 · Updated last week
- An interference-aware scheduler for fine-grained GPU sharing ☆147 · Updated 8 months ago
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling ☆62 · Updated last year
- A tool to detect infrastructure issues on cloud native AI systems ☆47 · Updated 3 weeks ago
- Microsoft Collective Communication Library ☆66 · Updated 10 months ago
- An I/O benchmark for deep learning applications ☆90 · Updated 3 weeks ago
- Fast OS-level support for GPU checkpoint and restore ☆238 · Updated last week
- ☆56 · Updated 4 years ago
- ☆38 · Updated 4 years ago
- ☆46 · Updated 9 months ago
- ☆24 · Updated 2 years ago
- Artifact of OSDI '24 paper, "Llumnix: Dynamic Scheduling for Large Language Model Serving" ☆62 · Updated last year
- Stateful LLM Serving ☆85 · Updated 7 months ago
- Virtualized Elastic KV Cache for Dynamic GPU Sharing and Beyond ☆99 · Updated last week
- [ACM EuroSys 2023] Fast and Efficient Model Serving Using Multi-GPUs with Direct-Host-Access ☆57 · Updated 2 months ago
- Artifacts for our NSDI '23 paper TGS ☆86 · Updated last year
- REEF is a GPU-accelerated DNN inference serving system that enables instant kernel preemption and biased concurrent execution in GPU sche… ☆100 · Updated 2 years ago
- Fine-grained GPU sharing primitives ☆144 · Updated 2 months ago
- NEO is an LLM inference engine built to ease the GPU memory crisis via CPU offloading ☆64 · Updated 3 months ago
- SHADE: Enable Fundamental Cacheability for Distributed Deep Learning Training ☆35 · Updated 2 years ago
- TACCL: Guiding Collective Algorithm Synthesis using Communication Sketches ☆75 · Updated 2 years ago
- Example code for using DC QP to provide RDMA READ and WRITE operations on remote GPU memory ☆145 · Updated last year
- Cloud Native Benchmarking of Foundation Models ☆44 · Updated 2 months ago
- Efficient Interactive LLM Serving with Proxy Model-based Sequence Length Prediction | A tiny BERT model can tell you the verbosity of an … ☆44 · Updated last year
- rFaaS: a high-performance FaaS platform with RDMA acceleration for low-latency invocations ☆54 · Updated 3 months ago
- A lightweight vLLM simulator for mocking out replicas ☆49 · Updated last week
- A resilient distributed training framework ☆95 · Updated last year
- ☆132 · Updated last year