LMCache / LMBenchmark
Systematic and comprehensive benchmarks for LLM systems.
☆31 · Updated last month
Alternatives and similar repositories for LMBenchmark
Users interested in LMBenchmark are comparing it to the repositories listed below.
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆129 · Updated last year
- Artifact of OSDI '24 paper, "Llumnix: Dynamic Scheduling for Large Language Model Serving" ☆62 · Updated last year
- NCCL Profiling Kit ☆145 · Updated last year
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling ☆62 · Updated last year
- Microsoft Collective Communication Library ☆66 · Updated 9 months ago
- Stateful LLM Serving ☆84 · Updated 6 months ago
- SHADE: Enable Fundamental Cacheability for Distributed Deep Learning Training ☆35 · Updated 2 years ago
- [ACM EuroSys 2023] Fast and Efficient Model Serving Using Multi-GPUs with Direct-Host-Access ☆57 · Updated last month
- An interference-aware scheduler for fine-grained GPU sharing ☆145 · Updated 7 months ago
- ☆56 · Updated 4 years ago
- NEO is an LLM inference engine that alleviates GPU memory pressure via CPU offloading ☆59 · Updated 3 months ago
- ☆47 · Updated 3 months ago
- REEF is a GPU-accelerated DNN inference serving system that enables instant kernel preemption and biased concurrent execution in GPU scheduling ☆100 · Updated 2 years ago
- TACCL: Guiding Collective Algorithm Synthesis using Communication Sketches ☆74 · Updated 2 years ago
- Fast OS-level support for GPU checkpoint and restore ☆236 · Updated last month
- A GPU-accelerated DNN inference serving system that supports instant kernel preemption and biased concurrent execution in GPU scheduling. ☆43 · Updated 3 years ago
- Efficient Compute-Communication Overlap for Distributed LLM Inference ☆43 · Updated last week
- ☆20 · Updated 2 months ago
- Here are my personal paper reading notes (including cloud computing, resource management, systems, machine learning, deep learning, and others) ☆120 · Updated 3 weeks ago
- A hierarchical collective communications library with portable optimizations ☆36 · Updated 9 months ago
- kvcached: Elastic KV cache for dynamic GPU sharing and efficient multi-LLM inference. ☆91 · Updated this week
- Thunder Research Group's Collective Communication Library ☆42 · Updated 2 months ago
- ☆130 · Updated 11 months ago
- Fine-grained GPU sharing primitives ☆144 · Updated last month
- DISB is a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces. ☆54 · Updated last year
- Artifacts for our NSDI '23 paper TGS ☆86 · Updated last year
- SOTA Learning-augmented Systems ☆37 · Updated 3 years ago
- An I/O benchmark for deep learning applications ☆90 · Updated last week
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI '23) ☆85 · Updated 2 years ago
- A resilient distributed training framework ☆95 · Updated last year