fmperf-project / fmperf
Cloud Native Benchmarking of Foundation Models
☆30 · Updated 5 months ago
Alternatives and similar repositories for fmperf:
Users interested in fmperf are comparing it to the repositories listed below.
- A tool to detect infrastructure issues on cloud native AI systems ☆31 · Updated last month
- NVIDIA NCCL Tests for Distributed Training ☆88 · Updated this week
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆116 · Updated last year
- Predict the performance of LLM inference services ☆17 · Updated 10 months ago
- GPU scheduler for elastic/distributed deep learning workloads in a Kubernetes cluster (IC2E'23) ☆34 · Updated last year
- An interference-aware scheduler for fine-grained GPU sharing ☆132 · Updated 3 months ago
- Microsoft Collective Communication Library ☆65 · Updated 5 months ago
- A resilient distributed training framework ☆94 · Updated last year
- NCCL Profiling Kit ☆132 · Updated 9 months ago
- ☆53 · Updated 7 months ago
- Efficient and easy multi-instance LLM serving ☆383 · Updated this week
- Repository for MLCommons Chakra schema and tools ☆95 · Updated last month
- 🔮 Execution time predictions for deep neural network training iterations across different GPUs. ☆60 · Updated 2 years ago
- The driver for LMCache core to run in vLLM ☆38 · Updated 2 months ago
- ☆59 · Updated 10 months ago
- Code for "Heterogeneity-Aware Cluster Scheduling Policies for Deep Learning Workloads", which appeared at OSDI 2020 ☆127 · Updated 9 months ago
- ☆44 · Updated 3 years ago
- PArametrized Recommendation and AI Model benchmark is a repository for the development of numerous uBenchmarks as well as end-to-end nets for… ☆136 · Updated this week
- NCCL Fast Socket is a transport-layer plugin that improves NCCL collective communication performance on Google Cloud. ☆116 · Updated last year
- Tiresias is a GPU cluster manager for distributed deep learning training. ☆152 · Updated 4 years ago
- Code repository for ITBench ☆33 · Updated last month
- Fine-grained GPU sharing primitives ☆141 · Updated 5 years ago
- GenAI inference performance benchmarking tool ☆39 · Updated 3 weeks ago
- LLM Serving Performance Evaluation Harness ☆77 · Updated 2 months ago
- Holistic job manager on Kubernetes ☆115 · Updated last year
- Artifacts for our NSDI'23 paper TGS ☆75 · Updated 10 months ago
- How much energy do GenAI models consume? ☆42 · Updated 6 months ago
- The InstaSlice Operator facilitates slicing of accelerators using stable APIs ☆33 · Updated this week
- An efficient GPU resource sharing system with fine-grained control for Linux platforms. ☆82 · Updated last year
- Stateful LLM Serving ☆63 · Updated last month