sii-research / Megatrace
AI Cluster Observability & Troubleshooting Toolkit. Powered by SII & Infrawaves.
☆32 · Updated this week
Alternatives and similar repositories for Megatrace
Users interested in Megatrace are comparing it to the libraries listed below.
- Virtualized Elastic KV Cache for Dynamic GPU Sharing and Beyond ☆760 · Updated 2 weeks ago
- Research prototype of PRISM — a cost-efficient multi-LLM serving system with flexible time- and space-based GPU sharing. ☆54 · Updated 5 months ago
- Efficient and easy multi-instance LLM serving ☆523 · Updated 4 months ago
- Offline optimization of your disaggregated Dynamo graph ☆168 · Updated this week
- GLake: optimizing GPU memory management and IO transmission. ☆497 · Updated 10 months ago
- Venus Collective Communication Library, supported by SII and Infrawaves. ☆137 · Updated this week
- DeepSeek-V3/R1 inference performance simulator ☆177 · Updated 10 months ago
- Serverless LLM Serving for Everyone. ☆640 · Updated last week
- Persist and reuse KV Cache to speed up your LLM (see the sketch after this list). ☆244 · Updated this week
- Disaggregated serving system for Large Language Models (LLMs). ☆772 · Updated 9 months ago
- Artifacts for our NSDI'23 paper TGS ☆94 · Updated last year
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆134 · Updated last year
- KV cache store for distributed LLM inference ☆389 · Updated 2 months ago
- A lightweight vLLM simulator for mocking out replicas. ☆84 · Updated this week
- Open Model Engine (OME) — Kubernetes operator for LLM serving, GPU scheduling, and model lifecycle management. Works with SGLang, vLLM, T… ☆365 · Updated this week
- A large-scale simulation framework for LLM inference ☆527 · Updated 6 months ago
- ☆323 · Updated 2 years ago
- Here are my personal paper reading notes (including machine learning systems, AI infrastructure, and other interesting stuff). ☆154 · Updated this week
- Hooks CUDA-related dynamic libraries using automated code generation tools. ☆172 · Updated 2 years ago
- ☆73 · Updated 4 months ago
- NVIDIA NCCL Tests for Distributed Training ☆133 · Updated this week
- ☆230 · Updated last month
- NVIDIA Inference Xfer Library (NIXL) ☆844 · Updated this week
- Predict the performance of LLM inference services ☆21 · Updated 4 months ago
- NCCL Profiling Kit ☆150 · Updated last year
- Fast OS-level support for GPU checkpoint and restore ☆270 · Updated 4 months ago
- Stateful LLM Serving ☆95 · Updated 10 months ago
- ☆147 · Updated last year
- A workload for deploying LLM inference services on Kubernetes ☆160 · Updated last week
- ☆174 · Updated last year
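
Several entries above (the KV-cache persistence and KV-cache store projects in particular) share one core idea: cache the attention KV tensors computed for a common prompt prefix so that later requests sharing that prefix skip part of prefill. The sketch below is a minimal, hypothetical Python illustration of prefix-keyed lookup, not the API of any project listed here; `PrefixKVStore` and its methods are invented names.

```python
import hashlib

class PrefixKVStore:
    """Toy KV-cache store keyed by a hash of the token-ID prefix.

    Illustrative only: real serving systems shard, quantize, and
    evict these tensors across GPU/CPU/disk tiers.
    """

    def __init__(self):
        self._store = {}  # prefix hash -> opaque KV-cache blob

    @staticmethod
    def _key(token_ids):
        # Hash the token prefix so lookups cost O(1) per candidate prefix.
        raw = b"".join(t.to_bytes(4, "little") for t in token_ids)
        return hashlib.sha256(raw).hexdigest()

    def put(self, token_ids, kv_blob):
        self._store[self._key(token_ids)] = kv_blob

    def get(self, token_ids):
        """Return the longest cached prefix of token_ids, if any."""
        for end in range(len(token_ids), 0, -1):
            blob = self._store.get(self._key(token_ids[:end]))
            if blob is not None:
                return token_ids[:end], blob  # reuse; prefill only the tail
        return [], None


store = PrefixKVStore()
store.put([1, 2, 3], "kv-for-[1,2,3]")           # cache a shared system prompt
prefix, blob = store.get([1, 2, 3, 4, 5])        # new request sharing that prefix
assert prefix == [1, 2, 3] and blob is not None  # only tokens [4, 5] need prefill
```

Real systems avoid this linear longest-prefix scan, typically indexing cached blocks with per-block hashes or a radix tree, and they store the actual attention tensors rather than placeholder strings.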