mlcommons / chakra-old
Repository for MLCommons Chakra schema and tools
☆39 · Updated last year
Alternatives and similar repositories for chakra-old
Users interested in chakra-old are comparing it to the repositories listed below.
- Repository for MLCommons Chakra schema and tools ☆113 · Updated 3 weeks ago
- TACCL: Guiding Collective Algorithm Synthesis using Communication Sketches ☆73 · Updated last year
- ☆24 · Updated 2 years ago
- TACOS: [T]opology-[A]ware [Co]llective Algorithm [S]ynthesizer for Distributed Machine Learning ☆25 · Updated 3 weeks ago
- Synthesizer for optimal collective communication algorithms ☆108 · Updated last year
- ☆79 · Updated 2 years ago
- LLM serving cluster simulator ☆107 · Updated last year
- ☆19 · Updated 7 months ago
- ☆24 · Updated 4 years ago
- [NSDI 2023] TopoOpt: Optimizing the Network Topology for Distributed DNN Training ☆31 · Updated 10 months ago
- ☆37 · Updated last year
- ☆37 · Updated 2 weeks ago
- Microsoft Collective Communication Library ☆64 · Updated 7 months ago
- Bamboo is a system for running large pipeline-parallel DNNs affordably, reliably, and efficiently using spot instances. ☆50 · Updated 2 years ago
- REEF is a GPU-accelerated DNN inference serving system that enables instant kernel preemption and biased concurrent execution in GPU scheduling. ☆95 · Updated 2 years ago
- Managed collective communication service ☆22 · Updated 10 months ago
- Compiler for Dynamic Neural Networks ☆46 · Updated last year
- ☆143 · Updated last year
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI 23) ☆82 · Updated last year
- ☆14 · Updated 3 years ago
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling ☆59 · Updated last year
- LLM Inference analyzer for different hardware platforms ☆78 · Updated last month
- ☆23 · Updated last year
- Artifact of OSDI '24 paper, “Llumnix: Dynamic Scheduling for Large Language Model Serving” ☆61 · Updated last year
- RPCNIC: A High-Performance and Reconfigurable PCIe-attached RPC Accelerator [HPCA 2025] ☆11 · Updated 7 months ago
- ☆49 · Updated 6 months ago
- LLMServingSim: A HW/SW Co-Simulation Infrastructure for LLM Inference Serving at Scale ☆122 · Updated 3 weeks ago
- ☆45 · Updated 3 years ago
- [ACM EuroSys '23] Fast and Efficient Model Serving Using Multi-GPUs with Direct-Host-Access ☆57 · Updated last year
- Artifacts for our ASPLOS '23 paper ElasticFlow ☆52 · Updated last year