mlcommons / chakra
Repository for MLCommons Chakra schema and tools
☆151 · Updated 2 months ago
Alternatives and similar repositories for chakra
Users interested in chakra are comparing it to the repositories listed below.
- ASTRA-sim2.0: Modeling Hierarchical Networks and Disaggregated Systems for Large-model Training at Scale ☆511 · Updated 2 weeks ago
- TACCL: Guiding Collective Algorithm Synthesis using Communication Sketches ☆79 · Updated 2 years ago
- ATLAHS: An Application-centric Network Simulator Toolchain for AI, HPC, and Distributed Storage ☆66 · Updated last week
- Repository for MLCommons Chakra schema and tools ☆39 · Updated 2 years ago
- NCCL Profiling Kit ☆150 · Updated last year
- Synthesizer for optimal collective communication algorithms ☆123 · Updated last year
- ☆166 · Updated last year
- LLM serving cluster simulator ☆132 · Updated last year
- TACOS: [T]opology-[A]ware [Co]llective Algorithm [S]ynthesizer for Distributed Machine Learning ☆29 · Updated 7 months ago
- ☆230 · Updated 3 weeks ago
- ☆22 · Updated 11 months ago
- Microsoft Collective Communication Library ☆378 · Updated 2 years ago
- An interference-aware scheduler for fine-grained GPU sharing ☆158 · Updated last month
- [NSDI25] AutoCCL: Automated Collective Communication Tuning for Accelerating Distributed and Parallel DNN Training ☆30 · Updated 8 months ago
- REEF is a GPU-accelerated DNN inference serving system that enables instant kernel preemption and biased concurrent execution in GPU sche… ☆105 · Updated 3 years ago
- Example code for using DC QP to provide RDMA READ and WRITE operations to remote GPU memory ☆152 · Updated last year
- ☆64 · Updated 6 months ago
- ☆33 · Updated 4 months ago
- ☆25 · Updated 3 years ago
- DeepSeek-V3/R1 inference performance simulator ☆176 · Updated 9 months ago
- ☆20 · Updated 2 months ago
- ☆84 · Updated 3 years ago
- Artifacts for our NSDI'23 paper TGS ☆95 · Updated last year
- LLMServingSim: A HW/SW Co-Simulation Infrastructure for LLM Inference Serving at Scale ☆173 · Updated 6 months ago
- LIBRA: Enabling Workload-aware Multi-dimensional Network Topology Optimization for Distributed Training of Large AI Models ☆12 · Updated last year
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling ☆66 · Updated last year
- Personal paper reading notes (covering machine learning systems, AI infrastructure, and other interesting topics) ☆151 · Updated last week
- ☆53 · Updated last year
- ☆32 · Updated 5 years ago
- ☆41 · Updated 2 years ago