microsoft / msccl-tools
Synthesizer for optimal collective communication algorithms
☆117 · Updated last year
Alternatives and similar repositories for msccl-tools
Users interested in msccl-tools are comparing it to the libraries listed below.
- Microsoft Collective Communication Library ☆360 · Updated 2 years ago
- NCCL Profiling Kit ☆145 · Updated last year
- ☆83 · Updated 2 years ago
- TACCL: Guiding Collective Algorithm Synthesis using Communication Sketches ☆75 · Updated 2 years ago
- Microsoft Collective Communication Library ☆66 · Updated 10 months ago
- Fine-grained GPU sharing primitives ☆144 · Updated 2 months ago
- Repository for MLCommons Chakra schema and tools ☆127 · Updated this week
- ☆154 · Updated last year
- ☆51 · Updated 9 months ago
- REEF is a GPU-accelerated DNN inference serving system that enables instant kernel preemption and biased concurrent execution in GPU sche… ☆100 · Updated 2 years ago
- RDMA and SHARP plugins for nccl library ☆208 · Updated 3 weeks ago
- LLM serving cluster simulator ☆114 · Updated last year
- An Efficient Pipelined Data Parallel Approach for Training Large Model ☆76 · Updated 4 years ago
- An experimental parallel training platform ☆54 · Updated last year
- An interference-aware scheduler for fine-grained GPU sharing ☆147 · Updated 8 months ago
- PArametrized Recommendation and Ai Model benchmark is a repository for development of numerous uBenchmarks as well as end to end nets for… ☆151 · Updated 3 weeks ago
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI 23) ☆87 · Updated 2 years ago
- PipeSwitch: Fast Pipelined Context Switching for Deep Learning Applications ☆127 · Updated 3 years ago
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling ☆62 · Updated last year
- ☆24 · Updated 3 years ago
- TACOS: [T]opology-[A]ware [Co]llective Algorithm [S]ynthesizer for Distributed Machine Learning ☆26 · Updated 3 months ago
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆418 · Updated this week
- ☆38 · Updated 3 months ago
- NCCL Fast Socket is a transport layer plugin to improve NCCL collective communication performance on Google Cloud. ☆120 · Updated last year
- ☆81 · Updated 4 months ago
- Thunder Research Group's Collective Communication Library ☆42 · Updated 2 months ago
- Artifacts for our ASPLOS'23 paper ElasticFlow ☆53 · Updated last year
- A baseline repository of Auto-Parallelism in Training Neural Networks ☆146 · Updated 3 years ago
- ☆23 · Updated last year
- ASTRA-sim2.0: Modeling Hierarchical Networks and Disaggregated Systems for Large-model Training at Scale ☆436 · Updated 3 weeks ago