microsoft / msccl-tools
Synthesizer for optimal collective communication algorithms
☆116 · Updated last year
Alternatives and similar repositories for msccl-tools
Users interested in msccl-tools are comparing it to the libraries listed below.
- Microsoft Collective Communication Library · ☆360 · Updated last year
- NCCL Profiling Kit · ☆143 · Updated last year
- TACCL: Guiding Collective Algorithm Synthesis using Communication Sketches · ☆74 · Updated 2 years ago
- ☆81 · Updated 2 years ago
- Microsoft Collective Communication Library · ☆66 · Updated 9 months ago
- Repository for MLCommons Chakra schema and tools · ☆125 · Updated last month
- An experimental parallel training platform · ☆54 · Updated last year
- Thunder Research Group's Collective Communication Library · ☆41 · Updated 2 months ago
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI 23) · ☆84 · Updated 2 years ago
- RDMA and SHARP plugins for the NCCL library · ☆201 · Updated last week
- ☆147 · Updated last year
- Fine-grained GPU sharing primitives · ☆144 · Updated last month
- ☆50 · Updated 8 months ago
- REEF is a GPU-accelerated DNN inference serving system that enables instant kernel preemption and biased concurrent execution in GPU sche… · ☆100 · Updated 2 years ago
- PArametrized Recommendation and Ai Model benchmark is a repository for development of numerous uBenchmarks as well as end to end nets for… · ☆149 · Updated last week
- An Efficient Pipelined Data Parallel Approach for Training Large Models · ☆77 · Updated 4 years ago
- LLM serving cluster simulator · ☆109 · Updated last year
- An interference-aware scheduler for fine-grained GPU sharing · ☆145 · Updated 7 months ago
- PipeSwitch: Fast Pipelined Context Switching for Deep Learning Applications · ☆126 · Updated 3 years ago
- MSCCL++: A GPU-driven communication stack for scalable AI applications · ☆412 · Updated last week
- ☆24 · Updated 3 years ago
- ☆37 · Updated 2 months ago
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling · ☆62 · Updated last year
- ASTRA-sim2.0: Modeling Hierarchical Networks and Disaggregated Systems for Large-model Training at Scale · ☆421 · Updated last week
- A baseline repository of Auto-Parallelism in Training Neural Networks · ☆146 · Updated 3 years ago
- NCCL Fast Socket is a transport layer plugin to improve NCCL collective communication performance on Google Cloud. · ☆121 · Updated last year
- ☆81 · Updated 3 months ago
- nnScaler: Compiling DNN models for Parallel Training · ☆118 · Updated last week
- Artifacts for our ASPLOS'23 paper ElasticFlow · ☆52 · Updated last year
- TACOS: [T]opology-[A]ware [Co]llective Algorithm [S]ynthesizer for Distributed Machine Learning · ☆26 · Updated 2 months ago