Microsoft Collective Communication Library
★387 · Updated Sep 20, 2023
Alternatives and similar repositories for msccl
Users interested in msccl are comparing it to the libraries listed below.
- Synthesizer for optimal collective communication algorithms (★123, updated Apr 8, 2024)
- MSCCL++: A GPU-driven communication stack for scalable AI applications (★481, updated this week)
- TACCL: Guiding Collective Algorithm Synthesis using Communication Sketches (★80, updated Jul 25, 2023)
- NCCL Profiling Kit (★152, updated Jul 1, 2024)
- Microsoft Collective Communication Library (★66, updated Nov 23, 2024)
- (★84, updated Dec 2, 2022)
- RDMA and SHARP plugins for the NCCL library (★224, updated Jan 12, 2026)
- [DEPRECATED] Moved to ROCm/rocm-systems repo (★413, updated this week)
- Optimized primitives for collective multi-GPU communication (★4,531, updated this week)
- Unified Collective Communication Library (★296, updated Mar 12, 2026)
- NCCL Fast Socket is a transport-layer plugin that improves NCCL collective communication performance on Google Cloud. (★122, updated Nov 15, 2023)
- Tutel MoE: Optimized Mixture-of-Experts Library, supporting GptOss/DeepSeek/Kimi-K2/Qwen3 using FP8/NVFP4/MXFP4 (★976, updated Mar 6, 2026)
- NCCL Tests (★1,459, updated Mar 11, 2026)
- (★26, updated Feb 17, 2025)
- ASTRA-sim2.0: Modeling Hierarchical Networks and Disaggregated Systems for Large-model Training at Scale (★533, updated Mar 12, 2026)
- A fast communication-overlapping library for tensor/expert parallelism on GPUs (★1,273, updated Aug 28, 2025)
- (★390, updated Apr 23, 2024)
- Distributed compiler based on Triton for parallel systems (★1,386, updated Mar 11, 2026)
- (★47, updated Dec 13, 2024)
- TACOS: [T]opology-[A]ware [Co]llective Algorithm [S]ynthesizer for Distributed Machine Learning (★32, updated Jun 13, 2025)
- Byted PyTorch Distributed for hyperscale training of LLMs and RL (★1,000, updated Mar 3, 2026)
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training (★1,864, updated Mar 12, 2026)
- Bamboo is a system for running large pipeline-parallel DNNs affordably, reliably, and efficiently using spot instances. (★55, updated Dec 11, 2022)
- (★49, updated Aug 27, 2024)
- A plugin that lets EC2 developers use libfabric as the network provider when running NCCL applications. (★207, updated Mar 12, 2026)
- Repository for MLCommons Chakra schema and tools (★38, updated Dec 24, 2023)
- A fast GPU memory copy library based on NVIDIA GPUDirect RDMA technology (★1,355, updated Mar 12, 2026)
- Thunder Research Group's Collective Communication Library (★49, updated Jul 8, 2025)
- (★392, updated Nov 4, 2022)
- A large-scale simulation framework for LLM inference (★556, updated Jul 25, 2025)
- Fine-grained GPU sharing primitives (★147, updated Jul 28, 2025)
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description (★1,003, updated Sep 19, 2024)
- [DEPRECATED] Moved to ROCm/rocm-systems repo (★146, updated Mar 10, 2026)
- A CPU+GPU profiling library that provides access to timeline traces and hardware performance counters (★932, updated this week)
- A resilient distributed training framework (★97, updated Apr 11, 2024)
- [DEPRECATED] Moved to ROCm/rocm-systems repo (★88, updated Mar 5, 2026)
- A low-latency, high-throughput serving engine for LLMs (★484, updated Jan 8, 2026)
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… (★3,211, updated this week)
- A GPU-driven system framework for scalable AI applications (★124, updated Feb 5, 2025)