microsoft/msccl
Microsoft Collective Communication Library
☆342 · Updated last year
Alternatives and similar repositories for msccl:
Users interested in msccl are comparing it to the libraries listed below.
- Synthesizer for optimal collective communication algorithms ☆105 · Updated last year
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆330 · Updated this week
- NCCL Profiling Kit ☆129 · Updated 9 months ago
- RDMA and SHARP plugins for nccl library ☆187 · Updated this week
- NCCL Fast Socket is a transport layer plugin to improve NCCL collective communication performance on Google Cloud. ☆116 · Updated last year
- ☆78 · Updated 2 years ago
- TACCL: Guiding Collective Algorithm Synthesis using Communication Sketches ☆72 · Updated last year
- Microsoft Collective Communication Library ☆65 · Updated 4 months ago
- ☆133 · Updated last year
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆345 · Updated 3 weeks ago
- Repository for MLCommons Chakra schema and tools ☆95 · Updated last month
- DeepSeek-V3/R1 inference performance simulator ☆102 · Updated 2 weeks ago
- A baseline repository of Auto-Parallelism in Training Neural Networks ☆143 · Updated 2 years ago
- NVIDIA Inference Xfer Library (NIXL) ☆242 · Updated this week
- Shared Middle-Layer for Triton Compilation ☆241 · Updated this week
- A tool for bandwidth measurements on NVIDIA GPUs. ☆397 · Updated 2 months ago
- ☆64 · Updated 3 months ago
- A low-latency & high-throughput serving engine for LLMs ☆337 · Updated 2 months ago
- AI Accelerator Benchmark focuses on evaluating AI Accelerators from a practical production perspective, including the ease of use and ver… ☆231 · Updated 3 weeks ago
- nnScaler: Compiling DNN models for Parallel Training ☆106 · Updated 2 months ago
- An experimental parallel training platform ☆54 · Updated last year
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI 23) ☆81 · Updated last year
- ASTRA-sim2.0: Modeling Hierarchical Networks and Disaggregated Systems for Large-model Training at Scale ☆340 · Updated last week
- ☆336 · Updated 11 months ago
- An Efficient Pipelined Data Parallel Approach for Training Large Model ☆73 · Updated 4 years ago
- PArametrized Recommendation and Ai Model benchmark is a repository for development of numerous uBenchmarks as well as end to end nets for… ☆134 · Updated this week
- Assembler for NVIDIA Volta and Turing GPUs ☆215 · Updated 3 years ago
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling ☆58 · Updated 11 months ago
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models. ☆61 · Updated 3 weeks ago
- A validation and profiling tool for AI infrastructure ☆306 · Updated this week