microsoft / msccl-tools
Synthesizer for optimal collective communication algorithms
☆124 · Updated last year
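For context on what a "synthesizer for collective communication algorithms" produces: such tools search for a per-step schedule that tells every rank which data chunk to send where. Below is a minimal, hypothetical sketch (not msccl-tools' actual MSCCLang API or XML output format) that computes the schedule of a classic ring all-reduce, the kind of hand-designed baseline these synthesizers compete with; the function name and output shape are assumptions for illustration only.

```python
# Hypothetical illustration only; msccl-tools' real interface is not shown.
# Computes the chunk schedule of a ring all-reduce: on n ranks it takes
# 2*(n-1) steps, n-1 reduce-scatter steps followed by n-1 all-gather steps,
# and at step s each rank r forwards chunk (r - s) mod n to rank (r+1) mod n.

def ring_allreduce_schedule(n_ranks: int):
    """Return a list of steps; each step maps rank -> (chunk_sent, dest_rank)."""
    steps = []
    for step in range(2 * (n_ranks - 1)):
        moves = {}
        for rank in range(n_ranks):
            chunk = (rank - step) % n_ranks       # chunk this rank forwards now
            moves[rank] = (chunk, (rank + 1) % n_ranks)
        steps.append(moves)
    return steps

if __name__ == "__main__":
    n = 4
    for i, step in enumerate(ring_allreduce_schedule(n)):
        phase = "reduce-scatter" if i < n - 1 else "all-gather"
        print(f"step {i} ({phase}): {step}")
```

A synthesizer's job is to find schedules like this automatically for a given topology and collective, rather than relying on fixed patterns such as the ring above.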
Alternatives and similar repositories for msccl-tools
Users interested in msccl-tools are comparing it to the libraries listed below.
- Microsoft Collective Communication Library ☆379 · Updated 2 years ago
- ☆84 · Updated 3 years ago
- TACCL: Guiding Collective Algorithm Synthesis using Communication Sketches ☆80 · Updated 2 years ago
- NCCL Profiling Kit ☆150 · Updated last year
- Microsoft Collective Communication Library ☆66 · Updated last year
- ☆166 · Updated last year
- REEF is a GPU-accelerated DNN inference serving system that enables instant kernel preemption and biased concurrent execution in GPU scheduling ☆104 · Updated 3 years ago
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI '23) ☆92 · Updated 2 years ago
- ATLAHS: An Application-centric Network Simulator Toolchain for AI, HPC, and Distributed Storage ☆68 · Updated this week
- An interference-aware scheduler for fine-grained GPU sharing ☆159 · Updated 2 months ago
- An experimental parallel training platform ☆56 · Updated last year
- TACOS: [T]opology-[A]ware [Co]llective Algorithm [S]ynthesizer for Distributed Machine Learning ☆31 · Updated 7 months ago
- PArametrized Recommendation and Ai Model benchmark is a repository for development of numerous uBenchmarks as well as end-to-end nets for evaluation of training and inference platforms ☆155 · Updated last week
- Repository for MLCommons Chakra schema and tools ☆153 · Updated 3 months ago
- Repository for MLCommons Chakra schema and tools ☆39 · Updated 2 years ago
- RDMA and SHARP plugins for the NCCL library ☆221 · Updated 3 weeks ago
- LLM serving cluster simulator ☆134 · Updated last year
- ☆38 · Updated 7 months ago
- Thunder Research Group's Collective Communication Library ☆47 · Updated 6 months ago
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models. ☆70 · Updated 10 months ago
- An Efficient Pipelined Data Parallel Approach for Training Large Models ☆76 · Updated 5 years ago
- ☆25 · Updated 3 years ago
- NCCL Fast Socket is a transport layer plugin to improve NCCL collective communication performance on Google Cloud. ☆122 · Updated 2 years ago
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling ☆66 · Updated last year
- Fine-grained GPU sharing primitives ☆148 · Updated 6 months ago
- PipeSwitch: Fast Pipelined Context Switching for Deep Learning Applications ☆127 · Updated 3 years ago
- Compiler for Dynamic Neural Networks ☆45 · Updated 2 years ago
- Artifact of OSDI '24 paper, "Llumnix: Dynamic Scheduling for Large Language Model Serving" ☆64 · Updated last year
- ☆53 · Updated last year
- A lightweight design for computation-communication overlap. ☆213 · Updated last week