Azure / msccl
Microsoft Collective Communication Library
☆66 · Updated 10 months ago
Alternatives and similar repositories for msccl
Users interested in msccl are comparing it to the libraries listed below.
- NCCL Profiling Kit ☆145 · Updated last year
- Thunder Research Group's Collective Communication Library ☆42 · Updated 2 months ago
- Synthesizer for optimal collective communication algorithms ☆117 · Updated last year
- An experimental parallel training platform ☆54 · Updated last year
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆80 · Updated 10 months ago
- ☆83 · Updated 2 years ago
- PArametrized Recommendation and Ai Model benchmark is a repository for development of numerous uBenchmarks as well as end to end nets for… ☆151 · Updated 3 weeks ago
- ☆72 · Updated last year
- A lightweight design for computation-communication overlap. ☆177 · Updated last week
- ☆46 · Updated 9 months ago
- Microsoft Collective Communication Library ☆360 · Updated 2 years ago
- TACCL: Guiding Collective Algorithm Synthesis using Communication Sketches ☆75 · Updated 2 years ago
- A resilient distributed training framework ☆95 · Updated last year
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆418 · Updated this week
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆129 · Updated last year
- Official repository for the paper DynaPipe: Optimizing Multi-task Training through Dynamic Pipelines ☆20 · Updated last year
- Stateful LLM Serving ☆85 · Updated 6 months ago
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models. ☆66 · Updated 6 months ago
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI 23) ☆87 · Updated 2 years ago
- ☆85 · Updated 5 months ago
- An interference-aware scheduler for fine-grained GPU sharing ☆147 · Updated 8 months ago
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling ☆62 · Updated last year
- ☆60 · Updated 8 months ago
- DeepSeek-V3/R1 inference performance simulator ☆170 · Updated 6 months ago
- nnScaler: Compiling DNN models for Parallel Training ☆118 · Updated last week
- kvcached: Elastic KV cache for dynamic GPU sharing and efficient multi-LLM inference. ☆94 · Updated this week
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. ☆40 · Updated 2 years ago
- DeepXTrace is a lightweight tool for precisely diagnosing slow ranks in DeepEP-based environments. ☆54 · Updated this week
- FlexFlow Serve: Low-Latency, High-Performance LLM Serving ☆61 · Updated 2 weeks ago
- Efficient Compute-Communication Overlap for Distributed LLM Inference ☆57 · Updated 3 weeks ago