Azure / msccl
Microsoft Collective Communication Library
☆66 · Updated 9 months ago
Alternatives and similar repositories for msccl
Users interested in msccl are comparing it to the libraries listed below.
- NCCL Profiling Kit ☆143 · Updated last year
- Synthesizer for optimal collective communication algorithms ☆116 · Updated last year
- Thunder Research Group's Collective Communication Library ☆41 · Updated 2 months ago
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI 23) ☆84 · Updated 2 years ago
- A lightweight design for computation-communication overlap ☆161 · Updated this week
- Stateful LLM Serving ☆81 · Updated 6 months ago
- An experimental parallel training platform ☆54 · Updated last year
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆79 · Updated 9 months ago
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models ☆66 · Updated 5 months ago
- PArametrized Recommendation and Ai Model benchmark is a repository for development of numerous uBenchmarks as well as end to end nets for… ☆149 · Updated last week
- nnScaler: Compiling DNN models for Parallel Training ☆118 · Updated last week
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling ☆62 · Updated last year
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters ☆40 · Updated 2 years ago
- Official repository for the paper DynaPipe: Optimizing Multi-task Training through Dynamic Pipelines ☆20 · Updated last year
- TACCL: Guiding Collective Algorithm Synthesis using Communication Sketches ☆74 · Updated 2 years ago
- An interference-aware scheduler for fine-grained GPU sharing ☆145 · Updated 7 months ago
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆128 · Updated last year
- A resilient distributed training framework ☆94 · Updated last year
- DeepSeek-V3/R1 inference performance simulator ☆165 · Updated 5 months ago
- Artifact of OSDI '24 paper, "Llumnix: Dynamic Scheduling for Large Language Model Serving" ☆62 · Updated last year
- kvcached: Elastic KV cache for dynamic GPU sharing and efficient multi-LLM inference ☆86 · Updated this week
- Microsoft Collective Communication Library ☆360 · Updated last year
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆412 · Updated last week
- FlexFlow Serve: Low-Latency, High-Performance LLM Serving ☆56 · Updated last week