google / nccl-fastsocket
NCCL Fast Socket is a transport layer plugin to improve NCCL collective communication performance on Google Cloud.
☆116, updated last year
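As a minimal sketch of how an NCCL network plugin such as nccl-fastsocket is typically enabled (the install path and plugin suffix below are illustrative assumptions, not taken from this page), the plugin's shared library is made visible to NCCL at runtime via environment variables:

```shell
# Make the plugin's libnccl-net*.so discoverable by NCCL.
# The install path here is an assumption; use wherever the plugin was built/installed.
export LD_LIBRARY_PATH=/usr/local/nccl-plugins/lib:${LD_LIBRARY_PATH}

# NCCL_NET_PLUGIN is a standard NCCL environment variable: given a suffix,
# NCCL looks for libnccl-net-<suffix>.so. The suffix "fastsocket" is an
# assumption here; check the plugin's own documentation for the exact name.
export NCCL_NET_PLUGIN=fastsocket
```

NCCL logs which network transport it selected when run with `NCCL_DEBUG=INFO`, which is the usual way to confirm the plugin was actually loaded.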
Alternatives and similar repositories for nccl-fastsocket
Users interested in nccl-fastsocket are comparing it to the libraries listed below:
- RDMA and SHARP plugins for nccl library (☆195, updated last week)
- NCCL Profiling Kit (☆137, updated 11 months ago)
- Synthesizer for optimal collective communication algorithms (☆108, updated last year)
- Microsoft Collective Communication Library (☆349, updated last year)
- PArametrized Recommendation and Ai Model benchmark is a repository for development of numerous uBenchmarks as well as end to end nets for… (☆144, updated this week)
- Fine-grained GPU sharing primitives (☆141, updated 5 years ago)
- This is a plugin which lets EC2 developers use libfabric as network provider while running NCCL applications. (☆176, updated this week)
- ☆37, updated 6 months ago
- Ultra and Unified CCL (☆154, updated this week)
- NVIDIA NCCL Tests for Distributed Training (☆93, updated last week)
- NVIDIA Resiliency Extension is a python package for framework developers and users to implement fault-tolerant features. It improves the … (☆177, updated last week)
- Microsoft Collective Communication Library (☆64, updated 6 months ago)
- NVIDIA Inference Xfer Library (NIXL) (☆413, updated this week)
- MSCCL++: A GPU-driven communication stack for scalable AI applications (☆378, updated this week)
- An Efficient Pipelined Data Parallel Approach for Training Large Model (☆76, updated 4 years ago)
- ☆353, updated last year
- ☆82, updated 2 years ago
- TACCL: Guiding Collective Algorithm Synthesis using Communication Sketches (☆73, updated last year)
- A tool for examining GPU scheduling behavior. (☆84, updated 10 months ago)
- GPU-scheduler-for-deep-learning (☆206, updated 4 years ago)
- ☆79, updated 2 years ago
- ☆58, updated 4 years ago
- Repository for MLCommons Chakra schema and tools (☆108, updated 3 months ago)
- DeepSeek-V3/R1 inference performance simulator (☆148, updated 2 months ago)
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling (☆59, updated last year)
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI 23) (☆81, updated last year)
- Thunder Research Group's Collective Communication Library (☆37, updated last year)
- ☆91, updated 5 months ago
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. (☆85, updated last month)
- A lightweight design for computation-communication overlap. (☆141, updated this week)