google / nccl-fastsocket
NCCL Fast Socket is a transport-layer plugin that improves NCCL collective communication performance on Google Cloud.
☆116 · Updated last year
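As a transport plugin, Fast Socket is picked up by NCCL's external network-plugin mechanism: NCCL dlopen()s a shared library named `libnccl-net.so` found on the library search path. A minimal sketch of enabling it is below; the install path is a placeholder assumption, not a documented default — point it at wherever your build placed `libnccl-net.so`.

```shell
# Hedged sketch: make the Fast Socket plugin visible to NCCL.
# PLUGIN_DIR is a hypothetical install location -- substitute the
# directory that actually contains your built libnccl-net.so.
PLUGIN_DIR=/usr/local/nccl-fastsocket
export LD_LIBRARY_PATH="${PLUGIN_DIR}:${LD_LIBRARY_PATH}"

# Optional: raise NCCL log verbosity so the startup log confirms
# which network plugin (if any) was loaded.
export NCCL_DEBUG=INFO
```

With these variables set, launching an NCCL application (e.g. via `mpirun` or `torchrun`) should cause NCCL to load the plugin automatically; no application code changes are needed.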
Alternatives and similar repositories for nccl-fastsocket:
Users interested in nccl-fastsocket are comparing it to the libraries listed below.
- RDMA and SHARP plugins for the NCCL library (☆191, updated 3 weeks ago)
- NCCL Profiling Kit (☆133, updated 10 months ago)
- Microsoft Collective Communication Library (☆344, updated last year)
- PArametrized Recommendation and AI Model benchmark is a repository for development of numerous uBenchmarks as well as end to end nets for… (☆137, updated this week)
- Fine-grained GPU sharing primitives (☆141, updated 5 years ago)
- Synthesizer for optimal collective communication algorithms (☆106, updated last year)
- This is a plugin which lets EC2 developers use libfabric as the network provider while running NCCL applications. (☆169, updated this week)
- ☆340, updated last year
- MSCCL++: A GPU-driven communication stack for scalable AI applications (☆345, updated this week)
- ☆36, updated 4 months ago
- HierarchicalKV is part of NVIDIA Merlin and provides hierarchical key-value storage to meet RecSys requirements. The key capability of… (☆144, updated this week)
- An Efficient Pipelined Data Parallel Approach for Training Large Model (☆76, updated 4 years ago)
- NVIDIA Resiliency Extension is a Python package for framework developers and users to implement fault-tolerant features. It improves the … (☆151, updated this week)
- NVIDIA NCCL Tests for Distributed Training (☆88, updated last week)
- Microsoft Collective Communication Library (☆65, updated 5 months ago)
- NVIDIA Inference Xfer Library (NIXL) (☆304, updated this week)
- PyTorch UCC plugin (☆21, updated 3 years ago)
- Unified Collective Communication Library (☆251, updated last week)
- oneAPI Collective Communications Library (oneCCL) (☆232, updated last week)
- GPUDirect Async support for IB Verbs (☆112, updated 2 years ago)
- Magnum IO community repo (☆90, updated 3 months ago)
- PyTorch process group third-party plugin for UCC (☆20, updated last year)
- ☆79, updated 2 years ago
- TACCL: Guiding Collective Algorithm Synthesis using Communication Sketches (☆73, updated last year)
- GPU-scheduler-for-deep-learning (☆205, updated 4 years ago)
- Splits a single NVIDIA GPU into multiple partitions with complete compute and memory isolation (with respect to performance) between the partitions (☆159, updated 6 years ago)
- ☆82, updated 2 years ago
- Example code for using DC QP to provide RDMA READ and WRITE operations to remote GPU memory (☆129, updated 9 months ago)
- Ultra | Ultimate | Unified CCL (☆65, updated 2 months ago)
- A tensor-aware point-to-point communication primitive for machine learning (☆257, updated 2 years ago)