google / nccl-fastsocket
NCCL Fast Socket is a transport layer plugin to improve NCCL collective communication performance on Google Cloud.
☆116 · Updated last year
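Because Fast Socket is shipped as an out-of-tree NCCL network plugin, it is discovered at runtime rather than compiled into NCCL. As a minimal, illustrative sketch (not this repository's documented setup), the Python snippet below shows one common way to expose such a plugin to NCCL by putting its `libnccl-net.so` directory on `LD_LIBRARY_PATH`, and to turn on NCCL's network logging to confirm it was picked up; the install path and training command are assumptions.

```python
# Sketch: launching a distributed job with an external NCCL network plugin
# (e.g. Fast Socket) visible to NCCL. Paths and the training command are
# hypothetical; NCCL loads network plugins (libnccl-net.so) found on
# LD_LIBRARY_PATH at initialization time.
import os
import subprocess

plugin_dir = "/usr/local/fastsocket/lib"  # hypothetical location of libnccl-net.so

env = dict(os.environ)
env["LD_LIBRARY_PATH"] = plugin_dir + ":" + env.get("LD_LIBRARY_PATH", "")
env["NCCL_DEBUG"] = "INFO"        # NCCL logs which network transport it selected
env["NCCL_DEBUG_SUBSYS"] = "NET"  # restrict the logs to network-related messages

# Run the training script in the modified environment (command is illustrative).
subprocess.run(["python", "train.py"], env=env, check=True)
```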
Alternatives and similar repositories for nccl-fastsocket:
Users interested in nccl-fastsocket are comparing it to the libraries listed below.
- RDMA and SHARP plugins for nccl library ☆181 · Updated last month
- NCCL Profiling Kit ☆127 · Updated 8 months ago
- Microsoft Collective Communication Library ☆340 · Updated last year
- Synthesizer for optimal collective communication algorithms ☆104 · Updated 11 months ago
- PArametrized Recommendation and Ai Model benchmark is a repository for development of numerous uBenchmarks as well as end to end nets for… ☆130 · Updated this week
- Fine-grained GPU sharing primitives ☆141 · Updated 5 years ago
- This is a plugin which lets EC2 developers use libfabric as network provider while running NCCL applications. ☆166 · Updated this week
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆308 · Updated this week
- An Efficient Pipelined Data Parallel Approach for Training Large Model ☆74 · Updated 4 years ago
- ☆328 · Updated 10 months ago
- NVIDIA NCCL Tests for Distributed Training ☆82 · Updated this week
- Microsoft Collective Communication Library ☆60 · Updated 3 months ago
- ☆36 · Updated 3 months ago
- ☆83 · Updated 2 years ago
- GPU-scheduler-for-deep-learning ☆202 · Updated 4 years ago
- ☆75 · Updated 2 years ago
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling ☆58 · Updated 10 months ago
- HierarchicalKV is a part of NVIDIA Merlin and provides hierarchical key-value storage to meet RecSys requirements. The key capability of… ☆140 · Updated last week
- A tool for bandwidth measurements on NVIDIA GPUs. ☆385 · Updated last month
- TACCL: Guiding Collective Algorithm Synthesis using Communication Sketches ☆71 · Updated last year
- ☆58 · Updated 4 years ago
- PipeSwitch: Fast Pipelined Context Switching for Deep Learning Applications ☆127 · Updated 2 years ago
- Magnum IO community repo ☆84 · Updated last month
- Artifact of OSDI '24 paper, "Llumnix: Dynamic Scheduling for Large Language Model Serving" ☆60 · Updated 9 months ago
- Pytorch process group third-party plugin for UCC ☆20 · Updated 10 months ago
- Splits a single Nvidia GPU into multiple partitions with complete compute and memory isolation (with respect to performance) between the partitions ☆157 · Updated 5 years ago
- Hooked CUDA-related dynamic libraries by using automated code generation tools. ☆149 · Updated last year
- An experimental parallel training platform ☆54 · Updated 11 months ago
- ☆23 · Updated 3 years ago
- Code for "Heterogeneity-Aware Cluster Scheduling Policies for Deep Learning Workloads", which appeared at OSDI 2020 ☆126 · Updated 7 months ago