paperg / NCCL_GP
Decoupled from the hardware, intended for learning some of NCCL's mechanisms
☆24 · Updated last year
Alternatives and similar repositories for NCCL_GP
Users interested in NCCL_GP are comparing it to the libraries listed below.
- ☆212 · Updated 2 years ago
- ☆227 · Updated last week
- example code for using DC QP for providing RDMA READ and WRITE operations to remote GPU memory ☆148 · Updated last year
- ☆30 · Updated last year
- DeepSeek-V3/R1 inference performance simulator ☆169 · Updated 8 months ago
- Repository for MLCommons Chakra schema and tools ☆142 · Updated last month
- RDMA and SHARP plugins for nccl library ☆217 · Updated 3 weeks ago
- ☆47 · Updated last year
- NCCL Profiling Kit ☆149 · Updated last year
- TACCL: Guiding Collective Algorithm Synthesis using Communication Sketches ☆78 · Updated 2 years ago
- ☆39 · Updated 3 years ago
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling ☆65 · Updated last year
- ☆90 · Updated 8 months ago
- Thunder Research Group's Collective Communication Library ☆43 · Updated 5 months ago
- Venus Collective Communication Library, supported by SII and Infrawaves. ☆123 · Updated this week
- ☆42 · Updated last year
- ATLAHS: An Application-centric Network Simulator Toolchain for AI, HPC, and Distributed Storage ☆56 · Updated last week
- Artifact from "Hardware Compute Partitioning on NVIDIA GPUs". THIS IS A FORK OF BAKITA'S REPO. I AM NOT ONE OF THE AUTHORS OF THE PAPER. ☆44 · Updated 2 weeks ago
- Synthesizer for optimal collective communication algorithms ☆121 · Updated last year
- An interference-aware scheduler for fine-grained GPU sharing ☆154 · Updated 2 weeks ago
- This is an RDMA program written in Python, based on the Pyverbs provided by the rdma-core (https://github.com/linux-rdma/rdma-core) reposi… ☆34 · Updated 3 years ago
- Microsoft Collective Communication Library ☆375 · Updated 2 years ago
- ☆21 · Updated 10 months ago
- Hooked CUDA-related dynamic libraries by using automated code generation tools. ☆172 · Updated 2 years ago
- ☆71 · Updated 7 months ago
- RDMA example ☆227 · Updated 3 years ago
- ASTRA-sim2.0: Modeling Hierarchical Networks and Disaggregated Systems for Large-model Training at Scale ☆483 · Updated last week
- ☆45 · Updated 4 months ago
- A lightweight design for computation-communication overlap. ☆194 · Updated 2 months ago
- Simulating Distributed Training at Scale ☆14 · Updated 2 months ago