paperg / NCCL_GP
Decoupled from the underlying hardware; used to study some of NCCL's internal mechanisms.
☆22 · Updated last year
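As a rough, hypothetical illustration of the kind of call path such a study traces (this sketch is not code from NCCL_GP; it is a minimal single-process, single-GPU all-reduce using the standard NCCL API):

```c
// Minimal single-rank NCCL all-reduce, for illustration only.
// Build (assuming CUDA and NCCL are installed): nvcc demo.c -lnccl -o demo
#include <stdio.h>
#include <cuda_runtime.h>
#include <nccl.h>

int main(void) {
    const int count = 1024;              // number of floats to reduce
    ncclUniqueId id;
    ncclComm_t comm;
    cudaStream_t stream;
    float *sendbuf, *recvbuf;

    cudaSetDevice(0);
    cudaStreamCreate(&stream);
    cudaMalloc((void**)&sendbuf, count * sizeof(float));
    cudaMalloc((void**)&recvbuf, count * sizeof(float));

    // Bootstrap a 1-rank communicator; a real job broadcasts the id
    // (e.g. via MPI) and calls ncclCommInitRank on every rank.
    ncclGetUniqueId(&id);
    ncclCommInitRank(&comm, /*nranks=*/1, id, /*rank=*/0);

    // The collective itself: sum-reduce sendbuf into recvbuf on the stream.
    ncclAllReduce(sendbuf, recvbuf, count, ncclFloat, ncclSum, comm, stream);
    cudaStreamSynchronize(stream);

    ncclCommDestroy(comm);
    cudaFree(sendbuf);
    cudaFree(recvbuf);
    cudaStreamDestroy(stream);
    printf("done\n");
    return 0;
}
```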
Alternatives and similar repositories for NCCL_GP
Users interested in NCCL_GP are comparing it to the libraries listed below.
- Example code for using DC QP to provide RDMA READ and WRITE operations to remote GPU memory (☆143 · Updated last year)
- RDMA and SHARP plugins for the NCCL library (☆203 · Updated this week)
- NCCL Profiling Kit (☆143 · Updated last year)
- DeepSeek-V3/R1 inference performance simulator (☆165 · Updated 5 months ago)
- Repository for the MLCommons Chakra schema and tools (☆125 · Updated last month)
- TACCL: Guiding Collective Algorithm Synthesis using Communication Sketches (☆74 · Updated 2 years ago)
- Hooks CUDA-related dynamic libraries using automated code generation tools (☆166 · Updated last year)
- RDMA example (☆220 · Updated 3 years ago)
- Ultra and Unified CCL (☆530 · Updated this week)
- NCCL Fast Socket is a transport-layer plugin to improve NCCL collective communication performance on Google Cloud (☆121 · Updated last year)
- Artifacts for our NSDI '23 paper TGS (☆85 · Updated last year)
- Microsoft Collective Communication Library (☆360 · Updated last year)
- An RDMA program written in Python, based on Pyverbs from the rdma-core (https://github.com/linux-rdma/rdma-core) repository (☆33 · Updated 3 years ago)
- Synthesizer for optimal collective communication algorithms (☆116 · Updated last year)
- Injecting Adrenaline into LLM Serving: Boosting Resource Utilization and Throughput via Attention Disaggregation (☆30 · Updated 2 months ago)
- ASTRA-sim2.0: Modeling Hierarchical Networks and Disaggregated Systems for Large-model Training at Scale (☆426 · Updated last week)
- An interference-aware scheduler for fine-grained GPU sharing (☆145 · Updated 7 months ago)
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling (☆63 · Updated last year)
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation (☆108 · Updated 3 months ago)
- REEF is a GPU-accelerated DNN inference serving system that enables instant kernel preemption and biased concurrent execution in GPU scheduling (☆100 · Updated 2 years ago)