ParCoreLab / ComScribe
ComScribe is a tool to identify communication among all GPU-GPU and CPU-GPU pairs in a single-node multi-GPU system.
☆27 · Updated 2 years ago
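For context on what "communication among all GPU-GPU and CPU-GPU pairs" means on a single node, below is a minimal CUDA sketch. It is an illustration only, not ComScribe's actual implementation or API: it enumerates every GPU pair and checks whether direct peer-to-peer access is available, the distinction that determines whether traffic can flow GPU-to-GPU or must be staged through the host.

```c
// Illustrative sketch (not ComScribe code): list every GPU pair on a
// single node and report whether direct peer-to-peer (GPU-GPU) access
// is possible. Pairs without P2P support must stage transfers through
// host memory, i.e., CPU-GPU communication.
#include <stdio.h>
#include <cuda_runtime.h>

int main(void) {
    int n = 0;
    if (cudaGetDeviceCount(&n) != cudaSuccess || n == 0) {
        fprintf(stderr, "no CUDA devices found\n");
        return 1;
    }
    for (int src = 0; src < n; ++src) {
        for (int dst = 0; dst < n; ++dst) {
            if (src == dst) continue;
            int canAccess = 0;
            // Queries whether device `src` can directly access memory on `dst`.
            cudaDeviceCanAccessPeer(&canAccess, src, dst);
            printf("GPU %d -> GPU %d: %s\n", src, dst,
                   canAccess ? "direct P2P" : "via host (CPU-GPU staging)");
        }
    }
    return 0;
}
```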
Alternatives and similar repositories for ComScribe
Users interested in ComScribe are comparing it to the repositories listed below.
- NCCL Profiling Kit ☆147 · Updated last year
- Multi-GPU communication profiler and visualizer ☆36 · Updated last year
- Thunder Research Group's Collective Communication Library ☆42 · Updated 4 months ago
- Microsoft Collective Communication Library ☆66 · Updated last year
- ☆26 · Updated 9 months ago
- A hierarchical collective communications library with portable optimizations ☆36 · Updated 11 months ago
- ☆38 · Updated 4 years ago
- ☆44 · Updated 4 years ago
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling ☆65 · Updated last year
- High-performance RDMA-based distributed feature collection component for training GNN models on extremely large graphs ☆55 · Updated 3 years ago
- ☆83 · Updated 2 years ago
- Analysis of traces from byteprofile ☆32 · Updated 2 years ago
- RDMA and SHARP plugins for the NCCL library ☆212 · Updated last month
- A GPU-accelerated DNN inference serving system that supports instant kernel preemption and biased concurrent execution in GPU scheduling. ☆43 · Updated 3 years ago
- Synthesizer for optimal collective communication algorithms ☆120 · Updated last year
- Fine-grained GPU sharing primitives ☆147 · Updated 3 months ago
- GPUDirect Async support for IB Verbs ☆133 · Updated 3 years ago
- ☆47 · Updated 11 months ago
- ☆24 · Updated 2 years ago
- Tacker: Tensor-CUDA Core Kernel Fusion for Improving the GPU Utilization while Ensuring QoS ☆32 · Updated 9 months ago
- NCCL Fast Socket is a transport layer plugin to improve NCCL collective communication performance on Google Cloud. ☆122 · Updated 2 years ago
- [ACM EuroSys 2023] Fast and Efficient Model Serving Using Multi-GPUs with Direct-Host-Access ☆57 · Updated 3 months ago
- ☆53 · Updated 11 months ago
- Tartan: Evaluating Modern GPU Interconnect via a Multi-GPU Benchmark Suite ☆66 · Updated 7 years ago
- Magnum IO community repo ☆104 · Updated 3 months ago
- SHADE: Enable Fundamental Cacheability for Distributed Deep Learning Training ☆35 · Updated 2 years ago
- Artifact of the OSDI '24 paper "Llumnix: Dynamic Scheduling for Large Language Model Serving" ☆63 · Updated last year
- PipeSwitch: Fast Pipelined Context Switching for Deep Learning Applications ☆126 · Updated 3 years ago
- DISB is a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces. ☆56 · Updated last year
- An experimental parallel training platform ☆56 · Updated last year