Bruce-Lee-LY / cuda_hook
Hooks CUDA-related dynamic libraries using automated code generation tools.
☆145 · Updated last year
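Hooking here means interposing a shim between the application and the real CUDA shared library, so each API call can be observed or redirected before it is forwarded. Below is a rough, hand-written sketch of that idea for a single Driver API entry point; cuda_hook generates such stubs automatically, and the driver library name, the chosen symbol `cuMemAlloc_v2`, and the logging are assumptions for illustration, not taken from the cuda_hook sources.

```c
// Hand-written sketch of one Driver API hook, illustrating the shim idea
// that cuda_hook automates with code generation. Library name, symbol
// choice, and logging are illustrative assumptions.
#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>

// Redeclared locally so the shim builds without cuda.h.
typedef int CUresult;
typedef unsigned long long CUdeviceptr;

// Lazily opened handle to the real NVIDIA driver library.
static void *real_libcuda;

static void *real_sym(const char *name) {
    if (!real_libcuda) {
        real_libcuda = dlopen("libcuda.so.1", RTLD_NOW | RTLD_LOCAL);
        if (!real_libcuda) {
            fprintf(stderr, "hook: failed to load real libcuda: %s\n", dlerror());
            exit(1);
        }
    }
    return dlsym(real_libcuda, name);
}

// Exported under the same name as the real driver symbol, so resolution
// reaches the hook first when this shim is preloaded; the hook records
// the call and forwards it unchanged.
CUresult cuMemAlloc_v2(CUdeviceptr *dptr, size_t bytesize) {
    static CUresult (*real_fn)(CUdeviceptr *, size_t);
    if (!real_fn)
        real_fn = (CUresult (*)(CUdeviceptr *, size_t))real_sym("cuMemAlloc_v2");

    fprintf(stderr, "hook: cuMemAlloc_v2(%zu bytes)\n", bytesize);
    return real_fn(dptr, bytesize);
}
```

Built as a shared object (for example `gcc -shared -fPIC -o libcuda_hook_sketch.so hook.c -ldl`) and preloaded ahead of the driver, a shim like this sees each hooked call before the real library does; cuda_hook automates generating such stubs across entire library interfaces instead of writing them by hand. Note that newer CUDA runtimes can obtain driver entry points through cuGetProcAddress, which this kind of dynamic-linker interposition does not see.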
Alternatives and similar repositories for cuda_hook:
Users interested in cuda_hook are comparing it to the libraries listed below.
- An efficient GPU resource sharing system with fine-grained control for Linux platforms. ☆76 · Updated 10 months ago
- qCUDA: GPGPU Virtualization at a New API Remoting Method with Para-virtualization ☆117 · Updated 3 years ago
- cricket is a virtualization solution for GPUs ☆179 · Updated this week
- Artifacts for our NSDI'23 paper TGS ☆72 · Updated 8 months ago
- This repository is an archive. Refer to https://github.com/gvirtus/GVirtuS ☆40 · Updated 3 years ago
- RDMA and SHARP plugins for nccl library ☆175 · Updated 3 weeks ago
- GPU-scheduler-for-deep-learning ☆202 · Updated 4 years ago
- NCCL Profiling Kit ☆127 · Updated 7 months ago
- HAMi-core compiles libvgpu.so, which enforces a hard limit on GPU usage inside containers ☆133 · Updated 3 weeks ago
- An interference-aware scheduler for fine-grained GPU sharing ☆122 · Updated 2 weeks ago
- Intercepting CUDA runtime calls with LD_PRELOAD (see the sketch after this list) ☆38 · Updated 10 years ago
- NVIDIA NCCL Tests for Distributed Training ☆78 · Updated 2 weeks ago
- A Kubernetes plugin that enables dynamically adding or removing GPU resources for a running Pod ☆122 · Updated 2 years ago
- Example code for using DC QP to provide RDMA READ and WRITE operations to remote GPU memory ☆115 · Updated 6 months ago
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling ☆58 · Updated 9 months ago
- Magnum IO community repo ☆84 · Updated 3 weeks ago
- Repository for MLCommons Chakra schema and tools ☆84 · Updated 2 weeks ago
- Code for "Heterogenity-Aware Cluster Scheduling Policies for Deep Learning Workloads", which appeared at OSDI 2020☆126Updated 6 months ago
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆294 · Updated this week
- NCCL Fast Socket is a transport layer plugin to improve NCCL collective communication performance on Google Cloud. ☆115 · Updated last year
- Fast OS-level support for GPU checkpoint and restore ☆152 · Updated last month
- Microsoft Collective Communication Library ☆332 · Updated last year
- Splits a single Nvidia GPU into multiple partitions with complete compute and memory isolation (with respect to performance) between the partitions ☆157 · Updated 5 years ago
- Fine-grained GPU sharing primitives ☆141 · Updated 4 years ago
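For the LD_PRELOAD repository referenced above, the same interception idea is applied to the Runtime API using RTLD_NEXT rather than an explicit dlopen of the real library. A minimal sketch follows; the wrapped symbol `cudaMalloc` and the logging are chosen purely for illustration and are not claimed to match that repository's code.

```c
// Sketch of the LD_PRELOAD trick for the CUDA Runtime API: export a
// wrapper with the same name as cudaMalloc, log the call, and forward it
// to the real implementation resolved via RTLD_NEXT. The error type is
// redeclared locally so the shim builds without the CUDA headers.
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>

typedef int cudaError_t;

cudaError_t cudaMalloc(void **devPtr, size_t size) {
    static cudaError_t (*real_cudaMalloc)(void **, size_t);
    if (!real_cudaMalloc)
        // RTLD_NEXT skips this preloaded object and resolves the symbol
        // in the next library on the search order, i.e. the real libcudart.
        real_cudaMalloc = (cudaError_t (*)(void **, size_t))
                              dlsym(RTLD_NEXT, "cudaMalloc");

    fprintf(stderr, "intercepted cudaMalloc(%zu bytes)\n", size);
    return real_cudaMalloc(devPtr, size);
}
```

Compiled with something like `gcc -shared -fPIC -o libintercept.so intercept.c -ldl` and injected via `LD_PRELOAD=$PWD/libintercept.so ./app`, every dynamically linked cudaMalloc call passes through the wrapper before reaching libcudart; calls into a statically linked runtime are not visible to this approach.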