neurokernel / gpu-cluster-config
How to Configure a GPU Cluster Running Ubuntu Linux
☆54 · Updated 7 years ago
Related projects
Alternatives and complementary repositories for gpu-cluster-config (a minimal GPU visibility check is sketched after the list)
- Scheduling GPU cluster workloads with Slurm ☆73 · Updated 6 years ago
- Steps to create a small Slurm cluster with GPU-enabled nodes ☆263 · Updated last year
- Instructions for setting up a SLURM cluster using Ubuntu 18.04.3 with GPUs ☆135 · Updated 3 years ago
- This repository contains the results and code for the MLPerf™ Training v0.5 benchmark ☆35 · Updated last year
- Tools to deploy GPU clusters in the Cloud ☆30 · Updated last year
- Container plugin for Slurm Workload Manager ☆288 · Updated this week
- Tools and extensions for CUDA profiling ☆63 · Updated 4 years ago
- Python bindings for NVTX ☆66 · Updated last year
- This repository contains the results and code for the MLPerf™ Training v0.6 benchmark ☆42 · Updated last year
- Deep Learning Benchmarking Suite ☆130 · Updated last year
- Reference implementations of MLPerf™ HPC training benchmarks ☆41 · Updated 5 months ago
- oneCCL Bindings for Pytorch* ☆86 · Updated last week
- PyProf2: PyTorch Profiling tool ☆83 · Updated 4 years ago
- HPC Container Maker ☆458 · Updated last week
- This repository contains the results and code for the MLPerf™ Training v0.7 benchmark ☆56 · Updated last year
- ☆26 · Updated last year
- Personal collection of references for high performance mixed precision training ☆41 · Updated 5 years ago
- Microway's improved version of GPU Burn ☆86 · Updated 2 months ago
- NGC Container Replicator ☆28 · Updated last year
- Issues related to MLPerf™ training policies, including rules and suggested changes ☆92 · Updated last month
- Code examples for CUDA and OpenACC ☆34 · Updated 2 months ago
- ☆313 · Updated 6 months ago
- High Performance Linpack for GPUs (using OpenCL, CUDA, CAL) ☆88 · Updated 9 years ago
- SLURM Example Scripts ☆69 · Updated 5 years ago
- ☆32 · Updated 7 years ago
- Slurm on Google Cloud Platform ☆181 · Updated last month
- Convert nvprof profiles into about:tracing compatible JSON files ☆67 · Updated 3 years ago
- Examples demonstrating available options to program multiple GPUs in a single node or a cluster ☆554 · Updated last week
- Intel® Optimization for Chainer*, a Chainer module providing numpy-like API and DNN acceleration using MKL-DNN ☆163 · Updated this week
- Now hosted on GitLab ☆312 · Updated last month
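
Several of the repositories above (the Slurm GPU scheduling guides, GPU Burn, the multi-GPU programming examples) assume each node already exposes its GPUs correctly. As a minimal, hypothetical sanity check, not taken from any of the listed projects, the following CUDA snippet enumerates the devices a node can see; the file name `gpu_check.cu` is illustrative.

```cuda
// gpu_check.cu -- hypothetical per-node GPU visibility check.
// Build with: nvcc -o gpu_check gpu_check.cu
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        // A failure here usually points at a driver/runtime misconfiguration.
        std::printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    std::printf("Visible GPUs: %d\n", count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        std::printf("  GPU %d: %s, %.1f GiB, compute capability %d.%d\n",
                    i, prop.name,
                    prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0),
                    prop.major, prop.minor);
    }
    return 0;
}
```

Under a Slurm setup like the ones listed above, running it as `srun --gres=gpu:1 ./gpu_check` on each node is a quick way to confirm that the scheduler is actually granting GPU access.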