NVIDIA / nccl
Optimized primitives for collective multi-GPU communication
☆3,744, updated this week
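NCCL's collectives (all-reduce, all-gather, reduce-scatter, broadcast, and so on) operate on CUDA device buffers and are enqueued on CUDA streams through per-device communicators. As a rough illustration of the single-process pattern from NCCL's documentation (error handling omitted and buffer contents arbitrary, so treat this as a sketch rather than production code), a multi-GPU all-reduce looks like this:

```c
// Minimal single-process, multi-GPU all-reduce sketch with NCCL.
// Build (paths may vary): nvcc allreduce.cu -lnccl -o allreduce
#include <stdio.h>
#include <cuda_runtime.h>
#include <nccl.h>

#define MAX_DEV 8

int main(void) {
  int nDev = 0;
  cudaGetDeviceCount(&nDev);
  if (nDev > MAX_DEV) nDev = MAX_DEV;

  int devs[MAX_DEV];
  for (int i = 0; i < nDev; ++i) devs[i] = i;

  // One communicator per GPU, all owned by this single process.
  ncclComm_t comms[MAX_DEV];
  ncclCommInitAll(comms, nDev, devs);

  const size_t count = 1 << 20;  // elements per GPU
  float* sendbuf[MAX_DEV];
  float* recvbuf[MAX_DEV];
  cudaStream_t streams[MAX_DEV];

  for (int i = 0; i < nDev; ++i) {
    cudaSetDevice(i);
    cudaMalloc((void**)&sendbuf[i], count * sizeof(float));
    cudaMalloc((void**)&recvbuf[i], count * sizeof(float));
    cudaMemset(sendbuf[i], 1, count * sizeof(float));  // arbitrary bit pattern
    cudaStreamCreate(&streams[i]);
  }

  // Group the per-device calls so NCCL launches them as one collective.
  ncclGroupStart();
  for (int i = 0; i < nDev; ++i)
    ncclAllReduce(sendbuf[i], recvbuf[i], count, ncclFloat, ncclSum,
                  comms[i], streams[i]);
  ncclGroupEnd();

  // Collectives are asynchronous: wait on each stream before reading results.
  for (int i = 0; i < nDev; ++i) {
    cudaSetDevice(i);
    cudaStreamSynchronize(streams[i]);
  }

  for (int i = 0; i < nDev; ++i) {
    cudaSetDevice(i);
    cudaFree(sendbuf[i]);
    cudaFree(recvbuf[i]);
    cudaStreamDestroy(streams[i]);
    ncclCommDestroy(comms[i]);
  }
  printf("all-reduce issued on %d GPU(s)\n", nDev);
  return 0;
}
```

Multi-node jobs typically broadcast a ncclUniqueId via MPI (or another launcher) and call ncclCommInitRank per rank instead of ncclCommInitAll; several of the repositories listed below (e.g., NCCL Tests) exercise exactly this pattern.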
Alternatives and similar repositories for nccl
Users interested in nccl are comparing it to the libraries listed below.
- NCCL Tests (☆1,125, updated 3 weeks ago)
- Collective communications library with various primitives for multi-machine training. (☆1,305, updated last week)
- A fast GPU memory copy library based on NVIDIA GPUDirect RDMA technology (☆1,105, updated 2 months ago)
- Reference implementations of MLPerf™ training benchmarks (☆1,673, updated 2 weeks ago)
- FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/ (☆1,342, updated this week)
- [ARCHIVED] Cooperative primitives for CUDA C++. See https://github.com/NVIDIA/cccl (☆1,747, updated last year)
- A GPU-accelerated library containing highly optimized building blocks and an execution engine for data processing to accelerate deep learning applications (☆5,407, updated this week)
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper, Ada and Blackwell GPUs (☆2,435, updated last week)
- oneAPI Deep Neural Network Library (oneDNN) (☆3,796, updated this week)
- CUDA Templates for Linear Algebra Subroutines (☆7,603, updated this week)
- PyTorch extensions for high performance and large scale training. (☆3,322, updated last month)
- TorchBench is a collection of open source benchmarks used to evaluate PyTorch performance. (☆945, updated this week)
- An Open Source Machine Learning Framework for Everyone (☆1,143, updated 8 months ago)
- CUDA Core Compute Libraries (☆1,662, updated this week)
- Open MPI main development repository (☆2,348, updated this week)
- A benchmark framework for TensorFlow (☆1,153, updated last year)
- Benchmarking Deep Learning operations on different hardware (☆1,087, updated 4 years ago)
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT (☆2,762, updated this week)
- Common in-memory tensor structure (☆1,002, updated 2 weeks ago)
- Transformer-related optimization, including BERT, GPT (☆6,173, updated last year)
- A PyTorch Native LLM Training Framework (☆811, updated 5 months ago)
- Samples for CUDA developers that demonstrate features in the CUDA Toolkit (☆7,500, updated last week)
- Reference implementations of MLPerf™ inference benchmarks (☆1,386, updated this week)
- Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet. (☆14,494, updated last month)
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 (☆2,078, updated 2 months ago)
- Examples demonstrating available options to program multiple GPUs in a single node or a cluster (☆715, updated 3 months ago)
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training (☆1,796, updated last week)
- Source code examples from the Parallel Forall Blog (☆1,287, updated 10 months ago)
- CUDA Library Samples (☆1,956, updated this week)