NVIDIA / nccl
Optimized primitives for collective multi-GPU communication
☆3,463 · Updated 3 weeks ago
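For a sense of what these collective primitives look like in use, here is a minimal, hypothetical sketch of a single-process all-reduce across all visible GPUs with the NCCL API (ncclCommInitAll, ncclGroupStart/ncclGroupEnd, ncclAllReduce). The buffer size, the 8-GPU array bound, and the build command are illustrative assumptions, and error checking is omitted for brevity.

```c
/* Minimal sketch (assumptions noted above): one process drives all visible
 * GPUs and performs a float sum all-reduce.
 * Build, e.g.: nvcc allreduce_sketch.c -o allreduce_sketch -lnccl (assumption) */
#include <stdio.h>
#include <cuda_runtime.h>
#include <nccl.h>

int main(void) {
    int nDev = 0;
    cudaGetDeviceCount(&nDev);
    if (nDev < 1) { printf("no CUDA devices found\n"); return 1; }
    if (nDev > 8) nDev = 8;                     /* arrays below assume <= 8 GPUs */

    const size_t count = 1 << 20;               /* elements per GPU (illustrative) */
    ncclComm_t comms[8];
    cudaStream_t streams[8];
    float *sendbuf[8], *recvbuf[8];

    /* Allocate one send/recv buffer and one stream per device. */
    for (int i = 0; i < nDev; ++i) {
        cudaSetDevice(i);
        cudaMalloc((void **)&sendbuf[i], count * sizeof(float));
        cudaMalloc((void **)&recvbuf[i], count * sizeof(float));
        cudaMemset(sendbuf[i], 1, count * sizeof(float));   /* arbitrary byte pattern */
        cudaStreamCreate(&streams[i]);
    }

    /* Create one communicator per device; NULL means devices 0..nDev-1. */
    ncclCommInitAll(comms, nDev, NULL);

    /* Group the per-device calls so NCCL treats them as one collective. */
    ncclGroupStart();
    for (int i = 0; i < nDev; ++i)
        ncclAllReduce(sendbuf[i], recvbuf[i], count, ncclFloat, ncclSum,
                      comms[i], streams[i]);
    ncclGroupEnd();

    /* Wait for completion, then clean up. */
    for (int i = 0; i < nDev; ++i) {
        cudaSetDevice(i);
        cudaStreamSynchronize(streams[i]);
        cudaFree(sendbuf[i]);
        cudaFree(recvbuf[i]);
        ncclCommDestroy(comms[i]);
    }
    printf("all-reduce across %d GPU(s) complete\n", nDev);
    return 0;
}
```

Wrapping the per-device launches in ncclGroupStart/ncclGroupEnd lets a single thread enqueue the collective on every communicator before NCCL starts it, which avoids the deadlock that sequential blocking calls would cause when one process manages several GPUs.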
Alternatives and similar repositories for nccl:
Users who are interested in nccl are comparing it to the libraries listed below.
- Collective communications library with various primitives for multi-machine training. ☆1,263 · Updated last week
- NCCL Tests ☆996 · Updated 2 weeks ago
- CUDA Templates for Linear Algebra Subroutines ☆6,233 · Updated last week
- A fast GPU memory copy library based on NVIDIA GPUDirect RDMA technology ☆958 · Updated 2 months ago
- oneAPI Deep Neural Network Library (oneDNN) ☆3,726 · Updated this week
- FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/ ☆1,260 · Updated this week
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs… ☆2,182 · Updated this week
- [ARCHIVED] Cooperative primitives for CUDA C++. See https://github.com/NVIDIA/cccl ☆1,719 · Updated last year
- Reference implementations of MLPerf™ training benchmarks ☆1,646 · Updated 2 weeks ago
- ☆579 · Updated 6 years ago
- Common in-memory tensor structure ☆942 · Updated last week
- A GPU-accelerated library containing highly optimized building blocks and an execution engine for data processing to accelerate deep lear… ☆5,278 · Updated this week
- Open deep learning compiler stack for CPU, GPU, and specialized accelerators ☆12,028 · Updated this week
- Transformer-related optimization, including BERT, GPT ☆6,025 · Updated 10 months ago
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,677 · Updated this week
- PyTorch extensions for high performance and large scale training. ☆3,260 · Updated last month
- CUDA Core Compute Libraries ☆1,468 · Updated this week
- Source code examples from the Parallel Forall Blog ☆1,260 · Updated 6 months ago
- Low-precision matrix multiplication ☆1,792 · Updated last year
- Reference implementations of MLPerf™ inference benchmarks ☆1,315 · Updated this week
- Open MPI main development repository ☆2,265 · Updated this week
- Benchmarking deep learning operations on different hardware ☆1,081 · Updated 3 years ago
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description. ☆974 · Updated 5 months ago
- Enabling PyTorch on XLA devices (e.g. Google TPU) ☆2,529 · Updated this week
- A high-performance and generic framework for distributed DNN training ☆3,662 · Updated last year
- A machine learning compiler for GPUs, CPUs, and ML accelerators ☆2,963 · Updated this week
- CUDA Library Samples ☆1,776 · Updated this week
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source compone… ☆11,184 · Updated 2 weeks ago
- CUDA Python: Performance meets Productivity ☆1,103 · Updated this week
- ATen: A TENsor library for C++11 ☆691 · Updated 5 years ago