Collective communications library with various primitives for multi-machine training.
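As a quick illustration of what a collective primitive looks like in practice, the sketch below uses PyTorch's `torch.distributed` API, which ships Gloo as its default CPU backend (this is PyTorch's wrapper, not Gloo's native C++ API, and the single-rank setup is purely for demonstration):

```python
# Minimal sketch: an all-reduce over the Gloo backend via torch.distributed.
# A real job would launch one process per machine, each with its own rank.
import torch
import torch.distributed as dist

def gloo_allreduce_demo():
    # Single-process "world" so the example is self-contained;
    # the TCP address/port here are arbitrary illustration values.
    dist.init_process_group(
        backend="gloo",
        init_method="tcp://127.0.0.1:29500",
        rank=0,
        world_size=1,
    )
    t = torch.tensor([1.0, 2.0, 3.0])
    # All-reduce sums the tensor across all ranks in place
    # (with world_size=1 the result equals the input).
    dist.all_reduce(t, op=dist.ReduceOp.SUM)
    dist.destroy_process_group()
    return t
```

With more ranks, each process would call the same `all_reduce` and end up holding the element-wise sum of every rank's tensor.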
☆1,399 · Feb 12, 2026 · Updated 2 weeks ago
Alternatives and similar repositories for gloo
Users interested in gloo are comparing it to the libraries listed below.
- Optimized primitives for collective multi-GPU communication ☆4,474 · Updated this week
- Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet. ☆14,675 · Dec 1, 2025 · Updated 2 months ago
- A high performance and generic framework for distributed DNN training ☆3,716 · Oct 3, 2023 · Updated 2 years ago
- NCCL Tests ☆1,441 · Feb 9, 2026 · Updated 2 weeks ago
- Compiler for Neural Network hardware accelerators ☆3,326 · May 11, 2024 · Updated last year
- A tensor-aware point-to-point communication primitive for machine learning ☆283 · Dec 17, 2025 · Updated 2 months ago
- Acceleration package for neural networks on multi-core CPUs ☆1,701 · Jun 11, 2024 · Updated last year
- A domain-specific language to express machine learning workloads. ☆1,765 · Apr 28, 2023 · Updated 2 years ago
- FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/ ☆1,534 · Updated this week
- A PyTorch extension: tools for easy mixed precision and distributed training in PyTorch ☆8,926 · Updated this week
- oneAPI Collective Communications Library (oneCCL) ☆256 · Feb 4, 2026 · Updated 3 weeks ago
- A fast GPU memory copy library based on NVIDIA GPUDirect RDMA technology ☆1,345 · Dec 17, 2025 · Updated 2 months ago
- CUDA templates and Python DSLs for high-performance linear algebra ☆9,315 · Updated this week
- Open machine learning compiler framework ☆13,142 · Updated this week
- PyTorch extensions for high performance and large scale training. ☆3,400 · Apr 26, 2025 · Updated 10 months ago
- RDMA and SHARP plugins for the NCCL library ☆223 · Jan 12, 2026 · Updated last month
- Unified Communication X (mailing list - https://elist.ornl.gov/mailman/listinfo/ucx-group) ☆1,581 · Updated this week
- Common in-memory tensor structure ☆1,169 · Jan 26, 2026 · Updated last month
- Reliable Allreduce and Broadcast Interface for distributed machine learning ☆514 · Nov 5, 2020 · Updated 5 years ago
- Ongoing research training transformer models at scale ☆15,242 · Feb 21, 2026 · Updated last week
- Automatically discovering fast parallelization strategies for distributed deep neural network training ☆1,861 · Feb 20, 2026 · Updated last week
- Development repository for the Triton language and compiler ☆18,460 · Feb 22, 2026 · Updated last week
- oneAPI Deep Neural Network Library (oneDNN) ☆3,956 · Updated this week
- A lightweight parameter server interface ☆1,560 · Jan 11, 2023 · Updated 3 years ago
- ATen: A TENsor library for C++11 ☆717 · Nov 20, 2019 · Updated 6 years ago
- A GPU-accelerated library containing highly optimized building blocks and an execution engine for data processing to accelerate deep learning… ☆5,634 · Feb 19, 2026 · Updated last week
- Unified Collective Communication Library ☆293 · Feb 19, 2026 · Updated last week
- [ARCHIVED] Cooperative primitives for CUDA C++. See https://github.com/NVIDIA/cccl ☆1,820 · Oct 9, 2023 · Updated 2 years ago
- Reference implementations of MLPerf® training benchmarks ☆1,741 · Feb 20, 2026 · Updated last week
- [DEPRECATED] Moved to the ROCm/rocm-systems repo ☆411 · Updated this week
- Resource-adaptive cluster scheduler for deep learning training ☆454 · Mar 5, 2023 · Updated 2 years ago
- PyTorch elastic training ☆728 · Jun 15, 2022 · Updated 3 years ago
- Lightweight, portable, flexible distributed/mobile deep learning with a dynamic, mutation-aware dataflow dependency scheduler; for Python, R, Julia… ☆20,829 · Oct 25, 2023 · Updated 2 years ago
- Tutorial code on how to build your own deep learning system in 2k lines ☆2,016 · Oct 4, 2018 · Updated 7 years ago
- Ray is an AI compute engine. Ray consists of a core distributed runtime and a set of AI libraries for accelerating ML workloads. ☆41,516 · Updated this week