mlcommons / training
Reference implementations of MLPerf™ training benchmarks
☆1,653 · Updated this week
Alternatives and similar repositories for training:
Users who are interested in training are comparing it to the libraries listed below.
- Reference implementations of MLPerf™ inference benchmarks ☆1,331 · Updated last week
- A benchmark framework for TensorFlow ☆1,149 · Updated last year
- Collective communications library with various primitives for multi-machine training. ☆1,277 · Updated this week
- Benchmarking deep learning operations on different hardware ☆1,081 · Updated 3 years ago
- A domain-specific language to express machine learning workloads. ☆1,756 · Updated last year
- FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/ ☆1,277 · Updated this week
- TorchBench is a collection of open-source benchmarks used to evaluate PyTorch performance. ☆915 · Updated this week
- A performant and modular runtime for TensorFlow ☆759 · Updated last month
- ☆580 · Updated 6 years ago
- nGraph has moved to OpenVINO ☆1,350 · Updated 4 years ago
- Common in-memory tensor structure ☆963 · Updated last week
- Dive into Deep Learning Compiler ☆647 · Updated 2 years ago
- Low-precision matrix multiplication ☆1,794 · Updated last year
- oneAPI Deep Neural Network Library (oneDNN) ☆3,747 · Updated this week
- Quantized Neural Network PACKage - mobile-optimized implementation of quantized neural network operators ☆1,538 · Updated 5 years ago
- A flexible and efficient deep neural network (DNN) compiler that generates a high-performance executable from a DNN model description. ☆979 · Updated 6 months ago
- The Tensor Algebra SuperOptimizer for Deep Learning ☆704 · Updated 2 years ago
- Compiler for Neural Network hardware accelerators ☆3,271 · Updated 10 months ago
- TensorFlow/TensorRT integration ☆739 · Updated last year
- Optimized primitives for collective multi-GPU communication ☆3,564 · Updated this week
- ☆408 · Updated this week
- "Multi-Level Intermediate Representation" Compiler Infrastructure ☆1,738 · Updated 3 years ago
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training ☆1,773 · Updated this week
- NCCL Tests ☆1,040 · Updated last week
- ☆1,658 · Updated 6 years ago
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs… ☆2,293 · Updated this week
- Representation and Reference Lowering of ONNX Models in MLIR Compiler Infrastructure ☆830 · Updated this week
- ☆372 · Updated 7 years ago
- HugeCTR is a high-efficiency GPU framework designed for Click-Through-Rate (CTR) estimation training ☆979 · Updated this week
- Mesh TensorFlow: Model Parallelism Made Easier ☆1,601 · Updated last year