mlcommons / inference
Reference implementations of MLPerf™ inference benchmarks
☆1,397 · Updated last week
Alternatives and similar repositories for inference
Users interested in inference are comparing it to the libraries listed below.
- Reference implementations of MLPerf™ training benchmarks (☆1,683 · Updated last month)
- SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX R… (☆2,430 · Updated this week)
- FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/ (☆1,383 · Updated this week)
- Representation and Reference Lowering of ONNX Models in MLIR Compiler Infrastructure (☆873 · Updated last week)
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description. (☆989 · Updated 9 months ago)
- BladeDISC is an end-to-end DynamIc Shape Compiler project for machine learning workloads. (☆873 · Updated 5 months ago)
- The Tensor Algebra SuperOptimizer for Deep Learning (☆715 · Updated 2 years ago)
- A performant and modular runtime for TensorFlow (☆762 · Updated 2 months ago)
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. (☆1,051 · Updated last year)
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper, Ada and Bla… (☆2,491 · Updated this week)
- NVIDIA Data Center GPU Manager (DCGM) is a project for gathering telemetry and measuring the health of NVIDIA GPUs (☆525 · Updated last month)
- ONNX Optimizer (☆721 · Updated last week)
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training (☆1,803 · Updated this week)
- ☆416 · Updated this week
- NCCL Tests (☆1,149 · Updated 2 weeks ago)
- Dive into Deep Learning Compiler (☆644 · Updated 3 years ago)
- The Torch-MLIR project aims to provide first-class support from the PyTorch ecosystem to the MLIR ecosystem. (☆1,562 · Updated last week)
- Triton Model Analyzer is a CLI tool for understanding the compute and memory requirements of the Triton Inference Serv… (☆477 · Updated last week)
- A CPU+GPU profiling library that provides access to timeline traces and hardware performance counters. (☆821 · Updated this week)
- Common in-memory tensor structure (☆1,014 · Updated last week)
- TorchBench is a collection of open-source benchmarks used to evaluate PyTorch performance. (☆948 · Updated 2 weeks ago)
- Intel® AI Reference Models: contains Intel optimizations for running deep learning workloads on Intel® Xeon® Scalable processors and Inte… (☆718 · Updated last week)
- Collective communications library with various primitives for multi-machine training. (☆1,315 · Updated this week)
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT (☆2,783 · Updated this week)
- ☆391 · Updated 2 years ago
- A unified library of state-of-the-art model optimization techniques like quantization, pruning, distillation, speculative decoding, etc. … (☆995 · Updated this week)
- High-efficiency floating-point neural network inference operators for mobile, server, and Web (☆2,046 · Updated this week)
- oneAPI Deep Neural Network Library (oneDNN) (☆3,813 · Updated this week)
- Optimized primitives for collective multi-GPU communication (☆3,789 · Updated 3 weeks ago)
- AIMET is a library that provides advanced quantization and compression techniques for trained neural network models. (☆2,340 · Updated this week)
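Several of the repositories above (the low-bit LLM quantization library, the model optimization toolkit, and AIMET) center on quantization. As a minimal, library-agnostic illustration of the core idea these tools build on, the sketch below applies standard affine (asymmetric) INT8 quantization to a list of floats in plain Python; the function names are hypothetical and this is not the API of any project listed here.

```python
def quantize_int8(values):
    """Affine (asymmetric) quantization of floats to int8 codes."""
    qmin, qmax = -128, 127
    lo, hi = min(values), max(values)
    # Map the observed float range [lo, hi] onto the int8 range.
    scale = (hi - lo) / (qmax - qmin) or 1.0  # guard against a constant input
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize_int8(codes, scale, zero_point):
    """Recover approximate floats from int8 codes."""
    return [(c - zero_point) * scale for c in codes]

x = [-1.0, -0.5, 0.0, 0.5, 1.0]
q, s, zp = quantize_int8(x)
x_hat = dequantize_int8(q, s, zp)
# Round-trip error is bounded by roughly scale/2 per element.
max_err = max(abs(a - b) for a, b in zip(x, x_hat))
```

Production libraries differ mainly in how they choose the range (calibration, per-channel scales, quantization-aware training), not in this basic scale/zero-point arithmetic.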