mlcommons / inference
Reference implementations of MLPerf® inference benchmarks
☆1,525 · Updated this week
Alternatives and similar repositories for inference
Users who are interested in inference are comparing it to the libraries listed below.
- Reference implementations of MLPerf® training benchmarks ☆1,739 · Updated last month
- FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/ ☆1,525 · Updated this week
- Representation and Reference Lowering of ONNX Models in MLIR Compiler Infrastructure ☆973 · Updated this week
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description. ☆1,006 · Updated last year
- A CPU+GPU Profiling library that provides access to timeline traces and hardware performance counters. ☆921 · Updated this week
- TorchBench is a collection of open source benchmarks used to evaluate PyTorch performance. ☆1,012 · Updated this week
- [DEPRECATED] Moved to ROCm/rocm-libraries repo. NOTE: develop branch is maintained as a read-only mirror ☆518 · Updated this week
- The Torch-MLIR project aims to provide first-class support from the PyTorch ecosystem to the MLIR ecosystem. ☆1,742 · Updated this week
- Intel® AI Reference Models: contains Intel optimizations for running deep learning workloads on Intel® Xeon® Scalable processors and Inte… ☆728 · Updated this week
- NCCL Tests ☆1,423 · Updated this week
- SOTA low-bit LLM quantization (INT8/FP8/MXFP8/INT4/MXFP4/NVFP4) & sparsity; leading model compression techniques on PyTorch, TensorFlow, … ☆2,577 · Updated last week
- A unified library of SOTA model optimization techniques like quantization, pruning, distillation, speculative decoding, etc. It compresse… ☆1,925 · Updated this week
- A performant and modular runtime for TensorFlow ☆753 · Updated 5 months ago
- cudnn_frontend provides a C++ wrapper for the cuDNN backend API and samples on how to use it ☆682 · Updated last week
- ☆422 · Updated last month
- High-efficiency floating-point neural network inference operators for mobile, server, and Web ☆2,245 · Updated this week
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… ☆3,132 · Updated this week
- BladeDISC is an end-to-end DynamIc Shape Compiler project for machine learning workloads. ☆916 · Updated last year
- ONNX Optimizer ☆795 · Updated this week
- A tool for bandwidth measurements on NVIDIA GPUs. ☆617 · Updated 9 months ago
- NVIDIA Data Center GPU Manager (DCGM) is a project for gathering telemetry and measuring the health of NVIDIA GPUs ☆658 · Updated 2 months ago
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training ☆1,859 · Updated last week
- Triton Model Analyzer is a CLI tool to help with better understanding of the compute and memory requirements of the Triton Inference Serv… ☆504 · Updated this week
- The Tensor Algebra SuperOptimizer for Deep Learning ☆740 · Updated 3 years ago
- ☆392 · Updated 3 years ago
- Backward compatible ML compute opset inspired by HLO/MHLO ☆601 · Updated 3 weeks ago
- AIMET is a library that provides advanced quantization and compression techniques for trained neural network models. ☆2,552 · Updated this week
- Examples demonstrating available options to program multiple GPUs in a single node or a cluster ☆864 · Updated 4 months ago
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ☆1,072 · Updated last year
- Dive into Deep Learning Compiler ☆645 · Updated 3 years ago