mlcommons / inference
Reference implementations of MLPerf™ inference benchmarks
☆1,238 · Updated this week
Related projects
Alternatives and complementary repositories for inference
- Reference implementations of MLPerf™ training benchmarks ☆1,617 · Updated last month
- FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/ ☆1,210 · Updated this week
- The Torch-MLIR project aims to provide first class support from the PyTorch ecosystem to the MLIR ecosystem. ☆1,355 · Updated this week
- Representation and Reference Lowering of ONNX Models in MLIR Compiler Infrastructure ☆767 · Updated this week
- Actively maintained ONNX Optimizer ☆647 · Updated 8 months ago
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description. ☆962 · Updated 2 months ago
- BladeDISC is an end-to-end DynamIc Shape Compiler project for machine learning workloads. ☆816 · Updated this week
- A performant and modular runtime for TensorFlow ☆756 · Updated last month
- NCCL Tests ☆898 · Updated 2 weeks ago
- The Tensor Algebra SuperOptimizer for Deep Learning ☆692 · Updated last year
- Dive into Deep Learning Compiler ☆643 · Updated 2 years ago
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs… ☆1,979 · Updated this week
- Common in-memory tensor structure ☆912 · Updated last month
- Collective communications library with various primitives for multi-machine training. ☆1,227 · Updated this week
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ☆1,011 · Updated 7 months ago
- TorchBench is a collection of open source benchmarks used to evaluate PyTorch performance. ☆875 · Updated this week
- Composable Kernel: Performance Portable Programming Model for Machine Learning Tensor Operators ☆313 · Updated this week
- A CPU+GPU Profiling library that provides access to timeline traces and hardware performance counters. ☆734 · Updated this week
- Optimized primitives for collective multi-GPU communication ☆3,253 · Updated 2 months ago
- SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX R… ☆2,227 · Updated this week
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,597 · Updated this week
- Low-precision matrix multiplication ☆1,780 · Updated 9 months ago
- Triton Model Analyzer is a CLI tool to help with better understanding of the compute and memory requirements of the Triton Inference Serv… ☆433 · Updated last week
- Backward compatible ML compute opset inspired by HLO/MHLO ☆412 · Updated this week
- Quantized Neural Network PACKage - mobile-optimized implementation of quantized neural network operators ☆1,528 · Updated 5 years ago
- A model compilation solution for various hardware ☆378 · Updated last week
- TensorRT Plugin Autogen Tool ☆367 · Updated last year
- Intel® AI Reference Models: contains Intel optimizations for running deep learning workloads on Intel® Xeon® Scalable processors and Inte… ☆683 · Updated this week
- TensorRT Model Optimizer is a unified library of state-of-the-art model optimization techniques such as quantization, pruning, distillati… ☆567 · Updated this week
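Several of the entries above (the "common in-memory tensor structure" project in particular) exist so that different frameworks can exchange tensor buffers without copying, via the DLPack protocol. A minimal sketch of that interchange, assuming PyTorch ≥ 1.10 and NumPy ≥ 1.22 are available:

```python
import numpy as np
import torch

# DLPack defines a common in-memory tensor layout, letting frameworks
# hand each other buffers without a copy.
arr = np.arange(6, dtype=np.float32)
t = torch.from_dlpack(arr)  # zero-copy: the torch tensor aliases the NumPy buffer

t[0] = 42.0       # writing through the torch view...
print(arr[0])     # ...is visible from the NumPy side: 42.0
```

Because the two objects share one buffer, a mutation through either side is immediately visible to the other, which is the whole point of a common in-memory structure.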
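The collective-communications entries above (NCCL, NCCL Tests, and the Gloo-style multi-machine library) all provide primitives such as all-reduce, broadcast, and all-gather. A hedged single-process sketch of an all-reduce through PyTorch's `torch.distributed` wrapper, using the CPU Gloo backend (the address and port below are arbitrary local rendezvous values, not anything the listed projects prescribe):

```python
import os
import torch
import torch.distributed as dist

# Single-process demo of the all-reduce collective on the Gloo backend.
# MASTER_ADDR / MASTER_PORT are arbitrary local rendezvous settings.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group(backend="gloo", rank=0, world_size=1)

t = torch.ones(4)
dist.all_reduce(t, op=dist.ReduceOp.SUM)  # in-place sum across all ranks
print(t)  # with world_size=1 the sum is just the local tensor

dist.destroy_process_group()
```

With more ranks the same call would sum the tensor element-wise across every participating process; NCCL supplies the GPU-optimized implementation of exactly this primitive.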