intel / ai-reference-models
Intel® AI Reference Models: contains Intel optimizations for running deep learning workloads on Intel® Xeon® Scalable processors and Intel® Data Center GPUs
☆718 · Updated this week
Alternatives and similar repositories for ai-reference-models
Users interested in ai-reference-models are comparing it to the libraries listed below
- A scalable inference server for models optimized with OpenVINO™ ☆783 · Updated this week
- Reference implementations of MLPerf® inference benchmarks ☆1,478 · Updated this week
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆503 · Updated this week
- Computation using data flow graphs for scalable machine learning ☆68 · Updated this week
- OpenVINO™ integration with TensorFlow ☆178 · Updated last year
- Intel® Extension for TensorFlow* ☆346 · Updated 7 months ago
- TensorFlow/TensorRT integration ☆744 · Updated last year
- Explainable AI Tooling (XAI). XAI is used to discover and explain a model's prediction in a way that is interpretable to the user. Releva… ☆39 · Updated last month
- Reference implementations of MLPerf® training benchmarks ☆1,721 · Updated last week
- Neural Network Compression Framework for enhanced OpenVINO™ inference ☆1,095 · Updated this week
- A Python package for extending the official PyTorch to easily obtain performance on Intel platforms ☆1,986 · Updated this week
- Issues related to MLPerf™ training policies, including rules and suggested changes ☆95 · Updated last month
- Inference Model Manager for Kubernetes ☆46 · Updated 6 years ago
- A profiling and performance analysis tool for machine learning ☆443 · Updated this week
- FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/ ☆1,468 · Updated this week
- SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX R… ☆2,517 · Updated last week
- A performant and modular runtime for TensorFlow ☆760 · Updated last month
- Examples for using ONNX Runtime for model training ☆352 · Updated last year
- Reference models for Intel(R) Gaudi(R) AI Accelerator ☆165 · Updated last month
- Run Generative AI models with a simple C++/Python API using the OpenVINO Runtime ☆365 · Updated this week
- Dockerfiles and scripts for ONNX container images ☆138 · Updated 3 years ago
- oneCCL Bindings for PyTorch* ☆102 · Updated 2 months ago
- oneAPI Collective Communications Library (oneCCL) ☆244 · Updated last week
- cudnn_frontend provides a C++ wrapper for the cuDNN backend API and samples on how to use it ☆631 · Updated 3 weeks ago
- Triton Model Analyzer is a CLI tool to help with better understanding of the compute and memory requirements of the Triton Inference Serv… ☆495 · Updated this week
- Samples for Intel® oneAPI Toolkits ☆1,097 · Updated this week
- TorchBench is a collection of open source benchmarks used to evaluate PyTorch performance ☆991 · Updated this week
- Common utilities for ONNX converters ☆283 · Updated last month
- Representation and Reference Lowering of ONNX Models in MLIR Compiler Infrastructure ☆926 · Updated this week
- ☆126 · Updated this week