microsoft / onnxruntime
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
☆17,188 · Updated this week
Alternatives and similar repositories for onnxruntime
Users interested in onnxruntime are comparing it to the libraries listed below.
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source compone… ☆11,828 · Updated this week
- Open standard for machine learning interoperability ☆19,234 · Updated this week
- Open deep learning compiler stack for CPU, GPU and specialized accelerators ☆12,446 · Updated this week
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. ☆9,443 · Updated this week
- A collection of pre-trained, state-of-the-art models in the ONNX format ☆8,793 · Updated 2 weeks ago
- Visualizer for neural network, deep learning and machine learning models ☆30,866 · Updated this week
- Convert TensorFlow, Keras, TensorFlow.js and TFLite models to ONNX ☆2,443 · Updated last week
- Simplify your ONNX model ☆4,114 · Updated 10 months ago
- Tutorials for creating and using ONNX models ☆3,570 · Updated last year
- Examples for using ONNX Runtime for machine learning inferencing ☆1,422 · Updated this week
- Development repository for the Triton language and compiler ☆16,114 · Updated this week
- Olive: Simplify ML Model Finetuning, Conversion, Quantization, and Optimization for CPUs, GPUs and NPUs ☆1,996 · Updated this week
- TNN: developed by Tencent Youtu Lab and Guangying Lab, a uniform deep learning inference framework for mobile, desktop and server. TNN is … ☆4,544 · Updated 2 months ago
- Serve, optimize and scale PyTorch models in production ☆4,339 · Updated last week
- Transformer-related optimization, including BERT, GPT ☆6,238 · Updated last year
- oneAPI Deep Neural Network Library (oneDNN) ☆3,837 · Updated this week
- ONNX-TensorRT: TensorRT backend for ONNX ☆3,107 · Updated 3 weeks ago
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆39,299 · Updated this week
- A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Auto… ☆15,075 · Updated this week
- Compiler for Neural Network hardware accelerators ☆3,312 · Updated last year
- ncnn is a high-performance neural network inference framework optimized for the mobile platform ☆21,773 · Updated this week
- High-efficiency floating-point neural network inference operators for mobile, server, and Web ☆2,060 · Updated this week
- A retargetable MLIR-based machine learning compiler and runtime toolkit ☆3,209 · Updated this week
- AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (N… ☆4,655 · Updated 3 months ago
- OpenVINO™ is an open source toolkit for optimizing and deploying AI inference ☆8,559 · Updated this week
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,800 · Updated this week
- Ongoing research training transformer models at scale ☆12,835 · Updated this week
- SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX R… ☆2,449 · Updated this week
- DirectML is a high-performance, hardware-accelerated DirectX 12 library for machine learning. DirectML provides GPU acceleration for comm… ☆2,487 · Updated 3 weeks ago
- Simple, safe way to store and distribute tensors ☆3,345 · Updated last week