microsoft / onnxruntime
ONNX Runtime: cross-platform, high-performance ML inferencing and training accelerator
☆17,862 · Updated this week
Alternatives and similar repositories for onnxruntime
Users interested in onnxruntime are comparing it to the libraries listed below.
- Open standard for machine learning interoperability ☆19,582 · Updated last week
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source compone… ☆12,125 · Updated last week
- Open deep learning compiler stack for CPU, GPU and specialized accelerators ☆12,613 · Updated this week
- Simplify your ONNX model ☆4,165 · Updated 2 weeks ago
- Tutorials for creating and using ONNX models ☆3,599 · Updated last year
- Serve, optimize and scale PyTorch models in production ☆4,348 · Updated last month
- Convert TensorFlow, Keras, TensorFlow.js and TFLite models to ONNX ☆2,473 · Updated this week
- Visualizer for neural network, deep learning and machine learning models ☆31,377 · Updated this week
- OpenVINO™ is an open source toolkit for optimizing and deploying AI inference ☆8,809 · Updated this week
- A collection of pre-trained, state-of-the-art models in the ONNX format ☆8,993 · Updated 2 months ago
- ONNX-TensorRT: TensorRT backend for ONNX ☆3,151 · Updated last week
- ncnn is a high-performance neural network inference framework optimized for the mobile platform ☆22,039 · Updated this week
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. ☆9,755 · Updated this week
- High-efficiency floating-point neural network inference operators for mobile, server, and Web ☆2,113 · Updated this week
- Development repository for the Triton language and compiler ☆16,831 · Updated this week
- Olive: Simplify ML Model Finetuning, Conversion, Quantization, and Optimization for CPUs, GPUs and NPUs. ☆2,101 · Updated this week
- oneAPI Deep Neural Network Library (oneDNN) ☆3,884 · Updated this week
- An easy-to-use PyTorch to TensorRT converter ☆4,810 · Updated last year
- Pretrain, finetune ANY AI model of ANY size on multiple GPUs, TPUs with zero code changes. ☆30,113 · Updated this week
- Tensor library for machine learning ☆13,134 · Updated last week
- Compiler for Neural Network hardware accelerators ☆3,311 · Updated last year
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,857 · Updated this week
- Transformer-related optimization, including BERT, GPT ☆6,300 · Updated last year
- Unsupervised text tokenizer for Neural Network-based text generation. ☆11,246 · Updated last week
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy-to-use hardware optimization… ☆3,075 · Updated last week
- 💥 Fast State-of-the-Art Tokenizers optimized for Research and Production ☆10,060 · Updated 2 weeks ago
- Fast and memory-efficient exact attention ☆19,471 · Updated this week
- A PyTorch extension: tools for easy mixed-precision and distributed training in PyTorch ☆8,795 · Updated last week
- ONNXMLTools enables conversion of models to ONNX ☆1,111 · Updated 3 months ago
- Google Brain AutoML ☆6,395 · Updated 6 months ago