microsoft / onnxruntime
ONNX Runtime: cross-platform, high-performance ML inferencing and training accelerator
☆19,207 · Updated this week
Alternatives and similar repositories for onnxruntime
Users interested in onnxruntime compare it to the libraries listed below.
- Open standard for machine learning interoperability ☆20,269 · Updated this week
- Development repository for the Triton language and compiler ☆18,319 · Updated this week
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source compone… ☆12,672 · Updated this week
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. ☆10,298 · Updated last week
- Open Machine Learning Compiler Framework ☆13,096 · Updated this week
- Simplify your ONNX model ☆4,288 · Updated last week
- A collection of pre-trained, state-of-the-art models in the ONNX format ☆9,376 · Updated 4 months ago
- Tutorials for creating and using ONNX models ☆3,657 · Updated last year
- Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more ☆34,794 · Updated this week
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆9,486 · Updated this week
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy-to-use hardware optimization… ☆3,277 · Updated 3 weeks ago
- OpenVINO™ is an open source toolkit for optimizing and deploying AI inference ☆9,638 · Updated this week
- 💥 Fast State-of-the-Art Tokenizers optimized for Research and Production ☆10,445 · Updated this week
- Examples for using ONNX Runtime for machine learning inferencing ☆1,601 · Updated this week
- Convert TensorFlow, Keras, TensorFlow.js and TFLite models to ONNX ☆2,514 · Updated 4 months ago
- Olive: Simplify ML Model Finetuning, Conversion, Quantization, and Optimization for CPUs, GPUs and NPUs ☆2,246 · Updated last week
- A machine learning compiler for GPUs, CPUs, and ML accelerators ☆3,956 · Updated this week
- Serve, optimize and scale PyTorch models in production ☆4,358 · Updated 6 months ago
- High-efficiency floating-point neural network inference operators for mobile, server, and Web ☆2,245 · Updated this week
- Tensor library for machine learning ☆13,907 · Updated last week
- Fast and memory-efficient exact attention ☆22,113 · Updated this week
- Pretrain, finetune ANY AI model of ANY size on 1 or 10,000+ GPUs with zero code changes ☆30,803 · Updated this week
- Visualizer for neural network, deep learning and machine learning models ☆32,340 · Updated this week
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective ☆41,509 · Updated this week
- A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Auto… ☆16,686 · Updated this week
- 🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal model… ☆156,173 · Updated this week
- A retargetable MLIR-based machine learning compiler and runtime toolkit ☆3,591 · Updated this week
- Tensors and Dynamic neural networks in Python with strong GPU acceleration ☆97,130 · Updated this week
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ☆22,002 · Updated 2 weeks ago
- Transformer related optimization, including BERT, GPT ☆6,392 · Updated last year