onnxruntime-extensions: A specialized pre- and post- processing library for ONNX Runtime
☆463 · Apr 23, 2026 · Updated last week
Alternatives and similar repositories for onnxruntime-extensions
Users interested in onnxruntime-extensions are comparing it to the libraries listed below.
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python. — ☆438 · Apr 23, 2026 · Updated last week
- Common utilities for ONNX converters — ☆297 · Dec 16, 2025 · Updated 4 months ago
- Examples for using ONNX Runtime for machine learning inferencing. — ☆1,634 · Feb 24, 2026 · Updated 2 months ago
- ONNX Optimizer — ☆807 · Updated this week
- Generative AI extensions for onnxruntime — ☆1,014 · Updated this week
- ONNXMLTools enables conversion of models to ONNX — ☆1,160 · Updated this week
- Olive: Simplify ML Model Finetuning, Conversion, Quantization, and Optimization for CPUs, GPUs and NPUs. — ☆2,305 · Updated this week
- Simplify your onnx model — ☆4,328 · Updated this week
- ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator — ☆20,355 · Updated this week
- The Triton backend for the ONNX Runtime. — ☆174 · Apr 24, 2026 · Updated last week
- Examples for using ONNX Runtime for model training. — ☆364 · Oct 23, 2024 · Updated last year
- Tutorial on how to convert machine learned models into ONNX — ☆14 · Mar 11, 2023 · Updated 3 years ago
- Representation and Reference Lowering of ONNX Models in MLIR Compiler Infrastructure — ☆1,005 · Updated this week
- A tool to modify ONNX models visually, based on Netron and Flask. — ☆1,623 · Nov 19, 2025 · Updated 5 months ago
- ONNX Serving is a project written in C++ to serve onnx-mlir compiled models over gRPC and other protocols. Benefiting from C++ implement… — ☆26 · Sep 17, 2025 · Updated 7 months ago
- Transformer tokenizers (e.g. the BERT tokenizer) in C++ (WIP) — ☆18 · Apr 7, 2022 · Updated 4 years ago
- Convert TensorFlow, Keras, TensorFlow.js and TFLite models to ONNX — ☆2,528 · Apr 2, 2026 · Updated 3 weeks ago
- A tool for parsing, editing, optimizing, and profiling ONNX models. — ☆485 · Apr 21, 2026 · Updated last week
- Scailable ONNX python tools — ☆98 · Oct 25, 2024 · Updated last year
- ONNX-TensorRT: TensorRT backend for ONNX — ☆3,204 · Mar 25, 2026 · Updated last month
- Useful TensorRT plugins, for PyTorch and mmdetection model conversion. — ☆165 · Oct 8, 2024 · Updated last year
- Open standard for machine learning interoperability — ☆20,709 · Updated this week
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy-to-use hardware optimization… — ☆3,363 · Apr 15, 2026 · Updated 2 weeks ago
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT — ☆2,968 · Updated this week
- A collection of pre-trained, state-of-the-art models in the ONNX format — ☆9,561 · Mar 9, 2026 · Updated last month
- Model compression for ONNX — ☆101 · Mar 1, 2026 · Updated last month
- QONNX: Arbitrary-Precision Quantized Neural Networks in ONNX — ☆184 · Mar 25, 2026 · Updated last month
- Use safetensors with ONNX 🤗 — ☆89 · Apr 14, 2026 · Updated 2 weeks ago
- Sample projects for InferenceHelper, a Helper Class for Deep Learning Inference Frameworks: TensorFlow Lite, TensorRT, OpenCV, ncnn, MNN,… — ☆22 · Mar 27, 2022 · Updated 4 years ago
- ONNX Runtime Server: provides TCP and HTTP/HTTPS REST APIs for ONNX inference. — ☆185 · Apr 21, 2026 · Updated last week
- Productionize machine learning predictions, with ONNX or without — ☆66 · Jan 11, 2024 · Updated 2 years ago
- SOTA low-bit LLM quantization (INT8/FP8/MXFP8/INT4/MXFP4/NVFP4) & sparsity; leading model compression techniques on PyTorch, TensorFlow, … — ☆2,628 · Updated this week
- Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀 — ☆1,687 · Oct 23, 2024 · Updated last year
- A tool to convert a TensorRT engine/plan to a fake onnx — ☆41 · Nov 22, 2022 · Updated 3 years ago
- Convert ONNX models to PyTorch. — ☆734 · Oct 14, 2025 · Updated 6 months ago
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source compone… — ☆12,947 · Apr 13, 2026 · Updated 2 weeks ago