A tool for parsing, editing, optimizing, and profiling ONNX models.
☆488 · Updated Apr 26, 2026
Alternatives and similar repositories for onnx-tool
Users interested in onnx-tool are comparing it to the libraries listed below.
- A tool to modify ONNX models visually, based on Netron and Flask. ☆1,625 · Updated Nov 19, 2025
- Simplify your ONNX model ☆4,331 · Updated Apr 29, 2026
- Count the number of parameters / MACs / FLOPs for ONNX models. ☆95 · Updated Oct 26, 2024
- PPL Quantization Tool (PPQ), a powerful offline neural network quantization tool. ☆1,795 · Updated Mar 28, 2024
- Common utilities for ONNX converters ☆297 · Updated Dec 16, 2025
- ONNX Optimizer ☆809 · Updated May 3, 2026
- Utility scripts for editing, modifying, and summarizing ONNX model files, with visualization for loop ope… ☆80 · Updated Sep 15, 2021
- A set of simple tools for splitting, merging, OP deletion, size compression, rewriting attributes and constants, OP generation, change op… ☆304 · Updated Apr 22, 2024
- MegCC is a deep learning model compiler with an ultra-lightweight runtime, high efficiency, and simple portability. ☆482 · Updated Oct 23, 2024
- Scailable ONNX Python tools ☆98 · Updated Oct 25, 2024
- A toolkit to help optimize large ONNX models ☆164 · Updated Oct 26, 2025
- Convert ONNX models to PyTorch. ☆734 · Updated Oct 14, 2025
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python. ☆439 · Updated this week
- Model compression for ONNX ☆101 · Updated May 1, 2026
- AIMET is a library that provides advanced quantization and compression techniques for trained neural network models. ☆2,614 · Updated this week
- LLaMa/RWKV ONNX models, quantization, and test cases ☆367 · Updated Jul 6, 2023
- llm-export can export LLM models to ONNX. ☆350 · Updated Oct 24, 2025
- A primitive library for neural networks ☆1,368 · Updated Nov 24, 2024
- QONNX: Arbitrary-Precision Quantized Neural Networks in ONNX ☆184 · Updated Mar 25, 2026
- Model Quantization Benchmark ☆865 · Updated Apr 20, 2025
- BEVFormer inference on TensorRT, including INT8 quantization and custom TensorRT plugins (float/half/half2/int8). ☆571 · Updated Nov 20, 2023
- Representation and reference lowering of ONNX models in the MLIR compiler infrastructure ☆1,012 · Updated this week
- PyTorch Neural Network eXchange ☆706 · Updated this week
- Large Language Model ONNX Inference Framework ☆35 · Updated Nov 25, 2025
- Uses a pattern matcher to match and replace subgraphs in ONNX models. ☆81 · Updated Feb 7, 2024
- An LLM deployment project based on ONNX. ☆49 · Updated Oct 9, 2024
- A simple tool that can quickly generate TensorRT plugin code. ☆240 · Updated Jul 11, 2023
- Image visualization tools for C++ ☆14 · Updated Oct 6, 2021
- ONNX-TensorRT: TensorRT backend for ONNX ☆3,204 · Updated Mar 25, 2026
- onnxruntime-extensions: a specialized pre- and post-processing library for ONNX Runtime ☆463 · Updated this week
- A unified library of SOTA model optimization techniques such as quantization, pruning, distillation, and speculative decoding. It compresse… ☆2,636 · Updated this week
- ppl.cv is a high-performance image processing library from openPPL that supports various platforms. ☆515 · Updated Oct 30, 2024
- NVIDIA DLA-SW: recipes and tools for running deep learning inference workloads on NVIDIA DLA cores. ☆233 · Updated Jun 10, 2024
- Tengine Pipe (管子), a helper tool for quickly producing demos ☆12 · Updated Jul 15, 2021
- A machine learning compiler, based on MLIR, for the Sophgo TPU. ☆913 · Updated Apr 29, 2026
- Examples of using ONNX Runtime for machine learning inference. ☆1,642 · Updated Feb 24, 2026
- micronet, a model compression and deployment library. Compression: 1. quantization: quantization-aware training (QAT), high-bit (>2b) (DoReFa/Quantiz… ☆2,274 · Updated May 6, 2025
- Simple samples for TensorRT programming ☆1,659 · Updated this week
- Converts CLIP models to ONNX ☆11 · Updated Jan 17, 2023
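Several of the entries above (onnx-tool itself, and the parameter/MACs/FLOPs counter) are profilers that report compute cost per layer. As a minimal sketch of the idea, not code from any listed repository: the MAC count of a 2D convolution is the number of output elements times the multiply-accumulates each one requires. The helper name below is hypothetical.

```python
def conv2d_macs(c_in, c_out, k_h, k_w, h_out, w_out, groups=1):
    """Hypothetical helper illustrating the standard Conv2D MAC formula:
    each of the h_out * w_out * c_out output elements accumulates over
    (c_in / groups) input channels and a k_h x k_w kernel window."""
    return h_out * w_out * c_out * (c_in // groups) * k_h * k_w

# Example: a 3x3 conv from 64 to 128 channels with a 56x56 output map.
macs = conv2d_macs(64, 128, 3, 3, 56, 56)
print(macs)  # 231211008 MACs; FLOPs are commonly reported as 2x this
```

Profilers like those listed typically walk the ONNX graph, infer each node's output shape, and apply a per-operator formula like this one to produce the totals.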