daquexian / onnx-simplifier
Simplify your ONNX model
☆3,865 · Updated 2 months ago
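For orientation, here is a minimal sketch of how onnx-simplifier is typically invoked from Python; the `onnxsim` package name and the `simplify` entry point are assumptions based on common usage, so check the repository README for the authoritative interface.

```python
# Minimal sketch, assuming the `onnxsim` package exposes a `simplify` function
# that returns (simplified_model, check) -- verify against the repo README.
import onnx
from onnxsim import simplify

model = onnx.load("model.onnx")                  # load the original ONNX graph
model_simplified, check = simplify(model)        # fold constants, remove redundant ops
assert check, "simplified model failed the validation check"
onnx.save(model_simplified, "model_simplified.onnx")
```

Recent releases also expose the same simplification as a command-line tool (roughly `onnxsim input.onnx output.onnx`), which wraps this call.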
Related projects
Alternatives and complementary repositories for onnx-simplifier
- ONNX-TensorRT: TensorRT backend for ONNX ☆2,953 · Updated 2 weeks ago
- An easy-to-use PyTorch to TensorRT converter ☆4,612 · Updated 3 months ago
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,597 · Updated this week
- A tool to modify ONNX models visually, based on Netron and Flask. ☆1,346 · Updated 2 weeks ago
- Actively maintained ONNX Optimizer ☆647 · Updated 8 months ago
- Convert TensorFlow, Keras, TensorFlow.js and TFLite models to ONNX ☆2,327 · Updated 2 months ago
- Simple samples for TensorRT programming ☆1,519 · Updated 2 weeks ago
- OpenMMLab Model Deployment Framework ☆2,777 · Updated last month
- PPL Quantization Tool (PPQ) is a powerful offline neural network quantization tool. ☆1,558 · Updated 7 months ago
- CV-CUDA™ is an open-source, GPU-accelerated library for cloud-scale image processing and computer vision. ☆2,381 · Updated last month
- Implementation of popular deep learning networks with TensorRT network definition API ☆7,016 · Updated 3 weeks ago
- Deploy your model with TensorRT quickly. ☆762 · Updated 11 months ago
- NanoDet-Plus⚡Super fast and lightweight anchor-free object detection model. 🔥Only 980 KB (int8) / 1.8 MB (fp16), runs at 97 FPS on cellphone… ☆5,771 · Updated 3 months ago
- AIMET is a library that provides advanced quantization and compression techniques for trained neural network models. ☆2,148 · Updated this week
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source compone… ☆10,820 · Updated 2 weeks ago
- Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://intellabs.github.io/distille… ☆4,351 · Updated last year
- Examples for using ONNX Runtime for machine learning inferencing. ☆1,212 · Updated 2 weeks ago
- A YOLO-based ultra-lightweight universal object detection algorithm; the computation cost is only 250 MFLOPs, and the ncnn model size is… ☆2,010 · Updated 3 years ago
- micronet, a model compression and deployment library. Compression: 1) quantization: quantization-aware training (QAT), high-bit (>2b) (DoReFa/Quantiz… ☆2,219 · Updated 3 years ago
- [CVPR 2023] DepGraph: Towards Any Structural Pruning ☆2,724 · Updated this week
- yolort is a runtime stack for YOLOv5 on specialized accelerators such as TensorRT, LibTorch, ONNX Runtime, TVM and ncnn. ☆721 · Updated this week
- TensorFlow backend for ONNX ☆1,284 · Updated 7 months ago
- RepVGG: Making VGG-style ConvNets Great Again ☆3,333 · Updated last year
- [ICLR 2020] Once for All: Train One Network and Specialize it for Efficient Deployment ☆1,884 · Updated 11 months ago
- ☆706 · Updated last year
- TNN: developed by Tencent Youtu Lab and Guangying Lab, a uniform deep learning inference framework for mobile, desktop and server. TNN is … ☆4,416 · Updated last month
- PyTorch, ONNX and TensorRT implementation of YOLOv4 ☆4,480 · Updated 5 months ago
- A parser, editor and profiler tool for ONNX models. ☆400 · Updated this week
- TensorRT MODNet, YOLOv4, YOLOv3, SSD, MTCNN, and GoogLeNet ☆1,751 · Updated 3 months ago
- Count the MACs / FLOPs of your PyTorch model (see the usage sketch after this list). ☆4,888 · Updated 4 months ago
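For the MAC/FLOP-counting entry above, a minimal usage sketch, assuming the common `thop.profile(model, inputs=...)` interface of pytorch-OpCounter; the package name and call signature are assumptions, so consult that repository for the authoritative API.

```python
# Minimal sketch, assuming the `thop` package with a `profile(model, inputs=...)` API.
import torch
import torchvision.models as models
from thop import profile

model = models.resnet18()                        # any torch.nn.Module works
dummy_input = torch.randn(1, 3, 224, 224)        # one representative input tensor
macs, params = profile(model, inputs=(dummy_input,))
print(f"MACs: {macs / 1e9:.2f} G, Params: {params / 1e6:.2f} M")
```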