onnx / neural-compressor
Model compression for ONNX
☆84 · Updated 2 months ago
Alternatives and similar repositories for neural-compressor:
Users interested in neural-compressor are comparing it to the libraries listed below.
- New operators for the ReferenceEvaluator, new kernels for onnxruntime (CPU, CUDA) ☆32 · Updated 4 months ago
- A toolkit to help optimize ONNX models ☆113 · Updated 2 weeks ago
- The Triton backend for the ONNX Runtime ☆138 · Updated this week
- A tool to convert a TensorRT engine/plan to a fake ONNX model ☆37 · Updated 2 years ago
- A faster implementation of OpenCV-CUDA that uses OpenCV objects, and more! ☆48 · Updated this week
- Common utilities for ONNX converters ☆257 · Updated 2 months ago
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python ☆316 · Updated this week
- Simple tool for partial optimization of ONNX. Further optimizes some models that cannot be optimized with onnx-optimizer and onnxsim by se… ☆19 · Updated 9 months ago
- The Triton backend for TensorRT ☆68 · Updated this week
- A very simple tool for situations where optimization with onnx-simplifier would exceed the Protocol Buffers upper file size limit of 2GB,… ☆16 · Updated 9 months ago
- Count the number of parameters / MACs / FLOPs for ONNX models ☆90 · Updated 3 months ago
- A toolkit to help optimize large ONNX models ☆153 · Updated 8 months ago
- Scailable ONNX Python tools ☆96 · Updated 3 months ago
- The Triton backend for PyTorch TorchScript models ☆143 · Updated this week
- A set of simple tools for splitting, merging, OP deletion, size compression, rewriting attributes and constants, OP generation, change op… ☆285 · Updated 9 months ago
- ☆9 · Updated 2 years ago
- torch::deploy (multipy for non-torch uses) is a system that lets you get around the GIL problem by running multiple Python interpreters i… ☆178 · Updated 2 months ago
- A general 2-8 bit quantization toolbox with GPTQ/AWQ/HQQ/VPTQ, with easy export to ONNX / ONNX Runtime ☆158 · Updated this week
- ☆18 · Updated this week
- Convert tflite to JSON and make it editable in an IDE; also converts the edited JSON back to a tflite binary ☆27 · Updated last year
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆73 · Updated this week
- A block-oriented training approach for inference-time optimization ☆32 · Updated 5 months ago
- ONNX implementation of Whisper, PyTorch-free ☆92 · Updated 2 months ago
- Accelerate PyTorch models with ONNX Runtime ☆358 · Updated 5 months ago
- ☆157 · Updated last year
- Nsight Systems in Docker ☆20 · Updated last year
- AI Edge Quantizer: flexible post-training quantization for LiteRT models ☆23 · Updated this week
- A very simple tool that compresses the overall size of an ONNX model by aggregating duplicate constant values as much as possible ☆52 · Updated 2 years ago
- Inference of Vision Transformer (ViT) in plain C/C++ with ggml ☆253 · Updated 10 months ago
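The parameter/MAC/FLOPs counter in the list above boils down to simple shape arithmetic per operator. A minimal sketch for a single Conv layer (hypothetical shapes chosen for illustration; this is the standard formula, not the tool's actual code):

```python
# For a 2D convolution:
#   weight params = Cout * Cin * kH * kW  (+ Cout if there is a bias)
#   MACs          = weight params (without bias) * Hout * Wout
#   FLOPs are commonly reported as roughly 2 * MACs.
def conv_counts(cin, cout, kh, kw, hout, wout, bias=True):
    weight_params = cout * cin * kh * kw
    params = weight_params + (cout if bias else 0)
    macs = weight_params * hout * wout  # one MAC per weight per output pixel
    return params, macs

# Example: a 3x3 conv from 64 to 128 channels producing a 56x56 output map.
params, macs = conv_counts(64, 128, 3, 3, 56, 56)
```

A whole-model counter simply walks the graph and sums these per-node counts.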
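The 2-8 bit quantization toolbox listed above builds on uniform affine quantization; here is a textbook sketch of that core scheme in pure Python (an illustration of the general technique, not any specific library's implementation):

```python
# Uniform affine quantization: map floats in [lo, hi] onto the integer grid
# [0, 2^bits - 1] via a scale and zero point, then map back on dequantize.
def quantize(values, bits=8):
    lo, hi = min(values), max(values)
    qmax = (1 << bits) - 1
    scale = (hi - lo) / qmax or 1.0          # avoid zero scale for constant input
    zero_point = round(-lo / scale)          # integer offset so lo maps near 0
    q = [max(0, min(qmax, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return [(qi - zero_point) * scale for qi in q]

# 4-bit example: values are rounded onto a 16-level grid.
q, s, z = quantize([-1.0, -0.5, 0.0, 0.5, 1.0], bits=4)
approx = dequantize(q, s, z)
```

Real toolkits such as GPTQ and AWQ layer error-minimizing weight updates and activation-aware scaling on top of this basic grid.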
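The constant-aggregation tool above shrinks models by keeping one copy of each distinct constant. A toy illustration of that deduplication idea (an assumed approach using content hashing, not the tool's code):

```python
import hashlib

def dedupe_constants(constants):
    """constants: dict of name -> raw bytes. Returns (kept, remap) where
    kept holds one canonical copy per distinct value and remap tells each
    original name which surviving constant to reference instead."""
    seen = {}    # digest -> canonical name
    kept = {}    # canonical name -> bytes
    remap = {}   # original name -> canonical name
    for name, blob in constants.items():
        digest = hashlib.sha256(blob).hexdigest()
        canon = seen.setdefault(digest, name)  # first name wins for this value
        if canon == name:
            kept[name] = blob
        remap[name] = canon
    return kept, remap

# w0 and w1 hold identical bytes, so w1 is rewired to w0.
kept, remap = dedupe_constants({"w0": b"\x01\x02", "w1": b"\x01\x02", "w2": b"\x03"})
```

In an ONNX model the same pass would then rewrite node inputs according to `remap` and drop the redundant initializers.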