onnx / neural-compressor
Model compression for ONNX
☆99 · Updated last year
Alternatives and similar repositories for neural-compressor
Users interested in neural-compressor are comparing it to the libraries listed below.
- New operators for the ReferenceEvaluator, new kernels for onnxruntime (CPU, CUDA) ☆35 · Updated last week
- AI Edge Quantizer: flexible post-training quantization for LiteRT models. ☆81 · Updated 3 weeks ago
- A faster implementation of OpenCV-CUDA that uses OpenCV objects, and more! ☆54 · Updated 3 weeks ago
- A Toolkit to Help Optimize ONNX Models ☆267 · Updated last week
- Zero-copy multimodal vector DB with CUDA and CLIP/SigLIP ☆63 · Updated 7 months ago
- Common utilities for ONNX converters ☆288 · Updated 3 months ago
- C++ implementations for various tokenizers (sentencepiece, tiktoken, etc.). ☆43 · Updated this week
- A Toolkit to Help Optimize Large ONNX Models ☆162 · Updated last month
- The Triton backend for the ONNX Runtime. ☆168 · Updated last week
- Vision Transformer (ViT) inference in plain C/C++ with ggml ☆300 · Updated last year
- Use safetensors with ONNX 🤗 ☆76 · Updated 2 months ago
- A very simple tool for situations where optimization with onnx-simplifier would exceed the Protocol Buffers upper file size limit of 2GB,… ☆17 · Updated 2 months ago
- ONNX Command-Line Toolbox ☆35 · Updated last year
- Convert tflite to JSON and make it editable in the IDE. It also converts the edited JSON back to tflite binary. ☆28 · Updated 2 years ago
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python (see the ONNX Script sketch after this list). ☆412 · Updated this week
- This library empowers users to seamlessly port pretrained models and checkpoints on the HuggingFace (HF) hub (developed using HF transfor… ☆84 · Updated this week
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆214 · Updated this week
- A tool to convert a TensorRT engine/plan to a fake ONNX model ☆41 · Updated 3 years ago
- The Triton backend for TensorRT. ☆80 · Updated 3 weeks ago
- Simple tool for partial optimization of ONNX. Further optimize some models that cannot be optimized with onnx-optimizer and onnxsim by se… ☆19 · Updated last year
- Visualize ONNX models with model-explorer ☆64 · Updated last month
- Mobile App Open ☆64 · Updated this week
- Exports an ONNX file to a JSON file and JSON dict (see the JSON round-trip sketch after this list). ☆33 · Updated 2 years ago
- Scailable ONNX Python tools ☆97 · Updated last year
- ☆70 · Updated 2 years ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆267 · Updated last year
- A very simple tool that compresses the overall size of the ONNX model by aggregating duplicate constant values as much as possible. ☆52 · Updated 3 years ago
- torch::deploy (multipy for non-torch uses) is a system that lets you get around the GIL problem by running multiple Python interpreters i… ☆182 · Updated 3 months ago
- Count the number of parameters / MACs / FLOPs for ONNX models (see the parameter-counting sketch after this list). ☆95 · Updated last year
- A general 2-8 bit quantization toolbox with GPTQ/AWQ/HQQ/VPTQ, with easy export to ONNX / ONNX Runtime. ☆183 · Updated 8 months ago
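
For orientation, here is a minimal sketch of what authoring a function with ONNX Script looks like, assuming the `onnxscript` package exposes `@script`, `FLOAT`, and `opset18` as in its published tutorials. The `gelu_tanh` function and its constants are illustrative, not taken from any repository listed above.

```python
# Minimal sketch of authoring an ONNX function with ONNX Script.
# Assumes onnxscript's @script decorator, FLOAT type, and opset18 export
# behave as in its published tutorials; gelu_tanh is a hypothetical example.
from onnxscript import FLOAT, script
from onnxscript import opset18 as op

@script()
def gelu_tanh(x: FLOAT[...]) -> FLOAT[...]:
    # Tanh-approximation GELU. CastLike turns the Python float literals
    # into tensor constants with the same dtype as x.
    c0 = op.CastLike(0.7978845608, x)
    c1 = op.CastLike(0.044715, x)
    half = op.CastLike(0.5, x)
    one = op.CastLike(1.0, x)
    inner = c0 * (x + c1 * x * x * x)
    return half * x * (one + op.Tanh(inner))

# The scripted function can be turned into a standalone ONNX model.
model_proto = gelu_tanh.to_model_proto()
```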
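The ONNX-to-JSON exporter listed above covers a round trip that can also be approximated with protobuf's own `json_format` helpers; the sketch below uses only the `onnx` and `protobuf` packages (not the listed tool's API), and `model.onnx` / `model.json` are placeholder paths.

```python
# Rough illustration of an ONNX <-> JSON round trip via protobuf's
# json_format helpers (not the listed exporter tool's own API).
# "model.onnx" and "model.json" are placeholder paths.
import onnx
from google.protobuf import json_format

model = onnx.load("model.onnx")

# Serialize the ModelProto to a JSON string and, if needed, a plain dict.
json_str = json_format.MessageToJson(model)
as_dict = json_format.MessageToDict(model)

with open("model.json", "w") as f:
    f.write(json_str)

# Parse the JSON back into a ModelProto and sanity-check it.
restored = json_format.Parse(json_str, onnx.ModelProto())
onnx.checker.check_model(restored)
```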
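Likewise, the parameter/MACs/FLOPs counter automates bookkeeping whose simplest piece, counting parameters, can be approximated with the plain `onnx` package. The rough sketch below does not use the listed tool's API and assumes a local `model.onnx` file exists.

```python
# Rough illustration of counting ONNX model parameters by summing the
# element counts of all graph initializers. Uses only the onnx package;
# "model.onnx" is a placeholder path.
import onnx
from onnx import numpy_helper

def count_parameters(path: str) -> int:
    model = onnx.load(path)
    return sum(numpy_helper.to_array(init).size
               for init in model.graph.initializer)

if __name__ == "__main__":
    print(f"parameters: {count_parameters('model.onnx'):,}")
```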