openvinotoolkit / nncf
Neural Network Compression Framework for enhanced OpenVINO™ inference
☆1,123 · Updated this week
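NNCF's main entry point is its post-training quantization API. A minimal sketch, assuming the PyTorch backend and a synthetic FakeData set standing in for real calibration data (the model, dataset, and transform below are illustrative only):

```python
import nncf
import torch
import torchvision

# Float model to quantize; any torch.nn.Module can be used the same way.
model = torchvision.models.resnet18(weights="DEFAULT").eval()

# A small, representative calibration sample is enough for post-training
# quantization; FakeData is only a stand-in for real validation images.
calib_data = torchvision.datasets.FakeData(
    size=100, transform=torchvision.transforms.ToTensor()
)
calib_loader = torch.utils.data.DataLoader(calib_data, batch_size=1)

def transform_fn(data_item):
    # Map a dataloader item to the model's input tensor.
    images, _ = data_item
    return images

calibration_dataset = nncf.Dataset(calib_loader, transform_fn)

# 8-bit post-training quantization; the result is still a torch model
# that can afterwards be exported (e.g. to OpenVINO IR) for inference.
quantized_model = nncf.quantize(model, calibration_dataset)
```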
Alternatives and similar repositories for nncf
Users interested in nncf compare it to the libraries listed below.
- ONNX Optimizer ☆795 · Updated this week
- A parser, editor and profiler tool for ONNX models. ☆478 · Updated 3 months ago
- Common utilities for ONNX converters ☆294 · Updated last month
- Convert ONNX models to PyTorch. ☆725 · Updated 3 months ago
- A tool to modify ONNX models visually, based on Netron and Flask. ☆1,606 · Updated 2 months ago
- SOTA low-bit LLM quantization (INT8/FP8/MXFP8/INT4/MXFP4/NVFP4) & sparsity; leading model compression techniques on PyTorch, TensorFlow, … ☆2,581 · Updated this week
- Model Quantization Benchmark ☆857 · Updated 9 months ago
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,939 · Updated this week
- ☆342 · Updated 2 years ago
- Model Compression Toolkit (MCT) is an open source project for neural network model optimization under efficient, constrained hardware. Th… ☆432 · Updated this week
- A unified library of SOTA model optimization techniques like quantization, pruning, distillation, speculative decoding, etc. It compresse… ☆1,925 · Updated this week
- AIMET is a library that provides advanced quantization and compression techniques for trained neural network models. ☆2,552 · Updated this week
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python. ☆420 · Updated this week
- Transform ONNX models to a PyTorch representation ☆345 · Updated 3 months ago
- TinyNeuralNetwork is an efficient and easy-to-use deep learning model compression framework. ☆863 · Updated last month
- Quantization library for PyTorch. Supports low-precision and mixed-precision quantization, with hardware implementation through TVM. ☆453 · Updated 2 years ago
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆532 · Updated this week
- Brevitas: neural network quantization in PyTorch ☆1,482 · Updated this week
- onnxruntime-extensions: A specialized pre- and post-processing library for ONNX Runtime ☆441 · Updated this week
- Bolt is a deep learning library with high performance and heterogeneous flexibility. ☆956 · Updated 9 months ago
- Reference implementations of MLPerf® inference benchmarks ☆1,525 · Updated this week
- A DNN inference latency prediction toolkit for accurately modeling and predicting the latency on diverse edge devices. ☆364 · Updated last year
- OTOv1-v3, NeurIPS, ICLR, TMLR, DNN Training, Compression, Structured Pruning, Erasing Operators, CNN, Diffusion, LLM ☆310 · Updated last year
- Simplify your onnx model (see the usage sketch after this list) ☆4,288 · Updated last week
- Representation and Reference Lowering of ONNX Models in MLIR Compiler Infrastructure ☆973 · Updated this week
- ⚡ Useful scripts when using TensorRT ☆237 · Updated 5 years ago
- TensorFlow/TensorRT integration ☆743 · Updated 2 years ago
- OpenVINO™ Explainable AI (XAI) Toolkit: Visual Explanation for OpenVINO Models ☆36 · Updated 4 months ago
- Self-Created Tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC). The purpose of this tool is to solve the massiv… ☆919 · Updated last week
- ONNX-TensorRT: TensorRT backend for ONNX ☆3,184 · Updated this week
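For the onnx-simplifier entry above ("Simplify your onnx model"), a minimal usage sketch, assuming the `onnxsim` package is installed and `"model.onnx"` is a placeholder path:

```python
import onnx
from onnxsim import simplify

# Load an existing ONNX graph; "model.onnx" is a placeholder path.
model = onnx.load("model.onnx")

# simplify() constant-folds and removes redundant nodes; `check` reports
# whether the simplified graph still matches the original numerically.
model_simp, check = simplify(model)
assert check, "Simplified ONNX model could not be validated"

onnx.save(model_simp, "model_simplified.onnx")
```

The project also ships a command-line wrapper (`onnxsim input.onnx output.onnx`) for the same one-shot operation.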