quic / aimet
AIMET is a library that provides advanced quantization and compression techniques for trained neural network models.
☆2,318 · Updated this week
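To give a sense of what the library does, here is a minimal sketch of a typical post-training quantization-simulation flow with AIMET's `aimet_torch` front end. It assumes the `QuantizationSimModel` API (`compute_encodings`, `export`); exact module paths, argument names, and defaults vary across AIMET releases, and the tiny model and random calibration batches below are placeholders, not part of AIMET itself.

```python
import os
import torch
from aimet_torch.quantsim import QuantizationSimModel  # import path as in AIMET 1.x

# Placeholder trained model and input shape; substitute your own network.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3, padding=1),
    torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(8, 10),
).eval()
dummy_input = torch.randn(1, 3, 224, 224)

# Wrap the model with fake-quantization ops (8-bit weights and activations here).
sim = QuantizationSimModel(model, dummy_input=dummy_input,
                           default_param_bw=8, default_output_bw=8)

# Calibration callback: run representative data through the wrapped model so AIMET
# can compute quantization encodings (scale/offset) for each quantizer.
# Random batches stand in for a real calibration loader.
def calibrate(sim_model, _):
    with torch.no_grad():
        for _ in range(8):
            sim_model(torch.randn(4, 3, 224, 224))

sim.compute_encodings(calibrate, None)

# Export the quantized model together with its encodings for downstream runtimes.
os.makedirs('./aimet_out', exist_ok=True)
sim.export(path='./aimet_out', filename_prefix='model_int8', dummy_input=dummy_input)
```

The export step typically writes both a model file and an encodings file, which is what deployment toolchains consume when running the quantized network.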
Alternatives and similar repositories for aimet
Users interested in aimet are comparing it to the libraries listed below.
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,762 · Updated this week
- Simplify your ONNX model ☆4,088 · Updated 8 months ago
- Neural Network Compression Framework for enhanced OpenVINO™ inference ☆1,034 · Updated this week
- A tool to modify ONNX models visually, based on Netron and Flask ☆1,506 · Updated 3 months ago
- ONNX Optimizer ☆715 · Updated this week
- [ICLR 2020] Once for All: Train One Network and Specialize it for Efficient Deployment ☆1,916 · Updated last year
- SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX R… ☆2,419 · Updated this week
- A list of papers, docs, and code about model quantization. This repo aims to provide information for model quantization research; we are co… ☆2,109 · Updated 2 months ago
- Model Quantization Benchmark ☆804 · Updated last month
- A parser, editor and profiler tool for ONNX models ☆436 · Updated this week
- PPL Quantization Tool (PPQ) is a powerful offline neural network quantization tool ☆1,689 · Updated last year
- ONNX-TensorRT: TensorRT backend for ONNX ☆3,079 · Updated 2 weeks ago
- TinyNeuralNetwork is an efficient and easy-to-use deep learning model compression framework ☆826 · Updated this week
- FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/ ☆1,342 · Updated this week
- A GPU-accelerated library containing highly optimized building blocks and an execution engine for data processing to accelerate deep lear… ☆5,407 · Updated this week
- Convert TensorFlow, Keras, TensorFlow.js, and TFLite models to ONNX ☆2,426 · Updated 3 months ago
- Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://intellabs.github.io/distille… ☆4,394 · Updated 2 years ago
- Reference implementations of MLPerf™ inference benchmarks ☆1,386 · Updated this week
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description ☆986 · Updated 8 months ago
- Quantized Neural Network PACKage - mobile-optimized implementation of quantized neural network operators ☆1,541 · Updated 5 years ago
- TensorFlow Backend for ONNX ☆1,305 · Updated last year
- Awesome machine learning model compression research papers, quantization, tools, and learning material ☆524 · Updated 8 months ago
- Mobile vision models and code ☆911 · Updated 2 months ago
- A CPU+GPU profiling library that provides access to timeline traces and hardware performance counters ☆818 · Updated this week
- A primitive library for neural networks ☆1,343 · Updated 6 months ago
- An Open-Source Library for Training Binarized Neural Networks ☆720 · Updated 9 months ago
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster ☆1,046 · Updated last year
- Bolt is a deep learning library with high performance and heterogeneous flexibility ☆947 · Updated last month
- BladeDISC is an end-to-end DynamIc Shape Compiler project for machine learning workloads ☆871 · Updated 5 months ago