Xilinx / brevitas
Brevitas: neural network quantization in PyTorch
☆1,274 · Updated this week
Alternatives and similar repositories for brevitas:
Users interested in brevitas often compare it to the libraries listed below.
- Dataflow compiler for QNN inference on FPGAs ☆795 · Updated this week
- Quantization library for PyTorch. Supports low-precision and mixed-precision quantization, with hardware implementation through TVM. ☆428 · Updated last year
- PyTorch implementation of APoT quantization (ICLR 2020) ☆271 · Updated 3 months ago
- Machine learning on FPGAs using HLS ☆1,401 · Updated this week
- ☆430 · Updated 9 months ago
- QKeras: a quantization deep learning library for TensorFlow Keras ☆559 · Updated last month
- Quantization of convolutional neural networks ☆244 · Updated 7 months ago
- Dataflow QNN inference accelerator examples on FPGAs ☆207 · Updated 2 months ago
- [CVPR 2019, Oral] HAQ: Hardware-Aware Automated Quantization with Mixed Precision ☆380 · Updated 4 years ago
- Model Quantization Benchmark ☆793 · Updated 2 months ago
- Low Precision Arithmetic Simulation in PyTorch ☆272 · Updated 10 months ago
- QONNX: Arbitrary-Precision Quantized Neural Networks in ONNX ☆141 · Updated this week
- HLS-based deep neural network accelerator library for Xilinx UltraScale+ MPSoCs ☆325 · Updated 5 years ago
- Tutorial notebooks for hls4ml ☆325 · Updated this week
- ☆238 · Updated 2 years ago
- PyTorch implementation of Data Free Quantization Through Weight Equalization and Bias Correction ☆260 · Updated last year
- Unofficial implementation of LSQ-Net, a neural network quantization framework ☆289 · Updated 10 months ago
- [CVPR'20] ZeroQ: A Novel Zero Shot Quantization Framework ☆276 · Updated last year
- Binarized Neural Network (BNN) for PyTorch ☆509 · Updated last year
- PyTorch library to facilitate development and standardized evaluation of neural network pruning methods ☆428 · Updated last year
- AIMET is a library that provides advanced quantization and compression techniques for trained neural network models ☆2,249 · Updated last week
- Vitis AI is Xilinx’s development stack for AI inference on Xilinx hardware platforms, including both edge devices and Alveo cards ☆1,559 · Updated 6 months ago
- [ICML'21 Oral] I-BERT: Integer-only BERT Quantization ☆241 · Updated 2 years ago
- Summary and code for deep neural network quantization ☆546 · Updated 5 months ago
- A simple network quantization demo written from scratch in PyTorch ☆521 · Updated last year
- ☆314 · Updated last year
- [ECCV 2018] AMC: AutoML for Model Compression and Acceleration on Mobile Devices ☆440 · Updated last year
- A collection of works on reducing model size and on ASIC/FPGA accelerators for machine learning ☆557 · Updated last year
- Vitis HLS library for FINN ☆191 · Updated last week
- An open-source deep learning inference engine based on FPGA ☆159 · Updated 4 years ago
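Most of the quantization libraries listed above (Brevitas, QKeras, AIMET, and the various PTQ/QAT projects) build on the same primitive: uniform affine quantization, which maps a real value x to an integer code q via a scale s and zero-point z, q = clamp(round(x / s) + z, qmin, qmax). A minimal pure-Python sketch of that mapping is below; the helper names are illustrative and not taken from any of the listed projects:

```python
def quantize(x: float, scale: float, zero_point: int, bit_width: int = 8) -> int:
    """Map a float to an integer code via uniform affine quantization."""
    qmin, qmax = 0, 2 ** bit_width - 1          # unsigned integer range
    q = round(x / scale) + zero_point           # scale, then shift by zero-point
    return max(qmin, min(qmax, q))              # clamp to the representable range

def dequantize(q: int, scale: float, zero_point: int) -> float:
    """Map an integer code back to its approximate float value."""
    return (q - zero_point) * scale

# Round-trip a value through an 8-bit code and inspect the error.
scale, zero_point = 0.05, 128
x = 1.3
q = quantize(x, scale, zero_point)              # 154
x_hat = dequantize(q, scale, zero_point)        # 1.3 (exact here; error <= scale/2 in general)
```

Libraries differ mainly in how scale and zero-point are chosen (per-tensor vs. per-channel, learned vs. calibrated) and in whether quantization is simulated during training (QAT) or applied afterwards (PTQ), but the arithmetic above is the shared core.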