zhutmost / neuralzip
An out-of-the-box PyTorch scaffold for neural network quantization-aware training (QAT) research. Website: https://github.com/zhutmost/neuralzip
☆26 · Updated 2 years ago
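For context on what a QAT workflow involves, here is a minimal sketch using PyTorch's eager-mode `torch.ao.quantization` utilities. This is not neuralzip's own API; `TinyNet` and the training-loop placeholder are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torch.ao import quantization as tq

class TinyNet(nn.Module):
    """Hypothetical toy model; QuantStub/DeQuantStub mark the quantized region."""
    def __init__(self):
        super().__init__()
        self.quant = tq.QuantStub()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.relu = nn.ReLU()
        self.dequant = tq.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)            # fake-quantize activations during QAT
        x = self.relu(self.conv(x))
        return self.dequant(x)       # back to float at the boundary

model = TinyNet().train()
model.qconfig = tq.get_default_qat_qconfig("fbgemm")
tq.prepare_qat(model, inplace=True)  # insert fake-quant modules and observers

# ... run the usual float training loop here (optimizer, loss, epochs) ...

model.eval()
int8_model = tq.convert(model)       # fold observers into int8 modules
```

The fake-quant modules simulate low-precision arithmetic in the forward pass while gradients flow in float, which is the basic mechanism the QAT papers listed below build on.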
Alternatives and similar repositories for neuralzip:
Users interested in neuralzip are comparing it to the libraries listed below:
- BitSplit Post-training Quantization ☆49 · Updated 3 years ago
- BSQ: Exploring Bit-Level Sparsity for Mixed-Precision Neural Network Quantization (ICLR 2021) ☆40 · Updated 4 years ago
- Neural Network Quantization With Fractional Bit-widths ☆12 · Updated 4 years ago
- Post-training sparsity-aware quantization ☆34 · Updated 2 years ago
- ☆19 · Updated 3 years ago
- ☆28 · Updated 3 years ago
- ☆18 · Updated 3 years ago
- PyTorch implementation of our paper accepted by ICCV 2021 -- ReCU: Reviving the Dead Weights in Binary Neural Networks http://arxiv.org/a… ☆39 · Updated 3 years ago
- Improving Post Training Neural Quantization: Layer-wise Calibration and Integer Programming ☆34 · Updated last year
- Official implementation of "Searching for Winograd-aware Quantized Networks" (MLSys'20) ☆27 · Updated last year
- This repository contains the PyTorch scripts to train mixed-precision networks for microcontroller deployment, based on the memory contr… ☆49 · Updated 9 months ago
- [ICASSP'20] DNN-Chip Predictor: An Analytical Performance Predictor for DNN Accelerators with Various Dataflows and Hardware Architecture… ☆23 · Updated 2 years ago
- [ICML 2021] "Auto-NBA: Efficient and Effective Search Over the Joint Space of Networks, Bitwidths, and Accelerators" by Yonggan Fu, Yonga… ☆15 · Updated 3 years ago
- DeiT implementation for Q-ViT ☆24 · Updated 2 years ago
- TQT's PyTorch implementation ☆21 · Updated 3 years ago
- ☆17 · Updated 2 years ago
- [TCAD'23] AccelTran: A Sparsity-Aware Accelerator for Transformers ☆34 · Updated last year
- ☆75 · Updated 2 years ago
- LSQ+ or LSQplus (see the LSQ-style sketch after this list) ☆63 · Updated last month
- Official implementation of LIMPQ, "Mixed-Precision Neural Network Quantization via Learned Layer-wise Importance" (ECCV 2022) ☆51 · Updated last year
- [TCAD 2021] Block Convolution: Towards Memory-Efficient Inference of Large-Scale CNNs on FPGA ☆16 · Updated 2 years ago
- [ICLR 2022 Oral] F8Net: Fixed-Point 8-bit Only Multiplication for Network Quantization ☆94 · Updated 2 years ago
- Quantize PyTorch models, supporting post-training quantization and quantization-aware training methods ☆13 · Updated last year
- ☆21 · Updated 2 years ago
- [FPGA'21] CoDeNet is an efficient object detection model on PyTorch, with SOTA performance on VOC and COCO based on CenterNet and Co-Desi… ☆25 · Updated 2 years ago
- My name is Fang Biao. I'm currently pursuing my Master's degree at the College of Computer Science and Engineering, Sichuan University, … ☆44 · Updated 2 years ago
- An implementation of YOLO using the LSQ network quantization method ☆23 · Updated 2 years ago
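Several of the entries above (LSQ+, the LSQ-quantized YOLO) build on LSQ (Learned Step Size Quantization, Esser et al., ICLR 2020). Below is a minimal sketch of the core LSQ idea, not taken from any of the listed repositories; the module name, bit-width, and initialization details are illustrative assumptions.

```python
import math
import torch
import torch.nn as nn

def grad_scale(x, scale):
    # Keep the forward value of x, but scale its gradient by `scale`.
    return (x - x * scale).detach() + x * scale

def round_ste(x):
    # Round with a straight-through estimator for the gradient.
    return (x.round() - x).detach() + x

class LSQWeightQuantizer(nn.Module):
    """Sketch of LSQ-style weight quantization with a learnable step size."""
    def __init__(self, bits=4):
        super().__init__()
        self.qn = -(2 ** (bits - 1))      # e.g. -8 for 4-bit symmetric
        self.qp = 2 ** (bits - 1) - 1     # e.g. +7
        self.step = nn.Parameter(torch.tensor(1.0))  # learnable step size s
        self.initialized = False

    def forward(self, w):
        if not self.initialized:
            # LSQ initialization: s = 2 * mean(|w|) / sqrt(Qp)
            self.step.data = 2 * w.abs().mean() / math.sqrt(self.qp)
            self.initialized = True
        # Gradient scale g = 1 / sqrt(numel * Qp) keeps step-size updates well conditioned.
        g = 1.0 / math.sqrt(w.numel() * self.qp)
        s = grad_scale(self.step, g)
        # Clamp to the integer grid, round with STE, then rescale back to float.
        return round_ste((w / s).clamp(self.qn, self.qp)) * s
```

In use, a quantized layer would call the quantizer on its weight in the forward pass (e.g. `w_q = self.w_quant(self.conv.weight)` followed by `F.conv2d(x, w_q, ...)`), so the step size is learned jointly with the weights during QAT.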