jnulzl / PyTorch-QAT
PyTorch Quantization Aware Training (QAT)
☆31 · Updated last year
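For readers new to the topic, the basic QAT flow that repositories like this one build on is available in PyTorch itself through the eager-mode `torch.ao.quantization` API. A minimal sketch, assuming a toy model (the layer sizes, model name, and backend choice below are illustrative, not taken from this repo):

```python
import torch
import torch.nn as nn
from torch.ao.quantization import (
    QuantStub, DeQuantStub, get_default_qat_qconfig, prepare_qat, convert,
)

class TinyNet(nn.Module):
    """Toy model; QuantStub/DeQuantStub mark where tensors enter/leave int8."""
    def __init__(self):
        super().__init__()
        self.quant = QuantStub()
        self.conv = nn.Conv2d(3, 8, 3)
        self.relu = nn.ReLU()
        self.dequant = DeQuantStub()

    def forward(self, x):
        return self.dequant(self.relu(self.conv(self.quant(x))))

torch.backends.quantized.engine = "fbgemm"   # x86 quantized backend
model = TinyNet().train()
model.qconfig = get_default_qat_qconfig("fbgemm")
prepare_qat(model, inplace=True)             # insert fake-quant + observers

# ... fine-tune here; each forward pass simulates int8 rounding ...
out = model(torch.randn(1, 3, 16, 16))

model.eval()
qmodel = convert(model)                      # swap in real int8 modules
```

The training loop is elided: the point is that `prepare_qat` makes every forward pass see quantization noise, so the fine-tuned weights adapt to it before `convert` freezes them to int8.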
Alternatives and similar repositories for PyTorch-QAT
Users interested in PyTorch-QAT are comparing it to the repositories listed below:
- A set of examples around MegEngine ☆31 · Updated last year
- TensorRT 2022 runner-up solution: accelerating the MobileViT model with TensorRT ☆67 · Updated 2 years ago
- TensorRT 2022 finals solution: TensorRT inference optimization for MST++, the first Transformer-based image restoration model ☆139 · Updated 2 years ago
- Offline quantization tools for deployment ☆128 · Updated last year
- Basic quantization methods, including QAT, PTQ, per_channel, per_tensor, DoReFa, LSQ, AdaRound, OMSE, histogram, bias_correction, etc. ☆45 · Updated 2 years ago
- Make RepVGG Greater Again: A Quantization-aware Approach ☆23 · Updated last year
- ☆34 · Updated 2 years ago
- ☆42 · Updated 3 years ago
- A TensorRT implementation of the ELAN image super-resolution algorithm ☆30 · Updated 2 years ago
- Slides with modifications for a course at Tsinghua University ☆59 · Updated 2 years ago
- PyTorch Quantization Aware Training Example ☆136 · Updated last year
- An ONNX-based quantization tool ☆71 · Updated last year
- Companion code for the Bilibili video https://www.bilibili.com/video/BV18L41197Uz/?spm_id_from=333.788&vd_source=eefa4b6e337f16d87d87c2c357db8ca7 ☆68 · Updated last year
- A simple tutorial of SNPE ☆172 · Updated 2 years ago
- algorithm-cpp projects ☆80 · Updated 2 years ago
- This project explores the deployment of Swin-Transformer with TensorRT, including FP16 and INT8 test results ☆166 · Updated 2 years ago
- Official repo of RepOptimizers and RepOpt-VGG ☆264 · Updated 2 years ago
- An 8-bit quantization sample for YOLOv5; PTQ, QAT, and partial quantization are all implemented, and the results are presented based… ☆102 · Updated 2 years ago
- ☆138 · Updated last year
- RepGhost: A Hardware-Efficient Ghost Module via Re-parameterization ☆176 · Updated last year
- EasyNN is a neural-network inference framework built for teaching, so that complete beginners can write an inference framework on their own ☆28 · Updated 9 months ago
- An Improved One Millisecond Mobile Backbone ☆145 · Updated 2 years ago
- Quantization of VGG16 and MobileNet in PyTorch ☆42 · Updated 4 years ago
- ONNX2Pytorch ☆162 · Updated 4 years ago
- trt-hackathon-2022 third-prize solution ☆10 · Updated 2 years ago
- ☆24 · Updated last year
- Model compression: 1. pruning (BN pruning), 2. knowledge distillation (Hinton), 3. quantization (MNN), 4. deployment (MNN) ☆79 · Updated 4 years ago
- ☆44 · Updated 2 years ago
- Post-Training Quantization for Vision Transformers ☆218 · Updated 2 years ago
- PyTorch AutoSlim tools: prune and compress a PyTorch model in three lines of code ☆39 · Updated 4 years ago
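Several of the repositories above (notably the one covering QAT, PTQ, per_channel, and per_tensor methods) differ mainly in how they compute quantization scales. A sketch of the two granularities, assuming symmetric int8 quantization (the function names are my own, not from any listed repo):

```python
import torch

torch.manual_seed(0)

def quantize_per_tensor(x, num_bits=8):
    # Symmetric per-tensor: one scale shared by the whole tensor.
    qmax = 2 ** (num_bits - 1) - 1
    scale = x.abs().max() / qmax
    q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax)
    return q, scale

def quantize_per_channel(x, num_bits=8, axis=0):
    # Symmetric per-channel: one scale per output channel (axis 0 for conv weights).
    qmax = 2 ** (num_bits - 1) - 1
    dims = [d for d in range(x.dim()) if d != axis]
    scale = x.abs().amax(dim=dims, keepdim=True) / qmax
    q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax)
    return q, scale

w = torch.randn(8, 3, 3, 3)          # conv weight: out_ch, in_ch, kH, kW
q_t, s_t = quantize_per_tensor(w)
q_c, s_c = quantize_per_channel(w)
err_t = (q_t * s_t - w).abs().mean()  # per-tensor reconstruction error
err_c = (q_c * s_c - w).abs().mean()  # per-channel reconstruction error
```

Because each per-channel scale is no larger than the global per-tensor scale, the per-channel grid is finer and its reconstruction error is typically lower, which is why per-channel weight quantization is the common default for conv layers.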