AldrichZeng / Graduation-Design
Pruning-based neural network compression and acceleration
☆23 · Updated 6 years ago
Alternatives and similar repositories for Graduation-Design
Users interested in Graduation-Design are comparing it to the repositories listed below.
- Model compression demos (pruning, quantization, knowledge distillation) ☆77 · Updated 5 years ago
- ☆37 · Updated 6 years ago
- Companion code for the Efficient-Neural-Network tutorial series shared on Bilibili ☆301 · Updated 3 years ago
- A branchy network based on early exit for part of the samples (implemented in Chainer) ☆45 · Updated 6 years ago
- Static quantization, saving, and loading of OpenPose models with PyTorch ☆89 · Updated 4 years ago
- Quantization of VGG16 and MobileNet in PyTorch ☆42 · Updated 4 years ago
- Sparse training, group channel pruning, and knowledge distillation for YOLOv4 ☆32 · Updated 2 years ago
- Quantize PyTorch models; supports post-training quantization and quantization-aware training ☆14 · Updated 2 years ago
- Basic quantization methods, including QAT, PTQ, per-channel, per-tensor, DoReFa, LSQ, AdaRound, OMSE, histogram, and bias correction ☆47 · Updated 2 years ago
- Pruned models: VGG & ResNet-50 ☆18 · Updated 6 years ago
- A convolutional neural network (CNN) applied to CIFAR-10 ☆27 · Updated 6 years ago
- ☆17 · Updated 6 years ago
- Neural Network Quantization & Low-Bit Fixed Point Training For Hardware-Friendly Algorithm Design ☆160 · Updated 4 years ago
- An automated 8-bit quantization conversion tool for PyTorch (post-training quantization based on KL divergence) ☆32 · Updated 5 years ago
- yolov3_tiny implemented in TensorFlow for int8 quantization (TFLite) ☆29 · Updated 6 years ago
- Channel pruning for YOLOv4 ☆15 · Updated 3 years ago
- DL quantization for PyTorch ☆26 · Updated 6 years ago
- PyTorch implementation of "Pruning Filters For Efficient ConvNets" ☆16 · Updated 3 years ago
- tensorflow2_knowledge_distilling_example ☆12 · Updated 3 years ago
- PyTorch implementation of "Pruning Filters For Efficient ConvNets" ☆151 · Updated 2 years ago
- YOLOv3/YOLOv3-tiny/yolo-fasetest-xl from training to deployment ☆22 · Updated 4 years ago
- Pruning and quantization for SSD; model compression ☆30 · Updated 4 years ago
- Model compression and deployment optimization for PyTorch, including knowledge distillation, quantization, and pruning ☆18 · Updated 11 months ago
- A convolutional neural network implemented purely by hand in C++, without any framework; suitable for beginners trying to understand how CNNs are implemented and how they work ☆33 · Updated 5 years ago
- Learning both Weights and Connections for Efficient Neural Networks (https://arxiv.org/abs/1506.02626) ☆18 · Updated 4 years ago
- Training models with ternary quantized weights using PyTorch ☆15 · Updated 6 years ago
- A demonstration of autoTVM search for optimized neural-network inference code: the open-source CenterFace model is compiled with TVM, autoTVM searches for the best-performing inference code, and the result is deployed as compiled C++. The demo platform is CUDA, but other targets such as Raspberry Pi, Android, and iPhone are also possible ☆28 · Updated 4 years ago
- Model compression: 1. pruning (BN pruning) 2. knowledge distillation (Hinton) 3. quantization (MNN) 4. deployment (MNN) ☆79 · Updated 4 years ago
- Implementation of iterative pruning for deep neural networks [Han 2015] ☆40 · Updated 7 years ago
- An example showing how to map the YOLOv2 detection algorithm from model to FPGA ☆31 · Updated 6 years ago
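Two of the repositories above implement "Pruning Filters For Efficient ConvNets" (Li et al.), whose core idea is to rank a convolution layer's filters by L1 norm and drop the smallest ones. A minimal NumPy sketch of that ranking step (function and variable names here are illustrative, not taken from any listed repo):

```python
import numpy as np

def l1_filter_keep_mask(conv_weights, prune_ratio):
    """Return a boolean mask of filters to KEEP, ranked by L1 norm.

    conv_weights: array of shape (out_channels, in_channels, kH, kW)
    prune_ratio:  fraction of filters to remove (smallest L1 norms first)
    """
    out_channels = conv_weights.shape[0]
    # L1 norm of each filter: sum of absolute weights over all its elements
    norms = np.abs(conv_weights).reshape(out_channels, -1).sum(axis=1)
    n_prune = int(out_channels * prune_ratio)
    # Filters with the smallest norms are marked for removal
    pruned = np.argsort(norms)[:n_prune]
    keep = np.ones(out_channels, dtype=bool)
    keep[pruned] = False
    return keep

# Example: a toy 4-filter conv layer, prune the weakest half
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 3, 3, 3))
mask = l1_filter_keep_mask(w, prune_ratio=0.5)
pruned_w = w[mask]  # layer keeps only the 2 highest-norm filters
print(mask.sum(), pruned_w.shape)  # 2 (2, 3, 3, 3)
```

In a real pruning pipeline the same mask would also be applied to the next layer's input channels (and to any BatchNorm parameters in between) before fine-tuning, which is the part the repositories above handle for full networks.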