BBuf / model-compression
Model compression based on PyTorch: (1) quantization: 8/4/2-bit (DoReFa), ternary/binary weights (TWN/BNN/XNOR-Net); (2) pruning: normal, regular, and group-convolution channel pruning; (3) group convolution structure; (4) batch-normalization folding for binarized features (A). A minimal quantization sketch follows below.
☆170 · Updated 4 years ago
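The repository's quantization feature list centers on DoReFa-style low-bit weights. Below is a minimal sketch of that idea as a standalone PyTorch snippet; the names `QuantizeK` and `dorefa_quantize_weights` are illustrative and are not the repo's actual API.

```python
# Hedged sketch (not code from BBuf/model-compression): DoReFa-style k-bit
# weight quantization with a straight-through estimator (STE) so gradients
# pass through the non-differentiable rounding step.
import torch


class QuantizeK(torch.autograd.Function):
    """Round x in [0, 1] to k-bit levels; pass gradients straight through."""

    @staticmethod
    def forward(ctx, x, k):
        levels = 2 ** k - 1
        return torch.round(x * levels) / levels

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None  # STE: identity gradient w.r.t. x, none for k


def dorefa_quantize_weights(w, k=2):
    """DoReFa weight quantization: map weights to [-1, 1] using k bits."""
    if k == 32:
        return w                               # full precision, no quantization
    w = torch.tanh(w)
    w = w / (2 * w.abs().max()) + 0.5          # normalize into [0, 1]
    return 2 * QuantizeK.apply(w, k) - 1       # quantize, map back to [-1, 1]


# Example: quantize a conv layer's weights to 2 bits before the forward pass.
conv = torch.nn.Conv2d(16, 32, 3)
w_q = dorefa_quantize_weights(conv.weight, k=2)
print(w_q.unique().numel())  # only a handful of distinct weight levels remain
```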
Alternatives and similar repositories for model-compression
Users interested in model-compression are comparing it to the libraries listed below.
- An automated toolkit for analyzing and modifying the structure of PyTorch models, including a model-compression algorithm library built on automatic structure analysis ☆249 · Updated 2 years ago
- PyTorch AutoSlim tools; prune and compress a PyTorch model in three lines of code (see the channel-pruning sketch after this list) ☆39 · Updated 4 years ago
- An NNIE quantization-aware training tool for PyTorch.
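Two of the listed alternatives revolve around channel pruning. Below is a minimal sketch of one common approach, keeping the output channels with the largest L1 filter norms; the function `prune_conv_channels` is hypothetical and is not the API of AutoSlim or any of the tools listed above.

```python
# Hedged sketch (not the listed tools' actual API): L1-norm channel pruning
# for a single Conv2d layer, keeping the output channels with the largest
# filter norms.
import torch
import torch.nn as nn


def prune_conv_channels(conv: nn.Conv2d, keep_ratio: float = 0.5) -> nn.Conv2d:
    """Return a new Conv2d that keeps only the output channels whose filters
    have the largest L1 norms. Downstream layers would also need their input
    channels adjusted; this sketch handles just one layer."""
    n_keep = max(1, int(conv.out_channels * keep_ratio))
    # L1 norm of each output filter: shape (out_channels,)
    norms = conv.weight.detach().abs().sum(dim=(1, 2, 3))
    keep_idx = torch.argsort(norms, descending=True)[:n_keep]

    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    pruned.weight.data = conv.weight.data[keep_idx].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep_idx].clone()
    return pruned


# Example: prune half the output channels of a 3x3 conv.
conv = nn.Conv2d(16, 32, 3, padding=1)
smaller = prune_conv_channels(conv, keep_ratio=0.5)
print(smaller)  # Conv2d(16, 16, kernel_size=(3, 3), ...)
```

In a full pruning pipeline the input channels of the following layer (and any batch-norm statistics in between) must be sliced with the same indices, which is the bookkeeping the listed toolkits automate.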