panchengl / yolov4_prune
☆14 · Updated 3 years ago
Related projects:
- Sparse training, group channel pruning, and knowledge distillation for YOLOv4 ☆29 · Updated last year
- Pruned-YOLOv5: compact models obtained by pruning YOLOv5 ☆57 · Updated 3 years ago
- yolov5 onnx caffe ☆111 · Updated 3 years ago
- Pruning and quantization for SSD. Model compression. ☆28 · Updated 3 years ago
- yolov5 onnx caffe ☆84 · Updated 3 years ago
- A sample for running YOLOv3 in INT8 mode with TensorRT ☆27 · Updated 5 years ago
- ☆90 · Updated 2 years ago
- ☆14 · Updated this week
- MobileNetV3-based SSD-Lite implementation in PyTorch ☆97 · Updated 5 years ago
- A concise version of MobileNetV3 SSD ☆77 · Updated 4 months ago
- Deploy pruned YOLOv3/v4/v4-tiny/v4-tiny-3l models on OpenVINO embedded devices ☆52 · Updated 3 years ago
- Based on the paper "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference" ☆61 · Updated 3 years ago
- YOLOv3 pruning ☆12 · Updated 4 years ago
- ☆62 · Updated 4 years ago
- Forked from https://github.com/amdegroot/ssd.pytorch; implemented in PyTorch ☆46 · Updated 4 years ago
- YOLOv5 5.0 knowledge distillation (yolov5l >> yolov5s) ☆153 · Updated 3 years ago
- YOLOv5 version 6.0 ☆24 · Updated 2 years ago
- ☆46 · Updated last year
- YOLOv5 pruning (SFP, network slimming) ☆18 · Updated 2 years ago
- Convert YOLOv3 and YOLOv3-tiny (PyTorch version) into TensorRT models ☆60 · Updated 4 years ago
- ☆25 · Updated 2 years ago
- ☆34 · Updated last year
- TensorRT INT8 quantization of a YOLOv5 ONNX model ☆174 · Updated 3 years ago
- An implementation supporting yolov5s, yolov5m, yolov5l, and yolov5x ☆34 · Updated 2 years ago
- ☆32 · Updated 3 years ago
- An 8-bit quantization sample for YOLOv5. PTQ, QAT, and partial quantization are all implemented, with results based… ☆95 · Updated 2 years ago
- YOLOv3-training-prune ☆59 · Updated 3 years ago
- Source-code walkthrough of the Darknet (AlexeyAB fork) framework: detailed line-by-line Chinese comments and analysis of the underlying principles ☆72 · Updated 3 years ago
- ppyolo in pytorch. 44.8% box mAP. ☆105 · Updated 2 years ago
- Model pruning (network slimming) for YOLOv3 on the Oxford Hand dataset (a project requirement); after pruning, the parameter count drops by 80%, inference is 2x faster, and mAP is nearly unchanged ☆12 · Updated 5 years ago
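Several of the pruning projects listed above (the YOLOv3/YOLOv5 network-slimming repos) follow the same core idea: after sparsity-regularized training, rank BatchNorm scale factors (gamma) by magnitude across all layers and prune the channels below a global threshold. A minimal sketch of just that channel-selection step, with hypothetical names and a made-up prune ratio, not taken from any specific repo:

```python
import numpy as np

def slimming_masks(gammas, prune_ratio):
    """Network-slimming channel selection (hypothetical helper).

    gammas      : list of per-layer arrays of BatchNorm scale factors
    prune_ratio : fraction of channels to remove globally
    Returns a boolean keep-mask per layer.
    """
    # pool |gamma| across every layer and sort to find the global cutoff
    flat = np.sort(np.abs(np.concatenate(gammas)))
    thresh = flat[int(len(flat) * prune_ratio)]
    # keep only channels whose scale factor exceeds the global threshold
    return [np.abs(g) > thresh for g in gammas]

# toy example: two layers with 3 and 2 channels
gammas = [np.array([0.9, 0.01, 0.5]), np.array([0.02, 0.8])]
masks = slimming_masks(gammas, 0.4)
# masks[0] -> [True, False, False], masks[1] -> [False, True]
```

In the real pipelines, the masks are then used to slice convolution weights and fine-tune the slimmed network; the threshold here is global rather than per-layer, which matches the original network-slimming formulation.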