SKKU-ESLAB / Auto-Compression
Automatic DNN compression tool with various model compression and neural architecture search techniques
☆21 · Updated 2 months ago
Alternatives and similar repositories for Auto-Compression
Users interested in Auto-Compression are comparing it to the libraries listed below.
- Arm Compute Library implementation of an efficient low-precision neural network ☆25 · Updated 4 years ago
- CNN functions for dense matrices resident in flash storage ☆23 · Updated 5 years ago
- ANT framework's model database, providing DNN models for a wide range of IoT devices ☆17 · Updated last week
- Enhanced version of IoT.js for the ANT Framework - Platform for Internet of Things with JavaScript ☆15 · Updated 4 years ago
- Virtual Connection: Framework for P2P Communication Abstraction ☆23 · Updated 4 years ago
- ANT (AI-based Networked Things) Framework ☆27 · Updated 2 months ago
- IoT.js for ANT, based on Tizen RT ☆14 · Updated 4 years ago
- Official implementation of "Searching for Winograd-aware Quantized Networks" (MLSys'20) ☆27 · Updated last year
- Code for "Binary Ensemble Neural Network: More Bits per Network or More Networks per Bit?" ☆31 · Updated 5 years ago
- Neural network acceleration using CPU/GPU, ASIC, and FPGA ☆60 · Updated 4 years ago
- Post-training sparsity-aware quantization ☆34 · Updated 2 years ago
- BSQ: Exploring Bit-Level Sparsity for Mixed-Precision Neural Network Quantization (ICLR 2021) ☆40 · Updated 4 years ago
- Torch-7 implementation of BinaryDuo (ICLR 2020) ☆9 · Updated 4 years ago
- ☆36 · Updated 6 years ago
- nnq_cnd_study: Neural Network Quantization & Compact Networks Design Study ☆13 · Updated 4 years ago
- Research, experimentation, and implementation of a hardware-agnostic accelerated DL framework ☆36 · Updated 3 weeks ago
- Official PyTorch implementation of "Learning Architectures for Binary Networks" (ECCV 2020) ☆26 · Updated 4 years ago
- ProxQuant: Quantized Neural Networks via Proximal Operators ☆29 · Updated 6 years ago
- Explore energy-efficient dataflow scheduling for neural networks ☆225 · Updated 4 years ago
- ☆70 · Updated last month
- ☆18 · Updated 4 years ago
- Neural network acceleration on ASIC, FPGA, GPU, and PIM ☆52 · Updated 5 years ago
- Conditional channel- and precision-pruning of neural networks ☆72 · Updated 5 years ago
- PyTorch implementation of "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference" ☆56 · Updated 5 years ago
- A collection of works on neural networks and neural accelerators ☆40 · Updated 6 years ago
- [CVPR'20] Unofficial mixed-precision implementation of ZeroQ: A Novel Zero Shot Quantization Framework ☆14 · Updated 4 years ago
- Chameleon: Adaptive Code Optimization for Expedited Deep Neural Network Compilation ☆27 · Updated 5 years ago
- Improving Post-Training Neural Quantization: Layer-wise Calibration and Integer Programming ☆97 · Updated 4 years ago
- ☆17 · Updated 2 years ago
- BitSplit post-training quantization ☆50 · Updated 3 years ago
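Many of the repositories above center on quantization. As background only, here is a minimal sketch of post-training affine (uniform) quantization, the general technique these projects build on; the function names and parameters are illustrative, not taken from any listed repository.

```python
# Illustrative sketch of post-training affine quantization (generic technique,
# not any specific repo's implementation).

def quantize(weights, num_bits=8):
    """Map float weights to integers in [0, 2^num_bits - 1] via affine scaling."""
    qmin, qmax = 0, (1 << num_bits) - 1
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / (qmax - qmin) or 1.0  # guard against constant tensors
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float values from integers."""
    return [(qi - zero_point) * scale for qi in q]

w = [-1.0, -0.5, 0.0, 0.5, 1.0]
q, s, z = quantize(w)
w_hat = dequantize(q, s, z)  # round-trip error is bounded by the scale
```

The "post-training" aspect is that only the weight statistics (min/max here) are needed; no retraining is involved. Methods such as layer-wise calibration or bit-level sparsity refine how the scale and integer codes are chosen.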