SKKU-ESLAB / Auto-Compression
Automatic DNN compression tool with various model compression and neural architecture search techniques
☆21 · Updated 11 months ago
Alternatives and similar repositories for Auto-Compression:
Users interested in Auto-Compression are comparing it to the repositories listed below
- CNN functions for dense matrices resident in flash storage ☆23 · Updated 5 years ago
- ANT framework's model database, providing DNN models for a wide range of IoT devices ☆16 · Updated 4 years ago
- Arm Compute Library implementation of efficient low-precision neural networks ☆24 · Updated 4 years ago
- Virtual Connection: Framework for P2P Communication Abstraction ☆23 · Updated 4 years ago
- Enhanced version of IoT.js for the ANT Framework - Platform for Internet of Things with JavaScript ☆15 · Updated 4 years ago
- ANT (AI-based Networked Things) Framework ☆26 · Updated last year
- IoT.js of ANT based on Tizen RT ☆14 · Updated 4 years ago
- Neural Network Acceleration using CPU/GPU, ASIC, FPGA ☆60 · Updated 4 years ago
- ☆36 · Updated 5 years ago
- Neural Network Acceleration such as ASIC, FPGA, GPU, and PIM ☆51 · Updated 4 years ago
- Study Group of Deep Learning Compiler ☆156 · Updated 2 years ago
- Code for the paper "Binary Ensemble Neural Network: More Bits per Network or More Networks per Bit?" ☆31 · Updated 5 years ago
- Official implementation of "Searching for Winograd-aware Quantized Networks" (MLSys'20) ☆27 · Updated last year
- Chameleon: Adaptive Code Optimization for Expedited Deep Neural Network Compilation ☆27 · Updated 5 years ago
- Post-training sparsity-aware quantization ☆34 · Updated last year
- Conditional channel- and precision-pruning on neural networks ☆72 · Updated 4 years ago
- Accelerating CNN convolution on GPUs using memory-efficient data access patterns ☆14 · Updated 7 years ago
- ☆47 · Updated 2 years ago
- Modified version of PyTorch that works with changes to GPGPU-Sim ☆48 · Updated 2 years ago
- BSQ: Exploring Bit-Level Sparsity for Mixed-Precision Neural Network Quantization (ICLR 2021) ☆38 · Updated 4 years ago
- Meta package providing the Samsung OneMCC (Memory Coupled Computing) infrastructure ☆27 · Updated last year
- Code for the paper "NeuralPower: Predict and Deploy Energy-Efficient Convolutional Neural Networks" ☆21 · Updated 5 years ago
- Position-based Scaled Gradient for Model Quantization and Pruning (NeurIPS 2020) ☆26 · Updated 4 years ago
- ☆45 · Updated 5 years ago
- Winograd-based convolution implementation in OpenCL ☆28 · Updated 7 years ago
- Source code of the simulator used in the Mosaic paper from MICRO 2017: "Mosaic: A GPU Memory Manager with Application-Transparent Support…" ☆41 · Updated 6 years ago
- A Toy-Purpose TPU Simulator ☆10 · Updated 7 months ago
- TVM stack: exploring the explosion of deep-learning frameworks and how to bring them together ☆64 · Updated 6 years ago
- Official implementation of LIMPQ, "Mixed-Precision Neural Network Quantization via Learned Layer-wise Importance" (ECCV 2022) ☆50 · Updated last year