SKKU-ESLAB / Auto-Compression
Automatic DNN compression tool combining various model-compression and neural-architecture-search techniques
☆21 · Updated 2 weeks ago
Alternatives and similar repositories for Auto-Compression:
Users interested in Auto-Compression are comparing it to the libraries listed below
- CNN functions for dense matrices resident in flash storage ☆23 · Updated 5 years ago
- ANT framework's model database, providing DNN models for a wide range of IoT devices ☆16 · Updated 2 weeks ago
- Arm Compute Library implementation of efficient low-precision neural networks ☆24 · Updated 4 years ago
- Virtual Connection: Framework for P2P Communication Abstraction ☆23 · Updated 4 years ago
- Enhanced version of IoT.js for the ANT Framework - Platform for Internet of Things with JavaScript ☆15 · Updated 4 years ago
- ANT (AI-based Networked Things) Framework ☆26 · Updated 2 weeks ago
- IoT.js of ANT based on Tizen RT ☆14 · Updated 4 years ago
- ☆36 · Updated 6 years ago
- Escoin: Efficient Sparse Convolutional Neural Network Inference on GPUs ☆16 · Updated 6 years ago
- Post-training sparsity-aware quantization ☆34 · Updated 2 years ago
- Code for "Binary Ensemble Neural Network: More Bits per Network or More Networks per Bit?" ☆31 · Updated 5 years ago
- Chameleon: Adaptive Code Optimization for Expedited Deep Neural Network Compilation ☆27 · Updated 5 years ago
- Official implementation of "Searching for Winograd-aware Quantized Networks" (MLSys'20) ☆27 · Updated last year
- ☆46 · Updated 5 years ago
- ☆47 · Updated 3 years ago
- ☆19 · Updated 2 years ago
- Study group on deep learning compilers ☆158 · Updated 2 years ago
- PyTorch implementation of "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference" ☆55 · Updated 5 years ago
- ☆25 · Updated 5 years ago
- Meta-package providing the Samsung OneMCC (Memory Coupled Computing) infrastructure ☆27 · Updated last year
- Neural network acceleration using CPUs/GPUs, ASICs, and FPGAs ☆60 · Updated 4 years ago
- BSQ: Exploring Bit-Level Sparsity for Mixed-Precision Neural Network Quantization (ICLR 2021) ☆40 · Updated 4 years ago
- Code for the ECCV 2020 paper "Post-Training Piecewise Linear Quantization for Deep Neural Networks" ☆69 · Updated 3 years ago
- BlockCIrculantRNN (LSTM and GRU) using TensorFlow ☆14 · Updated 6 years ago
- ☆66 · Updated last month
- Quantization of convolutional neural networks ☆244 · Updated 8 months ago
- BitSplit post-training quantization ☆49 · Updated 3 years ago
- Accelerating CNN convolution operations on GPUs using memory-efficient data access patterns ☆14 · Updated 7 years ago
- A version of XRBench-MAESTRO used for the MLSys 2023 publication ☆23 · Updated last year
- CPrune: Compiler-Informed Model Pruning for Efficient Target-Aware DNN Execution ☆16 · Updated last year
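
Several of the entries above (e.g. the sparsity-aware, piecewise-linear, and BitSplit repositories) center on post-training quantization. As a generic illustration of the underlying idea, not code from any listed repository, here is a minimal sketch of affine (asymmetric) uint8 quantization in NumPy; the function names and the per-tensor min/max calibration are assumptions for the sketch:

```python
import numpy as np

def quantize(x, num_bits=8):
    """Affine (asymmetric) quantization: map [x.min(), x.max()] onto [0, 2^b - 1]."""
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = float(x.max() - x.min()) / (qmax - qmin)
    # zero_point is the integer that represents real-valued 0.0 (clamped to range)
    zero_point = int(np.clip(np.round(qmin - x.min() / scale), qmin, qmax))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover a float approximation of the original tensor."""
    return scale * (q.astype(np.float32) - zero_point)

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
q, scale, zp = quantize(w)
w_hat = dequantize(q, scale, zp)
# Round-trip error is bounded by roughly one quantization step (scale)
max_err = float(np.abs(w - w_hat).max())
```

Real toolchains refine this basic scheme: ranges are calibrated per channel or from activation statistics, and approaches like BitSplit or piecewise-linear quantization optimize the mapping beyond a single min/max interval.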