penpaperkeycode / nnq_cnd_study
nnq_cnd_study stands for Neural Network Quantization & Compact Networks Design Study
☆13 · Updated 4 years ago
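As context for the repository's topic, here is a minimal illustrative sketch (not taken from the repository) of uniform affine int8 quantization, the basic operation that neural-network quantization work builds on. All names here are hypothetical.

```python
# Illustrative sketch: per-tensor uniform affine quantization to int8.
# Assumes the input tensor is not constant (scale would be zero otherwise).
import numpy as np

def quantize_int8(x: np.ndarray):
    """Map float values to int8 using a per-tensor scale and zero point."""
    qmin, qmax = -128, 127
    scale = float(x.max() - x.min()) / (qmax - qmin)
    zero_point = int(round(qmin - float(x.min()) / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Recover approximate float values from the int8 representation."""
    return (q.astype(np.float32) - zero_point) * scale

x = np.array([-1.0, 0.0, 0.5, 1.0], dtype=np.float32)
q, scale, zp = quantize_int8(x)
x_hat = dequantize(q, scale, zp)
# Reconstruction error is bounded by one quantization step (the scale).
```

Many of the repositories listed below refine this basic scheme, e.g. by clipping activation outliers, searching bit-widths per layer, or going all the way to binary weights.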
Alternatives and similar repositories for nnq_cnd_study:
Users who are interested in nnq_cnd_study are comparing it to the repositories listed below.
- Neural Network Acceleration using CPU/GPU, ASIC, FPGA ☆60 · Updated 4 years ago
- Accelerating CNNs' convolution operation on GPUs by using memory-efficient data access patterns. ☆14 · Updated 7 years ago
- ☆47 · Updated 3 years ago
- FrostNet: Towards Quantization-Aware Network Architecture Search ☆107 · Updated 10 months ago
- DL quantization for PyTorch ☆26 · Updated 6 years ago
- Neural Network Acceleration such as ASIC, FPGA, GPU, and PIM ☆51 · Updated 4 years ago
- This repository contains the PyTorch scripts to train mixed-precision networks for microcontroller deployment, based on the memory contr… ☆49 · Updated 10 months ago
- Example for applying Gaussian and Laplace clipping on activations of CNNs. ☆34 · Updated 6 years ago
- ☆67 · Updated 5 years ago
- Reproduction of WAGE in PyTorch. ☆41 · Updated 6 years ago
- ☆56 · Updated 4 years ago
- Graph Transforms to Quantize and Retrain Deep Neural Nets in TensorFlow ☆168 · Updated 5 years ago
- DNN quantization with outlier channel splitting ☆112 · Updated 5 years ago
- This repository contains training examples for the CVPR 2018 paper "SYQ: Learning Symmetric Quantization for Efficient Deep Neural Netwo… ☆31 · Updated 5 years ago
- ☆213 · Updated 6 years ago
- ☆12 · Updated 4 years ago
- This repository contains the results and code for the MLPerf™ Inference v0.5 benchmark. ☆55 · Updated last year
- Repository containing pruned models and related information ☆37 · Updated 4 years ago
- ☆36 · Updated 6 years ago
- Class Project for 18663: Implementation of FBNet (Hardware-Aware DNAS) ☆34 · Updated 5 years ago
- ☆14 · Updated 5 years ago
- A PyTorch implementation of "Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights" ☆167 · Updated 5 years ago
- ☆35 · Updated 5 years ago
- Fast NPU-aware Neural Architecture Search ☆22 · Updated 3 years ago
- XNOR-Net, with binary GEMM and binary conv2d kernels, supporting both CPU and GPU. ☆85 · Updated 5 years ago
- This is the implementation for the paper "AdaTune: Adaptive Tensor Program Compilation Made Efficient" (NeurIPS 2020). ☆13 · Updated 3 years ago
- Code for "Binary Ensemble Neural Network: More Bits per Network or More Networks per Bit?" ☆31 · Updated 5 years ago
- Study group on deep learning compilers ☆157 · Updated 2 years ago
- Mayo: Auto-generation of hardware-friendly deep neural networks. Dynamic Channel Pruning: Feature Boosting and Suppression. ☆114 · Updated 5 years ago
- Test scripts for exploring PyTorch JIT and quantization capability ☆12 · Updated 4 years ago