McDonnell-Research-Lab / 1-bit-per-weight
Official code repository for the ICLR 2018 paper "Training wide residual networks for deployment using a single bit for each weight"
☆37 · Updated 5 years ago
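The repository's title names the core trick: storing each weight with a single bit. A minimal sketch of sign-plus-scale binarization, assuming a per-tensor scale equal to the mean absolute value (the function name and scale choice are illustrative, not taken from the repository):

```python
import numpy as np

def binarize_weights(w):
    """Quantize a float weight tensor to 1 bit per weight: keep only
    the sign of each weight, plus a single per-tensor float scale
    (here the mean absolute value). Illustrative sketch only, not the
    repository's exact training scheme."""
    scale = np.abs(w).mean()
    return scale * np.sign(w)

w = np.array([0.3, -0.7, 0.1, -0.2])
w_bin = binarize_weights(w)  # -> [0.325, -0.325, 0.325, -0.325]
```

In practice, methods of this kind typically keep full-precision shadow weights during training and apply the binarized copy only in the forward pass, propagating gradients through the quantizer with a straight-through estimator.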
Alternatives and similar repositories for 1-bit-per-weight
Users interested in 1-bit-per-weight are comparing it to the libraries listed below.
- PyTorch implementation of Wide Residual Networks with 1-bit weights by McDonnell (ICLR 2018) ☆126 · Updated 6 years ago
- Code for https://arxiv.org/abs/1810.04622 ☆141 · Updated 5 years ago
- Structured Bayesian Pruning, NIPS 2017 ☆74 · Updated 5 years ago
- ☆47 · Updated 5 years ago
- Prunable nn layers for PyTorch ☆48 · Updated 7 years ago
- Code for "EigenDamage: Structured Pruning in the Kronecker-Factored Eigenbasis" https://arxiv.org/abs/1905.05934 ☆113 · Updated 5 years ago
- [ECCV 2018] Sparsely Aggregated Convolutional Networks https://arxiv.org/abs/1801.05895 ☆124 · Updated 6 years ago
- ☆23 · Updated 6 years ago
- Implementation of Progressive Neural Architecture Search in Keras and TensorFlow ☆118 · Updated 6 years ago
- A tutorial on "Soft weight-sharing for Neural Network compression", published at ICLR 2017 ☆145 · Updated 8 years ago
- Code used to generate the results appearing in "Train longer, generalize better: closing the generalization gap in large batch training o… ☆149 · Updated 8 years ago
- Training neural networks with 8-bit computations ☆28 · Updated 9 years ago
- Code for the paper "SWALP: Stochastic Weight Averaging for Low-Precision Training" ☆62 · Updated 6 years ago
- Path-Level Network Transformation for Efficient Architecture Search, ICML 2018 ☆112 · Updated 7 years ago
- Implementation of Data-free Knowledge Distillation for Deep Neural Networks (on arXiv) ☆81 · Updated 7 years ago
- An implementation of Shampoo ☆77 · Updated 7 years ago
- Reducing the size of convolutional neural networks ☆112 · Updated 7 years ago
- Efficient forward propagation for BCNNs ☆50 · Updated 8 years ago
- A PyTorch implementation of Scalpel: node pruning for five benchmark networks and SIMD-aware weight pruning for LeNet-300-100… ☆41 · Updated 6 years ago
- ☆23 · Updated 9 years ago
- Net-Trim: Convex Pruning of Deep Neural Networks with Performance Guarantee ☆60 · Updated 7 years ago
- Implementation of Trained Ternary Networks ☆108 · Updated 8 years ago
- Reviewing recent advances in classification on the CIFAR-10 and CIFAR-100 datasets ☆37 · Updated 7 years ago
- ☆57 · Updated 7 years ago
- Deep learning with a multiplication budget ☆47 · Updated 7 years ago
- An implementation of "Small steps and giant leaps: Minimal Newton solvers for Deep Learning" in PyTorch ☆21 · Updated 7 years ago
- Training Low-bit DNNs with Stochastic Quantization ☆74 · Updated 8 years ago
- A PyTorch implementation of "Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights" ☆167 · Updated 5 years ago
- TensorFlow code for "Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers" ☆30 · Updated 5 years ago
- Towards Automated Deep Learning: Efficient Joint Neural Architecture and Hyperparameter Search https://arxiv.org/abs/1807.06906 ☆49 · Updated 5 years ago
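Several of the listed repositories (e.g. Trained Ternary Networks, Incremental Network Quantization) generalize the single-bit idea to a few levels per weight. A simplified sketch of threshold-based ternarization, where weights become {-s, 0, +s}; the threshold rule and scale here are illustrative assumptions, not the exact scheme of any listed repository:

```python
import numpy as np

def ternarize(w, t=0.5):
    """Ternarize weights to {-s, 0, +s} (2 bits per weight).
    Illustrative rule: zero out weights whose magnitude falls below
    t * mean|w|, and scale the survivors by their mean magnitude.
    Not the exact scheme of any listed repository."""
    delta = t * np.abs(w).mean()        # magnitude threshold
    mask = np.abs(w) > delta            # weights large enough to keep
    scale = np.abs(w[mask]).mean() if mask.any() else 0.0
    return scale * np.sign(w) * mask

w = np.array([0.3, -0.7, 0.05, -0.2])
w_ter = ternarize(w)  # -> [0.4, -0.4, 0.0, -0.4]
```

Compared to pure sign binarization, the extra zero level lets small weights drop out entirely, which often recovers accuracy at the cost of one additional bit per weight.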