arashardakani / Learning-Recurrent-Binary-Ternary-Weights
☆12 · Updated 5 years ago
Related projects
Alternatives and complementary repositories for Learning-Recurrent-Binary-Ternary-Weights
- Sparse Recurrent Neural Networks -- Pruning Connections and Hidden Sizes (TensorFlow) ☆73 · Updated 4 years ago
- Implementation of BinaryConnect in PyTorch ☆36 · Updated 3 years ago
- Mayo: Auto-generation of hardware-friendly deep neural networks. Dynamic Channel Pruning: Feature Boosting and Suppression. ☆113 · Updated 4 years ago
- Custom CUDA kernel for {2, 3}d relative attention with PyTorch wrapper ☆43 · Updated 4 years ago
- Code for the paper "SWALP: Stochastic Weight Averaging for Low-Precision Training" ☆62 · Updated 5 years ago
- ☆53 · Updated 5 years ago
- Reducing the size of convolutional neural networks ☆112 · Updated 6 years ago
- Implementation of the NeurIPS 2019 paper "Normalization Helps Training of Quantized LSTM" ☆30 · Updated 3 months ago
- Reproduction of WAGE in PyTorch ☆41 · Updated 5 years ago
- A collection of training tricks for binarized neural networks ☆72 · Updated 3 years ago
- Implementation of the ICLR 2017 paper "Loss-aware Binarization of Deep Networks" ☆18 · Updated 5 years ago
- PyTorch implementation using binary weights and activations; accuracies are comparable ☆41 · Updated 4 years ago
- Code for BlockSwap (ICLR 2020) ☆33 · Updated 3 years ago
- Quick tutorial on Deep Rewiring ☆13 · Updated 5 years ago
- Implementation of the paper "Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization" ☆73 · Updated 4 years ago
- PyTorch implementation of TRP ☆44 · Updated 4 years ago
- XNOR-Net, with binary GEMM and binary conv2d kernels; supports both CPU and GPU ☆82 · Updated 5 years ago
- Training wide residual networks for deployment using a single bit for each weight ☆36 · Updated 4 years ago
- ☆11 · Updated 3 years ago
- A PyTorch implementation of Scalpel: node pruning for five benchmark networks and SIMD-aware weight pruning for LeNet-300-100… ☆40 · Updated 6 years ago
- A PyTorch implementation of "Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights" ☆164 · Updated 4 years ago
- Percentile computation for PyTorch ☆20 · Updated 4 years ago
- Code for the paper "Training Binary Neural Networks with Bayesian Learning Rule" ☆37 · Updated 2 years ago
- ☆213 · Updated 5 years ago
- PyTorch implementation of Wide Residual Networks with 1-bit weights by McDonnell (ICLR 2018) ☆124 · Updated 6 years ago
- A PyTorch implementation of the iterative pruning method described in Han et al. (2015) ☆40 · Updated 5 years ago
- Quantize weights and activations in recurrent neural networks ☆94 · Updated 6 years ago
- Successfully training approximations to full-rank matrices for efficiency in deep learning ☆16 · Updated 3 years ago
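The common thread in these repositories is quantizing real-valued weights to binary {-1, +1} or ternary {-1, 0, +1} values. A minimal numpy sketch of the two forward quantizers (function names are mine, not from any listed repo; the ternary threshold uses the widely cited 0.7 · mean(|w|) heuristic, which individual projects may vary):

```python
import numpy as np

def binarize(w):
    """Binary quantizer: map each weight to {-1, +1} (sign; zeros -> +1)."""
    return np.where(w >= 0, 1.0, -1.0)

def ternarize(w):
    """Ternary quantizer: map each weight to {-1, 0, +1}.

    Weights with magnitude below delta = 0.7 * mean(|w|) are zeroed;
    the rest keep their sign.
    """
    delta = 0.7 * np.mean(np.abs(w))
    return np.where(np.abs(w) > delta, np.sign(w), 0.0)

w = np.array([0.8, -0.3, 0.05, -0.9])
print(binarize(w))   # [ 1. -1.  1. -1.]
print(ternarize(w))  # [ 1.  0.  0. -1.]
```

During training, such quantizers are typically applied in the forward pass while gradients are passed through to the latent full-precision weights via a straight-through estimator, which is what lets these networks train despite the non-differentiable quantization step.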