tipt0p / SparseBayesianRNN
☆16 · Updated 6 years ago
Alternatives and similar repositories for SparseBayesianRNN
Users interested in SparseBayesianRNN are comparing it to the libraries listed below.
- Sparse Recurrent Neural Networks -- Pruning Connections and Hidden Sizes (TensorFlow) ☆74 · Updated 5 years ago
- Quantize weights and activations in Recurrent Neural Networks. ☆94 · Updated 7 years ago
- Stochastic Adaptive Neural Architecture Search ☆65 · Updated 6 years ago
- Implementation of the ICLR 2017 paper "Loss-aware Binarization of Deep Networks" ☆18 · Updated 6 years ago
- Implementation of the NeurIPS 2019 paper "Normalization Helps Training of Quantized LSTM" ☆31 · Updated last year
- Implementation of the ICLR 2018 paper "Loss-aware Weight Quantization of Deep Networks" ☆26 · Updated 5 years ago
- Code for the paper "SWALP: Stochastic Weight Averaging for Low-Precision Training". ☆62 · Updated 6 years ago
- A tutorial on "Bayesian Compression for Deep Learning", published at NIPS 2017. ☆206 · Updated 6 years ago
- Compression of an NMT transformer model with tensor methods ☆48 · Updated 6 years ago
- Custom CUDA kernel for {2, 3}D relative attention with a PyTorch wrapper ☆43 · Updated 5 years ago
- Reducing the size of convolutional neural networks ☆111 · Updated 7 years ago
- ☆102 · Updated 7 years ago
- PyTorch implementation of WaveNet ☆93 · Updated 7 years ago
- Prunable nn layers for PyTorch. ☆48 · Updated 7 years ago
- Implementation of a Quantized Transformer model ☆19 · Updated 6 years ago
- Code for https://arxiv.org/abs/1810.04622 ☆140 · Updated 6 years ago
- Structured Bayesian Pruning, NIPS 2017 ☆74 · Updated 5 years ago
- PyProf2: PyTorch profiling tool ☆82 · Updated 5 years ago
- A PyTorch implementation of Scalpel: node pruning for five benchmark networks and SIMD-aware weight pruning for LeNet-300-100… ☆41 · Updated 6 years ago
- Training wide residual networks for deployment using a single bit for each weight -- Official Code Repository for ICLR 2018 Published Pape… ☆36 · Updated 5 years ago
- A PyTorch implementation of the paper "Skip RNN: Learning to Skip State Updates in Recurrent Neural Networks"