noahgolmant / pytorch-lr-dropout
"Learning Rate Dropout" in PyTorch
☆34 · Updated 5 years ago
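The repository implements the "Learning Rate Dropout" idea: at each optimization step, a random Bernoulli mask decides which parameter coordinates receive their update, so the remaining coordinates effectively get a zero learning rate for that step. A minimal sketch of that idea on plain SGD is shown below; this is not the repository's actual code, and the names `lr_dropout_step`, `lr`, and `keep_prob` are illustrative assumptions:

```python
import torch

def lr_dropout_step(params, lr=0.1, keep_prob=0.5):
    # Learning-rate-dropout sketch: sample a Bernoulli mask per coordinate
    # and apply the SGD update only where the mask is 1. Coordinates with
    # mask 0 keep a zero learning rate for this step.
    with torch.no_grad():
        for p in params:
            if p.grad is None:
                continue
            mask = torch.bernoulli(torch.full_like(p, keep_prob))
            p.add_(p.grad * mask, alpha=-lr)
```

With `keep_prob=1.0` this reduces to an ordinary SGD step, and with `keep_prob=0.0` no coordinate is updated; intermediate values randomize which coordinates move at each step.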
Alternatives and similar repositories for pytorch-lr-dropout:
Users interested in pytorch-lr-dropout are comparing it to the repositories listed below.
- Implementation of tools to control and monitor layer rotation in different DL libraries ☆40 · Updated 5 years ago
- Unofficial PyTorch implementation of ReZero in ResNet ☆23 · Updated 4 years ago
- Swish Activation - PyTorch CUDA Implementation ☆37 · Updated 5 years ago
- Exploiting Uncertainty of Loss Landscape for Stochastic Optimization ☆15 · Updated 5 years ago
- ☆34 · Updated 6 years ago
- ☆28 · Updated 4 years ago
- Stochastic Downsampling for Cost-Adjustable Inference and Improved Regularization in Convolutional Networks ☆18 · Updated 5 years ago
- Successfully training approximations to full-rank matrices for efficiency in deep learning ☆16 · Updated 4 years ago
- ☆33 · Updated 6 months ago
- diffGrad: An Optimization Method for Convolutional Neural Networks ☆55 · Updated 2 years ago
- ☆23 · Updated 6 years ago
- Simple experiment with Apex (A PyTorch Extension) ☆47 · Updated 5 years ago
- Implementation of Octave Convolution from "Drop an Octave: Reducing Spatial Redundancy in Convolutional Neural Networks with Octave Convol…" ☆56 · Updated 5 years ago
- Advanced optimizer with Gradient Centralization ☆21 · Updated 4 years ago
- Implementation of our ECCV '18 paper "Deep Expander Networks: Efficient Deep Networks from Graph Theory" ☆41 · Updated 5 years ago
- Filter Response Normalization tested on better ImageNet baselines ☆35 · Updated 4 years ago
- Various implementations and experiments in deep neural network model compression ☆24 · Updated 6 years ago
- Prunable nn layers for PyTorch ☆48 · Updated 6 years ago
- An attempt at a PyTorch implementation of "Zero-Shot" Super-Resolution Using Deep Internal Learning by Shocher et al., CVPR 2018 ☆13 · Updated 6 years ago
- Simple implementation of the LSUV initialization in PyTorch ☆58 · Updated last year
- Code for our paper "Regularity Normalization: Neuroscience-Inspired Unsupervised Attention across Neural Network Layers" ☆21 · Updated 3 years ago
- A technique to increase the receptive field of convolutional neural networks without losing as much information ☆30 · Updated 6 years ago
- Pretrained TorchVision models on the CIFAR-10 dataset (with weights) ☆24 · Updated 4 years ago
- A PyTorch implementation of https://arxiv.org/abs/1810.12348 ☆36 · Updated 5 years ago
- A PyTorch implementation of https://arxiv.org/abs/1904.09925 ☆16 · Updated 5 years ago
- Implementation of the paper "Attention Augmented Convolutional Networks" in TensorFlow (https://arxiv.org/pdf/1904.09925v1.pdf) ☆46 · Updated 5 years ago
- Code for reproducing the results of the paper "Layer rotation: a surprisingly powerful indicator of generalization in deep networks?" ☆50 · Updated 5 years ago
- ☆23 · Updated 4 years ago
- An implementation of Shampoo ☆74 · Updated 6 years ago
- ☆21 · Updated 4 years ago