noahgolmant / pytorch-lr-dropout
"Learning Rate Dropout" in PyTorch
☆34 · Updated 5 years ago
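Roughly, learning rate dropout randomly masks per-parameter updates at every optimization step (each coordinate keeps its update only with some probability) while the optimizer's gradient and momentum statistics are maintained as usual. The snippet below is a minimal sketch of that idea on top of plain SGD; the class name, the `keep_prob` argument, and the mask-after-the-step approach are illustrative assumptions, not this repository's actual API.

```python
import torch

class SGDWithLRDropout(torch.optim.SGD):
    """Minimal sketch of learning rate dropout on top of plain SGD:
    at every step, each parameter coordinate keeps its update with
    probability `keep_prob` and is otherwise left unchanged.
    Class name and `keep_prob` argument are illustrative, not this
    repository's API.
    """

    def __init__(self, params, lr, keep_prob=0.5, **kwargs):
        super().__init__(params, lr=lr, **kwargs)
        self.keep_prob = keep_prob

    def step(self, closure=None):
        params = [p for group in self.param_groups for p in group["params"]]
        with torch.no_grad():
            # Snapshot parameters before the vanilla SGD update.
            prev = [p.detach().clone() for p in params]
        loss = super().step(closure)
        with torch.no_grad():
            for p, old in zip(params, prev):
                # Bernoulli(keep_prob) mask over coordinates: dropped
                # coordinates are reset to their pre-update values,
                # while momentum buffers keep accumulating normally.
                mask = torch.bernoulli(torch.full_like(p, self.keep_prob))
                p.copy_(old + mask * (p - old))
        return loss

# Usage (hypothetical):
# opt = SGDWithLRDropout(model.parameters(), lr=0.1, momentum=0.9, keep_prob=0.5)
```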
Alternatives and similar repositories for pytorch-lr-dropout:
Users interested in pytorch-lr-dropout are comparing it to the libraries listed below.
- Exploiting Uncertainty of Loss Landscape for Stochastic Optimization ☆15 · Updated 5 years ago
- Implementation of tools to control and monitor layer rotation in different DL libraries ☆40 · Updated 5 years ago
- Unofficial PyTorch implementation of ReZero in ResNet ☆23 · Updated 5 years ago
- ☆34 · Updated 6 years ago
- Swish Activation - PyTorch CUDA Implementation ☆37 · Updated 5 years ago
- Stochastic Downsampling for Cost-Adjustable Inference and Improved Regularization in Convolutional Networks ☆18 · Updated 5 years ago
- Implementation of Octave Convolution from "Drop an Octave: Reducing Spatial Redundancy in Convolutional Neural Networks with Octave Convolution" ☆57 · Updated 5 years ago
- Simple implementation of the LSUV initialization in PyTorch ☆58 · Updated last year
- ☆28 · Updated 4 years ago
- ☆23 · Updated 6 years ago
- Implementation of our ECCV '18 paper "Deep Expander Networks: Efficient Deep Networks from Graph Theory" ☆41 · Updated 5 years ago
- Prunable nn layers for PyTorch ☆48 · Updated 6 years ago
- Filter Response Normalization tested on better ImageNet baselines ☆35 · Updated 5 years ago
- Various implementations and experiments for deep neural network model compression ☆24 · Updated 6 years ago
- A simple experiment with Apex (A PyTorch Extension) ☆47 · Updated 5 years ago
- Code for the paper "Regularity Normalization: Neuroscience-Inspired Unsupervised Attention across Neural Network Layers" ☆21 · Updated 3 years ago
- An implementation of Shampoo ☆74 · Updated 7 years ago
- ☆21 · Updated 5 years ago
- Successfully training approximations to full-rank matrices for efficiency in deep learning ☆17 · Updated 4 years ago
- Advanced optimizer with Gradient Centralization ☆21 · Updated 4 years ago
- PyTorch implementation of "EfficientNet", ICML 2019 ☆44 · Updated 5 years ago
- ☆11 · Updated 5 years ago
- Minimal API for receptive field calculation in PyTorch ☆67 · Updated 2 years ago
- Code for the paper ☆29 · Updated 5 years ago
- batchboost is a variation on MixUp that, instead of mixing just two images, mixes many images together (see the sketch after this list) ☆44 · Updated 5 years ago
- Unofficial PyTorch implementation of "Filter Response Normalization Layer: Eliminating Batch Dependence in the Training of Deep Neural Networks" ☆22 · Updated 5 years ago
- PyTorch implementation of Learning Rate Dropout ☆42 · Updated 5 years ago
- High-Level Training, Data Augmentation, and Utilities for PyTorch ☆13 · Updated 6 years ago
- Partially Adaptive Momentum Estimation method from the paper "Closing the Generalization Gap of Adaptive Gradient Methods in Training Deep Neural Networks" ☆39 · Updated last year
- Implementation of the Budgeted Super Networks ☆25 · Updated 6 years ago
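For the batchboost entry above, one rough way to illustrate mixing more than two images is to take Dirichlet-weighted convex combinations over an entire batch. This is a generic multi-image mixup sketch under that assumption, not batchboost's actual pairing and half-batch feeding scheme; the function name and `alpha` parameter are hypothetical.

```python
import torch

def multi_image_mixup(images, targets, alpha=0.2):
    """Mix every sample in a batch as a convex combination of the whole batch.

    Generic multi-image mixup sketch (illustrative only); batchboost's own
    pairing and feeding scheme differs.
    images:  (B, C, H, W) float tensor
    targets: (B, num_classes) one-hot / soft labels
    """
    b = images.size(0)
    # One row of Dirichlet weights per output sample: shape (B, B), rows sum to 1.
    weights = torch.distributions.Dirichlet(torch.full((b,), alpha)).sample((b,))
    # Each mixed image is a weighted sum over all images in the batch.
    mixed_images = torch.einsum("ij,jchw->ichw", weights, images)
    mixed_targets = weights @ targets
    return mixed_images, mixed_targets
```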