tbachlechner / ReZero-examples
PyTorch Examples repo for "ReZero is All You Need: Fast Convergence at Large Depth"
☆62 · Updated last year
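For context, the repo's namesake technique replaces each residual connection x + F(x) with x + α·F(x), where the per-layer scalar α is initialized to zero, so every block starts as the identity map and very deep networks converge quickly without normalization or careful initialization. Below is a minimal illustrative PyTorch sketch of that update, not code from this repository; the MLP sub-network, dimensions, and names are arbitrary placeholders.

```python
import torch
import torch.nn as nn


class ReZeroBlock(nn.Module):
    """Residual block gated by a learnable scalar alpha initialized to zero,
    so the block is the identity map at initialization (the ReZero idea)."""

    def __init__(self, dim: int, hidden: int = 256):
        super().__init__()
        # Any sub-network F works here; a small MLP keeps the sketch self-contained.
        self.fn = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, dim),
        )
        # Per-block scalar alpha, initialized to 0.
        self.alpha = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        # x_{i+1} = x_i + alpha_i * F(x_i)
        return x + self.alpha * self.fn(x)


if __name__ == "__main__":
    block = ReZeroBlock(dim=64)
    x = torch.randn(8, 64)
    out = block(x)
    print(out.shape)  # torch.Size([8, 64]); equals x at init since alpha == 0
```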
Alternatives and similar repositories for ReZero-examples
Users interested in ReZero-examples are comparing it to the repositories listed below
- [ICML 2020] code for "PowerNorm: Rethinking Batch Normalization in Transformers" (https://arxiv.org/abs/2003.07845) ☆120 · Updated 4 years ago
- A PyTorch implementation for the LSTM experiments in the paper: Why Gradient Clipping Accelerates Training: A Theoretical Justification f… ☆46 · Updated 5 years ago
- "Learning Rate Dropout" in PyTorch ☆34 · Updated 6 years ago
- ☆47 · Updated 4 years ago
- ☆61 · Updated 2 years ago
- MTAdam: Automatic Balancing of Multiple Training Loss Terms ☆36 · Updated 5 years ago
- ☆32 · Updated 6 years ago
- An implementation of Shampoo ☆78 · Updated 7 years ago
- Partially Adaptive Momentum Estimation method in the paper "Closing the Generalization Gap of Adaptive Gradient Methods in Training Deep …" ☆39 · Updated 2 years ago
- Implementation of soft parameter sharing for neural networks ☆70 · Updated 5 years ago
- Unsupervised Data Augmentation experiments in PyTorch ☆59 · Updated 6 years ago
- PyTorch implementation of the Hamburger module from the ICLR 2021 paper "Is Attention Better Than Matrix Decomposition?" ☆99 · Updated 4 years ago
- Code for "Multi-Head Attention: Collaborate Instead of Concatenate" ☆153 · Updated 2 years ago
- ☆34 · Updated 7 years ago
- [NeurIPS'19] [PyTorch] Adaptive Regularization in NN ☆68 · Updated 6 years ago
- [ICML 2020] code for the flooding regularizer proposed in "Do We Need Zero Training Loss After Achieving Zero Training Error?" ☆95 · Updated 3 years ago
- Notes from NeurIPS 2019 ☆29 · Updated 6 years ago
- Pretrained TorchVision models on the CIFAR10 dataset (with weights) ☆24 · Updated 5 years ago
- diffGrad: An Optimization Method for Convolutional Neural Networks ☆55 · Updated 3 years ago
- Implementation of the reversible residual network in PyTorch ☆106 · Updated 3 years ago
- "Layer-wise Adaptive Rate Scaling" in PyTorch ☆87 · Updated 4 years ago
- Code accompanying the paper "Normalized Attention Without Probability Cage" ☆17 · Updated 4 years ago
- PyTorch implementations of dropout variants ☆88 · Updated 7 years ago
- ☆25 · Updated last year
- Official code repository of the paper "Linear Transformers Are Secretly Fast Weight Programmers" ☆111 · Updated 4 years ago
- SSL using PyTorch ☆49 · Updated 5 years ago
- Loss Patterns of Neural Networks ☆86 · Updated 4 years ago
- Reparameterize your PyTorch modules ☆71 · Updated 4 years ago
- Implementation of Lie Transformer, Equivariant Self-Attention, in PyTorch ☆97 · Updated 4 years ago
- Implementations of quasi-hyperbolic optimization algorithms ☆102 · Updated 5 years ago