sIncerass / powernorm
[ICML 2020] code for "PowerNorm: Rethinking Batch Normalization in Transformers" https://arxiv.org/abs/2003.07845
☆120 · Updated 4 years ago
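For orientation, below is a minimal sketch of the idea behind PowerNorm: instead of normalizing by per-batch mean and variance as BatchNorm does, activations are scaled by a running estimate of their per-feature quadratic mean ("power"), with mean subtraction dropped. This is an illustrative sketch only, not the code in sIncerass/powernorm; it omits the paper's modified backward pass, and the class name and the `alpha`/`eps` defaults are assumptions.

```python
import torch
import torch.nn as nn


class PowerNormSketch(nn.Module):
    """Illustrative power-normalization layer (not the official sIncerass/powernorm code).

    Activations are scaled by a running estimate of their per-feature quadratic
    mean ("power") rather than per-batch mean/variance; mean subtraction is dropped.
    The paper also adjusts the backward pass, which this sketch does not reproduce.
    """

    def __init__(self, num_features: int, alpha: float = 0.9, eps: float = 1e-5):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(num_features))   # gamma
        self.bias = nn.Parameter(torch.zeros(num_features))    # beta
        self.register_buffer("running_power", torch.ones(num_features))
        self.alpha = alpha  # moving-average factor (assumed value)
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, num_features), as in a Transformer block
        if self.training:
            # Quadratic mean over every dimension except the feature dimension.
            power = x.pow(2).mean(dim=tuple(range(x.dim() - 1)))
            with torch.no_grad():
                self.running_power.mul_(self.alpha).add_((1 - self.alpha) * power.detach())
        else:
            power = self.running_power
        x_hat = x / torch.sqrt(power + self.eps)
        return self.weight * x_hat + self.bias
```

Usage is the same as any normalization layer, e.g. `pn = PowerNormSketch(512); y = pn(torch.randn(8, 128, 512))`.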
Alternatives and similar repositories for powernorm
Users interested in powernorm are comparing it to the libraries listed below.
- Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in Pytorch ☆119 · Updated 3 years ago
- Official PyTorch Implementation of Long-Short Transformer (NeurIPS 2021). ☆225 · Updated 3 years ago
- Sinkhorn Transformer - Practical implementation of Sparse Sinkhorn Attention ☆264 · Updated 3 years ago
- Code for the ICML'20 paper "Improving Transformer Optimization Through Better Initialization" ☆89 · Updated 4 years ago
- Fully featured implementation of Routing Transformer ☆295 · Updated 3 years ago
- Pytorch implementation of the hamburger module from the ICLR 2021 paper "Is Attention Better Than Matrix Decomposition" ☆99 · Updated 4 years ago
- Code for Multi-Head Attention: Collaborate Instead of Concatenate ☆152 · Updated 2 years ago
- Implementation of Fast Transformer in Pytorch ☆175 · Updated 3 years ago
- Official code repository of the paper Linear Transformers Are Secretly Fast Weight Programmers. ☆105 · Updated 4 years ago
- [ICML 2021 Oral] We show pure attention suffers rank collapse, and how different mechanisms combat it. ☆165 · Updated 4 years ago
- ☆47 · Updated 4 years ago
- PyTorch Examples repo for "ReZero is All You Need: Fast Convergence at Large Depth" ☆62 · Updated 11 months ago
- Implementation of Sparsemax activation in Pytorch ☆160 · Updated 5 years ago
- Parameter Efficient Transfer Learning with Diff Pruning ☆73 · Updated 4 years ago
- The official repository for our paper "The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers". We s… ☆67 · Updated 2 years ago
- Implementation of Feedback Transformer in Pytorch ☆107 · Updated 4 years ago
- MTAdam: Automatic Balancing of Multiple Training Loss Terms ☆36 · Updated 4 years ago
- An implementation of Transformer with Expire-Span, a circuit for learning which memories to retain ☆34 · Updated 4 years ago
- Implementation of Memformer, a Memory-augmented Transformer, in Pytorch ☆118 · Updated 4 years ago
- Official Pytorch Implementation of Length-Adaptive Transformer (ACL 2021) ☆101 · Updated 4 years ago
- Understanding the Difficulty of Training Transformers ☆329 · Updated 3 years ago
- ☆165 · Updated 6 years ago
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021) ☆61 · Updated 3 years ago
- Code for the paper PermuteFormer ☆42 · Updated 3 years ago
- Efficient reservoir sampling implementation for PyTorch ☆106 · Updated 3 years ago
- ☆62 · Updated 5 years ago
- Fast Discounted Cumulative Sums in PyTorch ☆96 · Updated 3 years ago
- Apollo: An Adaptive Parameter-wise Diagonal Quasi-Newton Method for Nonconvex Stochastic Optimization ☆183 · Updated 3 years ago
- Unofficial PyTorch implementation of Fastformer based on the paper "Fastformer: Additive Attention Can Be All You Need" ☆134 · Updated 3 years ago
- Implementation of OmniNet, Omnidirectional Representations from Transformers, in Pytorch ☆58 · Updated 4 years ago