DeadAt0m / adafactor-pytorch
A PyTorch implementation of Adafactor (https://arxiv.org/pdf/1804.04235.pdf)
☆24 · Updated 6 years ago
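Adafactor's central idea is to replace Adam's full element-wise second-moment matrix with per-row and per-column accumulators, cutting optimizer memory for an n×m weight from O(nm) to O(n+m). The following is a minimal sketch of that factored update for a single 2-D parameter, written against plain PyTorch; the function and variable names are illustrative and are not the API of this repository.

```python
import torch

def adafactor_step(param, grad, row_acc, col_acc, step,
                   lr=1e-3, decay=-0.8, eps=1e-30, clip_threshold=1.0):
    """One simplified Adafactor update for a 2-D parameter.

    Illustrative sketch, not the API of DeadAt0m/adafactor-pytorch.
    row_acc has shape (n,) and col_acc has shape (m,) for an (n, m)
    param; together they stand in for the full second-moment matrix.
    """
    beta2 = 1.0 - step ** decay  # increasing decay schedule from the paper
    sq = grad * grad + eps
    row_acc.mul_(beta2).add_(sq.mean(dim=1), alpha=1 - beta2)  # per-row means of grad^2
    col_acc.mul_(beta2).add_(sq.mean(dim=0), alpha=1 - beta2)  # per-column means of grad^2
    # Rank-1 reconstruction of the second-moment estimate
    v_hat = torch.outer(row_acc / row_acc.mean(), col_acc)
    update = grad / v_hat.sqrt()
    # Clip the update by its RMS so a single step cannot blow up
    rms = update.pow(2).mean().sqrt()
    update.div_((rms / clip_threshold).clamp(min=1.0))
    param.add_(update, alpha=-lr)

# Usage: one optimization step on a toy weight matrix
w = torch.randn(64, 32, requires_grad=True)
loss = (w ** 2).sum()
loss.backward()
with torch.no_grad():
    r, c = torch.zeros(64), torch.zeros(32)
    adafactor_step(w, w.grad, r, c, step=1)
```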
Alternatives and similar repositories for adafactor-pytorch
Users interested in adafactor-pytorch are comparing it to the libraries listed below.
- Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in Pytorch ☆119 · Updated 4 years ago
- Axial Positional Embedding for Pytorch ☆83 · Updated 6 months ago
- Implementing SYNTHESIZER: Rethinking Self-Attention in Transformer Models using Pytorch ☆70 · Updated 5 years ago
- Implementation of Fast Transformer in Pytorch ☆175 · Updated 4 years ago
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 ☆49 · Updated 3 years ago
- This repository contains the code for the paper in Findings of EMNLP 2021: "EfficientBERT: Progressively Searching Multilayer Perceptron …" ☆33 · Updated 2 years ago
- Implementation of OmniNet, Omnidirectional Representations from Transformers, in Pytorch ☆58 · Updated 4 years ago
- [NeurIPS 2020] Official Implementation: "SMYRF: Efficient Attention using Asymmetric Clustering" ☆50 · Updated last year
- Implementation of Mogrifier LSTM in PyTorch ☆34 · Updated 5 years ago
- Implementation of the retriever distillation procedure as outlined in the paper "Distilling Knowledge from Reader to Retriever" ☆32 · Updated 4 years ago
- Unofficial PyTorch implementation of the paper "cosFormer: Rethinking Softmax In Attention" ☆44 · Updated 3 years ago
- [EMNLP'19] Summary for Transformer Understanding ☆53 · Updated 5 years ago
- [ICML 2020] Code for "PowerNorm: Rethinking Batch Normalization in Transformers" https://arxiv.org/abs/2003.07845 ☆120 · Updated 4 years ago
- Pytorch implementation of the hamburger module from the ICLR 2021 paper "Is Attention Better Than Matrix Decomposition" ☆99 · Updated 4 years ago
- A PyTorch implementation of the paper "Synthesizer: Rethinking Self-Attention in Transformer Models" ☆73 · Updated 2 years ago
- Implementation of Multistream Transformers in Pytorch ☆54 · Updated 4 years ago
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021) ☆62 · Updated 3 years ago
- PyTorch Examples repo for "ReZero is All You Need: Fast Convergence at Large Depth" ☆61 · Updated last year
- Official Pytorch implementation for the paper "SUPER-ADAM: Faster and Universal Framework of Adaptive Gradients" ☆17 · Updated 3 years ago
- Implementation of RealFormer using Pytorch ☆101 · Updated 4 years ago
- A variant of Transformer-XL where the memory is updated not with a queue, but with attention ☆49 · Updated 5 years ago
- Implementation of "compositional attention" from MILA, a multi-head attention variant that is reframed as a two-step attention process wi… ☆51 · Updated 3 years ago
- Implementation of Memformer, a Memory-augmented Transformer, in Pytorch ☆120 · Updated 4 years ago
- Implementation of the Remixer Block from the Remixer paper, in Pytorch ☆36 · Updated 3 years ago
- Code for the paper "Query-Key Normalization for Transformers" ☆47 · Updated 4 years ago
- Standalone Product Key Memory module in Pytorch, for augmenting Transformer models ☆82 · Updated last year
- MTAdam: Automatic Balancing of Multiple Training Loss Terms ☆36 · Updated 4 years ago
- (ACL-IJCNLP 2021) Convolutions and Self-Attention: Re-interpreting Relative Positions in Pre-trained Language Models ☆21 · Updated 3 years ago
- Implements the SM3-II adaptive optimization algorithm for PyTorch ☆33 · Updated 11 months ago
- Official code repository of the paper "Linear Transformers Are Secretly Fast Weight Programmers" ☆105 · Updated 4 years ago