DeadAt0m / adafactor-pytorch
A PyTorch implementation of Adafactor (https://arxiv.org/pdf/1804.04235.pdf)
☆26 · Updated 6 years ago
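The listing above doesn't show this repository's API, so as orientation here is a minimal sketch of the paper's core idea: instead of storing a full matrix of squared-gradient statistics per weight matrix, Adafactor keeps only per-row and per-column exponential moving averages and reconstructs a rank-1 estimate of the second moment. The function name `adafactor_update`, its signature, and the hyperparameter defaults below are illustrative assumptions, not this repo's interface; the full algorithm also adds update clipping and a relative step size, omitted here.

```python
import torch

def adafactor_update(param, grad, row_acc, col_acc,
                     beta2=0.999, eps=1e-30, lr=1e-2):
    """One Adafactor-style step for a 2-D parameter of shape (n, m).

    Hypothetical helper, not this repo's API.
    row_acc: running row statistics, shape (n,)
    col_acc: running column statistics, shape (m,)
    """
    sq = grad * grad + eps
    # Factored second moment: per-row and per-column EMAs replace
    # the full (n, m) matrix of squared-gradient statistics.
    row_acc.mul_(beta2).add_(sq.mean(dim=1), alpha=1 - beta2)
    col_acc.mul_(beta2).add_(sq.mean(dim=0), alpha=1 - beta2)
    # Rank-1 reconstruction of the second-moment estimate,
    # normalized so the row and column scales agree.
    v = torch.outer(row_acc, col_acc) / row_acc.mean()
    param.add_(grad / v.sqrt(), alpha=-lr)
```

Called in a training loop after `loss.backward()`, with `row_acc` and `col_acc` initialized once per weight matrix as `torch.zeros(n)` and `torch.zeros(m)`; the memory saving over Adam is what makes the method attractive for large embedding and projection matrices.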
Alternatives and similar repositories for adafactor-pytorch
Users interested in adafactor-pytorch are comparing it to the libraries listed below:
- Implementation of Fast Transformer in Pytorch ☆176 · Updated 4 years ago
- Axial Positional Embedding for Pytorch ☆84 · Updated 11 months ago
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 ☆49 · Updated 3 years ago
- Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in Pytorch ☆120 · Updated 4 years ago
- Implementation of OmniNet, Omnidirectional Representations from Transformers, in Pytorch ☆59 · Updated 4 years ago
- Another attempt at a long-context / efficient transformer by me ☆38 · Updated 3 years ago
- Standalone Product Key Memory module in Pytorch - for augmenting Transformer models ☆87 · Updated 3 months ago
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021) ☆63 · Updated 3 years ago
- [EMNLP'19] Summary for Transformer Understanding ☆53 · Updated 6 years ago
- Implementing SYNTHESIZER: Rethinking Self-Attention in Transformer Models using Pytorch ☆71 · Updated 5 years ago
- Implementation of "compositional attention" from MILA, a multi-head attention variant that is reframed as a two-step attention process wi… ☆51 · Updated 3 years ago
- Implementation of the Remixer Block from the Remixer paper, in Pytorch ☆36 · Updated 4 years ago
- This repository contains the code for the paper in Findings of EMNLP 2021: "EfficientBERT: Progressively Searching Multilayer Perceptron … ☆33 · Updated 2 years ago
- Implementation of Multistream Transformers in Pytorch ☆54 · Updated 4 years ago
- A Pytorch implementation of Attention on Attention module (both self and guided variants), for Visual Question Answering ☆43 · Updated 5 years ago
- MTAdam: Automatic Balancing of Multiple Training Loss Terms ☆36 · Updated 5 years ago
- [ICML 2020] code for "PowerNorm: Rethinking Batch Normalization in Transformers" https://arxiv.org/abs/2003.07845 ☆120 · Updated 4 years ago
- Implementation of Token Shift GPT - An autoregressive model that solely relies on shifting the sequence space for mixing ☆49 · Updated 4 years ago
- Pytorch implementation of the hamburger module from the ICLR 2021 paper "Is Attention Better Than Matrix Decomposition" ☆99 · Updated 5 years ago
- [NeurIPS 2020] Official Implementation: "SMYRF: Efficient Attention using Asymmetric Clustering". ☆50 · Updated 2 years ago
- A variant of Transformer-XL where the memory is updated not with a queue, but with attention ☆49 · Updated 5 years ago
- Implementation of Memformer, a Memory-augmented Transformer, in Pytorch ☆126 · Updated 5 years ago
- Implementation of fused cosine similarity attention in the same style as Flash Attention ☆220 · Updated 2 years ago
- An open source implementation of CLIP. ☆33 · Updated 3 years ago
- Unofficial PyTorch implementation of the paper "cosFormer: Rethinking Softmax In Attention". ☆44 · Updated 4 years ago
- Graph neural network message passing reframed as a Transformer with local attention ☆70 · Updated 3 years ago
- Implementation of Hourglass Transformer, in Pytorch, from Google and OpenAI ☆98 · Updated 4 years ago
- A python library for highly configurable transformers - easing model architecture search and experimentation. ☆49 · Updated 4 years ago
- High performance pytorch modules ☆18 · Updated 3 years ago
- Implements the SM3-II adaptive optimization algorithm for PyTorch. ☆33 · Updated last year