DeadAt0m / adafactor-pytorch
A PyTorch implementation of Adafactor (https://arxiv.org/pdf/1804.04235.pdf)
☆24 · Updated 5 years ago
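Adafactor cuts optimizer memory by replacing Adam's full matrix of squared-gradient averages with per-row and per-column running averages whose outer product reconstructs the second moment. The sketch below illustrates that factored update for a single 2-D parameter; `factored_second_moment_update` is a hypothetical helper name, not this repository's API, and the sketch omits the paper's update clipping, time-dependent β2 decay, and relative step sizes.

```python
import torch

def factored_second_moment_update(R, C, grad, beta2=0.999, eps=1e-30):
    """One Adafactor-style factored second-moment step for a 2-D
    gradient `grad` of shape (n, m).

    Instead of Adam's full (n, m) matrix of squared-gradient averages,
    only a row vector R (n,) and a column vector C (m,) are stored;
    their outer product, rescaled by mean(R), approximates the full
    second moment. (Hypothetical helper, not this repo's API.)
    """
    sq = grad.pow(2) + eps                               # regularized squared gradient
    R.mul_(beta2).add_(sq.mean(dim=1), alpha=1 - beta2)  # per-row statistics
    C.mul_(beta2).add_(sq.mean(dim=0), alpha=1 - beta2)  # per-column statistics
    v_hat = torch.outer(R, C) / R.mean()                 # rank-1 reconstruction
    return grad / v_hat.sqrt()                           # preconditioned update

# Toy usage: one update step for a 4x3 weight matrix.
W = torch.randn(4, 3, requires_grad=True)
R, C = torch.zeros(4), torch.zeros(3)
loss = (W ** 2).sum()
loss.backward()
step = factored_second_moment_update(R, C, W.grad)
with torch.no_grad():
    W -= 1e-3 * step
```

The factored estimate stores n + m values per weight matrix instead of n·m, which is where Adafactor's memory saving comes from.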
Related projects:
- Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in PyTorch ☆116 · Updated 3 years ago
- Axial Positional Embedding for PyTorch ☆61 · Updated 3 years ago
- Implementation of the retriever distillation procedure as outlined in the paper "Distilling Knowledge from Reader to Retriever" ☆32 · Updated 3 years ago
- Implementation of Mogrifier LSTM in PyTorch ☆35 · Updated 4 years ago
- Unofficial PyTorch implementation of the paper "cosFormer: Rethinking Softmax in Attention" ☆43 · Updated 2 years ago
- A PyTorch implementation of the paper "Synthesizer: Rethinking Self-Attention in Transformer Models" ☆70 · Updated last year
- Code for Explicit Sparse Transformer ☆57 · Updated last year
- Implementation of "SYNTHESIZER: Rethinking Self-Attention in Transformer Models" using PyTorch ☆70 · Updated 4 years ago
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 ☆49 · Updated 2 years ago
- PyTorch implementation of "Pay Attention to MLPs" ☆39 · Updated 3 years ago
- Learning to Encode Position for Transformer with Continuous Dynamical Model ☆59 · Updated 4 years ago
- Code for the paper in Findings of EMNLP 2021: "EfficientBERT: Progressively Searching Multilayer Perceptron …" ☆32 · Updated last year
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021) ☆58 · Updated 2 years ago
- Code for the paper PermuteFormer ☆43 · Updated 2 years ago
- The accompanying code for "Memory-efficient Transformers via Top-k Attention" (Ankit Gupta, Guy Dar, Shaya Goodman, David Ciprut, Jonatha…) ☆58 · Updated 3 years ago
- MTAdam: Automatic Balancing of Multiple Training Loss Terms ☆34 · Updated 3 years ago
- (ACL-IJCNLP 2021) Convolutions and Self-Attention: Re-interpreting Relative Positions in Pre-trained Language Models ☆21 · Updated 2 years ago
- Implementation of COCO-LM, Correcting and Contrasting Text Sequences for Language Model Pretraining, in PyTorch ☆45 · Updated 3 years ago
- Implementation of the multi-branch attentive Transformer (MAT) ☆33 · Updated 4 years ago
- A variant of Transformer-XL where the memory is updated not with a queue, but with attention ☆45 · Updated 4 years ago
- Official PyTorch implementation of Time-aware Large Kernel (TaLK) Convolutions (ICML 2020) ☆29 · Updated 3 years ago
- Implementation of OmniNet, Omnidirectional Representations from Transformers, in PyTorch ☆53 · Updated 3 years ago
- Sparse Attention with Linear Units ☆17 · Updated 3 years ago
- Code for "Understanding and Improving Layer Normalization" ☆44 · Updated 4 years ago
- A Transformer-based, single-model, multi-scale VAE ☆53 · Updated 3 years ago
- PyTorch implementation of Performer from the paper "Rethinking Attention with Performers" ☆23 · Updated 3 years ago
- A PyTorch implementation of the Attention on Attention module (both self and guided variants) for Visual Question Answering ☆40 · Updated 3 years ago
- FlatNCE: A Novel Contrastive Representation Learning Objective ☆83 · Updated 2 years ago
- Code for the ICML'20 paper "Improving Transformer Optimization Through Better Initialization" ☆89 · Updated 3 years ago
- PyTorch examples repo for "ReZero is All You Need: Fast Convergence at Large Depth" ☆62 · Updated last month