Rick-McCoy / Reformer-pytorch
Implements Reformer: The Efficient Transformer in PyTorch.
☆85 · Updated 5 years ago
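Reformer's headline technique is LSH attention: queries and keys are grouped into buckets by angular locality-sensitive hashing so attention only needs to be computed within a bucket. A minimal sketch of the bucketing step, in NumPy for brevity (the function name and shapes are illustrative, not this repo's API):

```python
import numpy as np

def lsh_buckets(x, n_buckets, seed=0):
    """Angular LSH as described in the Reformer paper: project vectors
    with a random matrix R and take argmax over [xR; -xR] as the bucket
    id, so vectors with high cosine similarity tend to share a bucket."""
    rng = np.random.default_rng(seed)
    d = x.shape[-1]
    # a single random rotation; the paper uses multiple hash rounds
    # to reduce the chance that near neighbours land in different buckets
    R = rng.standard_normal((d, n_buckets // 2))
    xR = x @ R
    return np.argmax(np.concatenate([xR, -xR], axis=-1), axis=-1)

q = np.array([1.0, 0.0, 0.0, 0.0])
# scaling a vector does not change its direction, so both rows
# deterministically hash to the same bucket
buckets = lsh_buckets(np.stack([q, 2.0 * q]), n_buckets=8)
```

Within each bucket, attention is then computed as usual, cutting the quadratic cost of full attention down to roughly O(n log n) after sorting by bucket id.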
Alternatives and similar repositories for Reformer-pytorch:
Users interested in Reformer-pytorch are comparing it to the libraries listed below.
- Implementation of RealFormer using PyTorch ☆100 · Updated 4 years ago
- Cascaded Text Generation with Markov Transformers ☆129 · Updated 2 years ago
- A PyTorch implementation of the paper "Synthesizer: Rethinking Self-Attention in Transformer Models" ☆73 · Updated 2 years ago
- Encoding position with the word embeddings ☆83 · Updated 6 years ago
- Code for the ICML'20 paper "Improving Transformer Optimization Through Better Initialization" ☆88 · Updated 4 years ago
- Generative flow-based sequence-to-sequence toolkit written in Python ☆245 · Updated 5 years ago
- Implementation of Stochastic Beam Search using Fairseq ☆102 · Updated 5 years ago
- A variant of Transformer-XL where the memory is updated not with a queue but with attention ☆48 · Updated 4 years ago
- Implementation of Mixout with PyTorch ☆75 · Updated 2 years ago
- Code for the paper PermuteFormer ☆42 · Updated 3 years ago
- ☆63 · Updated 3 years ago
- ☆218 · Updated 4 years ago
- ☆83 · Updated 5 years ago
- Relative Positional Encoding for Transformers with Linear Complexity ☆63 · Updated 3 years ago
- Code for the paper "Understanding and Improving Layer Normalization" ☆46 · Updated 5 years ago
- Official PyTorch implementation of Length-Adaptive Transformer (ACL 2021) ☆101 · Updated 4 years ago
- The official code repository for MetricMT, a reward optimization method for NMT with learned metrics ☆25 · Updated 4 years ago
- Efficient Transformers for research, in PyTorch and TensorFlow, using locality-sensitive hashing ☆95 · Updated 5 years ago
- Transformers without Tears: Improving the Normalization of Self-Attention ☆131 · Updated 11 months ago
- A PyTorch implementation of the Reformer network (https://openreview.net/pdf?id=rkgNKkHtvB) ☆53 · Updated 2 years ago
- A simple module that consistently outperforms self-attention and the Transformer model on major NMT datasets, with SoTA performance ☆86 · Updated last year
- Implementation of the Sparsemax activation in PyTorch ☆159 · Updated 4 years ago
- Implementation of N-Grammer, augmenting Transformers with latent n-grams, in PyTorch ☆73 · Updated 2 years ago
- [EMNLP'19] Summary for Transformer Understanding ☆53 · Updated 5 years ago
- Adaptive Softmax implementation for PyTorch ☆80 · Updated 6 years ago
- Transformer with Untied Positional Encoding (TUPE). Code for the paper "Rethinking Positional Encoding in Language Pre-training". Improve exis… ☆251 · Updated 3 years ago
- Official code repository for the paper "Learning Associative Inference Using Fast Weight Memory" by Schlag et al. ☆28 · Updated 4 years ago
- Code for reversible recurrent neural networks ☆39 · Updated 6 years ago
- FairSeq repo with the Apollo optimizer ☆114 · Updated last year
- ☆13 · Updated 6 years ago