tnq177 / transformers_without_tears
Transformers without Tears: Improving the Normalization of Self-Attention
☆133 · Updated last year
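For context: the paper behind this repo (Nguyen & Salazar, 2019) proposes ScaleNorm, which replaces LayerNorm's per-feature gain and bias with a single learned scalar g, initialized to √d and applied to the l2-normalized activations. Below is a minimal sketch of that layer, assuming a PyTorch setting; it follows the paper's formula, but the class and parameter names are illustrative and not necessarily this repo's actual API.

```python
import torch
import torch.nn as nn

class ScaleNorm(nn.Module):
    """ScaleNorm (Nguyen & Salazar, 2019): one learned scalar gain g,
    so activations are projected onto a sphere of learned radius g."""

    def __init__(self, d_model: int, eps: float = 1e-5):
        super().__init__()
        # The paper initializes g to sqrt(d_model).
        self.g = nn.Parameter(torch.tensor(float(d_model) ** 0.5))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # L2-normalize along the feature dimension, then rescale by g.
        norm = x.norm(dim=-1, keepdim=True).clamp(min=self.eps)
        return self.g * x / norm
```

In the paper this layer sits in the pre-norm position, i.e. each sublayer computes x + Sublayer(ScaleNorm(x)), which the authors report stabilizes training in low-resource NMT.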
Alternatives and similar repositories for transformers_without_tears
Users interested in transformers_without_tears are comparing it to the repositories listed below.
- Official PyTorch implementation of Length-Adaptive Transformer (ACL 2021) ☆102 · Updated 4 years ago
- Cascaded Text Generation with Markov Transformers ☆129 · Updated 2 years ago
- FairSeq repo with Apollo optimizer ☆114 · Updated last year
- ☆219 · Updated 5 years ago
- This repository contains the code for running the character-level Sandwich Transformers from our ACL 2020 paper on Improving Transformer … ☆55 · Updated 4 years ago
- DisCo Transformer for Non-autoregressive MT ☆77 · Updated 3 years ago
- Implementation of Mixout with PyTorch ☆75 · Updated 2 years ago
- PyTorch implementation of BERT in "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding" ☆109 · Updated 6 years ago
- Code for the paper "Are Sixteen Heads Really Better than One?" ☆172 · Updated 5 years ago
- Implementation of the GBST block from the Charformer paper, in PyTorch ☆118 · Updated 4 years ago
- Code for the ICML'20 paper "Improving Transformer Optimization Through Better Initialization" ☆89 · Updated 4 years ago
- ☆32 · Updated 3 years ago
- ICLR 2019, Multilingual Neural Machine Translation with Knowledge Distillation ☆70 · Updated 4 years ago
- LaNMT: Latent-variable Non-autoregressive Neural Machine Translation with Deterministic Inference ☆79 · Updated 4 years ago
- ☆48 · Updated 5 years ago
- A PyTorch implementation of the Reformer network (https://openreview.net/pdf?id=rkgNKkHtvB) ☆53 · Updated 2 years ago
- Method to improve inference time for BERT. This is an implementation of the paper titled "PoWER-BERT: Accelerating BERT Inference via Pro…" ☆62 · Updated this week
- ☆44 · Updated 5 years ago
- Code for the Shortformer model, from the ACL 2021 paper by Ofir Press, Noah A. Smith, and Mike Lewis ☆147 · Updated 4 years ago
- Understanding the Difficulty of Training Transformers ☆330 · Updated 3 years ago
- On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines ☆137 · Updated 2 years ago
- Source code for the paper "Multilingual Neural Machine Translation with Soft Decoupled Encoding" ☆29 · Updated 4 years ago
- ☆62 · Updated 3 years ago
- A simple module that consistently outperforms self-attention and the Transformer model on major NMT datasets, with SoTA performance ☆85 · Updated 2 years ago
- Improving Neural Text Generation with Reinforcement Learning ☆22 · Updated 4 years ago
- The implementation of "Neural Machine Translation without Embeddings", NAACL 2021 ☆33 · Updated 4 years ago
- Source code of the paper "BP-Transformer: Modelling Long-Range Context via Binary Partitioning" ☆128 · Updated 4 years ago
- A PyTorch implementation of the Transformer in "Attention Is All You Need" ☆106 · Updated 4 years ago
- Zero: a neural machine translation system ☆153 · Updated 2 years ago
- Checking the interpretability of attention on text classification models ☆49 · Updated 6 years ago