layer6ai-labs / T-Fixup
Code for the ICML'20 paper "Improving Transformer Optimization Through Better Initialization"
☆89 · Updated 4 years ago
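The listing itself gives no detail on the method, so here is a minimal sketch of the initialization scheme the T-Fixup paper proposes, assuming a standard PyTorch encoder-decoder Transformer. The attribute names (`encoder_layers`, `src_embed`, `v_proj`, `ffn.w1`, and so on) are hypothetical placeholders rather than this repo's actual API; the scaling factors follow the paper's published scheme (0.67·N^(-1/4) on the encoder side, (9M)^(-1/4) on the decoder side).

```python
import torch
import torch.nn as nn

def t_fixup_init(model, d_model, num_encoder_layers, num_decoder_layers):
    """Sketch of T-Fixup initialization (Huang et al., ICML 2020).

    Assumes `model` exposes `encoder_layers` / `decoder_layers` lists and
    `src_embed` / `tgt_embed` embedding tables; these names are hypothetical
    and must be adapted to your own Transformer implementation.
    """
    # Step 1: Xavier-initialize all weight matrices, then re-initialize the
    # embedding tables as Gaussian with std d^{-1/2}, as the paper prescribes.
    for p in model.parameters():
        if p.dim() > 1:
            nn.init.xavier_uniform_(p)
    nn.init.normal_(model.src_embed.weight, mean=0.0, std=d_model ** -0.5)
    nn.init.normal_(model.tgt_embed.weight, mean=0.0, std=d_model ** -0.5)

    with torch.no_grad():
        # Step 2: scale encoder-side value/output projections, MLP weights,
        # and source embeddings by 0.67 * N^{-1/4} (N = encoder depth).
        enc_scale = 0.67 * num_encoder_layers ** -0.25
        model.src_embed.weight.mul_(enc_scale)
        for layer in model.encoder_layers:
            layer.self_attn.v_proj.weight.mul_(enc_scale)
            layer.self_attn.out_proj.weight.mul_(enc_scale)
            layer.ffn.w1.weight.mul_(enc_scale)
            layer.ffn.w2.weight.mul_(enc_scale)

        # Step 3: scale the corresponding decoder-side weights and target
        # embeddings by (9 * M)^{-1/4} (M = decoder depth).
        dec_scale = (9 * num_decoder_layers) ** -0.25
        model.tgt_embed.weight.mul_(dec_scale)
        for layer in model.decoder_layers:
            for attn in (layer.self_attn, layer.cross_attn):
                attn.v_proj.weight.mul_(dec_scale)
                attn.out_proj.weight.mul_(dec_scale)
            layer.ffn.w1.weight.mul_(dec_scale)
            layer.ffn.w2.weight.mul_(dec_scale)
```

With this scaling in place, the paper trains the Transformer without layer normalization and without a learning-rate warmup stage, which is the point of the method.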
Alternatives and similar repositories for T-Fixup
Users interested in T-Fixup are comparing it to the libraries listed below.
- DisCo Transformer for Non-autoregressive MT ☆77 · Updated 2 years ago
- LaNMT: Latent-variable Non-autoregressive Neural Machine Translation with Deterministic Inference ☆80 · Updated 3 years ago
- Cascaded Text Generation with Markov Transformers ☆129 · Updated 2 years ago
- Codes for "Understanding and Improving Transformer From a Multi-Particle Dynamic System Point of View" ☆148 · Updated 6 years ago
- Source code of paper "BP-Transformer: Modelling Long-Range Context via Binary Partitioning" ☆128 · Updated 4 years ago
- ☆218 · Updated 5 years ago
- This repository contains the code for running the character-level Sandwich Transformers from our ACL 2020 paper on Improving Transformer … ☆55 · Updated 4 years ago
- Facebook AI Research Sequence-to-Sequence Toolkit written in Python. ☆22 · Updated 2 years ago
- ☆32 · Updated 3 years ago
- A simple module consistently outperforms self-attention and Transformer model on main NMT datasets with SoTA performance. ☆85 · Updated last year
- Implementation of Stochastic Beam Search using Fairseq ☆104 · Updated 6 years ago
- FairSeq repo with Apollo optimizer ☆114 · Updated last year
- Understanding the Difficulty of Training Transformers ☆329 · Updated 3 years ago
- Code for the paper "Are Sixteen Heads Really Better than One?" ☆172 · Updated 5 years ago
- Code for "Understanding and Improving Layer Normalization" ☆46 · Updated 5 years ago
- ICLR 2019, Multilingual Neural Machine Translation with Knowledge Distillation ☆70 · Updated 4 years ago
- Implementation of the retriever distillation procedure as outlined in the paper "Distilling Knowledge from Reader to Retriever" ☆32 · Updated 4 years ago
- ☆119 · Updated 6 years ago
- Transformers without Tears: Improving the Normalization of Self-Attention ☆132 · Updated last year
- Generative Flow based Sequence-to-Sequence Toolkit written in Python. ☆245 · Updated 5 years ago
- Checking the interpretability of attention on text classification models ☆49 · Updated 5 years ago
- Transformer with Untied Positional Encoding (TUPE). Code of paper "Rethinking Positional Encoding in Language Pre-training". Improve exis… ☆251 · Updated 3 years ago
- Official PyTorch Implementation of Length-Adaptive Transformer (ACL 2021) ☆101 · Updated 4 years ago
- Adaptive Softmax implementation for PyTorch ☆81 · Updated 6 years ago
- ☆10 · Updated 5 years ago
- A PyTorch implementation of the paper "Synthesizer: Rethinking Self-Attention in Transformer Models" ☆73 · Updated 2 years ago
- ☆44 · Updated 4 years ago
- Language Model Baselines for PyTorch ☆42 · Updated 4 years ago
- Source code for "Efficient Training of BERT by Progressively Stacking" ☆112 · Updated 6 years ago
- Code for Multi-Head Attention: Collaborate Instead of Concatenate ☆152 · Updated 2 years ago