ictnlp / awesome-transformer
A collection of transformer guides, implementations, and variants.
☆104 Updated 5 years ago
Alternatives and similar repositories for awesome-transformer:
Users interested in awesome-transformer are comparing it to the libraries listed below.
- Source code for the ACL 2019 paper "Bridging the Gap between Training and Inference for Neural Machine Translation" ☆41 Updated 4 years ago
- DisCo Transformer for Non-autoregressive MT ☆77 Updated 2 years ago
- Transformers without Tears: Improving the Normalization of Self-Attention ☆131 Updated 10 months ago
- ICLR 2019: Multilingual Neural Machine Translation with Knowledge Distillation ☆70 Updated 4 years ago
- Worth-reading papers and related resources on the attention mechanism, the Transformer, and pretrained language models (PLMs) such as BERT ☆133 Updated 4 years ago
- A simple module that consistently outperforms self-attention and the Transformer model on main NMT datasets with SoTA performance ☆86 Updated last year
- Data and code used in the NAACL 2019 paper "Selective Attention for Context-aware Neural Machine Translation" ☆30 Updated 5 years ago
- This project attempts to maintain SOTA performance in machine translation ☆108 Updated 4 years ago
- Implementation of "Learning Deep Transformer Models for Machine Translation" ☆114 Updated 9 months ago
- Implementation of the ICLR 2020 paper "Revisiting Self-Training for Neural Sequence Generation" ☆46 Updated 2 years ago
- Implementation of "Glancing Transformer for Non-Autoregressive Neural Machine Translation" ☆137 Updated 2 years ago
- Cascaded Text Generation with Markov Transformers ☆129 Updated 2 years ago
- ☆119 Updated 6 years ago
- Deeply Supervised, Layer-wise Prediction-aware (DSLP) Transformer for Non-autoregressive Neural Machine Translation ☆43 Updated last year
- ☆93 Updated 3 years ago
- Code for the ACL 2021 paper "GLGE: A New General Language Generation Evaluation Benchmark" ☆58 Updated 2 years ago
- Unicoder model for understanding and generation ☆89 Updated last year
- ☆99 Updated 2 years ago
- Some good (maybe) papers about NMT (Neural Machine Translation) ☆84 Updated 5 years ago
- Understanding the Difficulty of Training Transformers ☆329 Updated 2 years ago
- [ACL'20] Highway Transformer: A Gated Transformer ☆32 Updated 3 years ago
- Code for the NeurIPS 2020 paper "Incorporating BERT into Parallel Sequence Decoding with Adapters" ☆32 Updated 2 years ago
- Source code to reproduce the results in the ACL 2019 paper "Syntactically Supervised Transformers for Faster Neural Machine Translation" ☆81 Updated 2 years ago
- Zero -- A neural machine translation system ☆150 Updated last year
- Document-Level Neural Machine Translation with Hierarchical Attention Networks ☆67 Updated 2 years ago
- Code for the RecAdam paper "Recall and Learn: Fine-tuning Deep Pretrained Language Models with Less Forgetting" ☆116 Updated 4 years ago
- Tracking the progress in non-autoregressive generation (translation, transcription, etc.) ☆307 Updated 2 years ago
- PyTorch implementation of "BERT and PALs: Projected Attention Layers for Efficient Adaptation in Multi-Task Learning" (https://arxiv.org/ab…) ☆82 Updated 5 years ago
- PyTorch implementation of "Non-Autoregressive Neural Machine Translation" ☆269 Updated 3 years ago
- Source code for "Efficient Training of BERT by Progressively Stacking" ☆112 Updated 5 years ago