ictnlp / awesome-transformer
A collection of Transformer guides, implementations, and variants.
☆105 · Updated 5 years ago
Alternatives and similar repositories for awesome-transformer
Users interested in awesome-transformer are comparing it to the libraries listed below.
- Worth-reading papers and related resources on the attention mechanism, Transformers, and pretrained language models (PLMs) such as BERT. ☆132 · Updated 4 years ago
- A PyTorch implementation of the Transformer from "Attention is All You Need" (see the attention sketch after this list). ☆106 · Updated 4 years ago
- Code for the paper "Are Sixteen Heads Really Better than One?" ☆172 · Updated 5 years ago
- ICLR 2019: Multilingual Neural Machine Translation with Knowledge Distillation. ☆70 · Updated 4 years ago
- This project aims to maintain state-of-the-art (SOTA) performance in machine translation. ☆108 · Updated 4 years ago
- A simple module that consistently outperforms self-attention and the Transformer model on major NMT datasets, with SOTA performance. ☆85 · Updated last year
- Unicoder model for understanding and generation. ☆91 · Updated last year
- DisCo Transformer for non-autoregressive MT. ☆77 · Updated 2 years ago
- Source code of the paper "BP-Transformer: Modelling Long-Range Context via Binary Partitioning". ☆128 · Updated 4 years ago
- PyTorch implementation of BERT from "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding". ☆107 · Updated 6 years ago
- Code for the RecAdam paper "Recall and Learn: Fine-tuning Deep Pretrained Language Models with Less Forgetting". ☆117 · Updated 4 years ago
- Source code for "Efficient Training of BERT by Progressively Stacking". ☆112 · Updated 6 years ago
- ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. ☆91 · Updated 3 years ago
- Source code to reproduce the results in the ACL 2019 paper "Syntactically Supervised Transformers for Faster Neural Machine Translation". ☆81 · Updated 2 years ago
- Code release for the arXiv paper "Revisiting Few-sample BERT Fine-tuning" (https://arxiv.org/abs/2006.05987). ☆184 · Updated 2 years ago
- Source code for the ACL 2019 paper "Bridging the Gap between Training and Inference for Neural Machine Translation". ☆41 · Updated 4 years ago
- A simple yet complete implementation of the popular BERT model. ☆127 · Updated 5 years ago
- Transformers without Tears: Improving the Normalization of Self-Attention. ☆132 · Updated last year
- [ACL '20] Highway Transformer: A Gated Transformer. ☆33 · Updated 3 years ago
- Visualization for simple attention and Google's multi-head attention. ☆67 · Updated 7 years ago
- Neutron: a PyTorch-based implementation of the Transformer and its variants. ☆63 · Updated last year
- PyTorch implementation of "BERT and PALs: Projected Attention Layers for Efficient Adaptation in Multi-Task Learning" (https://arxiv.org/ab… ☆82 · Updated 6 years ago
- Data and code used in the NAACL '19 paper "Selective Attention for Context-aware Neural Machine Translation". ☆30 · Updated 5 years ago
- Implementation of Dual Learning NMT in PyTorch. ☆163 · Updated 7 years ago
- Some good (maybe) papers about NMT (Neural Machine Translation). ☆85 · Updated 5 years ago
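
Several of the implementations listed above center on the scaled dot-product attention of "Attention is All You Need". For orientation, here is a minimal PyTorch sketch of that operation; the function name, tensor shapes, and mask convention are illustrative assumptions, not code taken from any repository above:

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v, mask=None):
    # q, k, v: (batch, heads, seq_len, d_k) tensors; mask: optional boolean
    # tensor, True at positions that must NOT be attended to (our convention).
    d_k = q.size(-1)
    # Similarity scores, scaled by sqrt(d_k) to keep the softmax well-behaved.
    scores = q @ k.transpose(-2, -1) / (d_k ** 0.5)
    if mask is not None:
        scores = scores.masked_fill(mask, float("-inf"))
    weights = F.softmax(scores, dim=-1)  # attention distribution over keys
    return weights @ v, weights          # context vectors and the weights

# Example: 8 heads, sequence length 10, head dimension 64.
q = k = v = torch.randn(1, 8, 10, 64)
context, weights = scaled_dot_product_attention(q, k, v)
print(context.shape, weights.shape)  # (1, 8, 10, 64) and (1, 8, 10, 10)
```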