TensorUI / relative-position-pytorch
A PyTorch implementation of self-attention with relative position representations
☆50 · Updated 4 years ago
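For context, the technique this repository implements (relative position representations in self-attention, in the style of Shaw et al. 2018) can be sketched roughly as below. This is a minimal single-head illustration, not the repository's actual code; the class and parameter names (`RelativeSelfAttention`, `max_relative_position`) are made up for this sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelativeSelfAttention(nn.Module):
    """Single-head self-attention with learned relative position embeddings.

    Illustrative sketch: relative distances between query and key positions
    are clipped to [-max_relative_position, max_relative_position], and each
    clipped distance gets its own learned embedding that is added into the
    attention logits alongside the usual content (q . k) term.
    """

    def __init__(self, d_model, max_relative_position=4):
        super().__init__()
        self.max_rel = max_relative_position
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        # one embedding per clipped relative distance, indices shifted to >= 0
        self.rel_k = nn.Embedding(2 * max_relative_position + 1, d_model)

    def forward(self, x):
        # x: (batch, seq_len, d_model)
        b, n, d = x.shape
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)

        # matrix of clipped relative distances between all position pairs
        pos = torch.arange(n, device=x.device)
        rel = (pos[None, :] - pos[:, None]).clamp(-self.max_rel, self.max_rel)
        a_k = self.rel_k(rel + self.max_rel)        # (n, n, d)

        # content term plus relative-position term in the attention logits
        scores = torch.matmul(q, k.transpose(-2, -1))           # (b, n, n)
        scores = scores + torch.einsum('bnd,nmd->bnm', q, a_k)  # add rel bias
        attn = F.softmax(scores / d ** 0.5, dim=-1)
        return torch.matmul(attn, v)                # (b, n, d)
```

Because the embedding is indexed by clipped pairwise distance rather than absolute position, the same weights generalize to sequence lengths not seen during training.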
Alternatives and similar repositories for relative-position-pytorch
Users interested in relative-position-pytorch are comparing it to the repositories listed below.
- ☆84 · Updated 6 years ago
- Code for the ACL 2020 paper "Jointly Masked Sequence-to-Sequence Model for Non-Autoregressive Neural Machine Translation" ☆39 · Updated 5 years ago
- Unicoder model for understanding and generation. ☆92 · Updated 2 years ago
- Neural Machine Translation with universal Visual Representation (ICLR 2020) ☆89 · Updated 5 years ago
- A PyTorch implementation of Google AI's BERT model, provided with Google's pre-trained models, examples, and utilities. ☆35 · Updated 6 years ago
- Implementation of the paper Tree Transformer ☆215 · Updated 5 years ago
- PyTorch implementation of a Seq2Seq model with attention and greedy search / beam search for neural machine translation ☆58 · Updated 4 years ago
- Multilingual Neural Machine Translation with Knowledge Distillation (ICLR 2019) ☆70 · Updated 5 years ago
- Research code for the ACL 2020 paper "Distilling Knowledge Learned in BERT for Text Generation" ☆129 · Updated 4 years ago
- Visualization for simple attention and Google's multi-head attention. ☆68 · Updated 7 years ago
- Re-implementation of "QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension" ☆120 · Updated 7 years ago
- Code for the paper "Multimodal Transformer Networks for End-to-End Video-Grounded Dialogue Systems" (ACL 2019) ☆100 · Updated 3 years ago
- The official Keras implementation of the ACL 2020 paper "Pre-train and Plug-in: Flexible Conditional Text Generation with Variational Auto-En…" ☆48 · Updated 3 years ago
- Densely Connected Graph Convolutional Networks for Graph-to-Sequence Learning (authors' MXNet implementation for the TACL 2019 paper) ☆78 · Updated 4 years ago
- Contrastive Attention Mechanism for Abstractive Text Summarization ☆40 · Updated 6 years ago
- Code for the RecAdam paper "Recall and Learn: Fine-tuning Deep Pretrained Language Models with Less Forgetting" ☆120 · Updated 5 years ago
- ☆94 · Updated 5 years ago
- Source code of the ACL 2020 paper "A Generative Model for Joint Natural Language Understanding and Generation" ☆32 · Updated last year
- Domain Adaptive Text Style Transfer (EMNLP 2019) ☆70 · Updated 6 years ago
- Code release for the arXiv paper "Revisiting Few-sample BERT Fine-tuning" (https://arxiv.org/abs/2006.05987) ☆185 · Updated 2 years ago
- ☆18 · Updated last year
- RoBERTa training for SQuAD ☆50 · Updated 5 years ago
- Source code for the ACL 2019 paper "Bridging the Gap between Training and Inference for Neural Machine Translation" ☆41 · Updated 5 years ago
- DisCo Transformer for non-autoregressive MT ☆77 · Updated 3 years ago
- ☆33 · Updated 5 years ago
- Code for the ACL 2020 paper "Character-Level Translation with Self-Attention" ☆31 · Updated 5 years ago
- Source code of the paper "BP-Transformer: Modelling Long-Range Context via Binary Partitioning" ☆127 · Updated 4 years ago
- A PyTorch implementation of the paper "Synthesizer: Rethinking Self-Attention in Transformer Models" ☆73 · Updated 3 years ago
- Code for the paper "Scheduled Sampling for Transformers" ☆28 · Updated 6 years ago
- ☆53 · Updated 4 years ago