CharizardAcademy / convtransformer
Code for the ACL 2020 paper "Character-Level Translation with Self-Attention"
☆31 · Updated 4 years ago
Alternatives and similar repositories for convtransformer
Users interested in convtransformer are comparing it to the repositories listed below.
- ☆22 · Updated 4 years ago
- DisCo Transformer for Non-autoregressive MT ☆77 · Updated 3 years ago
- A visualizer to display attention weights on text ☆23 · Updated 6 years ago
- Code for "Mixed Cross Entropy Loss for Neural Machine Translation" ☆20 · Updated 4 years ago
- Neural Machine Translation with universal Visual Representation (ICLR 2020) ☆90 · Updated 5 years ago
- How Does Selective Mechanism Improve Self-attention Networks? ☆29 · Updated 4 years ago
- Code for the EMNLP 2020 paper CoDIR ☆41 · Updated 3 years ago
- ☆53 · Updated 3 years ago
- Official code for the paper "PADA: Example-based Prompt Learning for on-the-fly Adaptation to Unseen Domains" ☆51 · Updated 3 years ago
- Official PyTorch implementation of Time-aware Large Kernel (TaLK) Convolutions (ICML 2020) ☆29 · Updated 4 years ago
- Code for the ACL 2020 paper "Jointly Masked Sequence-to-Sequence Model for Non-Autoregressive Neural Machine Translation" ☆39 · Updated 5 years ago
- ICLR 2019, Multilingual Neural Machine Translation with Knowledge Distillation ☆70 · Updated 5 years ago
- Source code for the NAACL 2021 paper "TR-BERT: Dynamic Token Reduction for Accelerating BERT Inference" ☆48 · Updated 3 years ago
- [EACL'21] Non-Autoregressive with Pretrained Language Model ☆62 · Updated 2 years ago
- Open-Retrieval Conversational Machine Reading: A new setting & OR-ShARC dataset ☆13 · Updated 2 years ago
- [NAACL'22] TaCL: Improving BERT Pre-training with Token-aware Contrastive Learning ☆93 · Updated 3 years ago
- A Transformer-based single-model, multi-scale VAE ☆57 · Updated 4 years ago
- For the paper "Gaussian Transformer: A Lightweight Approach for Natural Language Inference" ☆28 · Updated 5 years ago
- Source code for the EMNLP 2020 long paper "Token-level Adaptive Training for Neural Machine Translation" ☆20 · Updated 2 years ago
- Implementation of the retriever distillation procedure as outlined in the paper "Distilling Knowledge from Reader to Retriever" ☆32 · Updated 4 years ago
- Code for the paper "Adaptive Transformers for Learning Multimodal Representations" (ACL SRW 2020) ☆43 · Updated 2 years ago
- [ACL'20] Highway Transformer: A Gated Transformer ☆32 · Updated 3 years ago
- PyTorch version of VidLanKD: Improving Language Understanding via Video-Distilled Knowledge Transfer (NeurIPS 2021) ☆56 · Updated 2 years ago
- This repo provides the code for the ACL 2020 paper "Evidence-Aware Inferential Text Generation with Vector Quantised Variational AutoEnco…" ☆55 · Updated 4 years ago
- Code and data to accompany the camera-ready version of "Cross-Attention is All You Need: Adapting Pretrained Transformers for Machine Tra…" ☆32 · Updated 4 years ago
- Domain Adaptive Text Style Transfer, EMNLP 2019 ☆70 · Updated 5 years ago
- (ACL-IJCNLP 2021) Convolutions and Self-Attention: Re-interpreting Relative Positions in Pre-trained Language Models ☆21 · Updated 3 years ago
- Code for the paper "Improving Sequence-to-Sequence Learning via Optimal Transport" ☆68 · Updated 6 years ago
- TGLS: Unsupervised Text Generation by Learning from Search ☆25 · Updated 4 years ago
- Code for the NeurIPS 2020 paper "Incorporating BERT into Parallel Sequence Decoding with Adapters" ☆32 · Updated 2 years ago