facebookresearch / Mask-Predict
A masked language modeling objective to train a model to predict any subset of the target words, conditioned on both the input text and a partially masked target translation.
☆242 · Updated 3 years ago
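Mask-Predict pairs that training objective with an iterative parallel decoding scheme: generate every target token at once, then repeatedly re-mask the lowest-confidence positions and re-predict them conditioned on the rest. Below is a minimal sketch of that loop, assuming a hypothetical `model(src, tgt)` callable that returns per-position log-probabilities and a pre-chosen target length (the actual repository predicts the length and decodes several candidates); it is an illustration, not the repository's API:

```python
import torch

def mask_predict(model, src, tgt_len, mask_id, iterations=10):
    """Iterative mask-predict decoding (Ghazvininejad et al., 2019), sketched.

    `model(src, tgt)` is a hypothetical callable returning log-probabilities
    of shape (tgt_len, vocab); `tgt_len` is assumed given rather than predicted.
    """
    # Start from a fully masked target sequence.
    tgt = torch.full((tgt_len,), mask_id, dtype=torch.long)
    probs = torch.zeros(tgt_len)

    for t in range(iterations):
        masked = tgt.eq(mask_id)
        log_p = model(src, tgt)                 # (tgt_len, vocab)
        conf, pred = log_p.max(dim=-1)
        tgt[masked] = pred[masked]              # fill only the masked slots
        probs[masked] = conf[masked].exp()      # keep confidences of kept tokens

        # Linear schedule: re-mask the n lowest-confidence tokens,
        # with n shrinking to zero over the iteration budget.
        n = int(tgt_len * (1 - (t + 1) / iterations))
        if n <= 0:
            break
        remask = probs.topk(n, largest=False).indices
        tgt[remask] = mask_id
        probs[remask] = 0.0
    return tgt
```

The linear re-masking schedule (mask n = N * (T - t) / T tokens at iteration t) follows the paper; with `iterations=1` the loop collapses to single-pass non-autoregressive decoding.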
Alternatives and similar repositories for Mask-Predict:
Users interested in Mask-Predict are comparing it to the repositories listed below
- Tracking the progress in non-autoregressive generation (translation, transcription, etc.)☆307 · Updated 2 years ago
- PyTorch Implementation of "Non-Autoregressive Neural Machine Translation"☆269 · Updated 3 years ago
- Data and code for the ACL 2019 paper "When a Good Translation is Wrong in Context: ..." and the EMNLP 2019 …☆97 · Updated 4 years ago
- Neural Text Generation with Unlikelihood Training (a minimal loss sketch follows this list)☆309 · Updated 3 years ago
- Zero -- A neural machine translation system☆150 · Updated last year
- This project attempts to maintain the SOTA performance in machine translation☆108 · Updated 4 years ago
- Implementation of "Glancing Transformer for Non-Autoregressive Neural Machine Translation"☆137 · Updated 2 years ago
- Generative Flow based Sequence-to-Sequence Toolkit written in Python.☆245 · Updated 5 years ago
- ☆119 · Updated 6 years ago
- PyTorch implementation of "A Probabilistic Formulation of Unsupervised Text Style Transfer" by He et al., ICLR 2020☆163 · Updated 2 years ago
- Implementation of Dual Learning NMT on PyTorch☆163 · Updated 7 years ago
- LaNMT: Latent-variable Non-autoregressive Neural Machine Translation with Deterministic Inference☆79 · Updated 3 years ago
- Some good (maybe) papers about NMT (Neural Machine Translation).☆84 · Updated 5 years ago
- ☆363 · Updated 2 years ago
- DisCo Transformer for Non-autoregressive MT☆77 · Updated 2 years ago
- Code for our ACL 2021 paper "Neural Machine Translation with Monolingual Translation Memory"☆81 · Updated last year
- Code release for our arXiv paper "Revisiting Few-sample BERT Fine-tuning" (https://arxiv.org/abs/2006.05987).☆183 · Updated last year
- ☆120 · Updated 3 years ago
- Deeply Supervised, Layer-wise Prediction-aware (DSLP) Transformer for Non-autoregressive Neural Machine Translation☆43 · Updated last year
- Improving the Transformer translation model with document-level context☆170 · Updated 4 years ago
- Document-Level Neural Machine Translation with Hierarchical Attention Networks☆68 · Updated 2 years ago
- Code for NeurIPS 2020 "Incorporating BERT into Parallel Sequence Decoding with Adapters"☆32 · Updated 2 years ago
- Optimus: the first large-scale pre-trained VAE language model☆385 · Updated last year
- Implementation of the paper Tree Transformer☆214 · Updated 4 years ago
- ☆93 · Updated 3 years ago
- Data and code used in our NAACL'19 paper "Selective Attention for Context-aware Neural Machine Translation"☆30 · Updated 5 years ago
- Code for "Controllable Unsupervised Text Attribute Transfer via Editing Entangled Latent Representation" (NeurIPS 2019)☆127 · Updated 5 years ago
- ☆112 · Updated 3 years ago
- Source code to reproduce the results in the ACL 2019 paper "Syntactically Supervised Transformers for Faster Neural Machine Translation"☆81 · Updated 2 years ago
- Code for ACL 2020 "Jointly Masked Sequence-to-Sequence Model for Non-Autoregressive Neural Machine Translation"☆39 · Updated 4 years ago
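As referenced next to the Unlikelihood Training entry above, here is a minimal sketch of the token-level unlikelihood objective from that paper (Welleck et al., 2019): the standard NLL term plus a penalty pushing down the probability of previously seen tokens. The function below is illustrative, assuming per-sequence logits and using prior target tokens as the negative-candidate set; it is not code from the repository:

```python
import torch
import torch.nn.functional as F

def unlikelihood_loss(logits, targets, alpha=1.0):
    """Token-level unlikelihood training, sketched (not the repo's code).

    logits:  (seq_len, vocab) per-step model outputs for one sequence
    targets: (seq_len,) gold tokens
    Negative candidates at step t are the target tokens seen before t,
    so the model is explicitly penalized for repeating context.
    """
    log_p = F.log_softmax(logits, dim=-1)
    nll = F.nll_loss(log_p, targets)          # standard likelihood term
    p = log_p.exp()

    ul_terms = []
    for t in range(1, targets.size(0)):
        cands = targets[:t].unique()          # tokens seen so far
        cands = cands[cands != targets[t]]    # never penalize the gold token
        if cands.numel():
            # unlikelihood term: -log(1 - p(c)) for each candidate c
            ul_terms.append(-torch.log1p(-p[t, cands].clamp(max=1 - 1e-6)).sum())

    ul = torch.stack(ul_terms).mean() if ul_terms else logits.new_zeros(())
    return nll + alpha * ul
```

With `alpha=0` this reduces to ordinary maximum-likelihood training; the paper also describes a sequence-level variant applied to sampled continuations.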