microsoft / BANG
BANG is a new pretraining model to Bridge the gap between Autoregressive (AR) and Non-autoregressive (NAR) Generation. AR and NAR generation can be uniformly regarded as differing in the extent to which previously generated tokens can be attended to, and BANG bridges the two by designing a novel model structure for large-scale pretraining. The pretrained BANG mode…
☆28 · Updated 4 years ago
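The description above frames AR and NAR generation as two ends of one spectrum: how many previously generated target tokens a position is allowed to attend to. The sketch below only illustrates that view; it is not the official BANG code, and the `visibility_mask` helper and `gap` parameter are hypothetical names chosen here for illustration.

```python
# Minimal sketch (not the official BANG implementation): AR and NAR decoding
# seen as a single family of attention-visibility masks over previously
# generated target tokens. gap=1 gives the usual autoregressive (causal,
# self excluded) visibility; gap >= seq_len gives the fully NAR case where
# no previous target tokens are visible; intermediate gaps interpolate.
import torch

def visibility_mask(seq_len: int, gap: int) -> torch.Tensor:
    """Boolean (seq_len, seq_len) mask; True = position t may attend to position s."""
    t = torch.arange(seq_len).unsqueeze(1)  # target positions (rows)
    s = torch.arange(seq_len).unsqueeze(0)  # source positions (columns)
    # Position t only sees tokens generated at least `gap` steps earlier.
    return s <= (t - gap)

ar_mask   = visibility_mask(6, gap=1)  # autoregressive: all earlier tokens visible
nar_mask  = visibility_mask(6, gap=6)  # non-autoregressive: nothing visible
semi_mask = visibility_mask(6, gap=3)  # an in-between case bridging AR and NAR
```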
Alternatives and similar repositories for BANG
Users interested in BANG are comparing it to the libraries listed below.
- Code for the paper "A Theoretical Analysis of the Repetition Problem in Text Generation" in AAAI 2021. ☆57 · Updated 3 years ago
- ☆35 · Updated 3 years ago
- Pytorch implementation of paper "Efficient Nearest Neighbor Language Models" (EMNLP 2021) ☆75 · Updated 4 years ago
- Source code for paper: Knowledge Inheritance for Pre-trained Language Models ☆38 · Updated 3 years ago
- Few-shot NLP benchmark for unified, rigorous eval ☆93 · Updated 3 years ago
- ☆45 · Updated 4 years ago
- Performance Prediction for NLP Tasks ☆17 · Updated 5 years ago
- ReConsider is a re-ranking model that re-ranks the top-K (passage, answer-span) predictions of an Open-Domain QA Model like DPR (Karpukhi… ☆49 · Updated 4 years ago
- [EACL'21] Non-Autoregressive with Pretrained Language Model ☆61 · Updated 3 years ago
- ☆116 · Updated 3 years ago
- Code for ACL2021 paper: "GLGE: A New General Language Generation Evaluation Benchmark" ☆57 · Updated 3 years ago
- [NeurIPS 2021] COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining ☆118 · Updated 2 years ago
- ☆39 · Updated last year
- Source code for the EMNLP 2020 long paper <Token-level Adaptive Training for Neural Machine Translation> ☆20 · Updated 3 years ago
- Princeton NLP's pre-training library based on fairseq with DeepSpeed kernel integration 🚃 ☆116 · Updated 3 years ago
- Template Filling with Generative Transformers ☆23 · Updated 4 years ago
- Code for our EMNLP 2020 Paper "AIN: Fast and Accurate Sequence Labeling with Approximate Inference Network" ☆19 · Updated 3 years ago
- Unicoder model for understanding and generation. ☆92 · Updated 2 years ago
- DisCo Transformer for Non-autoregressive MT ☆77 · Updated 3 years ago
- [IJCAI'19] Code for "Self-attentive Biaffine Dependency Parsing" ☆16 · Updated 6 years ago
- [NAACL 2021] Factual Probing Is [MASK]: Learning vs. Learning to Recall https://arxiv.org/abs/2104.05240 ☆168 · Updated 3 years ago
- Paradigm shift in natural language processing ☆42 · Updated 3 years ago
- Code for "Mixed Cross Entropy Loss for Neural Machine Translation" ☆20 · Updated 4 years ago
- EMNLP 2021 - CTC: A Unified Framework for Evaluating Natural Language Generation ☆97 · Updated 2 years ago
- This repository contains the code for "How many data points is a prompt worth?" ☆48 · Updated 4 years ago
- ☆57 · Updated 3 years ago
- ☆17 · Updated 2 years ago
- ICLR 2021: Pre-Training for Context Representation in Conversational Semantic Parsing ☆31 · Updated 4 years ago
- Implementation of NeurIPS 20 paper: Latent Template Induction with Gumbel-CRFs ☆56 · Updated 5 years ago
- EMNLP 2021: Single-dataset Experts for Multi-dataset Question-Answering ☆68 · Updated 4 years ago