microsoft / BANG
BANG is a pretraining model that bridges the gap between autoregressive (AR) and non-autoregressive (NAR) generation. AR and NAR generation can be uniformly regarded as differing in the extent to which previous tokens can be attended to, and BANG bridges the two by designing a novel model structure for large-scale pretraining. The pretrained BANG mode…
★28, updated 2 years ago
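The "extent to which previous tokens can be attended to" can be made concrete with attention masks: a causal (lower-triangular) mask yields AR decoding, while a mask that hides all previous target tokens yields NAR decoding, with intermediate visibilities in between. The sketch below is only an illustration of that mask-based view, not BANG's actual implementation; the function name and parameters are hypothetical.

```python
import numpy as np

def attention_mask(seq_len: int, visible_prev: int) -> np.ndarray:
    """Boolean mask where position i may attend to at most `visible_prev`
    immediately preceding target tokens.

    visible_prev = seq_len - 1 recovers the AR (causal) mask;
    visible_prev = 0 gives a NAR-style mask with no previous-token visibility.
    """
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    for i in range(seq_len):
        k = min(i, visible_prev)          # how many earlier tokens position i sees
        mask[i, i - k:i] = True           # the k tokens immediately before i
    return mask

ar = attention_mask(4, visible_prev=3)    # full causal visibility (AR)
nar = attention_mask(4, visible_prev=0)   # no previous-token visibility (NAR)
```

Varying `visible_prev` between these two extremes is one way to interpolate between the AR and NAR regimes that BANG's pretraining is designed to cover.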
Related projects
Alternatives and complementary repositories for BANG
- Code for the ACL 2021 paper "GLGE: A New General Language Generation Evaluation Benchmark" (★58, updated 2 years ago)
- Source code for the paper "Knowledge Inheritance for Pre-trained Language Models" (★38, updated 2 years ago)
- Code for the AAAI 2021 paper "A Theoretical Analysis of the Repetition Problem in Text Generation" (★51, updated 2 years ago)
- [NeurIPS 2021] COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining (★118, updated last year)
- Code for the EMNLP 2020 paper CoDIR (★41, updated 2 years ago)
- [EACL'21] Non-Autoregressive with Pretrained Language Model (★62, updated 2 years ago)
- Code for "Mixed Cross Entropy Loss for Neural Machine Translation" (★20, updated 3 years ago)
- Paradigm shift in natural language processing (★42, updated 2 years ago)
- [NAACL'22 Findings] Dataset for "Retrieval-Augmented Multilingual Keyphrase Generation with Retriever-Generator Iterative Training" (★18, updated 2 years ago)
- Code for the EMNLP 2020 paper "AIN: Fast and Accurate Sequence Labeling with Approximate Inference Network" (★18, updated last year)
- Source code for the NAACL 2021 paper "TR-BERT: Dynamic Token Reduction for Accelerating BERT Inference" (★44, updated 2 years ago)
- [ICLR 2021] Pre-Training for Context Representation in Conversational Semantic Parsing (★30, updated 3 years ago)
- [ICLR 2022] Pretraining Text Encoders with Adversarial Mixture of Training Signal Generators (★24, updated last year)
- Lite Self-Training (★29, updated last year)
- [ACL 2022] Ditch the Gold Standard: Re-evaluating Conversational Question Answering (★45, updated 2 years ago)
- Unicoder model for understanding and generation (★88, updated 10 months ago)
- Code for "How many data points is a prompt worth?" (★49, updated 3 years ago)
- reStructured Pre-training (★97, updated last year)
- Code for the ACL 2021 paper "WARP 🌀: Word-level Adversarial ReProgramming", outperforming GPT-3 on SuperGLUE few-shot text classification. ht… (★83, updated 3 years ago)
- Implementation of the ICLR 2020 paper "Revisiting Self-Training for Neural Sequence Generation" (★47, updated 2 years ago)
- [EMNLP 2021] Single-dataset Experts for Multi-dataset Question-Answering (★70, updated 2 years ago)
- Source code for "A Simple but Effective Pluggable Entity Lookup Table for Pre-trained Language Models" (★43, updated last year)
- [EMNLP 2021] MuVER: Improving First-Stage Entity Retrieval with Multi-View Entity Representations (★31, updated 2 years ago)