microsoft / BANG
BANG is a pretraining model that bridges the gap between Autoregressive (AR) and Non-Autoregressive (NAR) generation. AR and NAR generation can be viewed uniformly in terms of the extent to which previous tokens can be attended to, and BANG bridges the two by designing a novel model structure for large-scale pretraining. The pretrained BANG mode…
☆28 · Updated 3 years ago
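The unifying view stated above — that AR and NAR decoding differ only in how many previous target tokens each position may attend to — can be sketched as attention-visibility masks. This is a minimal illustration of that idea, not BANG's actual cross-stream pretraining structure; the function name and mask convention are assumptions for illustration.

```python
import numpy as np

def attention_mask(seq_len: int, mode: str) -> np.ndarray:
    """Visibility mask over target tokens: entry (i, j) is 1 if position j
    is visible when predicting position i.

    "ar"  -> causal mask: position i attends to all preceding positions 0..i-1.
    "nar" -> fully non-autoregressive: no target token is visible, so every
             position is predicted independently (in parallel).
    """
    if mode == "ar":
        # Strictly lower-triangular: only tokens before i are attended.
        return np.tril(np.ones((seq_len, seq_len), dtype=int), k=-1)
    if mode == "nar":
        # No visibility into the target sequence at all.
        return np.zeros((seq_len, seq_len), dtype=int)
    raise ValueError(f"unknown mode: {mode!r}")
```

Intermediate regimes (e.g. semi-NAR decoding) fall between these two extremes, exposing some but not all previous tokens — which is the spectrum the BANG description refers to.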
Alternatives and similar repositories for BANG
Users interested in BANG are comparing it to the libraries listed below.
- Source code for paper: Knowledge Inheritance for Pre-trained Language Models ☆38 · Updated 3 years ago
- Code for the paper "A Theoretical Analysis of the Repetition Problem in Text Generation" in AAAI 2021. ☆54 · Updated 2 years ago
- Few-shot NLP benchmark for unified, rigorous eval ☆92 · Updated 3 years ago
- [EACL'21] Non-Autoregressive with Pretrained Language Model ☆62 · Updated 2 years ago
- Pytorch implementation of paper "Efficient Nearest Neighbor Language Models" (EMNLP 2021) ☆73 · Updated 3 years ago
- Paradigm shift in natural language processing ☆42 · Updated 3 years ago
- ☆45 · Updated 3 years ago
- Princeton NLP's pre-training library based on fairseq with DeepSpeed kernel integration 🚃 ☆114 · Updated 2 years ago
- EMNLP 2021 - CTC: A Unified Framework for Evaluating Natural Language Generation ☆98 · Updated 2 years ago
- [NeurIPS 2021] COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining ☆117 · Updated 2 years ago
- ☆117 · Updated 3 years ago
- Implementation of our paper "Self-training Sampling with Monolingual Data Uncertainty for Neural Machine Translation" to appear in ACL-20… ☆31 · Updated 4 years ago
- ☆34 · Updated 3 years ago
- Repo for ICML23 "Why do Nearest Neighbor Language Models Work?" ☆58 · Updated 2 years ago
- This repository contains the code for "How many data points is a prompt worth?" ☆48 · Updated 4 years ago
- This repository is the official implementation of our EMNLP 2022 paper ELMER: A Non-Autoregressive Pre-trained Language Model for Efficie… ☆26 · Updated 2 years ago
- Pytorch implementation of models described in "Grounded compositional outputs for adaptive language modeling", EMNLP 2020. ☆18 · Updated 3 years ago
- DisCo Transformer for Non-autoregressive MT ☆77 · Updated 3 years ago
- A method for evaluating the high-level coherence of machine-generated texts. Identifies high-level coherence issues in transformer-based … ☆11 · Updated 2 years ago
- ☆67 · Updated 3 years ago
- ☆38 · Updated last year
- Code for ACL2021 paper: "GLGE: A New General Language Generation Evaluation Benchmark" ☆57 · Updated 2 years ago
- Code for "Mixed Cross Entropy Loss for Neural Machine Translation" ☆20 · Updated 4 years ago
- Code for EMNLP 2020 paper CoDIR ☆41 · Updated 2 years ago
- EMNLP 2021: Single-dataset Experts for Multi-dataset Question-Answering ☆69 · Updated 3 years ago
- [NAACL'22] TaCL: Improving BERT Pre-training with Token-aware Contrastive Learning ☆93 · Updated 3 years ago
- Template Filling with Generative Transformers ☆22 · Updated 4 years ago
- [NeurIPS 2022] MorphTE: Injecting Morphology in Tensorized Embeddings ☆17 · Updated 2 years ago
- ENGINE: Energy-Based Inference Networks for Non-Autoregressive Machine Translation ☆25 · Updated 4 years ago
- ☆41 · Updated 4 years ago