dinghanshen / Cutoff
Source code for the Cutoff data augmentation approach proposed in the paper "A Simple but Tough-to-Beat Data Augmentation Approach for Natural Language Understanding and Generation".
Related projects:
- [ICLR 2021] Contrastive Learning with Adversarial Perturbations for Conditional Text Generation
- [NAACL'22] TaCL: Improving BERT Pre-training with Token-aware Contrastive Learning
- The source code of the DR-BERT model and baselines
- Code and models for the paper "End-to-End Training of Multi-Document Reader and Retriever for Open-Domain Question Answering" (NeurIPS 20…
- Official repository for "PAIR: Planning and Iterative Refinement in Pre-trained Transformers for Long Text Generation"
- Code for our ACL 2021 paper: Language Model as an Annotator: Exploring DialoGPT for Dialogue Summarization
- [ACL 2022] Ditch the Gold Standard: Re-evaluating Conversational Question Answering
- Code for ACL 2021 main conference paper "Conversations Are Not Flat: Modeling the Dynamic Information Flow across Dialogue Utterances"
- Code for our ACL 2021 paper Neural Machine Translation with Monolingual Translation Memory
- Code for the ACL 2022 paper "Contextual Representation Learning beyond Masked Language Modeling"
- Zero-shot dialogue state tracking (DST)
- Sequence-Level Mixed Sample Data Augmentation
- An original implementation of "Noisy Channel Language Model Prompting for Few-Shot Text Classification"
- Code and data accompanying our ACL 2020 paper, "Unsupervised Domain Clusters in Pretrained Language Models"
- Deeply Supervised, Layer-wise Prediction-aware (DSLP) Transformer for Non-autoregressive Neural Machine Translation
- [EACL'21] Non-Autoregressive with Pretrained Language Model
- Code for our papers "Leveraging Document-Level Label Consistency for Named Entity Recognition" and "Uncertainty-Aware Sequence Labeling"
- Code for ACL 2021 paper: "GLGE: A New General Language Generation Evaluation Benchmark"
- Code for ACL 2021 paper: Accelerating BERT Inference for Sequence Labeling via Early-Exit
- Lexically constrained text generation with CBART
- Code for KE-Blender, EMNLP 2021
- BERT for CoQA based on PyTorch