zhenwang9102 / coherence-boosting
Coherence boosting: When your pretrained language model is not paying enough attention (ACL 2022) https://arxiv.org/abs/2110.08294
☆15 · Updated 2 years ago
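The paper behind this repository scores next-token predictions by contrasting the model conditioned on the full context against a "premature" model that sees only a truncated context, which up-weights tokens supported by long-range context. A minimal sketch of that log-linear combination (the function name, inputs, and α value here are illustrative assumptions, not taken from the repository's code):

```python
def coherence_boost(full_logprobs, short_logprobs, alpha=0.5):
    """Log-linearly combine per-token log-probabilities.

    full_logprobs:  log P(token | full context), one entry per candidate token
    short_logprobs: log P(token | truncated context) from the premature model
    alpha > 0 amplifies evidence that is only present in the long context.
    (Illustrative sketch; names and alpha are assumptions, not the repo's API.)
    """
    return [(1 + alpha) * f - alpha * s
            for f, s in zip(full_logprobs, short_logprobs)]

# A token that the full context favors gains relative to one the
# short context already predicted well.
full = [-1.0, -2.3]   # hypothetical log-probs under the full context
short = [-2.0, -1.1]  # hypothetical log-probs under the truncated context
boosted = coherence_boost(full, short, alpha=1.0)
```

In practice the two sets of log-probabilities would come from the same pretrained model run twice, once on the full prompt and once on a shortened suffix of it, with the boosted scores renormalized before sampling or ranking.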
Alternatives and similar repositories for coherence-boosting
Users interested in coherence-boosting are comparing it to the repositories listed below.
- This repository accompanies our paper “Do Prompt-Based Models Really Understand the Meaning of Their Prompts?” ☆85 · Updated 3 years ago
- Code for Editing Factual Knowledge in Language Models ☆141 · Updated 3 years ago
- ☆46 · Updated last year
- NAACL 2022: Can Rationalization Improve Robustness? https://arxiv.org/abs/2204.11790 ☆27 · Updated 2 years ago
- ☆58 · Updated 3 years ago
- A unified approach to explain conditional text generation models. Pytorch. The code of paper "Local Explanation of Dialogue Response Gene… ☆16 · Updated 3 years ago
- Code for ACL'2021 paper WARP 🌀 Word-level Adversarial ReProgramming. Outperforming `GPT-3` on SuperGLUE Few-Shot text classification. ht… ☆83 · Updated 4 years ago
- TBC ☆27 · Updated 2 years ago
- ☆92 · Updated 3 years ago
- ☆35 · Updated 3 years ago
- Detect hallucinated tokens for conditional sequence generation. ☆64 · Updated 3 years ago
- Code for the ACL 2022 paper "Continual Sequence Generation with Adaptive Compositional Modules" ☆39 · Updated 3 years ago
- ☆88 · Updated 3 years ago
- Resources for Retrieval Augmentation for Commonsense Reasoning: A Unified Approach. EMNLP 2022. ☆22 · Updated 2 years ago
- The code for our ACL'22 paper: PRBOOST: Prompt-Based Rule Discovery and Boosting for Interactive Weakly-Supervised Learning. ☆35 · Updated 3 years ago
- [NeurIPS 2022] Generating Training Data with Language Models: Towards Zero-Shot Language Understanding ☆69 · Updated 3 years ago
- This repository contains the code for "How many data points is a prompt worth?" ☆48 · Updated 4 years ago
- This is the official repository for "Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts" (EMNLP 2022) ☆102 · Updated 2 years ago
- Findings of ACL'2023: Optimizing Test-Time Query Representations for Dense Retrieval ☆30 · Updated last year
- The contrastive token loss function for reducing generative repetition of autoregressive neural language models. ☆13 · Updated 3 years ago
- This repository contains the code for extracting the test samples we used in our paper: "A Multitask, Multilingual, Multimodal Evaluatio… ☆78 · Updated last year
- Code for paper "CrossFit: A Few-shot Learning Challenge for Cross-task Generalization in NLP" (https://arxiv.org/abs/2104.08835) ☆112 · Updated 3 years ago
- ☆117 · Updated 3 years ago
- Token-level Reference-free Hallucination Detection ☆96 · Updated 2 years ago
- Code associated with the ACL 2021 DExperts paper ☆117 · Updated 2 years ago
- Code associated with the paper: "Few-Shot Self-Rationalization with Natural Language Prompts" ☆13 · Updated 3 years ago
- DEMix Layers for Modular Language Modeling ☆54 · Updated 4 years ago
- [ICLR 2023] Code for our paper "Selective Annotation Makes Language Models Better Few-Shot Learners" ☆111 · Updated 2 years ago
- Official Code for the papers: "Controlled Text Generation as Continuous Optimization with Multiple Constraints" and "Gradient-based Const… ☆63 · Updated last year
- ☆23 · Updated 2 years ago