zhenwang9102 / coherence-boosting
Coherence boosting: When your pretrained language model is not paying enough attention (ACL 2022) https://arxiv.org/abs/2110.08294
☆15 · Updated 2 years ago
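The repository implements coherence boosting, which contrasts the model's next-token logits under the full context with those under a shortened ("premature") context, amplifying tokens that depend on long-range information. A minimal sketch of that idea follows; the `alpha` value and the toy distributions are illustrative assumptions, not values from the paper.

```python
import numpy as np

def coherence_boosted_logits(logits_full, logits_short, alpha=0.5):
    """Contrast full-context logits against short-context logits.

    Boosting the full-context scores by their difference from the
    short-context scores down-weights tokens that are likely even
    without the long-range context. `alpha` here is an illustrative
    default; the strength of the boost is a tunable hyperparameter.
    """
    logits_full = np.asarray(logits_full, dtype=float)
    logits_short = np.asarray(logits_short, dtype=float)
    return (1.0 + alpha) * logits_full - alpha * logits_short

# Toy example over a 2-token vocabulary: token 0 becomes likely only
# once the full context is visible, so boosting should prefer it.
full = np.log(np.array([0.6, 0.4]))   # p(token | full context)
short = np.log(np.array([0.4, 0.6]))  # p(token | truncated context)
boosted = coherence_boosted_logits(full, short, alpha=0.5)
print(int(boosted.argmax()))  # token 0, favored by the full context, wins
```

With `alpha=0`, the function reduces to ordinary full-context decoding, which makes it easy to ablate the boost.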
Alternatives and similar repositories for coherence-boosting
Users interested in coherence-boosting are comparing it to the repositories listed below.
- This repository accompanies our paper “Do Prompt-Based Models Really Understand the Meaning of Their Prompts?” (☆85, updated 3 years ago)
- Code for Editing Factual Knowledge in Language Models (☆142, updated 3 years ago)
- Detect hallucinated tokens for conditional sequence generation (☆64, updated 3 years ago)
- NAACL 2022: Can Rationalization Improve Robustness? https://arxiv.org/abs/2204.11790 (☆27, updated 3 years ago)
- Code for the ACL 2022 paper "Continual Sequence Generation with Adaptive Compositional Modules" (☆39, updated 3 years ago)
- A unified approach to explaining conditional text generation models, in PyTorch. Code for the paper "Local Explanation of Dialogue Response Gene…" (☆16, updated 3 years ago)
- Code for the paper "CrossFit: A Few-shot Learning Challenge for Cross-task Generalization in NLP" (https://arxiv.org/abs/2104.08835) (☆113, updated 3 years ago)
- [NeurIPS 2022] Generating Training Data with Language Models: Towards Zero-Shot Language Understanding (☆69, updated 3 years ago)
- The Multitask Long Document Benchmark (☆41, updated 3 years ago)
- TBC (☆27, updated 3 years ago)
- Code for our paper "GrIPS: Gradient-free, Edit-based Instruction Search for Prompting Large Language Models" (☆57, updated 2 years ago)
- This repository contains the code for "How many data points is a prompt worth?" (☆48, updated 4 years ago)
- Token-level Reference-free Hallucination Detection (☆96, updated 2 years ago)
- Code associated with the paper "Few-Shot Self-Rationalization with Natural Language Prompts" (☆13, updated 3 years ago)
- The official repository for "Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts" (EMNLP 2022) (☆103, updated 2 years ago)
- Resources for "Retrieval Augmentation for Commonsense Reasoning: A Unified Approach" (EMNLP 2022) (☆23, updated 2 years ago)
- DEMix Layers for Modular Language Modeling (☆54, updated 4 years ago)
- Code associated with the ACL 2021 DExperts paper (☆118, updated 2 years ago)
- [EMNLP 2021] Frustratingly Simple Pretraining Alternatives to Masked Language Modeling (☆34, updated 4 years ago)
- Code for lifelong few-shot language learning (☆55, updated 3 years ago)