JunShern / few-shot-adaptation
Exploring Few-Shot Adaptation of Language Models with Tables
☆24 · Updated 3 years ago
Alternatives and similar repositories for few-shot-adaptation
Users interested in few-shot-adaptation are comparing it to the libraries listed below:
- Suite of 500 procedurally-generated NLP tasks to study language model adaptability ☆21 · Updated 3 years ago
- Few-shot NLP benchmark for unified, rigorous eval ☆92 · Updated 3 years ago
- ☆29 · Updated 3 years ago
- ☆49 · Updated 2 years ago
- Official codebase accompanying our ACL 2022 paper "RELiC: Retrieving Evidence for Literary Claims" (https://relic.cs.umass.edu). ☆20 · Updated 3 years ago
- Code and pre-trained models for "ReasonBert: Pre-trained to Reason with Distant Supervision", EMNLP 2021 ☆29 · Updated 2 years ago
- Code for Massive-scale Decoding for Text Generation using Lattices ☆44 · Updated 3 years ago
- DEMix Layers for Modular Language Modeling ☆54 · Updated 4 years ago
- Rationales for Sequential Predictions ☆40 · Updated 3 years ago
- ☆22 · Updated 4 years ago
- ☆50 · Updated 3 years ago
- Explicit Alignment Objectives for Multilingual Bidirectional Encoders ☆14 · Updated 4 years ago
- Code for the paper "Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs" ☆28 · Updated 3 years ago
- ☆45 · Updated 3 years ago
- ☆97 · Updated 2 years ago
- EMNLP 2021 - Adapting Language Models for Zero-shot Learning by Meta-tuning on Dataset and Prompt Collections ☆50 · Updated 4 years ago
- Pretraining summarization models using a corpus of nonsense ☆13 · Updated 3 years ago
- EMNLP 2021 - CTC: A Unified Framework for Evaluating Natural Language Generation ☆98 · Updated 2 years ago
- Repository for the Question Answering via Sentence Composition (QASC) dataset ☆56 · Updated 2 years ago
- Code for Dissecting Generation Modes for Abstractive Summarization Models via Ablation and Attribution (ACL 2021) ☆13 · Updated 4 years ago
- Source code repo for the paper "TLDR: Token Loss Dynamic Reweighting for Reducing Repetitive Utterance Generation" ☆10 · Updated 2 years ago
- ☆38 · Updated 4 years ago
- ☆62 · Updated 3 years ago
- ReConsider is a re-ranking model that re-ranks the top-K (passage, answer-span) predictions of an Open-Domain QA Model like DPR (Karpukhi… ☆49 · Updated 4 years ago
- Source code for the paper "Learning from Explanations with Neural Execution Tree", ICLR 2020 ☆18 · Updated 4 years ago
- EMNLP 2021 - Frustratingly Simple Pretraining Alternatives to Masked Language Modeling ☆34 · Updated 3 years ago
- Code for our ACL '20 paper "Representation Engineering with Natural Language Explanations" ☆29 · Updated 5 years ago
- M2D2: A Massively Multi-domain Language Modeling Dataset (EMNLP 2022) by Machel Reid, Victor Zhong, Suchin Gururangan, Luke Zettlemoyer ☆54 · Updated 2 years ago
- ☆48 · Updated 5 years ago
- ENGINE: Energy-Based Inference Networks for Non-Autoregressive Machine Translation ☆25 · Updated 4 years ago