JunShern / few-shot-adaptation
Exploring Few-Shot Adaptation of Language Models with Tables
☆23 · Updated 2 years ago
Alternatives and similar repositories for few-shot-adaptation:
Users interested in few-shot-adaptation are comparing it to the repositories listed below.
- Code and pre-trained models for "ReasonBert: Pre-trained to Reason with Distant Supervision", EMNLP 2021 ☆29 · Updated last year
- ☆22 · Updated 3 years ago
- ☆28 · Updated 2 years ago
- ☆48 · Updated last year
- Few-shot Learning with Auxiliary Data ☆26 · Updated last year
- Code for the paper "UnNatural Language Inference", ACL 2021 (Long Paper) ☆36 · Updated 3 years ago
- Repo for "Why do Nearest Neighbor Language Models Work?", ICML 2023 ☆56 · Updated 2 years ago
- Pretraining summarization models using a corpus of nonsense ☆13 · Updated 3 years ago
- Suite of 500 procedurally generated NLP tasks to study language model adaptability ☆21 · Updated 2 years ago
- Code for the paper "Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs" ☆28 · Updated 2 years ago
- ☆16 · Updated last year
- Code for the paper "Modelling Latent Translations for Cross-Lingual Transfer" ☆17 · Updated 3 years ago
- ☆13 · Updated last year
- "Adapting Language Models for Zero-shot Learning by Meta-tuning on Dataset and Prompt Collections", EMNLP 2021 ☆50 · Updated 3 years ago
- Code for the ACL 2020 paper "Representation Engineering with Natural Language Explanations" ☆29 · Updated 4 years ago
- Code for "Massive-scale Decoding for Text Generation using Lattices" ☆43 · Updated 2 years ago
- ☆42 · Updated 4 years ago
- Code for the paper "Leakage-Adjusted Simulatability: Can Models Generate Non-Trivial Explanations of Their Behavior in Natural Language?" ☆20 · Updated 4 years ago
- We are creating a challenging new benchmark MultiReQA: A Cross-Domain Evaluation for Retrieval Question Answering Models. Retrieval quest… ☆30 · Updated 4 years ago
- ☆38 · Updated 3 years ago
- Code & data for the EMNLP 2020 paper "MOCHA: A Dataset for Training and Evaluating Reading Comprehension Metrics" ☆16 · Updated 2 years ago
- A method for evaluating the high-level coherence of machine-generated texts. Identifies high-level coherence issues in transformer-based … ☆11 · Updated last year
- Follow the Wisdom of the Crowd: Effective Text Generation via Minimum Bayes Risk Decoding ☆18 · Updated 2 years ago
- Source code for the paper "Learning from Explanations with Neural Execution Tree", ICLR 2020 ☆18 · Updated 3 years ago
- ☆46 · Updated 2 years ago
- "Frustratingly Simple Pretraining Alternatives to Masked Language Modeling", EMNLP 2021 ☆31 · Updated 3 years ago
- [ICLR 2023] PyTorch code for "Summarization Programs: Interpretable Abstractive Summarization with Neural Modular Trees" ☆23 · Updated last year
- Implementation of "Neural Machine Translation without Embeddings", NAACL 2021 ☆33 · Updated 3 years ago
- ☆16 · Updated 3 years ago