clinicalml / cotrain-prompting
Code for co-training large language models (e.g. T0) with smaller ones (e.g. BERT) to boost few-shot performance
☆17 · Updated 3 years ago
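The description above gives the gist of the method: a prompted large model pseudo-labels unlabeled data, a smaller model is trained on the confident labels, and the two models take turns re-labeling the pool. Below is a minimal sketch of that loop with the model calls stubbed out as plain callables; the function names, signatures, and confidence threshold are illustrative assumptions, not the repository's actual API.

```python
from typing import Callable, List, Tuple

Example = str
Label = int
# A scorer maps an input text to a (label, confidence) pair.
Scorer = Callable[[Example], Tuple[Label, float]]


def select_confident(pool: List[Example], score: Scorer,
                     threshold: float) -> List[Tuple[Example, Label]]:
    """Keep only examples the scorer labels with confidence >= threshold."""
    kept = []
    for text in pool:
        label, confidence = score(text)
        if confidence >= threshold:
            kept.append((text, label))
    return kept


def cotrain(pool: List[Example],
            prompt_model: Scorer,
            train_small_model: Callable[[List[Tuple[Example, Label]]], Scorer],
            rounds: int = 3,
            threshold: float = 0.9) -> Scorer:
    """Alternate pseudo-labeling between a prompted large model and a small one."""
    # Round 0: the prompted large model (e.g. T0) pseudo-labels the unlabeled pool.
    pseudo_labeled = select_confident(pool, prompt_model, threshold)
    small_model = train_small_model(pseudo_labeled)
    for _ in range(rounds):
        # The small model (e.g. a BERT classifier) re-labels the pool; only its
        # confident predictions are kept as training data for the next round.
        pseudo_labeled = select_confident(pool, small_model, threshold)
        small_model = train_small_model(pseudo_labeled)
    return small_model
```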
Alternatives and similar repositories for cotrain-prompting
Users interested in cotrain-prompting are comparing it to the libraries listed below.
- This repository accompanies our paper “Do Prompt-Based Models Really Understand the Meaning of Their Prompts?” ☆85 · Updated 3 years ago
- EMNLP 2021 - Frustratingly Simple Pretraining Alternatives to Masked Language Modeling ☆34 · Updated 4 years ago
- Code and pre-trained models for "ReasonBert: Pre-trained to Reason with Distant Supervision", EMNLP'2021 ☆29 · Updated 2 years ago
- ☆54 · Updated 3 years ago
- ☆35 · Updated 4 years ago
- Repository for Teaching Broad Reasoning Skills for Multi-Step QA by Generating Hard Contexts, EMNLP22 ☆19 · Updated 2 years ago
- ☆47 · Updated 2 years ago
- Code and datasets for the EMNLP 2020 paper "Calibration of Pre-trained Transformers" ☆61 · Updated 2 years ago
- TBC ☆28 · Updated 3 years ago
- Few-shot NLP benchmark for unified, rigorous eval ☆93 · Updated 3 years ago
- In-BoXBART: Get Instructions into Biomedical Multi-task Learning ☆14 · Updated 3 years ago
- Code for preprint: Summarizing Differences between Text Distributions with Natural Language ☆43 · Updated 2 years ago
- Code associated with the paper: "Few-Shot Self-Rationalization with Natural Language Prompts" ☆13 · Updated 3 years ago
- ☆24 · Updated 2 years ago
- Contrastive Fact Verification ☆73 · Updated 3 years ago
- Implementation of the paper "FactGraph: Evaluating Factuality in Summarization with Semantic Graph Representations" (NAACL 2022) ☆50 · Updated 2 years ago
- This repository contains some of the code used in the paper "Training Language Models with Language Feedback at Scale" ☆27 · Updated 2 years ago
- Few-shot Learning with Auxiliary Data