clinicalml / cotrain-prompting
Code for co-training large language models (e.g. T0) with smaller ones (e.g. BERT) to boost few-shot performance
☆17 · Updated 3 years ago
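For readers unfamiliar with the technique the repository is named after, here is a minimal sketch of classic two-view co-training with confidence-based pseudo-labeling. It is a generic illustration on synthetic data using scikit-learn; the feature-view split, classifier choice, per-round budget k, and round count are all invented for the example and do not reflect this repository's actual T0-prompting/BERT setup.

```python
# Generic co-training loop (illustrative sketch only; not the actual
# cotrain-prompting pipeline; view split, k, and round count are made up).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic binary task whose 20 features we split into two "views".
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
views = (X[:, :10], X[:, 10:])

rng = np.random.default_rng(0)
labeled = rng.choice(len(y), size=20, replace=False)       # few-shot labeled set
train_idx = {0: list(labeled), 1: list(labeled)}           # per-model train sets
train_lab = {0: list(y[labeled]), 1: list(y[labeled])}
pool = list(np.setdiff1d(np.arange(len(y)), labeled))      # unlabeled pool

models = [LogisticRegression(max_iter=1000) for _ in range(2)]
for _ in range(5):                                         # co-training rounds
    for m in (0, 1):
        models[m].fit(views[m][train_idx[m]], train_lab[m])
    for m in (0, 1):
        if not pool:
            break
        # Model m pseudo-labels its k most confident pool examples and
        # hands them to the *other* model's training set.
        probs = models[m].predict_proba(views[m][pool])
        top = np.argsort(probs.max(axis=1))[-10:]          # k = 10 per round
        for j in top:
            train_idx[1 - m].append(pool[j])
            train_lab[1 - m].append(int(models[m].classes_[probs[j].argmax()]))
        chosen = set(top.tolist())
        pool = [p for j, p in enumerate(pool) if j not in chosen]

print("pseudo-labels exchanged:", len(train_idx[0]) - len(labeled))
```

In the setting this repository targets, the two learners are presumably a prompted large model (e.g. T0) and a smaller fine-tuned model (e.g. BERT) rather than two halves of a feature vector, but the exchange of confident pseudo-labels follows the same pattern.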
Alternatives and similar repositories for cotrain-prompting
Users interested in cotrain-prompting are comparing it to the repositories listed below
- This repository accompanies our paper “Do Prompt-Based Models Really Understand the Meaning of Their Prompts?” ☆85 · Updated 3 years ago
- Code and pre-trained models for "ReasonBert: Pre-trained to Reason with Distant Supervision", EMNLP'2021 ☆29 · Updated 2 years ago
- In-BoXBART: Get Instructions into Biomedical Multi-task Learning ☆14 · Updated 3 years ago
- EMNLP 2021 - Frustratingly Simple Pretraining Alternatives to Masked Language Modeling ☆34 · Updated 4 years ago
- Few-shot NLP benchmark for unified, rigorous eval ☆93 · Updated 3 years ago
- Code and datasets for the EMNLP 2020 paper "Calibration of Pre-trained Transformers"