bigscience-workshop / t-zero
Reproduce results and replicate training of T0 (Multitask Prompted Training Enables Zero-Shot Task Generalization)
☆463 · Updated 2 years ago
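The repository works with the T0 checkpoints published on the Hugging Face Hub (e.g. bigscience/T0_3B and bigscience/T0pp). A minimal zero-shot inference sketch, assuming the `transformers` library is installed, might look like this:

```python
# Minimal sketch: zero-shot inference with a public T0 checkpoint.
# Assumes `transformers` and `torch` are installed; T0_3B is the smaller
# public variant (bigscience/T0pp is the larger 11B model).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("bigscience/T0_3B")
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0_3B")

# T0 expects the task to be stated as a natural-language prompt.
prompt = ("Is this review positive or negative? "
          "Review: this is the best cast iron skillet you will ever buy")
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```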
Alternatives and similar repositories for t-zero
Users interested in t-zero are comparing it to the libraries listed below.
- Code for T-Few from "Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning" ☆451 · Updated last year
- Tk-Instruct is a Transformer model that is tuned to solve many NLP tasks by following instructions. ☆180 · Updated 2 years ago
- Scalable training for dense retrieval models. ☆292 · Updated 3 months ago
- ☆183 · Updated 2 years ago
- Crosslingual Generalization through Multitask Finetuning ☆535 · Updated 8 months ago
- Expanding natural instructions ☆995 · Updated last year
- A framework for few-shot evaluation of autoregressive language models. ☆102 · Updated 2 years ago
- Code repository supporting the paper "Atlas: Few-shot Learning with Retrieval Augmented Language Models" (https://arxiv.org/abs/2208.03…) ☆537 · Updated last year
- Simple next-token-prediction for RLHF ☆226 · Updated last year
- Ask Me Anything language model prompting ☆546 · Updated last year
- ☆178 · Updated 2 years ago
- ☆159 · Updated 2 years ago
- Pipeline for pulling and processing online language model pretraining data from the web ☆178 · Updated last year
- Dataset collection and preprocessing framework for NLP extreme multitask learning ☆182 · Updated 4 months ago
- Open Instruction Generalist is an assistant trained on massive synthetic instructions to perform many millions of tasks ☆208 · Updated last year
- OpenICL is an open-source framework to facilitate research, development, and prototyping of in-context learning. ☆561 · Updated last year
- Used for adaptive human-in-the-loop evaluation of language and embedding models. ☆308 · Updated 2 years ago
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆546 · Updated last year
- An original implementation of "MetaICL: Learning to Learn In Context" by Sewon Min, Mike Lewis, Luke Zettlemoyer, and Hannaneh Hajishirzi ☆264 · Updated 2 years ago
- Original implementation of Prompt Tuning from Lester et al., 2021 ☆681 · Updated 2 months ago
- All available datasets for Instruction Tuning of Large Language Models ☆250 · Updated last year
- The original implementation of Min et al., "Nonparametric Masked Language Modeling" (paper: https://arxiv.org/abs/2212.01349) ☆157 · Updated 2 years ago
- Implementation of RETRO, DeepMind's retrieval-based attention net, in PyTorch ☆865 · Updated last year
- UnifiedQA: Crossing Format Boundaries With a Single QA System ☆437 · Updated 3 years ago
- Code and data for "Measuring and Narrowing the Compositionality Gap in Language Models" ☆314 · Updated last year
- This repository contains the code for "Generating Datasets with Pretrained Language Models". ☆188 · Updated 3 years ago
- ☆236 · Updated 2 years ago
- [NeurIPS'22 Spotlight] A Contrastive Framework for Neural Text Generation ☆471 · Updated last year
- DSIR large-scale data selection framework for language model training ☆249 · Updated last year
- Code and model release for the paper "Task-aware Retrieval with Instructions" by Asai et al. ☆162 · Updated last year