LAION-AI / Open-Instruction-Generalist
Open Instruction Generalist is an assistant trained on massive synthetic instructions to perform many millions of tasks
☆210 · Updated last year
Alternatives and similar repositories for Open-Instruction-Generalist
Users interested in Open-Instruction-Generalist are comparing it to the libraries listed below.
- ☆180 · Updated 2 years ago
- Tk-Instruct is a Transformer model that is tuned to solve many NLP tasks by following instructions. ☆182 · Updated 2 years ago
- Datasets for Instruction Tuning of Large Language Models ☆255 · Updated last year
- ☆98 · Updated 2 years ago
- An experimental implementation of the retrieval-enhanced language model ☆76 · Updated 2 years ago
- ☆159 · Updated 2 years ago
- A framework for few-shot evaluation of autoregressive language models. ☆105 · Updated 2 years ago
- Simple next-token-prediction for RLHF ☆227 · Updated 2 years ago
- This is the repo for the paper Shepherd -- A Critic for Language Model Generation ☆218 · Updated 2 years ago
- Inference script for Meta's LLaMA models using Hugging Face wrapper ☆110 · Updated 2 years ago
- ☆72 · Updated 2 years ago
- The original implementation of Min et al. "Nonparametric Masked Language Modeling" (paper https://arxiv.org/abs/2212.01349) ☆158 · Updated 2 years ago
- ☆105 · Updated 2 years ago
- Code and model release for the paper "Task-aware Retrieval with Instructions" by Asai et al. ☆163 · Updated last year
- Reverse Instructions to generate instruction tuning data with corpus examples ☆215 · Updated last year
- Reproduce results and replicate training of T0 (Multitask Prompted Training Enables Zero-Shot Task Generalization) ☆462 · Updated 2 years ago
- Pipeline for pulling and processing online language model pretraining data from the web ☆177 · Updated 2 years ago
- MultilingualSIFT: Multilingual Supervised Instruction Fine-tuning ☆94 · Updated 2 years ago
- DSIR large-scale data selection framework for language model training ☆259 · Updated last year
- A Multilingual Replicable Instruction-Following Model ☆95 · Updated 2 years ago
- This project studies the performance and robustness of language models and task-adaptation methods. ☆153 · Updated last year
- ☆173 · Updated 2 years ago
- [AAAI 2024] Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following ☆78 · Updated last year
- Inspired by Google's C4, a series of colossal clean data-cleaning scripts focused on CommonCrawl data processing, including Chinese… ☆131 · Updated 2 years ago
- Learning to Compress Prompts with Gist Tokens - https://arxiv.org/abs/2304.08467 ☆295 · Updated 7 months ago
- [ICLR 2023] Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners ☆116 · Updated 3 months ago
- ☆162 · Updated last year
- Scalable training for dense retrieval models. ☆298 · Updated 3 months ago
- Scaling Data-Constrained Language Models ☆342 · Updated 3 months ago
- Code for the arXiv paper: "LLMs as Factual Reasoners: Insights from Existing Benchmarks and Beyond" ☆59 · Updated 8 months ago