bigscience-workshop / promptsource
Toolkit for creating, sharing and using natural language prompts.
☆2,783 · Updated last year
Alternatives and similar repositories for promptsource:
Users interested in promptsource are comparing it to the libraries listed below.
- Benchmarking large language models' complex reasoning ability with chain-of-thought prompting ☆2,677 · Updated 7 months ago
- ☆1,502 · Updated 2 weeks ago
- Aligning pretrained language models with instruction data generated by themselves. ☆4,294 · Updated last year
- ☆1,218 · Updated 10 months ago
- Expanding natural instructions ☆980 · Updated last year
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" ☆1,690 · Updated last year
- Open Academic Research on Improving LLaMA to SOTA LLM ☆1,618 · Updated last year
- An Open-Source Framework for Prompt-Learning. ☆4,479 · Updated 7 months ago
- A central, open resource for data and tools related to chain-of-thought reasoning in large language models. Developed @ Samwald research … ☆942 · Updated 2 months ago
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆4,588 · Updated last year
- AutoPrompt: Automatic Prompt Construction for Masked Language Models. ☆614 · Updated 6 months ago
- General technology for enabling AI capabilities w/ LLMs and MLLMs ☆3,861 · Updated last month
- Official implementation for "Automatic Chain of Thought Prompting in Large Language Models" (stay tuned & more will be updated) ☆1,736 · Updated 11 months ago
- A collection of open-source datasets for training instruction-following LLMs (ChatGPT, LLaMA, Alpaca) ☆1,113 · Updated last year
- A trend starts from "Chain of Thought Prompting Elicits Reasoning in Large Language Models". ☆2,010 · Updated last year
- A modular RL library to fine-tune language models to human preferences ☆2,281 · Updated last year
- Implementation of Toolformer, Language Models That Can Use Tools, by MetaAI ☆2,003 · Updated 7 months ago
- Beyond the Imitation Game collaborative benchmark for measuring and extrapolating the capabilities of language models ☆2,979 · Updated 7 months ago
- Original Implementation of Prompt Tuning from Lester, et al, 2021 ☆669 · Updated 2 months ago
- Alpaca dataset from Stanford, cleaned and curated ☆1,537 · Updated last year
- [ACL 2023] One Embedder, Any Task: Instruction-Finetuned Text Embeddings ☆1,918 · Updated last month
- 🤗 Evaluate: A library for easily evaluating machine learning models and datasets. ☆2,131 · Updated last month
- Code for our EMNLP 2023 Paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models" ☆1,133 · Updated 11 months ago
- Holistic Evaluation of Language Models (HELM), a framework to increase the transparency of language models (https://arxiv.org/abs/2211.09… ☆2,086 · Updated this week
- [ACL 2023] Reasoning with Language Model Prompting: A Survey ☆939 · Updated 2 months ago
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,667 · Updated 2 months ago
- The hub for EleutherAI's work on interpretability and learning dynamics ☆2,391 · Updated 2 months ago
- Instruction Tuning with GPT-4 ☆4,275 · Updated last year
- Measuring Massive Multitask Language Understanding | ICLR 2021 ☆1,326 · Updated last year
- SGPT: GPT Sentence Embeddings for Semantic Search ☆863 · Updated last year