openai / summarize-from-feedback
Code for "Learning to summarize from human feedback"
☆1,037 · Updated last year
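The repo's core technique is training a reward model on pairwise human comparisons of summaries, then optimizing a policy against that reward with RL. Below is a minimal sketch of the pairwise comparison loss described in the paper; the `pairwise_reward_loss` helper and the toy reward values are illustrative, not code from the repo itself.

```python
import torch
import torch.nn.functional as F

def pairwise_reward_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    # Pairwise comparison loss: maximize log sigmoid(r_chosen - r_rejected),
    # i.e. push the scalar reward of the human-preferred summary above the
    # reward of the rejected one.
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Toy usage with scalar rewards for a batch of four comparison pairs.
r_chosen = torch.tensor([0.9, 0.2, 1.1, 0.4])
r_rejected = torch.tensor([0.1, 0.3, 0.5, 0.0])
print(pairwise_reward_loss(r_chosen, r_rejected))
```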
Alternatives and similar repositories for summarize-from-feedback
Users interested in summarize-from-feedback are comparing it to the libraries listed below.
- Code for the paper "Fine-Tuning Language Models from Human Preferences" ☆1,352 · Updated 2 years ago
- A modular RL library to fine-tune language models to human preferences ☆2,333 · Updated last year
- Expanding natural instructions ☆1,011 · Updated last year
- Implementation of ChatGPT RLHF (Reinforcement Learning from Human Feedback) on any generation model in Hugging Face's transformers (bloomz-… ☆562 · Updated last year
- ☆1,532 · Updated this week
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" (see the data-loading sketch after this list) ☆1,770 · Updated last month
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways ☆823 · Updated 2 years ago
- ☆1,231 · Updated 2 years ago
- A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data. ☆822 · Updated last year
- Reproduce results and replicate training of T0 (Multitask Prompted Training Enables Zero-Shot Task Generalization) ☆462 · Updated 2 years ago
- Original Implementation of Prompt Tuning from Lester et al., 2021 ☆689 · Updated 5 months ago
- Implementation of RETRO, DeepMind's retrieval-based attention net, in PyTorch ☆870 · Updated last year
- Tools to download and clean up Common Crawl data ☆1,022 · Updated 2 years ago
- A research project for natural language generation, containing the official implementations by the MSRA NLC team. ☆733 · Updated last year
- Open-source pre-training implementation of Google's LaMDA in PyTorch. Adds RLHF similar to ChatGPT. ☆473 · Updated last year
- Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data. ☆1,007 · Updated last year
- Crosslingual Generalization through Multitask Finetuning ☆537 · Updated 10 months ago
- Guide: Finetune GPT2-XL (1.5 billion parameters) and finetune GPT-NEO (2.7B) on a single GPU with Huggingface Transformers using DeepSpe… ☆437 · Updated 2 years ago
- ☆1,589 · Updated 2 years ago
- Open clone of OpenAI's unreleased WebText dataset scraper. This version uses pushshift.io files instead of the API for speed. ☆735 · Updated 2 years ago
- UnifiedQA: Crossing Format Boundaries With a Single QA System ☆441 · Updated 3 years ago
- Plug and Play Language Model implementation. Allows steering the topic and attributes of GPT-2 models. ☆1,149 · Updated last year
- Reading list for instruction tuning. A trend starting from Natural Instructions (ACL 2022), FLAN (ICLR 2022) and T0 (ICLR 2022). ☆769 · Updated 2 years ago
- BLEURT is a metric for Natural Language Generation based on transfer learning. ☆748 · Updated 2 years ago
- Code repository supporting the paper "Atlas: Few-shot Learning with Retrieval Augmented Language Models" (https://arxiv.org/abs/2208.03… ☆544 · Updated last year
- Toolkit for creating, sharing and using natural language prompts. ☆2,917 · Updated last year
- Ask Me Anything language model prompting ☆547 · Updated 2 years ago
- Few-shot Learning of GPT-3 ☆353 · Updated last year
- A prize for finding tasks that cause large language models to show inverse scaling ☆613 · Updated last year
- Mistral: A strong, northwesterly wind: Framework for transparent and accessible large-scale language model training, built with Hugging F… ☆575 · Updated last year
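Several repos above, notably the human preference data entry, ship pairwise comparisons in a chosen/rejected format. A minimal sketch of inspecting such data follows, assuming the dataset is published on the Hugging Face Hub under the id `Anthropic/hh-rlhf` with `chosen` and `rejected` text fields; check the repo's README for the canonical id.

```python
from datasets import load_dataset  # pip install datasets

# Assumption: the preference data from "Training a Helpful and Harmless
# Assistant with Reinforcement Learning from Human Feedback" is on the
# Hub as "Anthropic/hh-rlhf" with "chosen"/"rejected" string fields.
dataset = load_dataset("Anthropic/hh-rlhf", split="train")

example = dataset[0]
print(example["chosen"][:200])    # the dialogue continuation labelers preferred
print(example["rejected"][:200])  # the continuation labelers rejected
```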