tomekkorbak / pretraining-with-human-feedback
Code accompanying the paper "Pretraining Language Models with Human Preferences"
☆180 · Updated Feb 13, 2024
Alternatives and similar repositories for pretraining-with-human-feedback
Users interested in pretraining-with-human-feedback are comparing it to the libraries listed below.
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" · ☆1,814 · Updated Jun 17, 2025
- ☆39 · Updated Jul 25, 2024
- A modular RL library to fine-tune language models to human preferences · ☆2,377 · Updated Mar 1, 2024
- ☆37 · Updated May 7, 2023
- ☆35 · Updated Jan 29, 2023
- Offline RL experiments · ☆15 · Updated Oct 1, 2022
- ☆12 · Updated Jul 8, 2023
- ☆156 · Updated Aug 24, 2021
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) · ☆4,742 · Updated Jan 8, 2024
- An original implementation of "MetaICL: Learning to Learn In Context" by Sewon Min, Mike Lewis, Luke Zettlemoyer, and Hannaneh Hajishirzi · ☆273 · Updated Apr 15, 2023
- This repository contains some of the code used in the paper "Training Language Models with Language Feedback at Scale" · ☆27 · Updated Mar 30, 2023
- Code for our TSD paper "TOKEN is a MASK: Few-shot Named Entity Recognition with Pre-trained Language Models" · ☆14 · Updated Aug 19, 2022
- ☆13 · Updated Dec 12, 2025
- ☆58 · Updated May 30, 2024
- A framework for human-readable prompt-based methods with large language models, specially designed for researchers. (Deprecated, check out…) · ☆131 · Updated Feb 25, 2023
- Code for the paper "Fine-Tuning Language Models from Human Preferences" · ☆1,377 · Updated Jul 25, 2023
- Vision Large Language Models trained on the M3IT instruction tuning dataset · ☆17 · Updated Aug 16, 2023
- RLHF implementation details of OAI's 2019 codebase · ☆197 · Updated Jan 14, 2024
- Experiments with generating open-source language model assistants · ☆97 · Updated May 14, 2023
- Code for the EMNLP 2021 paper "Improving Sequence-to-Sequence Pre-training via Sequence Span Rewriting" · ☆17 · Updated Nov 30, 2021
- The intermediate goal of the project is to train a GPT-like architecture to learn to summarise Reddit posts from human preferences, as th… · ☆12 · Updated Jul 14, 2021
- A dataset for realistic evaluation of noisy label methods · ☆14 · Updated Dec 3, 2023
- Self-Alignment with Principle-Following Reward Models · ☆169 · Updated Sep 18, 2025
- Code for Contrastive Preference Learning (CPL) · ☆178 · Updated Nov 22, 2024
- Dataset collection and preprocessing framework for NLP extreme multitask learning · ☆192 · Updated Jul 9, 2025
- Dialogue example sentences extracted from a dictionary · ☆16 · Updated Apr 24, 2023
- Variational Reinforcement Learning · ☆17 · Updated Jul 25, 2024
- This project studies the performance and robustness of language models and task-adaptation methods. · ☆155 · Updated May 18, 2024
- A (somewhat) minimal library for finetuning language models with PPO on human feedback. · ☆90 · Updated Nov 23, 2022
- Simple next-token-prediction for RLHF · ☆229 · Updated Sep 30, 2023
- A curated list of reinforcement learning with human feedback resources (continually updated) · ☆4,296 · Updated Dec 9, 2025
- Keeping language models honest by directly eliciting knowledge encoded in their activations. · ☆217 · Updated Feb 9, 2026
- Sparse Backpropagation for Mixture-of-Expert Training · ☆29 · Updated Jul 2, 2024
- ☆282 · Updated Jan 6, 2025
- RewardBench: the first evaluation tool for reward models. · ☆687 · Updated Jan 31, 2026
- Convenient Text-to-Text Training for Transformers · ☆19 · Updated Dec 10, 2021
- Representation Learning in RL · ☆13 · Updated Jun 1, 2022
- ☆17 · Updated May 19, 2023
- ☆158 · Updated Mar 18, 2023