Dahoas / reward-modeling
☆96 · Updated last year
Alternatives and similar repositories for reward-modeling:
Users who are interested in reward-modeling are comparing it to the libraries listed below.
- A (somewhat) minimal library for finetuning language models with PPO on human feedback. ☆86 · Updated 2 years ago
- [AAAI 2024] Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following ☆79 · Updated 4 months ago
- ☆161 · Updated last year
- RLHF implementation details of OAI's 2019 codebase ☆166 · Updated last year
- An experimental implementation of the retrieval-enhanced language model ☆74 · Updated 2 years ago
- A repository for transformer critique learning and generation ☆88 · Updated last year
- Simple next-token-prediction for RLHF ☆222 · Updated last year
- Code accompanying the paper Pretraining Language Models with Human Preferences ☆180 · Updated 11 months ago
- Self-Alignment with Principle-Following Reward Models ☆150 · Updated 10 months ago
- Code for the paper InstructDial: Improving Zero and Few-shot Generalization in Dialogue through Instruction Tuning ☆99 · Updated last year
- Code for the ACL 2023 paper: Pre-Training to Learn in Context ☆109 · Updated 5 months ago
- Implementation of Reinforcement Learning from Human Feedback (RLHF) ☆171 · Updated last year
- Unofficial implementation of AlpaGasus ☆90 · Updated last year
- ☆124 · Updated this week
- Tk-Instruct is a Transformer model that is tuned to solve many NLP tasks by following instructions. ☆179 · Updated 2 years ago
- ☆265 · Updated last week
- ☆104 · Updated last year
- Implementation of the ICML 2023 paper: Specializing Smaller Language Models towards Multi-Step Reasoning. ☆128 · Updated last year
- [ICLR 2024] COLLIE: Systematic Construction of Constrained Text Generation Tasks ☆52 · Updated last year
- Contrastive decoding ☆190 · Updated 2 years ago
- Official repo for the ICLR 2024 paper MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback by Xingyao Wang*, Ziha… ☆110 · Updated 7 months ago
- Code for the arXiv paper "LLMs as Factual Reasoners: Insights from Existing Benchmarks and Beyond" ☆59 · Updated 9 months ago
- Open Instruction Generalist is an assistant trained on massive synthetic instructions to perform many millions of tasks ☆207 · Updated last year
- An original implementation of "MetaICL: Learning to Learn In Context" by Sewon Min, Mike Lewis, Luke Zettlemoyer, and Hannaneh Hajishirzi ☆258 · Updated last year
- A framework for few-shot evaluation of autoregressive language models. ☆102 · Updated last year
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location. ☆77 · Updated 5 months ago
- ☆93 · Updated 3 months ago
- Official repository for the paper "Weak-to-Strong Extrapolation Expedites Alignment" ☆71 · Updated 7 months ago
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆129 · Updated 2 months ago
- All available datasets for Instruction Tuning of Large Language Models ☆240 · Updated last year