vwxyzjn / lm-human-preference-details
RLHF implementation details of OAI's 2019 codebase
☆166 · Updated last year
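As context for the comparisons below: the OpenAI 2019 codebase this repo documents trains the policy with PPO, adding the reward-model score at the final token of each response and subtracting a per-token KL penalty against the frozen initial policy. A minimal sketch of that reward shaping, with illustrative tensor names and shapes (not the repo's actual code):

```python
import torch

def shaped_rewards(rm_score, logprobs, ref_logprobs, kl_coef=0.15):
    """rm_score: (B,) scalar reward-model score per full response.
    logprobs / ref_logprobs: (B, T) per-token log-probs under the current
    policy and the frozen initial (reference) policy.
    Rewards are treated as constants for PPO, so compute under torch.no_grad()."""
    kl = logprobs - ref_logprobs       # per-token KL estimate
    rewards = -kl_coef * kl            # KL penalty applied at every token
    rewards[:, -1] += rm_score         # RM score added only at the last token
    return rewards
```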
Alternatives and similar repositories for lm-human-preference-details:
Users interested in lm-human-preference-details are comparing it to the repositories listed below.
- ☆125 · Updated last month
- A (somewhat) minimal library for finetuning language models with PPO on human feedback. ☆86 · Updated 2 years ago
- Code and data for "Scaling Relationship on Learning Mathematical Reasoning with Large Language Models". ☆233 · Updated 4 months ago
- A large-scale, fine-grained, diverse preference dataset (and models). ☆325 · Updated last year
- ☆161 · Updated last year
- Reference implementation of Token-level Direct Preference Optimization (TDPO); see the DPO-loss sketch after this list. ☆124 · Updated 6 months ago
- ☆96 · Updated last year
- ☆265 · Updated last week
- Research code for "ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL". ☆124 · Updated 9 months ago
- Curation of resources for LLM mathematical reasoning, most of which are screened by @tongyx361 to ensure high quality and accompanied wit… ☆102 · Updated 6 months ago
- Source code for Self-Evaluation Guided MCTS for online DPO. ☆272 · Updated 5 months ago
- A visualization tool for deeper understanding and easier debugging of RLHF training. ☆90 · Updated last week
- Code for the paper "ReMax: A Simple, Efficient and Effective Reinforcement Learning Method for Aligning Large Language Models"; see the ReMax sketch after this list. ☆167 · Updated last year
- RewardBench: the first evaluation tool for reward models. ☆491 · Updated last week
- Simple next-token prediction for RLHF. ☆222 · Updated last year
- Code for the paper "VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment". ☆111 · Updated 2 months ago
- (ICML 2024) AlphaZero-like tree search can guide large language model decoding and training. ☆251 · Updated 7 months ago
- PyTorch implementation of DoReMi, a method for optimizing data mixture weights in language-modeling datasets; see the DoReMi sketch after this list. ☆313 · Updated last year
- ☆295 · Updated last month
- ☆119 · Updated last month
- Self-Alignment with Principle-Following Reward Models. ☆150 · Updated 10 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision. ☆111 · Updated 4 months ago
- Official repo for the ICLR 2024 paper "MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback" by Xingyao Wang*, Ziha… ☆110 · Updated 7 months ago
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024]. ☆130 · Updated 3 months ago
- Explore what LLMs are really learning during SFT. ☆28 · Updated 9 months ago
- Code for the ACL 2024 paper "Adversarial Preference Optimization (APO)". ☆49 · Updated 7 months ago
- Repo for the paper "Free Process Rewards without Process Labels". ☆94 · Updated this week
- A simple toolkit for benchmarking LLMs on mathematical reasoning tasks. 🧮✨ ☆145 · Updated 8 months ago
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)". ☆145 · Updated last month
- DSIR: a large-scale data-selection framework for language model training; see the DSIR sketch after this list. ☆242 · Updated 9 months ago
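For the TDPO entry above: TDPO builds on the standard DPO objective by adding token-level KL control. A minimal sketch of the vanilla sequence-level DPO loss it starts from, assuming illustrative argument names rather than the repo's API:

```python
import torch.nn.functional as F

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Each argument: (B,) summed log-probs of the chosen/rejected response
    under the policy (pi_*) or the frozen reference model (ref_*)."""
    margin = (pi_chosen - ref_chosen) - (pi_rejected - ref_rejected)
    return -F.logsigmoid(beta * margin).mean()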
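For the ReMax entry above: as the paper describes it, ReMax replaces PPO's learned value baseline with the reward of the greedy decode, reducing the update to REINFORCE with one extra rollout. A sketch of the core update; `generate`, `reward_fn`, and `logprob_sum` are hypothetical helpers standing in for a real generation/reward stack:

```python
def remax_loss(policy, prompt, reward_fn):
    # `generate` and `logprob_sum` are hypothetical helpers (see lead-in).
    sampled = generate(policy, prompt, do_sample=True)    # stochastic rollout
    greedy = generate(policy, prompt, do_sample=False)    # greedy baseline rollout
    # Baselined reward; treat it as a constant (detach) before backprop.
    advantage = reward_fn(prompt, sampled) - reward_fn(prompt, greedy)
    # REINFORCE: push up the sampled response's log-prob, scaled by the advantage.
    return -advantage * logprob_sum(policy, prompt, sampled)
```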
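For the DoReMi entry above: DoReMi tunes domain mixture weights with a multiplicative, Group-DRO-style update driven by each domain's excess loss of a small proxy model over a reference model. A sketch of one weight update under those assumptions (names are illustrative, not the repo's API):

```python
import numpy as np

def doremi_update(alpha, excess_loss, step_size=1.0, smoothing=1e-3):
    """alpha: (num_domains,) current mixture weights summing to 1.
    excess_loss: per-domain proxy-model loss minus reference-model loss."""
    alpha = alpha * np.exp(step_size * np.clip(excess_loss, 0.0, None))
    alpha = alpha / alpha.sum()                      # renormalize to a distribution
    uniform = np.full_like(alpha, 1.0 / alpha.size)
    return (1.0 - smoothing) * alpha + smoothing * uniform  # smooth toward uniform
```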
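For the DSIR entry above: DSIR scores raw examples with importance weights derived from hashed n-gram models of the target versus raw distributions, then samples without replacement in proportion to those weights via Gumbel top-k. A sketch of the selection step, with feature extraction omitted and illustrative names:

```python
import numpy as np

def dsir_select(log_weights, k, seed=0):
    """log_weights: (N,) per-example log importance weights, i.e.
    log p_target(features) - log p_raw(features). Returns indices of k
    examples sampled without replacement proportional to the weights."""
    gumbel = np.random.default_rng(seed).gumbel(size=log_weights.shape)
    return np.argsort(log_weights + gumbel)[-k:]   # Gumbel top-k trick
```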