Asap7772 / understanding-rlhf
Learning from preferences is a common paradigm for fine-tuning language models, yet many algorithmic design decisions come into play. Our new work finds that approaches employing on-policy sampling or negative gradients outperform offline maximum-likelihood objectives (see the sketch below).
☆32 · Updated last year
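As a rough illustration of the contrast the description draws, here is a minimal sketch (assuming PyTorch; the function names, toy tensors, and `beta` temperature are illustrative, not taken from the repository's code): an offline maximum-likelihood objective only pushes probability up on preferred responses, while a DPO-style contrastive objective also carries a negative gradient on dispreferred ones.

```python
import torch
import torch.nn.functional as F

def mle_loss(logp_chosen: torch.Tensor) -> torch.Tensor:
    # Offline maximum-likelihood objective: only raises the
    # log-probability of preferred responses; no negative gradient.
    return -logp_chosen.mean()

def dpo_style_loss(logp_chosen, logp_rejected,
                   ref_logp_chosen, ref_logp_rejected, beta=0.1):
    # Contrastive (DPO-style) objective: the rejected term enters
    # with opposite sign, so its gradient pushes probability mass
    # away from dispreferred responses.
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    return -F.logsigmoid(margin).mean()

# Toy per-sequence log-probs standing in for policy outputs.
lp_c = torch.tensor([-12.0, -9.5], requires_grad=True)
lp_r = torch.tensor([-10.0, -11.0], requires_grad=True)
print(mle_loss(lp_c), dpo_style_loss(lp_c, lp_r, lp_c.detach(), lp_r.detach()))
```

On-policy variants differ in that the compared responses are sampled from the current policy during training rather than drawn from a fixed offline dataset.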
Alternatives and similar repositories for understanding-rlhf
Users who are interested in understanding-rlhf are comparing it to the repositories listed below
- Large language models (LLMs) made easy, EasyLM is a one-stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Flax ☆75 · Updated last year
- Directional Preference Alignment ☆57 · Updated last year
- ☆104 · Updated last year
- NeurIPS 2024 tutorial on LLM Inference ☆47 · Updated 11 months ago
- RL algorithm: Advantage induced policy alignment ☆65 · Updated 2 years ago
- ☆154 · Updated 11 months ago
- CodeUltraFeedback: aligning large language models to coding preferences (TOSEM 2025) ☆72 · Updated last year
- This is code for most of the experiments in the paper Understanding the Effects of RLHF on LLM Generalisation and Diversity ☆47 · Updated last year
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆44 · Updated 7 months ago
- The repository contains code for Adaptive Data Optimization ☆28 · Updated 11 months ago
- Official implementation of ICLR'2025 paper: Rethinking Bradley-Terry Models in Preference-based Reward Modeling: Foundations, Theory, and… ☆68 · Updated 7 months ago
- Self-Supervised Alignment with Mutual Information ☆21 · Updated last year
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated last year
- Reinforcing General Reasoning without Verifiers ☆91 · Updated 4 months ago
- Dataset Reset Policy Optimization ☆31 · Updated last year
- Code and Configs for Asynchronous RLHF: Faster and More Efficient RL for Language Models ☆64 · Updated 6 months ago
- ☆33 · Updated 10 months ago
- ICML 2024 - Official Repository for EXO: Towards Efficient Exact Optimization of Language Model Alignment ☆57 · Updated last year
- Advantage Leftover Lunch Reinforcement Learning (A-LoL RL): Improving Language Models with Advantage-based Offline Policy Gradients ☆26 · Updated last year
- ☆13 · Updated 2 weeks ago
- Natural Language Reinforcement Learning ☆99 · Updated 3 months ago
- Using FlexAttention to compute attention with different masking patterns ☆47 · Updated last year
- Code for "Reasoning to Learn from Latent Thoughts" ☆122 · Updated 7 months ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆60 · Updated last year
- Sotopia-RL: Reward Design for Social Intelligence ☆43 · Updated 2 months ago
- ☆100 · Updated last year
- A Large-Scale, High-Quality Math Dataset for Reinforcement Learning in Language Models ☆67 · Updated 8 months ago
- Verlog: A Multi-turn RL framework for LLM agents ☆64 · Updated last week
- Language models scale reliably with over-training and on downstream tasks ☆100 · Updated last year
- Can Language Models Solve Olympiad Programming? ☆120 · Updated 10 months ago