tedmoskovitz / ConstrainedRL4LMs
A library for constrained RLHF.
☆13 · Updated last year
Alternatives and similar repositories for ConstrainedRL4LMs:
Users interested in ConstrainedRL4LMs compare it to the libraries listed below.
- A repo for RLHF training and BoN over LLMs, with support for reward model ensembles. ☆43 · Updated 3 months ago
- Rewarded soups official implementation ☆57 · Updated last year
- Code for the paper "VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment" ☆155 · Updated 6 months ago
- Direct preference optimization with f-divergences. ☆13 · Updated 6 months ago
- ☆92 · Updated 10 months ago
- Research Code for "ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL" ☆167 · Updated 3 weeks ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆120 · Updated 8 months ago
- ☆30 · Updated 6 months ago
- This is code for most of the experiments in the paper Understanding the Effects of RLHF on LLM Generalisation and Diversity ☆43 · Updated last year
- Code release for "Generating Code World Models with Large Language Models Guided by Monte Carlo Tree Search" published at NeurIPS '24. ☆10 · Updated 2 months ago
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆76 · Updated 8 months ago
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ☆138 · Updated 2 months ago
- Code for Paper (Policy Optimization in RLHF: The Impact of Out-of-preference Data) ☆28 · Updated last year
- Code for ACL2024 paper - Adversarial Preference Optimization (APO). ☆54 · Updated 11 months ago
- ☆138 · Updated 5 months ago
- ☆109 · Updated 3 months ago
- [NeurIPS 2023] Large Language Models Are Semi-Parametric Reinforcement Learning Agents ☆35 · Updated last year
- Code for the ICML 2024 paper "Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment" ☆68 · Updated 4 months ago
- This is an official implementation of the paper "Building Math Agents with Multi-Turn Iterative Preference Learning" with multi-turn DP… ☆25 · Updated 5 months ago
- ☆32 · Updated 4 months ago
- RLHF implementation details of OAI's 2019 codebase ☆186 · Updated last year
- GenRM-CoT: Data release for verification rationales ☆59 · Updated 6 months ago
- ☆24 · Updated last year
- Code for NeurIPS 2024 paper "Regularizing Hidden States Enables Learning Generalizable Reward Model for LLMs" ☆33 · Updated 2 months ago
- Code for Paper (ReMax: A Simple, Efficient and Effective Reinforcement Learning Method for Aligning Large Language Models) ☆181 · Updated last year
- Implementation of the Decision-Pretrained Transformer (DPT) from the paper Supervised Pretraining Can Learn In-Context Reinforcement Learni… ☆64 · Updated 11 months ago
- An index of algorithms for reinforcement learning from human feedback (RLHF) ☆92 · Updated last year
- ☆66 · Updated 5 months ago
- (ICML 2024) AlphaZero-like Tree-Search can guide large language model decoding and training ☆267 · Updated 11 months ago
- ☆177 · Updated last year