microsoft/RLHF-APA
RL algorithm: Advantage-Induced Policy Alignment (APA)
☆62 · Updated last year
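For orientation, APA aligns the policy with an advantage-reweighted reference distribution, pi*(a|s) ∝ pi_init(a|s) · exp(A(s,a)/β), via a squared-error loss. A minimal sketch of that objective, assuming per-token log-probabilities and advantage estimates are already computed (names are illustrative, not this repository's actual API):

```python
import torch

def apa_loss(logp_policy, logp_init, advantages, beta=1.0):
    """Squared-error alignment toward the target policy
    pi*(a|s) ∝ pi_init(a|s) * exp(A(s, a) / beta).

    Illustrative sketch: all arguments are tensors of per-token
    log-probabilities and advantage estimates computed elsewhere.
    """
    # log pi*, up to an additive normalizing constant
    target_logp = logp_init + advantages / beta
    return torch.mean((logp_policy - target_logp) ** 2)
```

Here β plays the usual KL-penalty role: a larger β keeps the learned policy closer to the initial model.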
Related projects:
- Building modular LMs with parameter-efficient fine-tuning. (☆73, updated this week)
- Code for most of the experiments in the paper "Understanding the Effects of RLHF on LLM Generalisation and Diversity". (☆35, updated 8 months ago)
- Code accompanying the paper "Pretraining Language Models with Human Preferences". (☆173, updated 7 months ago)
- Code for the ACL 2024 paper "Adversarial Preference Optimization (APO)". (☆49, updated 3 months ago)
- A repository for transformer critique learning and generation. (☆84, updated 9 months ago)
- We view large language models as stochastic language layers in a network, where the learnable parameters are the natural language prompts… (☆91, updated last month)
- Official repository for the ICML 2024 paper "EXO: Towards Efficient Exact Optimization of Language Model Alignment". (☆45, updated 3 months ago)
- Advantage-Leftover Lunch Reinforcement Learning (A-LoL RL): Improving Language Models with Advantage-Based Offline Policy Gradients; see the sketch after this list. (☆24, updated last week)
- Reproduction of "RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment". (☆63, updated last year)
- Directional Preference Alignment. (☆44, updated 3 months ago)
- Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, finetuning, evaluating, and serving LLMs in JAX/Fl… (☆52, updated last month)
- Research code for "ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL". (☆84, updated 5 months ago)
- Reference implementation of Token-level Direct Preference Optimization (TDPO). (☆89, updated 2 months ago)
- For experiments involving InstructGPT; currently used for documenting open research questions. (☆71, updated last year)
- RLHF implementation details of OpenAI's 2019 codebase. (☆144, updated 8 months ago)
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision. (☆78, updated last week)
- A (somewhat) minimal library for finetuning language models with PPO on human feedback. (☆84, updated last year)
- Official implementation of Rewarded Soups. (☆43, updated 11 months ago)
- "Language models scale reliably with over-training and on downstream tasks". (☆91, updated 5 months ago)
- ZYN: Zero-Shot Reward Models with Yes-No Questions. (☆33, updated last year)
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs. (☆46, updated 5 months ago)
- Scalable Meta-Evaluation of LLMs as Evaluators. (☆39, updated 7 months ago)
- Dataset Reset Policy Optimization. (☆27, updated 5 months ago)
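As referenced in the A-LoL RL entry above, the core idea named in that title is an advantage-based offline policy gradient: weight the log-likelihood of fixed offline sequences by an advantage estimate and keep only the positive-advantage data. A rough sketch of that general recipe, assuming sequence-level advantages come from a reference value model (all names and the clip value are illustrative, not the repository's actual implementation):

```python
import torch

def advantage_offline_pg_loss(logp_policy, logp_ref, advantages):
    """Illustrative offline policy-gradient loss in the spirit of A-LoL:
    discard negative-advantage sequences and weight the rest by a
    clipped importance ratio against the reference LM.
    """
    keep = advantages > 0  # "leftover lunch": keep only positive-advantage data
    # one-sided importance weight, clipped for stability (clip value assumed)
    iw = (logp_policy - logp_ref).exp().detach().clamp(max=2.0)
    loss = -(iw * advantages * logp_policy)
    return loss[keep].mean()
```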