vmicheli/lm-butlers
☆12, updated 3 years ago
Related projects:
- Code for the paper "Policy Optimization in RLHF: The Impact of Out-of-preference Data" (☆23, updated 9 months ago)
- ☆25, updated 9 months ago
- Code accompanying the paper "Noise Contrastive Alignment of Language Models with Explicit Rewards" (☆22, updated 2 months ago)
- Implements the Messenger environment and EMMA model (☆22, updated last year)
- Code for most of the experiments in the paper "Understanding the Effects of RLHF on LLM Generalisation and Diversity" (☆35, updated 8 months ago)
- ☆19, updated 2 years ago
- Rewarded soups official implementation (☆43, updated 11 months ago)
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision (☆78, updated last week)
- ☆21, updated this week
- Code and data for "Inferring Rewards from Language in Context" [ACL 2022] (☆15, updated 2 years ago)
- Source code for "Preference-grounded Token-level Guidance for Language Model Fine-tuning" (NeurIPS 2023) (☆12, updated 10 months ago)
- Directional Preference Alignment (☆44, updated 3 months ago)
- Code for the ACL 2024 paper "Adversarial Preference Optimization (APO)" (☆49, updated 3 months ago)
- ☆30, updated 7 months ago
- ☆35, updated 2 months ago
- ☆25, updated last year
- Dataset Reset Policy Optimization (☆27, updated 5 months ago)
- Domain-specific preference (DSP) data and customized RM fine-tuning (☆24, updated 6 months ago)
- Super-fast implementations of common benchmark text-world games (☆43, updated last month)
- ☆40, updated last year
- ☆75, updated last month
- Official code for the paper "Large Language Models Are Implicitly Topic Models: Explaining and Finding Good Demonstrations for In-Context Le…" (☆66, updated 5 months ago)
- Reference implementation for Token-level Direct Preference Optimization (TDPO) (☆89, updated 2 months ago)
- ☆28, updated 5 months ago
- [ICLR 2022 Spotlight] Multi-Stage Episodic Control for Strategic Exploration in Text Games (☆13, updated 2 years ago)
- ☆15, updated this week
- Advantage Leftover Lunch Reinforcement Learning (A-LoL RL): Improving Language Models with Advantage-based Offline Policy Gradients (☆24, updated last week)
- A reinforcement learning environment for the IGLU 2022 competition at NeurIPS (☆32, updated last year)
- Research code for "ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL" (☆84, updated 5 months ago)
- Code for the paper "Toward Optimal LLM Alignments Using Two-Player Games" (☆11, updated 2 months ago)