abaheti95 / LoL-RL
Advantage Leftover Lunch Reinforcement Learning (A-LoL RL): Improving Language Models with Advantage-based Offline Policy Gradients
☆26 · Updated last year
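For orientation, here is a minimal sketch of the advantage-based offline policy-gradient idea the repo name refers to: weight the log-likelihood of pre-collected sequences by a clipped importance ratio and their (positive) offline advantage. Function and tensor names are hypothetical; this is not the repository's actual code.

```python
import torch

def a_lol_style_loss(logprobs, ref_logprobs, advantages, ratio_clip=2.0):
    """Advantage-weighted offline policy-gradient surrogate (sketch).

    logprobs:     (B,) summed token log-probs of each offline sequence
                  under the current policy (requires grad).
    ref_logprobs: (B,) the same sums under a frozen reference policy.
    advantages:   (B,) per-sequence advantage estimated offline,
                  e.g. reward minus a learned value baseline.
    """
    # Sequence-level importance ratio between current and reference
    # policy, clipped and detached so it acts as a bounded constant weight.
    ratio = torch.exp(logprobs - ref_logprobs).clamp(max=ratio_clip).detach()
    # Keep only positive-advantage sequences: learn from the good
    # "leftover" offline data and ignore the rest.
    pos_adv = advantages.clamp(min=0.0)
    # Minimizing the negative weighted log-likelihood ascends the
    # advantage-weighted policy gradient.
    return -(ratio * pos_adv * logprobs).mean()

# Toy usage with random stand-in values.
logprobs = torch.randn(8, requires_grad=True)
loss = a_lol_style_loss(logprobs, torch.randn(8), torch.randn(8))
loss.backward()
```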
Alternatives and similar repositories for LoL-RL
Users interested in LoL-RL are comparing it to the repositories listed below.
- Reinforcement Learning via Regressing Relative Rewards ☆39 · Updated last year
- Learning from preferences is a common paradigm for fine-tuning language models. Yet, many algorithmic design decisions come into play. Ou… ☆32 · Updated last year
- Official Code Repository for EnvGen: Generating and Adapting Environments via LLMs for Training Embodied Agents (COLM 2024) ☆40 · Updated last year
- Dataset Reset Policy Optimization ☆31 · Updated last year
- This is code for most of the experiments in the paper Understanding the Effects of RLHF on LLM Generalisation and Diversity ☆47 · Updated 2 years ago
- RL algorithm: Advantage induced policy alignment ☆66 · Updated 2 years ago
- Self-Supervised Alignment with Mutual Information ☆20 · Updated last year
- Code for the paper: Dense Reward for Free in Reinforcement Learning from Human Feedback (ICML 2024) by Alex J. Chan, Hao Sun, Samuel Holt… ☆38 · Updated last year
- Q-Probe: A Lightweight Approach to Reward Maximization for Language Models ☆40 · Updated last year
- Code for Paper (Policy Optimization in RLHF: The Impact of Out-of-preference Data) ☆28 · Updated 2 years ago
- ☆21 · Updated last year
- Directional Preference Alignment ☆58 · Updated last year
- Code for LaMPP: Language Models as Probabilistic Priors for Perception and Action ☆37 · Updated 2 years ago
- ☆26 · Updated 2 years ago
- Rewarded soups official implementation ☆62 · Updated 2 years ago
- ☆108 · Updated last year
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆47 · Updated 9 months ago
- Verlog: A Multi-turn RL framework for LLM agents ☆67 · Updated 3 weeks ago
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆45 · Updated 4 months ago
- Code for Contrastive Preference Learning (CPL) ☆178 · Updated last year
- Reproduction of "RLCD: Reinforcement Learning from Contrastive Distillation for Language Model Alignment" ☆69 · Updated 2 years ago
- ☆99 · Updated last year
- This is the official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data" ☆17 · Updated last year
- Official implementation of "Direct Preference-based Policy Optimization without Reward Modeling" (NeurIPS 2023) ☆42 · Updated last year
- Bayes-Adaptive RL for LLM Reasoning ☆45 · Updated 8 months ago
- ☆33 · Updated last year
- Repository for Skill Set Optimization ☆14 · Updated last year
- ☆86 · Updated last year
- ICML 2024 - Official Repository for EXO: Towards Efficient Exact Optimization of Language Model Alignment ☆57 · Updated last year
- Minimal implementation of the paper "Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models" (arXiv:2401.01335) ☆29 · Updated last year