abaheti95 / LoL-RL
Advantage Leftover Lunch Reinforcement Learning (A-LoL RL): Improving Language Models with Advantage-based Offline Policy Gradients
☆26 · Updated 10 months ago
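The core idea, as the title suggests, is to reweight ordinary offline log-likelihood training by an estimated per-sequence advantage, so the model learns only from responses that beat a baseline. Below is a minimal PyTorch sketch of that advantage-weighted objective; the function name, tensor layout, and the assumption that per-sequence rewards and baseline values are precomputed are illustrative, not the repository's actual API.

```python
# Minimal sketch of an advantage-weighted offline policy-gradient loss,
# in the spirit of A-LoL RL. All names and shapes here are assumptions
# for illustration, not the repo's interface.
import torch
import torch.nn.functional as F

def advantage_weighted_nll(logits, target_ids, rewards, values, clip_max=1.0):
    """Sequence-level advantage-weighted negative log-likelihood.

    logits:     (B, T, V) policy logits over the vocabulary at each position
    target_ids: (B, T)    offline response tokens to score
    rewards:    (B,)      scalar reward per sequence (assumed precomputed)
    values:     (B,)      baseline value estimate per prompt (assumed precomputed)
    """
    log_probs = F.log_softmax(logits, dim=-1)
    # Log-probability of each offline token, then summed over the sequence.
    token_lp = log_probs.gather(-1, target_ids.unsqueeze(-1)).squeeze(-1)  # (B, T)
    seq_lp = token_lp.sum(dim=-1)                                          # (B,)
    # Advantage = reward minus baseline; clamp at zero so below-baseline
    # sequences contribute no gradient, and cap it to limit variance.
    advantage = (rewards - values).clamp(min=0.0, max=clip_max)
    return -(advantage.detach() * seq_lp).mean()
```

Clamping the advantage at zero stands in for discarding below-baseline data: those sequences get zero weight, while the rest are weighted by how much they improve on the baseline.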
Alternatives and similar repositories for LoL-RL
Users interested in LoL-RL are comparing it to the libraries listed below.
- Learning from preferences is a common paradigm for fine-tuning language models. Yet, many algorithmic design decisions come into play. Ou… ☆30 · Updated last year
- Code for most of the experiments in the paper Understanding the Effects of RLHF on LLM Generalisation and Diversity ☆44 · Updated last year
- RL algorithm: Advantage-Induced Policy Alignment ☆65 · Updated last year
- Dataset Reset Policy Optimization ☆30 · Updated last year
- Self-Supervised Alignment with Mutual Information ☆21 · Updated last year
- Reinforcement Learning via Regressing Relative Rewards ☆34 · Updated 7 months ago
- Code for LaMPP: Language Models as Probabilistic Priors for Perception and Action ☆37 · Updated 2 years ago
- Official Code Repository for EnvGen: Generating and Adapting Environments via LLMs for Training Embodied Agents (COLM 2024) ☆34 · Updated last year
- ☆89 · Updated last year
- ☆27 · Updated 2 years ago
- Repository for Skill Set Optimization ☆14 · Updated last year
- Official PyTorch Implementation of the Longhorn Deep State Space Model ☆54 · Updated 8 months ago
- Code for the paper Policy Optimization in RLHF: The Impact of Out-of-preference Data ☆28 · Updated last year
- Directional Preference Alignment ☆59 · Updated 10 months ago
- Code for Contrastive Preference Learning (CPL) ☆174 · Updated 8 months ago
- Q-Probe: A Lightweight Approach to Reward Maximization for Language Models ☆41 · Updated last year
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆44 · Updated last year
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated last year
- Code for the paper Dense Reward for Free in Reinforcement Learning from Human Feedback (ICML 2024) by Alex J. Chan, Hao Sun, Samuel Holt… ☆34 · Updated 11 months ago
- ☆45 · Updated last year
- ☆84 · Updated last year
- CodeUltraFeedback: aligning large language models to coding preferences ☆71 · Updated last year
- LLM Dynamic Planner: combining LLMs with PDDL planners to solve embodied tasks ☆45 · Updated 7 months ago
- ☆34 · Updated 4 months ago
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆44 · Updated 3 months ago
- NeurIPS 2024 tutorial on LLM Inference ☆45 · Updated 7 months ago
- ☆99 · Updated last year
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated last year
- Reproduction of "RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment" ☆69 · Updated last year
- ☆34 · Updated 7 months ago