abaheti95 / LoL-RL
Advantage Leftover Lunch Reinforcement Learning (A-LoL RL): Improving Language Models with Advantage-based Offline Policy Gradients
☆26 · Updated 8 months ago
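In short, A-LoL treats existing supervised fine-tuning data as off-policy samples: each training sequence is weighted by its estimated advantage under the current policy, and only positive-advantage ("leftover") examples contribute to the policy-gradient update. Below is a minimal PyTorch sketch of such an advantage-weighted offline loss; the function name, tensor names, and clipping constant are illustrative assumptions, not the repository's actual API.

```python
import torch

def a_lol_loss(logprobs: torch.Tensor,
               ref_logprobs: torch.Tensor,
               advantages: torch.Tensor,
               clip: float = 2.0) -> torch.Tensor:
    """Advantage-weighted offline policy-gradient loss (illustrative sketch).

    logprobs:     (B,) summed token log-probs of each sequence under the
                  policy being trained
    ref_logprobs: (B,) the same sums under the frozen reference policy that
                  produced the offline data
    advantages:   (B,) per-sequence advantage estimates, e.g. reward minus
                  a value baseline
    """
    # Keep only positive-advantage sequences (the "leftover lunch");
    # negative-advantage data is masked out rather than pushed down.
    pos_adv = advantages.clamp(min=0.0)

    # Single-step importance weight pi / pi_ref, clipped for stability and
    # detached so gradients flow only through the log-prob term.
    importance = (logprobs - ref_logprobs).exp().detach().clamp(max=clip)

    # Negative sign: gradient ascent on advantage-weighted log-likelihood.
    return -(importance * pos_adv * logprobs).mean()
```

Clamping negative advantages to zero reproduces the "train only on positive-advantage data" idea; the exact importance-weight clipping scheme is a stability choice here, so consult the repository for the method as actually implemented.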
Alternatives and similar repositories for LoL-RL
Users interested in LoL-RL are comparing it to the libraries listed below
- Reinforcement Learning via Regressing Relative Rewards ☆32 · Updated 5 months ago
- Repository for Skill Set Optimization ☆13 · Updated 10 months ago
- Learning from preferences is a common paradigm for fine-tuning language models. Yet, many algorithmic design decisions come into play. Ou… ☆29 · Updated last year
- Code for LaMPP: Language Models as Probabilistic Priors for Perception and Action ☆37 · Updated 2 years ago
- ☆15 · Updated 6 months ago
- Self-Supervised Alignment with Mutual Information ☆19 · Updated last year
- Official repository of paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated last year
- Implementation of Dualformer ☆17 · Updated 3 months ago
- This is code for most of the experiments in the paper Understanding the Effects of RLHF on LLM Generalisation and Diversity ☆43 · Updated last year
- Official Code Repository for EnvGen: Generating and Adapting Environments via LLMs for Training Embodied Agents (COLM 2024) ☆31 · Updated 10 months ago
- RL algorithm: Advantage-Induced Policy Alignment ☆65 · Updated last year
- Official PyTorch Implementation of the Longhorn Deep State Space Model ☆50 · Updated 6 months ago
- Directional Preference Alignment ☆56 · Updated 8 months ago
- Rewarded soups official implementation ☆58 · Updated last year
- ☆17 · Updated last year
- This repository contains the code and data for the paper "VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception o… ☆23 · Updated 2 months ago
- [ICLR 2025] "Training LMs on Synthetic Edit Sequences Improves Code Synthesis" (Piterbarg, Pinto, Fergus) ☆19 · Updated 3 months ago
- A testbed for agents and environments that can automatically improve models through data generation. ☆24 · Updated 3 months ago
- ☆32 · Updated 4 months ago
- ☆19 · Updated 10 months ago
- Q-Probe: A Lightweight Approach to Reward Maximization for Language Models ☆41 · Updated 11 months ago
- ☆85 · Updated last year
- ☆27 · Updated 2 years ago
- Reproduction of "RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment" ☆69 · Updated last year
- ☆93 · Updated 11 months ago
- Code for the paper "Policy Optimization in RLHF: The Impact of Out-of-preference Data" ☆28 · Updated last year
- ☆34 · Updated 2 months ago
- ☆27 · Updated 9 months ago
- ☆16 · Updated 2 months ago
- [ICML 2024] Official code release accompanying the paper "diff History for Neural Language Agents" (Piterbarg, Pinto, Fergus) ☆20 · Updated 9 months ago