ZhaolinGao / REFUEL
Regressing the Relative Future: Efficient Policy Optimization for Multi-turn RLHF
☆23 · Updated last year
Alternatives and similar repositories for REFUEL
Users interested in REFUEL are comparing it to the repositories listed below.
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆108 · Updated 3 months ago
- Official implementation of the paper "Process Reward Model with Q-value Rankings" ☆63 · Updated 8 months ago
- ☆50 · Updated 8 months ago
- ☆53 · Updated 8 months ago
- RL Scaling and Test-Time Scaling (ICML'25) ☆111 · Updated 9 months ago
- ☆116 · Updated 9 months ago
- B-STAR: Monitoring and Balancing Exploration and Exploitation in Self-Taught Reasoners ☆85 · Updated 5 months ago
- [NeurIPS 2025 Spotlight] ReasonFlux-Coder: Open-Source LLM Coders with Co-Evolving Reinforcement Learning ☆130 · Updated last month
- Sotopia-RL: Reward Design for Social Intelligence ☆43 · Updated 2 months ago
- This is the official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data" ☆17 · Updated last year
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆44 · Updated 6 months ago
- Natural Language Reinforcement Learning ☆99 · Updated 3 months ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated last year
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆59 · Updated last year
- Exploration of automated dataset selection approaches at large scales. ☆48 · Updated 7 months ago
- ☆99 · Updated 5 months ago
- ☆46 · Updated 4 months ago
- A Large-Scale, High-Quality Math Dataset for Reinforcement Learning in Language Models ☆65 · Updated 8 months ago
- ICML 2024 - Official Repository for EXO: Towards Efficient Exact Optimization of Language Model Alignment ☆57 · Updated last year
- Codebase for Instruction Following without Instruction Tuning ☆36 · Updated last year
- Interpretable Contrastive Monte Carlo Tree Search Reasoning ☆48 · Updated 11 months ago
- Learning from preferences is a common paradigm for fine-tuning language models. Yet, many algorithmic design decisions come into play. Ou… ☆32 · Updated last year
- CodeUltraFeedback: aligning large language models to coding preferences (TOSEM 2025) ☆72 · Updated last year
- Directional Preference Alignment ☆57 · Updated last year
- ☆69 · Updated last month
- ☆77 · Updated 2 months ago
- ☆28 · Updated 9 months ago
- ☆33 · Updated last year
- ☆122 · Updated 8 months ago
- ☆47 · Updated last year