jwhj / OREO
☆114 · Updated 5 months ago
Alternatives and similar repositories for OREO
Users who are interested in OREO are comparing it to the repositories listed below.
- RL Scaling and Test-Time Scaling (ICML'25) ☆108 · Updated 5 months ago
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆100 · Updated last month
- Benchmark and research code for the paper "SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks" ☆223 · Updated 2 months ago
- Repo of paper "Free Process Rewards without Process Labels" ☆154 · Updated 4 months ago
- Natural Language Reinforcement Learning ☆90 · Updated 6 months ago
- Research code for the preprint "Optimizing Test-Time Compute via Meta Reinforcement Finetuning" ☆98 · Updated 4 months ago
- Critique-out-Loud Reward Models ☆67 · Updated 8 months ago
- Code for the paper "VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment" ☆166 · Updated last month
- Research code for "ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL" ☆181 · Updated 2 months ago
- Interpretable Contrastive Monte Carlo Tree Search Reasoning ☆49 · Updated 8 months ago
- Trial and Error: Exploration-Based Trajectory Optimization of LLM Agents (ACL 2024 Main Conference) ☆146 · Updated 8 months ago
- B-STAR: Monitoring and Balancing Exploration and Exploitation in Self-Taught Reasoners ☆82 · Updated last month
- Official implementation of the paper "Process Reward Model with Q-value Rankings" ☆60 · Updated 5 months ago
- ☆98 · Updated last year
- ☆54 · Updated 2 weeks ago
- Code for the paper "Autonomous Evaluation and Refinement of Digital Agents" [COLM 2024] ☆138 · Updated 7 months ago
- Code for the paper "Learning to Reason without External Rewards" ☆319 · Updated this week
- Revisiting Mid-training in the Era of Reinforcement Learning Scaling ☆137 · Updated last week
- ☆199 · Updated 3 months ago
- Implementation of the ICML 2024 paper "Training Large Language Models for Reasoning through Reverse Curriculum Reinforcement Learning" pr… ☆106 · Updated last year
- "Improving Mathematical Reasoning with Process Supervision" by OPENAI☆110Updated 3 weeks ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆123 · Updated 10 months ago
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025] ☆163 · Updated last week
- 📖 A repository for organizing papers, code, and other resources related to Latent Reasoning ☆86 · Updated this week
- MARFT stands for Multi-Agent Reinforcement Fine-Tuning. This repository implements an LLM-based multi-agent reinforcement fine-tuning fra… ☆49 · Updated last month
- ☆174 · Updated last month
- Official implementation of the paper "S²R: Teaching LLMs to Self-verify and Self-correct via Reinforcement Learning" ☆67 · Updated 2 months ago
- Code for "Reasoning to Learn from Latent Thoughts" ☆112 · Updated 3 months ago
- ☆144 · Updated 7 months ago
- [NeurIPS 2024] Official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆124 · Updated 3 months ago