gydpku / Data_Synthesis_RL
☆47 · Updated 3 weeks ago
Alternatives and similar repositories for Data_Synthesis_RL
Users interested in Data_Synthesis_RL are comparing it to the repositories listed below.
- Verifiers for LLM Reinforcement Learning ☆60 · Updated 2 months ago
- ☆32 · Updated last month
- ☆53 · Updated last week
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆95 · Updated 2 weeks ago
- Code for RATIONALYST: Pre-training Process-Supervision for Improving Reasoning https://arxiv.org/pdf/2410.01044 ☆33 · Updated 8 months ago
- ☆115 · Updated 4 months ago
- Process Reward Models That Think ☆41 · Updated 3 weeks ago
- ☆114 · Updated 5 months ago
- ☆17 · Updated 3 months ago
- ☆24 · Updated 9 months ago
- ☆36 · Updated 2 weeks ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆57 · Updated 9 months ago
- A testbed for agents and environments that can automatically improve models through data generation ☆24 · Updated 3 months ago
- Scaling Computer-Use Grounding via UI Decomposition and Synthesis ☆79 · Updated last week
- ☆46 · Updated 4 months ago
- Resources for our paper: "EvoAgent: Towards Automatic Multi-Agent Generation via Evolutionary Algorithms" ☆108 · Updated 8 months ago
- Official implementation of the paper "Process Reward Model with Q-value Rankings" ☆59 · Updated 4 months ago
- Systematic evaluation framework that automatically rates overthinking behavior in large language models ☆90 · Updated last month
- AgentRewardBench: Evaluating Automatic Evaluations of Web Agent Trajectories ☆18 · Updated last month
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated last year
- Code for the paper "Optima: Optimizing Effectiveness and Efficiency for LLM-Based Multi-Agent System" ☆59 · Updated 7 months ago
- ☆48 · Updated 2 weeks ago
- ☆22 · Updated last month
- ☆65 · Updated 2 months ago
- CodeUltraFeedback: aligning large language models to coding preferences ☆71 · Updated last year
- [ACL 2025] Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems ☆93 · Updated 2 weeks ago
- Reinforcing General Reasoning without Verifiers ☆60 · Updated last week
- Official repo for InSTA: Towards Internet-Scale Training For Agents ☆42 · Updated this week
- ☆27 · Updated 5 months ago
- RL Scaling and Test-Time Scaling (ICML'25) ☆106 · Updated 5 months ago