WooooDyy / LLM-Reverse-Curriculum-RL
Implementation of the ICML 2024 paper "Training Large Language Models for Reasoning through Reverse Curriculum Reinforcement Learning" by Zhiheng Xi et al.
☆113 · Updated last year
Alternatives and similar repositories for LLM-Reverse-Curriculum-RL
Users interested in LLM-Reverse-Curriculum-RL are comparing it to the repositories listed below.
- ☆117 · Updated 10 months ago
- Trial and Error: Exploration-Based Trajectory Optimization of LLM Agents (ACL 2024 Main Conference) ☆159 · Updated last year
- Watch Every Step! LLM Agent Learning via Iterative Step-level Process Refinement (EMNLP 2024 Main Conference) ☆63 · Updated last year
- Research code for "ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL" ☆198 · Updated 7 months ago
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ☆148 · Updated 9 months ago
- Official implementation of the paper "Process Reward Model with Q-value Rankings" ☆65 · Updated 10 months ago
- MPO: Boosting LLM Agents with Meta Plan Optimization (EMNLP 2025 Findings) ☆74 · Updated 3 months ago
- [NeurIPS 2024] Official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆133 · Updated 8 months ago
- Natural Language Reinforcement Learning ☆100 · Updated 4 months ago
- Code for the ACL 2024 paper "Adversarial Preference Optimization (APO)" ☆56 · Updated last year
- Research code for the preprint "Optimizing Test-Time Compute via Meta Reinforcement Finetuning" ☆116 · Updated 4 months ago
- ☆33 · Updated last year
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing" ☆82 · Updated 10 months ago
- Code for the paper "ReMax: A Simple, Efficient and Effective Reinforcement Learning Method for Aligning Large Language Models" ☆198 · Updated last year
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆112 · Updated 4 months ago
- Repo for the paper "Free Process Rewards without Process Labels" ☆167 · Updated 8 months ago
- Implementation for the paper "Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision" ☆56 · Updated last year
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆124 · Updated last year
- Interpretable Contrastive Monte Carlo Tree Search Reasoning ☆48 · Updated last year
- An attempt to build a self-correcting LLM based on the paper "Training Language Models to Self-Correct via Reinforcement Learning" by g… ☆37 · Updated 4 months ago
- Reasoning with Language Model is Planning with World Model ☆180 · Updated 2 years ago
- [NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct ☆190 · Updated 10 months ago
- GenRM-CoT: Data release for verification rationales ☆66 · Updated last year
- [NeurIPS 2024] Agent Planning with World Knowledge Model ☆156 · Updated 11 months ago
- RL Scaling and Test-Time Scaling (ICML'25) ☆112 · Updated 10 months ago
- ☆51 · Updated last year
- ☆53 · Updated 9 months ago
- On Memorization of Large Language Models in Logical Reasoning ☆72 · Updated 8 months ago
- Code and data for the paper "Training on Incorrect Synthetic Data via RL Scales LLM Math Reasoning Eight-Fold" ☆31 · Updated last year
- Reinforced Multi-LLM Agents training ☆60 · Updated 5 months ago