YifeiZhou02 / ArCHer
Research Code for "ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL"
☆196 · Updated 6 months ago
Alternatives and similar repositories for ArCHer
Users interested in ArCHer are comparing it to the repositories listed below.
- Trial and Error: Exploration-Based Trajectory Optimization of LLM Agents (ACL 2024 Main Conference) ☆151 · Updated 11 months ago
- ☆103 · Updated last year
- Code for the paper "VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment" ☆175 · Updated 5 months ago
- Code for the ACL 2024 paper "Adversarial Preference Optimization" (APO) ☆57 · Updated last year
- Implementation of the ICML 2024 paper "Training Large Language Models for Reasoning through Reverse Curriculum Reinforcement Learning" pr… ☆111 · Updated last year
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆123 · Updated last year
- Source code for Self-Evaluation Guided MCTS for online DPO ☆327 · Updated last year
- Code for the paper "ReMax: A Simple, Efficient and Effective Reinforcement Learning Method for Aligning Large Language Models" ☆194 · Updated last year
- Reasoning with Language Model is Planning with World Model ☆175 · Updated 2 years ago
- ☆49 · Updated 8 months ago
- (ICML 2024) AlphaZero-like Tree-Search can guide large language model decoding and training ☆283 · Updated last year
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ☆148 · Updated 8 months ago
- Repo of the paper "Free Process Rewards without Process Labels" ☆164 · Updated 7 months ago
- ☆116 · Updated 9 months ago
- GenRM-CoT: Data release for verification rationales ☆67 · Updated last year
- ☆210 · Updated 6 months ago
- Code for "STaR: Bootstrapping Reasoning With Reasoning" (NeurIPS 2022) ☆214 · Updated 2 years ago
- AdaPlanner: Language Models for Decision Making via Adaptive Planning from Feedback ☆120 · Updated 6 months ago
- Code for most of the experiments in the paper "Understanding the Effects of RLHF on LLM Generalisation and Diversity" ☆47 · Updated last year
- Official implementation of the paper "Process Reward Model with Q-value Rankings" ☆64 · Updated 8 months ago
- Official implementation of the paper "Building Math Agents with Multi-Turn Iterative Preference Learning" with multi-turn DP… ☆30 · Updated 10 months ago
- Self-playing Adversarial Language Game Enhances LLM Reasoning (NeurIPS 2024) ☆140 · Updated 8 months ago
- Research code for the preprint "Optimizing Test-Time Compute via Meta Reinforcement Finetuning" ☆112 · Updated 2 months ago
- Official repository for the ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆174 · Updated 5 months ago
- Critique-out-Loud Reward Models ☆70 · Updated last year
- [NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct ☆188 · Updated 9 months ago
- An extensible benchmark for evaluating large language models on planning ☆419 · Updated last month
- [NeurIPS 2024] Official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆130 · Updated 7 months ago
- Watch Every Step! LLM Agent Learning via Iterative Step-level Process Refinement (EMNLP 2024 Main Conference) ☆62 · Updated last year
- Code and data for the paper "Training on Incorrect Synthetic Data via RL Scales LLM Math Reasoning Eight-Fold" ☆30 · Updated last year