tmlr-group / ECON
[ICML 2025] "From Debate to Equilibrium: Belief-Driven Multi-Agent LLM Reasoning via Bayesian Nash Equilibrium"
☆34 · Updated 2 months ago
Alternatives and similar repositories for ECON
Users interested in ECON are comparing it to the repositories listed below.
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆95 · Updated last year
- Code for the ICML 2024 paper "Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment" ☆79 · Updated 7 months ago
- [ICML 2025] Official Implementation of GLIDER ☆72 · Updated 3 months ago
- Reinforced Multi-LLM Agents training ☆69 · Updated last week
- This is my attempt to create a Self-Correcting-LLM based on the paper "Training Language Models to Self-Correct via Reinforcement Learning" by g… ☆38 · Updated 6 months ago
- The Entropy Mechanism of Reinforcement Learning for Large Language Model Reasoning. ☆414 · Updated 6 months ago
- [NeurIPS 2024] "Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?" ☆38 · Updated 6 months ago
- Official code for the paper "Stop Summation: Min-Form Credit Assignment Is All Process Reward Model Needs for Reasoning" ☆152 · Updated 3 months ago
- [ICML 2025] "From Passive to Active Reasoning: Can Large Language Models Ask the Right Questions under Incomplete Information?" ☆49 · Updated 3 months ago
- AdaRFT: Efficient Reinforcement Finetuning via Adaptive Curriculum Learning ☆53 · Updated 7 months ago
- Implementation of the MATRIX framework (ICML 2024) ☆60 · Updated last year
- This is the official GitHub repository for our survey paper "Beyond Single-Turn: A Survey on Multi-Turn Interactions with Large Language … ☆169 · Updated 8 months ago
- ☆303 · Updated 6 months ago
- [NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct ☆191 · Updated last year
- ☆10 · Updated 9 months ago
- A Sober Look at Language Model Reasoning ☆92 · Updated 2 months ago
- Improving Math reasoning through Direct Preference Optimization with Verifiable Pairs ☆18 · Updated 10 months ago
- Official code for "Decoding-Time Language Model Alignment with Multiple Objectives". ☆29 · Updated last year
- [NeurIPS 2025] Implementation for the paper "The Surprising Effectiveness of Negative Reinforcement in LLM Reasoning" ☆157 · Updated 3 months ago
- [NeurIPS 2024] Official code of $\beta$-DPO: Direct Preference Optimization with Dynamic $\beta$ ☆50 · Updated last year
- A Framework for LLM-based Multi-Agent Reinforced Training and Inference ☆411 · Updated 2 months ago
- Code for the NeurIPS 2024 paper "Regularizing Hidden States Enables Learning Generalizable Reward Model for LLMs" ☆46 · Updated 11 months ago
- An index of algorithms for reinforcement learning from human feedback (RLHF) ☆92 · Updated last year
- Code and data for the paper "Competing Large Language Models in Multi-Agent Gaming Environments" ☆94 · Updated this week
- A comprehensive collection of process reward models. ☆134 · Updated 3 months ago
- ☆47 · Updated 10 months ago
- Principled Data Selection for Alignment: The Hidden Risks of Difficult Examples ☆44 · Updated 6 months ago
- ☆10 · Updated last year
- ☆223 · Updated 10 months ago
- Resources and paper list for 'Scaling Environments for Agents'. This repository accompanies our survey on how environments contribute to … ☆57 · Updated this week