tmlr-group / ECON
[ICML 2025] "From Debate to Equilibrium: Belief-Driven Multi-Agent LLM Reasoning via Bayesian Nash Equilibrium"
☆28 · Updated 4 months ago
Alternatives and similar repositories for ECON
Users interested in ECON are comparing it to the repositories listed below.
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆92 · Updated last year
- [ICML 2025] "From Passive to Active Reasoning: Can Large Language Models Ask the Right Questions under Incomplete Information?" ☆47 · Updated last month
- Code for the ICML 2024 paper "Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment" ☆78 · Updated 5 months ago
- Improving Math reasoning through Direct Preference Optimization with Verifiable Pairs ☆17 · Updated 8 months ago
- ☆284 · Updated 4 months ago
- Official code for the paper "Stop Summation: Min-Form Credit Assignment Is All Process Reward Model Needs for Reasoning" ☆141 · Updated last month
- [ICML 2025] Official Implementation of GLIDER ☆66 · Updated last month
- The Entropy Mechanism of Reinforcement Learning for Large Language Model Reasoning ☆381 · Updated 4 months ago
- This is my attempt to create a self-correcting LLM based on the paper "Training Language Models to Self-Correct via Reinforcement Learning" by g… ☆37 · Updated 4 months ago
- Implementation of the MATRIX framework (ICML 2024) ☆60 · Updated last year
- Benchmarking LLMs' Gaming Ability in Multi-Agent Environments ☆88 · Updated 6 months ago
- [NeurIPS 2024] "Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?" ☆37 · Updated 4 months ago
- This is the official GitHub repository for our survey paper "Beyond Single-Turn: A Survey on Multi-Turn Interactions with Large Language … ☆139 · Updated 6 months ago
- A Framework for LLM-based Multi-Agent Reinforced Training and Inference ☆348 · Updated this week
- Code for the NeurIPS 2024 paper "Regularizing Hidden States Enables Learning Generalizable Reward Model for LLMs" ☆42 · Updated 9 months ago
- Principled Data Selection for Alignment: The Hidden Risks of Difficult Examples ☆44 · Updated 4 months ago
- ☆182 · Updated 6 months ago
- An index of algorithms for reinforcement learning from human feedback (RLHF) ☆92 · Updated last year
- A comprehensive collection of process reward models ☆121 · Updated last month
- Reinforced Multi-LLM Agents training ☆59 · Updated 5 months ago
- ☆55 · Updated 4 months ago
- ☆28 · Updated 2 months ago
- ☆56 · Updated 5 months ago
- This repository contains a regularly updated paper list for LLMs-reasoning-in-latent-space ☆190 · Updated last week
- ☆195 · Updated 3 months ago
- [NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct ☆190 · Updated 10 months ago
- Implementation for the research paper "Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision" ☆56 · Updated 11 months ago
- Official repository of "Learning to Reason under Off-Policy Guidance" ☆368 · Updated last month
- [ACL'24, Outstanding Paper] Emulated Disalignment: Safety Alignment for Large Language Models May Backfire! ☆38 · Updated last year
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ☆149 · Updated 9 months ago