Agent-E3 / ExACT
☆20 · Updated 4 months ago
Alternatives and similar repositories for ExACT
Users interested in ExACT are comparing it to the repositories listed below.
- ☆61 · Updated last week
- ☆114 · Updated 6 months ago
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples · ☆103 · Updated last week
- [ACL 2024] Code and data for the paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" · ☆54 · Updated last year
- [COLM 2024] Code for the paper "Autonomous Evaluation and Refinement of Digital Agents" · ☆139 · Updated 8 months ago
- ☆27 · Updated 6 months ago
- B-STAR: Monitoring and Balancing Exploration and Exploitation in Self-Taught Reasoners · ☆82 · Updated 2 months ago
- Official implementation of the paper "Process Reward Model with Q-value Rankings" · ☆60 · Updated 6 months ago
- Natural Language Reinforcement Learning · ☆92 · Updated last week
- WONDERBREAD benchmark + dataset for BPM tasks · ☆26 · Updated last week
- [EMNLP 2024] Middleware for LLMs: Tools Are Instrumental for Language Agents in Complex Environments · ☆37 · Updated 7 months ago
- [ICML 2025] RL Scaling and Test-Time Scaling · ☆109 · Updated 6 months ago
- ☆85 · Updated 2 months ago
- Scalable Meta-Evaluation of LLMs as Evaluators · ☆42 · Updated last year
- Process Reward Models That Think · ☆47 · Updated last month
- ☆27 · Updated last year
- ☆43 · Updated 5 months ago
- Interpretable Contrastive Monte Carlo Tree Search Reasoning · ☆48 · Updated 8 months ago
- Dialogue Action Tokens: Steering Language Models in Goal-Directed Dialogue with a Multi-Turn Planner · ☆26 · Updated last year
- Regressing the Relative Future: Efficient Policy Optimization for Multi-turn RLHF · ☆21 · Updated 9 months ago
- Official repo for InSTA: Towards Internet-Scale Training For Agents · ☆52 · Updated 3 weeks ago
- ☆24 · Updated 10 months ago
- Repository for the paper "Stream of Search: Learning to Search in Language" · ☆149 · Updated 6 months ago
- ☆47 · Updated 5 months ago
- [COLING 2025] PreAct: Prediction Enhances Agent's Planning Ability · ☆28 · Updated 7 months ago
- [ACL 2025] Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems · ☆99 · Updated last month
- Official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data" · ☆17 · Updated last year
- Q-Probe: A Lightweight Approach to Reward Maximization for Language Models · ☆41 · Updated last year
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI · ☆102 · Updated 5 months ago
- Code for the paper "Optima: Optimizing Effectiveness and Efficiency for LLM-Based Multi-Agent System" · ☆59 · Updated 8 months ago