zhangxjohn / LLM-Agent-Benchmark-List
A benchmark list for the evaluation of large language models.
☆154 · Updated 4 months ago
Alternatives and similar repositories for LLM-Agent-Benchmark-List
Users interested in LLM-Agent-Benchmark-List are comparing it to the libraries listed below.
- ☆242 · Updated last year
- Augmented LLM with self-reflection ☆135 · Updated 2 years ago
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆113 · Updated 5 months ago
- Official Implementation of Dynamic LLM-Agent Network: An LLM-agent Collaboration Framework with Agent Team Optimization ☆192 · Updated last year
- Benchmark and research code for the paper SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks ☆254 · Updated 8 months ago
- This repository contains an LLM benchmark for the social deduction game "Resistance Avalon" ☆136 · Updated 7 months ago
- An Analytical Evaluation Board of Multi-turn LLM Agents [NeurIPS 2024 Oral] ☆384 · Updated last year
- [NeurIPS 2024] Agent Planning with World Knowledge Model ☆161 · Updated last year
- Code for the paper: Autonomous Evaluation and Refinement of Digital Agents [COLM 2024] ☆147 · Updated last year
- Code and example data for the paper: Rule Based Rewards for Language Model Safety ☆203 · Updated last year
- Trial and Error: Exploration-Based Trajectory Optimization of LLM Agents (ACL 2024 Main Conference) ☆159 · Updated last year
- Awesome LLM Self-Consistency: a curated list of work on self-consistency in large language models ☆117 · Updated 5 months ago
- 🌍 AppWorld: A Controllable World of Apps and People for Benchmarking Function Calling and Interactive Coding Agent, ACL'24 Best Resource… ☆354 · Updated 2 months ago
- "Improving Mathematical Reasoning with Process Supervision" by OPENAI☆114Updated this week
- Official Repo for ICLR 2024 paper MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback by Xingyao Wang*, Ziha… ☆132 · Updated last year
- Curation of resources for LLM mathematical reasoning, most of which are screened by @tongyx361 to ensure high quality and accompanied wit… ☆150 · Updated last year
- Framework and toolkits for building and evaluating collaborative agents that can work together with humans. ☆118 · Updated last month
- ☆220 · Updated 9 months ago
- Official repository for ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆182 · Updated 7 months ago
- [ICLR 2025] Benchmarking Agentic Workflow Generation ☆142 · Updated 10 months ago
- Critique-out-Loud Reward Models ☆71 · Updated last year
- [ICLR 2024] MetaTool Benchmark for Large Language Models: Deciding Whether to Use Tools and Which to Use ☆102 · Updated last year
- [COLM 2025] Official repository for R2E-Gym: Procedural Environment Generation and Hybrid Verifiers for Scaling Open-Weights SWE Agents ☆225 · Updated 6 months ago
- ☆104 · Updated last year
- (ACL 2025 Main) Code for MultiAgentBench: Evaluating the Collaboration and Competition of LLM Agents, https://www.arxiv.org/pdf/2503.019… ☆205 · Updated 2 months ago
- Code for STaR: Bootstrapping Reasoning With Reasoning (NeurIPS 2022) ☆218 · Updated 2 years ago
- [NeurIPS'24] Official code for *🎯DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving* ☆119 · Updated last year
- FireAct: Toward Language Agent Fine-tuning ☆289 · Updated 2 years ago
- Code for the paper 🌳 Tree Search for Language Model Agents ☆218 · Updated last year
- Meta Agents Research Environments is a comprehensive platform designed to evaluate AI agents in dynamic, realistic scenarios. Unlike stat… ☆410 · Updated 2 months ago