zhangxjohn / LLM-Agent-Benchmark-List
A benchmark list for the evaluation of large language models.
☆152 · Updated 3 months ago
Alternatives and similar repositories for LLM-Agent-Benchmark-List
Users interested in LLM-Agent-Benchmark-List are comparing it to the libraries listed below
- ☆241 · Updated last year
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆112 · Updated 4 months ago
- Augmented LLM with self-reflection ☆135 · Updated 2 years ago
- An Analytical Evaluation Board of Multi-turn LLM Agents [NeurIPS 2024 Oral] ☆368 · Updated last year
- Trial and Error: Exploration-Based Trajectory Optimization of LLM Agents (ACL 2024 Main Conference) ☆159 · Updated last year
- Official Implementation of Dynamic LLM-Agent Network: An LLM-agent Collaboration Framework with Agent Team Optimization ☆181 · Updated last year
- Code for the paper 🌳 Tree Search for Language Model Agents ☆216 · Updated last year
- 🌍 AppWorld: A Controllable World of Apps and People for Benchmarking Function Calling and Interactive Coding Agent, ACL'24 Best Resource… ☆324 · Updated 3 weeks ago
- Benchmark and research code for the paper SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks ☆253 · Updated 7 months ago
- A curated collection of LLM reasoning and planning resources, including key papers, limitations, benchmarks, and additional learning mate… ☆305 · Updated 9 months ago
- Code for Paper: Autonomous Evaluation and Refinement of Digital Agents [COLM 2024] ☆147 · Updated last year
- [NeurIPS 2024] Agent Planning with World Knowledge Model ☆157 · Updated 11 months ago
- "Improving Mathematical Reasoning with Process Supervision" by OpenAI ☆113 · Updated last month
- This repository contains an LLM benchmark for the social deduction game 'Resistance Avalon' ☆130 · Updated 6 months ago
- Research Code for "ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL" ☆198 · Updated 7 months ago
- A Comprehensive Benchmark for Software Development. ☆122 · Updated last year
- Curation of resources for LLM mathematical reasoning, most of which are screened by @tongyx361 to ensure high quality and accompanied wit… ☆145 · Updated last year
- Code and example data for the paper: Rule Based Rewards for Language Model Safety ☆202 · Updated last year
- ☆117 · Updated 10 months ago
- [COLM 2025] Official repository for R2E-Gym: Procedural Environment Generation and Hybrid Verifiers for Scaling Open-Weights SWE Agents ☆204 · Updated 4 months ago
- Framework and toolkits for building and evaluating collaborative agents that can work together with humans. ☆112 · Updated this week
- InfiAgent-DABench: Evaluating Agents on Data Analysis Tasks (ICML 2024) ☆160 · Updated 6 months ago
- Official Repo for ICLR 2024 paper MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback by Xingyao Wang*, Ziha… ☆134 · Updated last year
- (ACL 2025 Main) Code for MultiAgentBench: Evaluating the Collaboration and Competition of LLM agents https://www.arxiv.org/pdf/2503.019… ☆190 · Updated last month
- ☆210 · Updated 6 months ago
- Critique-out-Loud Reward Models ☆70 · Updated last year
- [ICLR 2024] MetaTool Benchmark for Large Language Models: Deciding Whether to Use Tools and Which to Use ☆100 · Updated last year
- ☆158 · Updated last month
- Implementation of the ICML 2024 paper "Training Large Language Models for Reasoning through Reverse Curriculum Reinforcement Learning" pr… ☆113 · Updated last year
- ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings - NeurIPS 2023 (oral) ☆264 · Updated last year