zhangxjohn / LLM-Agent-Benchmark-List
A benchmark list for the evaluation of large language models.
☆159 Updated 3 weeks ago
Alternatives and similar repositories for LLM-Agent-Benchmark-List
Users interested in LLM-Agent-Benchmark-List are comparing it to the libraries listed below
- Augmented LLM with self-reflection ☆137 Updated 2 years ago
- ☆242 Updated last year
- Benchmark and research code for the paper SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks ☆261 Updated 9 months ago
- Official Implementation of Dynamic LLM-Agent Network: An LLM-agent Collaboration Framework with Agent Team Optimization ☆193 Updated last year
- 🌍 AppWorld: A Controllable World of Apps and People for Benchmarking Function Calling and Interactive Coding Agent, ACL'24 Best Resource… ☆371 Updated 2 months ago
- ☆117 Updated last year
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆120 Updated last week
- Framework and toolkits for building and evaluating collaborative agents that can work together with humans. ☆121 Updated 2 months ago
- [NeurIPS 2024] Agent Planning with World Knowledge Model ☆162 Updated last year
- An Analytical Evaluation Board of Multi-turn LLM Agents [NeurIPS 2024 Oral] ☆390 Updated last year
- Curation of resources for LLM mathematical reasoning, most of which are screened by @tongyx361 to ensure high quality and accompanied wit… ☆150 Updated last year
- Search, Verify and Feedback: Towards Next Generation Post-training Paradigm of Foundation Models via Verifier Engineering ☆63 Updated last year
- [ICLR 2024] MetaTool Benchmark for Large Language Models: Deciding Whether to Use Tools and Which to Use ☆108 Updated last year
- Code for Paper: Autonomous Evaluation and Refinement of Digital Agents [COLM 2024] ☆148 Updated last year
- Code and example data for the paper: Rule Based Rewards for Language Model Safety ☆205 Updated last year
- Trial and Error: Exploration-Based Trajectory Optimization of LLM Agents (ACL 2024 Main Conference) ☆159 Updated last year
- Official Repo for ICLR 2024 paper MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback by Xingyao Wang*, Ziha… ☆133 Updated last year
- (ACL 2025 Main) Code for MultiAgentBench: Evaluating the Collaboration and Competition of LLM agents https://www.arxiv.org/pdf/2503.019… ☆217 Updated 3 months ago
- [ICLR 2025] Benchmarking Agentic Workflow Generation ☆144 Updated 11 months ago
- InfiAgent-DABench: Evaluating Agents on Data Analysis Tasks (ICML 2024) ☆181 Updated 8 months ago
- This repository contains an LLM benchmark for the social deduction game "Resistance Avalon" ☆139 Updated 8 months ago
- Research Code for "ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL" ☆202 Updated 9 months ago
- ☆224 Updated 10 months ago
- ☆97 Updated last year
- Awesome LLM Self-Consistency: a curated list of self-consistency in Large Language Models ☆119 Updated 6 months ago
- Research Code for the preprint "Optimizing Test-Time Compute via Meta Reinforcement Finetuning" ☆116 Updated 6 months ago
- ☆169 Updated 4 months ago
- [NeurIPS 2024] The official implementation of the paper: Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs ☆134 Updated 10 months ago
- AWM: Agent Workflow Memory ☆395 Updated last month
- Official repository for the ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆183 Updated 8 months ago