zhangxjohn / LLM-Agent-Benchmark-List
A benchmark list for the evaluation of large language models.
★140 · Updated last week
Alternatives and similar repositories for LLM-Agent-Benchmark-List
Users interested in LLM-Agent-Benchmark-List are comparing it to the repositories listed below.
- ★239 · Updated last year
- Repository for "AppWorld: A Controllable World of Apps and People for Benchmarking Interactive Coding Agents", ACL'24 Best Resource Paper ★245 · Updated last month
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ★106 · Updated last month
- Official Implementation of Dynamic LLM-Agent Network: An LLM-Agent Collaboration Framework with Agent Team Optimization ★167 · Updated last year
- Augmented LLM with self-reflection ★132 · Updated last year
- An Analytical Evaluation Board of Multi-turn LLM Agents [NeurIPS 2024 Oral] ★346 · Updated last year
- Trial and Error: Exploration-Based Trajectory Optimization of LLM Agents (ACL 2024 Main Conference) ★149 · Updated 10 months ago
- Benchmark and research code for the paper "SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks" ★243 · Updated 4 months ago
- This repository contains an LLM benchmark for the social deduction game "Resistance Avalon" ★126 · Updated 3 months ago
- A curated collection of LLM reasoning and planning resources, including key papers, limitations, benchmarks, and additional learning materials ★295 · Updated 6 months ago
- ★116 · Updated 7 months ago
- Search, Verify and Feedback: Towards Next Generation Post-training Paradigm of Foundation Models via Verifier Engineering ★61 · Updated 9 months ago
- [NeurIPS 2024] Agent Planning with World Knowledge Model ★148 · Updated 9 months ago
- [ICLR 2025] Benchmarking Agentic Workflow Generation ★126 · Updated 6 months ago
- "Improving Mathematical Reasoning with Process Supervision" by OPENAIβ113Updated 2 weeks ago
- Framework and toolkits for building and evaluating collaborative agents that can work together with humans. ★97 · Updated 5 months ago
- ★205 · Updated 3 months ago
- (ACL 2025 Main) Code for MultiAgentBench: Evaluating the Collaboration and Competition of LLM Agents https://www.arxiv.org/pdf/2503.019… ★157 · Updated last week
- Official repository for the ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ★171 · Updated 3 months ago
- ★205 · Updated 5 months ago
- Code and example data for the paper "Rule Based Rewards for Language Model Safety" ★194 · Updated last year
- Code for the paper "Autonomous Evaluation and Refinement of Digital Agents" [COLM 2024] ★143 · Updated 9 months ago
- Awesome LLM Self-Consistency: a curated list of resources on self-consistency in large language models ★108 · Updated last month
- Curation of resources for LLM mathematical reasoning, most of which are screened by @tongyx361 to ensure high quality and accompanied with… ★137 · Updated last year
- Research Code for "ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL" ★189 · Updated 4 months ago
- ★45 · Updated 6 months ago
- Critique-out-Loud Reward Models ★70 · Updated 10 months ago
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction ★79 · Updated 5 months ago
- Official Repo for ICLR 2024 paper "MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback" by Xingyao Wang*, Ziha… ★129 · Updated last year
- [ACL 2024] AutoAct: Automatic Agent Learning from Scratch for QA via Self-Planning ★229 · Updated 8 months ago