microsoft / Alympics
☆74 · Updated last year
Alternatives and similar repositories for Alympics
Users who are interested in Alympics are comparing it to the libraries listed below.
- How to create rational LLM-based agents? Using game-theoretic workflows! ☆92 · Updated 8 months ago
- [ACL 2024] Exploring Collaboration Mechanisms for LLM Agents: A Social Psychology View ☆119 · Updated 8 months ago
- This repository contains an LLM benchmark for the social deduction game "Resistance Avalon" ☆139 · Updated 8 months ago
- Hypothetical Minds is an autonomous LLM-based agent for diverse multi-agent settings, integrating a Theory of Mind module… ☆39 · Updated last year
- An OpenAI gym environment to evaluate the ability of LLMs (e.g. GPT-4, Claude) in long-horizon reasoning and task planning in dynamic mult… ☆73 · Updated 2 years ago
- We develop benchmarks and analysis tools to evaluate the causal reasoning abilities of LLMs. ☆137 · Updated last year
- A benchmark for evaluating learning agents based on just language feedback ☆94 · Updated 7 months ago
- 🤝 The code for "Can Large Language Model Agents Simulate Human Trust Behaviors?" ☆109 · Updated 10 months ago
- ☆220 · Updated 2 years ago
- Official Implementation of Dynamic LLM-Agent Network: An LLM-agent Collaboration Framework with Agent Team Optimization ☆192 · Updated last year
- AdaPlanner: Language Models for Decision Making via Adaptive Planning from Feedback ☆125 · Updated 10 months ago
- ScienceWorld is a text-based virtual environment centered around accomplishing tasks from the standardized elementary science curriculum. ☆336 · Updated 2 months ago
- WarAgent: LLM-based Multi-Agent Simulation of World Wars ☆388 · Updated last year
- ☆144 · Updated last year
- Lamorel is a Python library designed for RL practitioners eager to use Large Language Models (LLMs). ☆244 · Updated last month
- Causal Agent based on Large Language Model ☆61 · Updated 5 months ago
- SmartPlay is a benchmark for Large Language Models (LLMs) that uses a variety of games to test various important LLM capabilities as agents. … ☆145 · Updated last year
- ICML 2024: Improving Factuality and Reasoning in Language Models through Multiagent Debate ☆504 · Updated 9 months ago
- (ACL 2025 Main) Code for MultiAgentBench: Evaluating the Collaboration and Competition of LLM agents https://www.arxiv.org/pdf/2503.019… ☆217 · Updated 3 months ago
- Source code for our paper: "Put Your Money Where Your Mouth Is: Evaluating Strategic Planning and Execution of LLM Agents in an Auction A… ☆49 · Updated 2 years ago
- ☆110 · Updated last year
- Framework and toolkits for building and evaluating collaborative agents that can work together with humans. ☆120 · Updated 2 months ago
- Code for paper "Optima: Optimizing Effectiveness and Efficiency for LLM-Based Multi-Agent System" ☆69 · Updated last year
- ☆328 · Updated last year
- A virtual environment for developing and evaluating automated scientific discovery agents. ☆199 · Updated 10 months ago
- Augmented LLM with self-reflection ☆137 · Updated 2 years ago
- Governance of the Commons Simulation (GovSim) ☆64 · Updated last year
- ☆87 · Updated 2 years ago
- [NeurIPS 2024] GTBench: Uncovering the Strategic Reasoning Limitations of LLMs via Game-Theoretic Evaluations ☆69 · Updated last year
- Reasoning with Language Model is Planning with World Model ☆185 · Updated 2 years ago