camel-ai / agent-trust
The code for "Can Large Language Model Agents Simulate Human Trust Behaviors?"
☆104 · Updated 8 months ago
Alternatives and similar repositories for agent-trust
Users interested in agent-trust are comparing it to the repositories listed below.
- [ICML 2024 Oral] A framework for society simulation that supports complex settings such as multi-scene simulation. ☆83 · Updated last year
- Resources for our paper: "EvoAgent: Towards Automatic Multi-Agent Generation via Evolutionary Algorithms" ☆139 · Updated last year
- Code and Data for "MIRAI: Evaluating LLM Agents for Event Forecasting" ☆85 · Updated last year
- Official Implementation of Dynamic LLM-Agent Network: An LLM-agent Collaboration Framework with Agent Team Optimization ☆190 · Updated last year
- SiriuS: Self-improving Multi-agent Systems via Bootstrapped Reasoning ☆87 · Updated 3 weeks ago
- [ACL 2024] Exploring Collaboration Mechanisms for LLM Agents: A Social Psychology View ☆119 · Updated 6 months ago
- How to create rational LLM-based agents? Using game-theoretic workflows! ☆88 · Updated 6 months ago
- [ICML 2025] ResearchTown: Simulator of Human Research Community ☆185 · Updated this week
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆113 · Updated 5 months ago
- Framework and toolkits for building and evaluating collaborative agents that can work together with humans. ☆114 · Updated 3 weeks ago
- ScoreFlow: Mastering LLM Agent Workflows via Score-based Preference Optimization ☆93 · Updated 7 months ago
- [ICML 2025] Satori: Reinforcement Learning with Chain-of-Action-Thought Enhances LLM Reasoning via Autoregressive Search ☆109 · Updated 6 months ago
- This is a survey of research on AI scientists, AI researchers, AI engineers, and a series of AI-driven research studies ☆165 · Updated 2 months ago
- This repository contains an LLM benchmark for the social deduction game "Resistance Avalon" ☆132 · Updated 7 months ago
- (ACL 2025 Main) Code for MultiAgentBench: Evaluating the Collaboration and Competition of LLM agents https://www.arxiv.org/pdf/2503.019… ☆198 · Updated 2 months ago
- Augmented LLM with self-reflection ☆135 · Updated 2 years ago
- A benchmark list for the evaluation of large language models. ☆153 · Updated 3 months ago
- Benchmark and research code for the paper "SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks" ☆254 · Updated 7 months ago
- ☆173 · Updated 2 months ago
- Dynamic Cheatsheet: Test-Time Learning with Adaptive Memory ☆230 · Updated 7 months ago
- [ICLR'25] ScienceAgentBench: Toward Rigorous Assessment of Language Agents for Data-Driven Scientific Discovery ☆117 · Updated 4 months ago
- The code implementation of MAGDi: Structured Distillation of Multi-Agent Interaction Graphs Improves Reasoning in Smaller Language Models… ☆38 · Updated last year
- ☆75 · Updated last month
- AIRA-dojo: a framework for developing and evaluating AI research agents ☆121 · Updated last month
- ☆46 · Updated last year
- ☆142 · Updated 7 months ago
- [ACL 2025] Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems ☆116 · Updated 6 months ago
- ☆226 · Updated 10 months ago
- [NeurIPS 2024] Personal Agentic AI for MultiAgent Cooperation ☆87 · Updated last year
- ☆63 · Updated last year