lechmazur / pgg_bench
Public Goods Game (PGG) Benchmark: Contribute & Punish is a multi-agent benchmark that tests cooperative and self-interested strategies among Large Language Models (LLMs) in a resource-sharing economic scenario. Our experiment extends the classic PGG with a punishment phase, allowing players to penalize free-riders or retaliate against others.
☆39 · Updated 7 months ago
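The mechanics described above follow the classic Public Goods Game extended with a punishment phase. The sketch below illustrates the standard payoff structure for one round; the function name, parameter values (endowment, multiplier, punishment cost and fine), and the punishment-matrix representation are illustrative assumptions, not the benchmark's actual settings.

```python
# Sketch of one Public Goods Game round with a punishment phase.
# All parameters are illustrative, not pgg_bench's actual configuration.

def play_round(contributions, punishments, endowment=10.0,
               multiplier=1.6, punish_cost=1.0, punish_fine=3.0):
    """contributions: amount each player puts into the shared pool.
    punishments: punishments[i][j] = punishment points player i assigns to j.
    Returns each player's payoff after both phases."""
    n = len(contributions)
    pool = sum(contributions) * multiplier
    share = pool / n
    # Contribution phase: keep what you did not contribute, plus an
    # equal share of the multiplied pool.
    payoffs = [endowment - c + share for c in contributions]
    # Punishment phase: each point costs the punisher punish_cost and
    # reduces the target's payoff by punish_fine.
    for i in range(n):
        for j in range(n):
            if i != j:
                payoffs[i] -= punish_cost * punishments[i][j]
                payoffs[j] -= punish_fine * punishments[i][j]
    return payoffs
```

With these assumed parameters, full contribution by everyone beats universal free-riding, but a lone free-rider earns more than the contributors unless punished, which is the tension the benchmark probes.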
Alternatives and similar repositories for pgg_bench
Users interested in pgg_bench are comparing it to the repositories listed below.
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆145 · Updated 8 months ago
- GPT-4 Level Conversational QA Trained In a Few Hours ☆65 · Updated last year
- Easy to use, High Performant Knowledge Distillation for LLMs ☆95 · Updated 6 months ago
- entropix style sampling + GUI ☆27 · Updated last year
- One Line To Build Zero-Data Classifiers in Minutes ☆62 · Updated last year
- Official homepage for "Self-Harmonized Chain of Thought" (NAACL 2025) ☆91 · Updated 9 months ago
- Very minimal (and stateless) agent framework ☆45 · Updated 9 months ago