lechmazur / pgg_bench
Public Goods Game (PGG) Benchmark: Contribute & Punish is a multi-agent benchmark that tests cooperative and self-interested strategies among Large Language Models (LLMs) in a resource-sharing economic scenario. Our experiment extends the classic PGG with a punishment phase, allowing players to penalize free-riders or retaliate against others.
☆38 · Updated 4 months ago
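The contribute-and-punish mechanic described above can be sketched as a single round of payoff arithmetic: contributions are pooled, multiplied, and split evenly, after which each player may pay a cost to fine others. The multiplier (1.6), punishment cost (1), and fine (3) below are illustrative assumptions, not the benchmark's actual parameters:

```python
# Minimal sketch of one PGG round with a punishment phase.
# Parameter values are illustrative assumptions only.

def play_round(endowment, contributions, punishments,
               multiplier=1.6, punish_cost=1, punish_fine=3):
    """contributions: per-player contribution to the common pool.
    punishments: punishments[i][j] = points player i assigns to player j."""
    n = len(contributions)
    pool = sum(contributions) * multiplier
    share = pool / n  # everyone gets an equal share, regardless of contribution
    payoffs = [endowment - c + share for c in contributions]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            pts = punishments[i][j]
            payoffs[i] -= pts * punish_cost  # punishing is costly to the punisher
            payoffs[j] -= pts * punish_fine  # but costlier to the target
    return payoffs

# Example: player 2 free-rides; player 0 pays to punish them.
p = play_round(
    endowment=10,
    contributions=[10, 10, 0],
    punishments=[[0, 0, 2], [0, 0, 0], [0, 0, 0]],
)
```

With these numbers the free-rider still comes out ahead after punishment, which is exactly the tension the benchmark probes: punishing is individually costly, so self-interested agents may let free-riding go unsanctioned.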
Alternatives and similar repositories for pgg_bench
Users interested in pgg_bench are comparing it to the libraries listed below.
- LLM-based agents with proactive interactions, long-term memory, external tool integration, and local deployment capabilities ☆106 · Updated last month
- Official homepage for "Self-Harmonized Chain of Thought" (NAACL 2025) ☆92 · Updated 7 months ago
- GPT-4-level conversational QA trained in a few hours ☆64 · Updated last year
- ☆116 · Updated 8 months ago
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆146 · Updated 6 months ago
- Very minimal (and stateless) agent framework ☆45 · Updated 7 months ago
- ☆48 · Updated 6 months ago
- Glyphs, acting as collaboratively defined symbols linking related concepts, add a layer of multidimensional semantic richness to user-AI … ☆52 · Updated 6 months ago
- entropix-style sampling + GUI ☆27 · Updated 10 months ago
- ☆154 · Updated 4 months ago
- ☆169 · Updated 6 months ago
- Simple examples using Argilla tools to build AI ☆55 · Updated 9 months ago
- ☆161 · Updated 3 weeks ago
- Optimizing causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆55 · Updated 7 months ago
- Lightweight toolkit package to train and fine-tune 1.58-bit language models ☆85 · Updated 3 months ago
- ☆261 · Updated 2 months ago
- A simple tool that lets you explore different possible paths that an LLM might sample ☆185 · Updated 3 months ago
- ☆57 · Updated 6 months ago
- Easy-to-use, high-performance knowledge distillation for LLMs ☆92 · Updated 3 months ago
- ☆60 · Updated last month
- Frozen-in-time version of our Paper Finder agent for reproducing evaluation results ☆152 · Updated 2 weeks ago
- Benchmark that evaluates LLMs using 651 NYT Connections puzzles extended with extra trick words ☆136 · Updated last week
- Pivotal Token Search ☆123 · Updated last month
- One line to build zero-data classifiers in minutes ☆58 · Updated 11 months ago
- ☆102 · Updated last year
- ☆51 · Updated last year
- ☆40 · Updated 8 months ago
- A Python library to orchestrate LLMs in a neural-network-inspired structure ☆50 · Updated 10 months ago
- II-Thought-RL is our initial attempt at developing a large-scale, multi-domain reinforcement learning (RL) dataset ☆27 · Updated 4 months ago
- ☆91 · Updated last month